A recent research paper from Durham University in the UK revealed a powerful AI-driven attack that can decipher keyboard input based solely on the subtle acoustic cues of keystrokes.

Published on arXiv on Aug. 3, the paper “A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards” demonstrates how deep learning techniques can be used to launch remarkably accurate acoustic side-channel attacks, far surpassing the capabilities of traditional methods.

AI attack vector methodology

The researchers developed a deep learning model combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) architectures. When tested in controlled conditions on a MacBook Pro laptop, the model achieved 95% accuracy in identifying keystrokes from audio recorded by a nearby smartphone.
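An attack like this typically begins by isolating individual keystroke sounds from the recording before spectrogram features are passed to the classifier. Below is a minimal, hypothetical sketch of that segmentation step using short-time energy thresholding in NumPy; the function name, parameters, and synthetic test signal are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def detect_keystrokes(signal, sr, frame_ms=10, threshold_ratio=4.0):
    """Locate candidate keystroke onsets via short-time energy thresholding.

    A frame whose energy exceeds `threshold_ratio` times the median frame
    energy is treated as part of a keystroke; consecutive loud frames are
    merged into a single event. (Illustrative method, not the paper's code.)
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).sum(axis=1)
    loud = energy > threshold_ratio * np.median(energy)
    onsets, prev = [], False
    for i, flag in enumerate(loud):
        if flag and not prev:
            onsets.append(i * frame_len)  # sample index where the event starts
        prev = flag
    return onsets

# Demo: quiet noise with two sharp bursts standing in for key presses.
rng = np.random.default_rng(0)
sr = 16000
sig = 0.01 * rng.standard_normal(sr)       # 1 s of background noise
for start in (2000, 9000):                 # two synthetic keystrokes
    sig[start:start + 800] += 0.5 * rng.standard_normal(800)

events = detect_keystrokes(sig, sr)
print(len(events))  # 2 keystroke events detected
```

In the real attack, each isolated segment would then be converted to a mel-spectrogram and classified by the CNN-LSTM model; the thresholding above is only the first stage of such a pipeline.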

Remarkably, even with the noise and compression introduced by VoIP applications like Zoom, the model maintained 93% accuracy – the highest reported for this medium. This contrasts sharply with previous acoustic attack methods, which have struggled to exceed 60% accuracy under ideal conditions.

The study leveraged an extensive dataset of over 300,000 keystroke samples captured across various mechanical and chiclet-style keyboards. The model demonstrated versatility across keyboard types, although performance could vary based on specific keyboard make and model.

According to the researchers, these results demonstrate that acoustic side-channel attacks are practically feasible using only off-the-shelf equipment and algorithms. The ease of mounting such attacks raises concerns for industries like finance and cryptocurrency, where password security is critical.

How to protect against AI-driven acoustic attacks

While deep learning enables more powerful attacks, the study explores mitigation techniques like two-factor authentication, adding fake keystroke sounds during VoIP calls, and encouraging behavior changes like touch typing.

The researchers suggest the following safeguards users can employ to thwart these acoustic attacks:

- Adopt two-factor or multi-factor authentication on sensitive accounts. This ensures attackers need more than a deciphered password to gain access.
- Use randomized passwords mixing cases, numbers, and symbols. Greater complexity makes passwords harder to recover from audio alone.
- Add fake keystroke sounds when using VoIP applications. Decoy sounds can confuse acoustic models and reduce attack accuracy.
- Toggle microphone settings during sensitive sessions. Muting, or enabling noise-suppression features, obstructs clean audio capture.
- Use speech-to-text applications. Typing on a keyboard inevitably produces acoustic emanations; voice input sidesteps this vulnerability.
- Be aware of your surroundings when typing confidential information. Public areas with many potential microphones nearby are risky environments.
- Ask IT departments to deploy keystroke-protection measures. Organizations should explore software safeguards such as audio masking.
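The fake-keystroke mitigation amounts to mixing random decoy bursts into the outgoing audio so that real key presses are harder to isolate. The toy sketch below illustrates the idea on a raw NumPy audio buffer; the function, its parameters, and the burst model are hypothetical, and a real VoIP client would need to hook the live microphone stream instead.

```python
import numpy as np

def inject_decoy_keystrokes(audio, sr, rate_hz=2.0, amplitude=0.3, seed=None):
    """Mix short broadband bursts (decoy 'keystrokes') into an audio buffer.

    Hypothetical mitigation sketch: random ~20 ms bursts at roughly
    `rate_hz` events per second mask the timing and spectrum of real
    key presses in the captured audio.
    """
    rng = np.random.default_rng(seed)
    out = audio.copy()
    burst_len = int(0.02 * sr)                     # ~20 ms per decoy press
    n_decoys = int(rate_hz * len(audio) / sr)      # expected decoy count
    for _ in range(n_decoys):
        start = rng.integers(0, len(audio) - burst_len)
        out[start:start + burst_len] += amplitude * rng.standard_normal(burst_len)
    return out

sr = 16000
clean = np.zeros(sr)                               # 1 s of silence for the demo
noisy = inject_decoy_keystrokes(clean, sr, seed=1)
print(np.abs(noisy).max() > 0)                     # True: decoys were mixed in
```

The design trade-off is audibility: louder or more frequent decoys degrade attack accuracy more but also make the call noisier, so a deployment would tune `rate_hz` and `amplitude` against call quality.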

This pioneering research spotlights acoustic emanations as a ripe and underestimated attack surface. At the same time, it lays the groundwork for fostering greater awareness and developing robust countermeasures. Continued innovation on both sides of the security divide will be crucial.

The post Protect against new AI attack vector using keyboard sounds to guess passwords over Zoom appeared first on CryptoSlate.