Updated August 18th, 2023 at 11:11 IST

This AI program can steal your password by listening to you type

According to research by a team of computer scientists, the AI model has been trained to identify the sounds produced by keystrokes on a 2021 MacBook Pro.

Reported by: Megha Rawat
AI can now steal passwords using a unique method. | Image: Representative/Pixabay

The development of AI tools may have some negative consequences. A study posted to arXiv, the preprint server operated by Cornell University in the United States, found that AI tools could be used to steal passwords with near-perfect accuracy. An AI programme trained on recordings captured by a nearby smartphone could accurately reproduce a typed password 95 per cent of the time.

The research, conducted by a team of computer scientists from the United Kingdom, trained the AI model to identify the sounds produced by keystrokes on a 2021 MacBook Pro. Even during a Zoom video conference, the AI tool proved remarkably accurate at 'listening' to keystrokes picked up by the laptop's microphone.

In the Zoom scenario, the AI programme identified the keystrokes with an accuracy rate of 93 per cent, which the researchers say sets a new benchmark for this type of attack conducted over a video call. The researchers also stressed an alarming fact: many users are unaware that malicious individuals may be monitoring their typing in order to exploit vulnerabilities and gain unauthorised access to accounts. This type of cyberattack is called an 'acoustic side-channel attack'.

What is an acoustic side-channel attack?

An acoustic side-channel attack is a type of cyberattack that exploits the unintended sounds or vibrations produced by a computing device to gather sensitive information. A 'side-channel' attack, more broadly, takes advantage of information that leaks out while a system operates, such as timing, power usage, electromagnetic radiation and, in this case, sound.

In an acoustic side-channel attack, the attacker uses specialised tools or techniques to capture the acoustic emissions a device produces during operation. These include the sounds made by internal computer components as they process data, as well as keystrokes on a keyboard or the clicks of a mouse. Such emissions can reveal valuable details about the device's activity, such as the timing and order of keystrokes or other user inputs.

By analysing the captured acoustic signals, an attacker may be able to determine sensitive information being entered on the targeted device, such as passwords, PINs or other confidential data. This type of attack is considered especially dangerous because users do not realise that these acoustic emissions can be exploited to compromise their security.

The study emphasised that keyboard acoustic emissions are ubiquitous, which not only makes them a readily accessible avenue for attacks but also leads people to underestimate the danger they pose and, as a result, to take no preventative measures. For instance, people frequently cover their screens when entering passwords, but they rarely think to do the same for the sound their keyboards make.

To train the model, the researchers pressed each of the laptop's keys 25 times and recorded the results. The AI software could then 'listen' for the identifying traits of each key press, such as its particular sound wavelengths. The smartphone used for the recordings, an iPhone 13 mini, was positioned 17 centimetres away from the keyboard.
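The training-and-matching idea described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the researchers' method: it assumes each key emits a tone at a distinct frequency (real attacks classify spectrograms of genuine keystroke recordings with deep neural networks), and all names, frequencies and the nearest-fingerprint classifier are hypothetical choices for this example.

```python
import numpy as np

SAMPLE_RATE = 16_000  # audio samples per second (assumed)

def simulate_keystroke(freq_hz, duration_s=0.05, noise=0.05, rng=None):
    """Generate a noisy sine burst standing in for one key-press sound."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t) + noise * rng.standard_normal(t.size)

def dominant_frequency(signal):
    """Return the strongest frequency component of a recording via FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum)]

# "Training": record each key 25 times (as in the study) and store the
# mean dominant frequency as that key's acoustic fingerprint.
key_tones = {"a": 400.0, "b": 800.0, "c": 1200.0}  # assumed per-key tones
rng = np.random.default_rng(42)
fingerprints = {
    key: np.mean([dominant_frequency(simulate_keystroke(f, rng=rng))
                  for _ in range(25)])
    for key, f in key_tones.items()
}

def classify(signal):
    """Match a captured keystroke to the nearest stored fingerprint."""
    f = dominant_frequency(signal)
    return min(fingerprints, key=lambda k: abs(fingerprints[k] - f))
```

Running `classify(simulate_keystroke(800.0))` matches the burst to key "b": the per-key fingerprints learned from 25 samples act like the acoustic profiles the AI model builds, and an unknown keystroke is labelled by whichever profile it resembles most.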


Published August 18th, 2023 at 07:51 IST