Researchers Reveal AI Can See Your Screen Through HDMI Cable Leaks


Written by Kiara Fabbri, Multimedia Journalist
Fact-checked by Justyn Newman, Head Content Manager

A new study shows that hackers could intercept the electromagnetic radiation leaking from HDMI cables and use artificial intelligence to decode what is displayed on the screen. One of the researchers points out in an interview with New Scientist that this form of eavesdropping is primarily a threat to high-security environments rather than to ordinary users.

The connection between computers and screens was once entirely analog, but today it is mostly digital, transmitting data in binary through high-definition multimedia interface (HDMI) cables. Whenever a signal travels through a wire, some electromagnetic radiation leaks out, and with analog signals, hackers could relatively easily intercept this leak, the study explains.

The research report refers to attacks that exploit this phenomenon as TEMPEST attacks. Although digital signals are more complex and carry more data, making them harder to decode, they can still be intercepted.

Federico Larroca and his team at the Universidad de la República in Uruguay demonstrated this vulnerability by developing an AI model capable of reconstructing screen content from digital signals intercepted a few meters away.

They evaluated the model’s performance by comparing the text extracted from the reconstructed image with the text on the original screen. The AI achieved an error rate of around 30 percent, meaning that most of the text could be read accurately despite some characters being misinterpreted.
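For illustration, one common way to quantify such an error rate is to count the character-level edits needed to turn the recovered text back into the original, divided by the original’s length. The Python sketch below shows this under that assumption; the function and the sample strings are hypothetical and not taken from the study.

# Hypothetical sketch: measure how much of the recovered text had to be
# corrected, using Levenshtein (edit) distance. Sample strings are invented.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def character_error_rate(reference: str, hypothesis: str) -> float:
    """Share of the reference text that had to be corrected."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

original = "transfer $1,250 to account 4402"
recovered = "transfer $1,2S0 to acc0unt 44O2"   # a few misread characters
print(f"character error rate: {character_error_rate(original, recovered):.0%}")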

Hackers could use this technique to spy on sensitive information displayed on a screen, such as passwords and bank details. They could do so by intercepting the signals with remote antennas or hidden devices.

Larroca told New Scientist, “Governments are worried about this, [but] I wouldn’t say that the normal user should be too concerned. […] But if you really care about your security, whatever your reasons are, this could be a problem.”

To mitigate these risks, the researchers propose two countermeasures, both implemented by modifying the displayed image in a way that is almost imperceptible to the user yet disrupts the AI’s ability to decode the intercepted signals. One method adds low-level noise to the image, which acts as an adversarial attack on the neural network and renders the recovered text largely illegible. The other applies a color gradient to the image’s background, such as a white-to-black ramp, which significantly alters the intercepted signal.
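As a rough illustration of those two ideas, the Python sketch below applies them to an RGB frame with NumPy. The noise here is random rather than the specifically crafted adversarial perturbation the researchers describe, and the amplitude, brightness threshold, and frame size are assumptions chosen purely for demonstration.

import numpy as np

def add_low_level_noise(frame: np.ndarray, amplitude: int = 3) -> np.ndarray:
    """Add barely visible noise to every pixel (random here; the paper
    describes noise crafted as an adversarial perturbation)."""
    noise = np.random.randint(-amplitude, amplitude + 1, frame.shape)
    return np.clip(frame.astype(int) + noise, 0, 255).astype(np.uint8)

def add_background_ramp(frame: np.ndarray) -> np.ndarray:
    """Blend a horizontal white-to-black gradient into bright background
    pixels, leaving darker foreground content such as text untouched."""
    h, w, _ = frame.shape
    ramp = np.linspace(255, 0, w, dtype=np.uint8)            # white -> black
    ramp = np.broadcast_to(ramp[None, :, None], frame.shape)
    background = frame.mean(axis=2, keepdims=True) > 200     # assumed threshold
    return np.where(background, ramp, frame).astype(np.uint8)

frame = np.full((1080, 1920, 3), 255, dtype=np.uint8)        # plain white screen
protected = add_background_ramp(add_low_level_noise(frame))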
