LinkedIn, Facial Recognition, and the Risk of Becoming the Next Hacking Victim

PhenomenalMAG Staff  |  Business Savvy

We may want to think twice about the photos we place on social media. Our online personas leak information in many ways that malicious hackers can exploit.

In an experiment, researchers from an IBM cybersecurity division took Thomas Fox-Brewster’s LinkedIn profile image -- with his consent -- for their own “nefarious” purposes. During a video call, they pointed a laptop’s camera at his face. Once the software recognized him by comparing his live visage to the LinkedIn image, it unlocked and launched its payload: a mock version of the infamous WannaCry ransomware.

Simple, but effective: if hackers wanted to target a specific person, they could harvest that person’s images from social media, infect a computer network, and launch the attack only when a camera detected the target’s face. The same trigger could be built on voice recognition or any other aspect of a person’s physical being that a computer can record.

The facial recognition-based attack was part of a proof-of-concept, AI-powered malware created by the IBM team and dubbed DeepLocker.

“DeepLocker is a new class of highly evasive and highly targeted malware that fundamentally differs from any malware that exists today,” Dr. Marc Ph. Stoecklin, principal research scientist for cognitive cybersecurity intelligence at IBM Research, told Forbes. The malware conceals its intent until the artificial intelligence within it identifies the target via indicators such as facial recognition, voice recognition, or geolocation.
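The concealment idea IBM describes can be sketched in miniature: the payload stays encrypted, and the decryption key is never stored anywhere -- it can only be derived from the trigger attribute itself (here, a stand-in for a recognized face), so inspecting the code reveals neither the payload nor the intended victim. This is a toy illustration under assumed names; the hash-as-key scheme and XOR “cipher” are simplifications, not IBM’s implementation:

```python
import hashlib

def derive_key(trigger_attribute: bytes) -> bytes:
    # The key is a hash of the observed attribute: only the correct
    # target attribute reproduces the correct key.
    return hashlib.sha256(trigger_attribute).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher, for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A benign stand-in "payload", encrypted under a key derived from the
# intended target's attribute (a hypothetical face-embedding string).
target_id = b"face-embedding-of-intended-target"
ciphertext = xor_cipher(b"benign demo payload", derive_key(target_id))

# On the wrong machine (a different face), the derived key is wrong
# and decryption yields only garbage; on the target, it succeeds.
wrong = xor_cipher(ciphertext, derive_key(b"someone-else"))
right = xor_cipher(ciphertext, derive_key(target_id))
```

The point of the design is that static analysis of the ciphertext alone cannot recover the payload or identify who would trigger it.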

Ultimately, the researchers want DeepLocker to help them understand the future of security and, possibly, cyberwarfare. “Things are going to be AI vs. AI in the future,” Stoecklin said.

Stoecklin and his colleagues Dhilung Kirat and Jiyong Jang have been researching how to combine AI with cybersecurity. Outside of DeepLocker, they’ve been exploring ways in which IBM’s famous Watson AI tech can assist security teams.

Is there anything we users can truly do to avoid being caught up in such an attack? The answer is simple, yet not so social: “if you don’t use a photo of your face, it can’t correlate you across sites.”

But where would the fun be in that?


© All Rights Reserved.
