Have you ever seen a dark shape out of the corner of your eye and thought it was a person, only to breathe a sigh of relief when you realized it was a coat rack or some other innocuous item in your house? It's a harmless trick of the eye, but what would happen if that trick were played on something like an autonomous car or a drone?
That question isn't hypothetical. Kevin Fu, a professor of engineering and computer science at Northeastern University who specializes in finding and exploiting vulnerabilities in emerging technologies, figured out how to make the kind of self-driving cars Elon Musk wants to put on the road hallucinate.
By revealing an entirely new kind of cyberattack, an "acoustic adversarial" attack on machine learning that Fu and his team have aptly dubbed Poltergeist, Fu hopes to get ahead of the ways hackers could exploit these technologies—with disastrous consequences.