Artificial Intelligence (AI) has been seen as a potential solution for automatically detecting and combating malware, and stopping cyber attacks before they affect an organization.
However, the same technology can also be weaponized by threat actors to power a new generation of malware that evades even the best cyber-security defenses and infects a computer network or launches an attack only when the target's face is detected by the camera.
To demonstrate this scenario, security researchers at IBM Research came up with DeepLocker, a new breed of "highly targeted and evasive" attack tool powered by AI, which conceals its malicious intent until it reaches a specific victim.
According to the IBM researchers, DeepLocker flies under the radar and "unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition."
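The core idea the researchers describe is that the payload stays locked until an AI model recognizes the intended target, at which point the recognition output itself yields the unlocking key. The following is only a minimal, benign Python sketch of that keying concept, not IBM's implementation: the embed() function stands in for a hypothetical recognition model, and the "payload" is just a demo string.

```python
# A toy illustration of AI-gated unlocking: a harmless message is locked to one
# specific "target" input, and only that input reproduces the key.
import hashlib


def embed(frame: bytes) -> bytes:
    # Hypothetical stand-in for a recognition model (face, voice, geolocation).
    # In the concept described above, a neural network would produce a stable
    # output only for the intended target.
    return hashlib.sha256(b"attributes:" + frame).digest()


def derive_key(attribute_vector: bytes) -> bytes:
    # The key is never stored in the sample; it only exists when the model
    # observes the right input, which is what frustrates static analysis.
    return hashlib.sha256(attribute_vector).digest()


def xor_transform(data: bytes, key: bytes) -> bytes:
    # Toy symmetric XOR "cipher": applying it twice with the same key
    # restores the original bytes; a wrong key yields garbage.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(d ^ s for d, s in zip(data, stream))


# Lock a harmless demo message to one specific "target" frame.
target_frame = b"frame-of-the-intended-target"
locked = xor_transform(b"demo message unlocked", derive_key(embed(target_frame)))

# Only the matching frame recovers the plaintext; any other input does not.
for frame in (b"some-other-person", target_frame):
    print(frame, "->", xor_transform(locked, derive_key(embed(frame))))
```

Under these assumptions, the locked blob reveals nothing about its trigger condition, which is the property the researchers highlight as making such samples hard to analyze.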
In contrast to the "spray and pray" approach of traditional malware, researchers believe this kind of stealthy AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected.