Google security researchers have identified what they say is the first known case of hackers using AI-powered malware in a real-world cyberattack, according to findings published Wednesday.

Why it matters: The discovery suggests adversarial hackers are moving closer to operationalizing generative AI to supercharge their attacks.

Driving the news: Researchers in Google's Threat Intelligence Group have discovered two new malware strains — PromptFlux and PromptSteal — that use large language models to change their behavior mid-attack.

• Both malware strains can "dynamically generate malicious scripts, obfuscate their own code to evade detection and leverage AI models to create malicious functions on demand," according to the report.

Zoom in: Google's team found PromptFlux while scanning
