Cybersecurity researchers have discovered what they say is the earliest known example to date of malware that bakes in Large Language Model (LLM) capabilities.

The malware has been codenamed MalTerminal by SentinelOne's SentinelLABS research team. The findings were presented at the LABScon 2025 security conference.

In a report examining the malicious use of LLMs, the cybersecurity company said AI models are increasingly being used by threat actors for operational support, as well as embedded directly into their tools – an emerging category called LLM-embedded malware that's exemplified by the appearance of LAMEHUG (aka PROMPTSTEAL) and PromptLock.

This includes the discovery of a previously unreported Windows executable called MalTerminal that uses OpenAI GPT-4 to dynamically
