Cato Networks says it has discovered a new attack, dubbed "HashJack," that hides malicious prompts after the "#" in legitimate URLs, tricking AI browser assistants into executing them while dodging traditional network and server-side defenses.
Prompt injection occurs when something causes text that the user didn't write to become commands for an AI bot. Direct prompt injection happens when unwanted text gets entered at the point of prompt input, while indirect injection happens when content, such as a web page or PDF that the bot has been asked to summarize, contains hidden commands that AI then follows as if the user had entered them. AI browsers, a relatively new type of web browser that uses AI to try and guess user intent and take autonomous actions, have so far proven to be particula

The Register

Jackson Citizen Patriot
WAND TV
The Hacker News
CNN Business
KWQC
PC World
Vogue
Fast Company Technology
VARIETY
Santa Maria Times Local
Raw Story