Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI's ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users' memories and chat histories without their knowledge.
The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI's GPT-4o and GPT-5 models. OpenAI has since addressed some of them .
These issues expose the AI system to indirect prompt injection attacks , allowing an attacker to manipulate the expected behavior of a large language model (LLM) and trick it into performing unintended or malicious actions, security researchers Moshe Bernstein and Liv Matan said in a report shared with The Hacker News.
The identified shortcomings are liste

The Hacker News

Tech Times
PC World
New York Magazine Intelligencer
AlterNet
People Shopping
The Daily Beast
IndyStarSports
WFMJ-TV
Page Six