OpenAI's ChatGPT can easily be coaxed into leaking your personal data — with just a single "poisoned" document.

As Wired reports, security researchers revealed at this year's Black Hat hacker conference that highly sensitive information can be stolen from a Google Drive account with an indirect prompt injection attack. In other words, hackers feed a document with hidden, malicious prompts to an AI that controls your data instead of manipulating it directly with a prompt injection, one of the most serious types of security flaws threatening the safety of user-facing AI systems.

ChatGPT's ability to be linked to a Gmail account allows it to rifle through your files, which could easily expose you to simple hacks.

This latest glaring lapse in cybersecurity highlights the tech's enormous sho

See Full Page