Last week, OpenAI unveiled ChatGPT Atlas, a web browser that promises to revolutionise how we interact with the internet. The company’s CEO, Sam Altman, described it as a “once-a-decade opportunity” to rethink how we browse the web.
The promise is compelling: imagine an artificial intelligence (AI) assistant that follows you across every website, remembers your preferences, summarises articles, and handles tedious tasks such as booking flights or ordering groceries on your behalf.
But beneath the glossy marketing lies a more troubling reality. Atlas is designed to be “agentic”, able to autonomously navigate websites and take actions in your logged-in accounts. This introduces security and privacy vulnerabilities that most users are unprepared to manage.
While OpenAI touts innovation, it’s quietly shifting the burden of safety onto unsuspecting consumers who are being asked to trust an AI with their most sensitive digital decisions.
What makes agent mode different
At the heart of Atlas’s appeal is “agent mode”.
Unlike traditional web browsers where you manually navigate the internet, agent mode allows ChatGPT to operate your browser semi-autonomously. For example, when prompted to “find a cocktail bar near you and book a table”, it will search, evaluate options, and attempt to make a reservation.
The technology works by giving ChatGPT access to your browsing context. It can see every open tab, interact with forms, click buttons and navigate between pages just as you would.
Combined with Atlas’s “browser memories” feature, which logs websites you visit and your activities on them, the AI builds an increasingly detailed understanding of your digital life.
This contextual awareness is what enables agent mode to work. But it’s also what makes it dangerously vulnerable.
A perfect storm of security risks
The risks inherent in this design go beyond conventional browser security concerns.
Consider prompt injection attacks, where malicious websites embed hidden commands that manipulate the AI’s behaviour.
Imagine visiting what appears to be a legitimate shopping site. The page, however, contains invisible instructions directing ChatGPT to scrape personal data from all open tabs, such as an active medical portal or a draft email, and then extract the sensitive details without ever needing to access a password.
Similarly, malicious code on one website could potentially influence the AI’s behaviour across multiple tabs. For example, a script on a shopping site could trick the AI agent into switching to your open banking tab and submitting a transfer form.
Atlas’s autofill capabilities and form interaction features can become attack vectors. This is especially the case when an AI is making split-second decisions about what information to enter and where to submit it.
The personalisation features compound these risks. Atlas’s browser memories create comprehensive profiles of your behavior: websites you visit, what you search for, what you purchase, and content you read.
While OpenAI promises this data won’t train its models by default, Atlas is still storing more highly personal data in one place. This consolidated trove of information represents a honeypot for hackers.
Should OpenAI’s business model evolve, it could also become a gold mine for highly targeted advertising.
OpenAI says it has tried to protect users’ security and has run thousands of hours of focused simulated attacks. It also says it has “added safeguards to address new risks that can come from access to logged-in sites and browsing history while taking actions on your behalf”.
However, the company still acknowledges “agents are susceptible to hidden malicious instructions, [which] could lead to stealing data from sites you’re logged into or taking actions you didn’t intend”.
A downgrade in browser security
This marks a major escalation in browser security risks.
For example, sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation.
But in Atlas, the AI agent isn’t malicious code – it’s a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation.
And while most AI safety concerns have focused on the technology producing inaccurate information, prompt injection is more dangerous. It’s not the AI making a mistake; it’s the AI following a hostile command hidden in the environment.
Atlas is especially vulnerable because it gives human-level control to an intelligence layer that can be manipulated by reading a single malicious line of text on an untrusted site.
Think twice before using
Before agentic browsing becomes mainstream, we need rigorous third-party security audits from independent researchers who can stress-test Atlas’s defenses against these risks. We need clearer regulatory frameworks that define liability when AI agents make mistakes or get manipulated. And we need OpenAI to prove, not simply promise, that its safeguards can withstand determined attackers.
For people who are considering downloading Atlas, the advice is straightforward: extreme caution.
If you do use Atlas, think twice before you enable agent mode on websites where you handle sensitive information. Treat browser memories as a security liability and disable them unless you have a compelling reason to share your complete browsing history with an AI. Use Atlas’s incognito mode as your default, and remember that every convenience feature is simultaneously a potential vulnerability.
The future of AI-powered browsing may indeed be inevitable, but it shouldn’t arrive at the expense of user security. OpenAI’s Atlas asks us to trust that innovation will outpace exploitation. History suggests we shouldn’t be so optimistic.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Uri Gal, University of Sydney
Read more:
- Creativity is good for the brain and might even slow down its ageing – new study
- AI is changing who gets hired – what skills will keep you employed?
- Most Australian government agencies aren’t transparent about how they use AI
Uri Gal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


The Conversation
The radio station 99.5 The Apple
WRCB-TV
NBC News
Dakota News Now Sports
AlterNet
NHL Arizona Coyotes
CNN
New York Post