OpenAI's new browser sparks privacy, security concerns

Illustration: Sarah Grillo/Axios
OpenAI's new browser, Atlas, is triggering fresh privacy and security alarms — and no one's quite sure how to navigate them.
Why it matters: Browsers are the gateway to the internet, and they gobble up some of users' most sensitive information, like passwords and credit card numbers.
Driving the news: OpenAI released Atlas, its highly anticipated ChatGPT browser, for macOS last week.
- Immediately, privacy hawks started raising concerns about the amount of data the browser collects about users, which far surpasses that of any other browser on the market.
- Security researchers also flagged concerns with how the browser defends against prompt injections, where attackers hide malicious commands in websites and emails to trick the AI into violating its own rules.
The big picture: OpenAI joins a growing list of companies racing to embed AI into browsers, including Microsoft, Google and Perplexity.
- But Atlas goes further: The ChatGPT agent can autonomously complete tasks on websites at a user's request, and search queries go through ChatGPT rather than Google.
Between the lines: That autonomy requires Atlas to gather and remember far more about users than traditional browsers.
- Unlike traditional browsers, Atlas also builds "memories" from those searches that could help the browser deduce if someone is planning a trip, needs to reorder household supplies that week or should look up recipes at a specific time.
What they're saying: "The browser wars aren't about tabs and search anymore," Steve Wilson, founder and co-chair of the OWASP Gen AI Security Project and chief AI officer at cybersecurity company Exabeam, told Axios.
- "They're about whether we can keep our new digital coworkers from going rogue."
Zoom in: The list of novel security and privacy threats is growing as experts dig into Atlas' capabilities.
- Lena Cohen, a staff technologist at the Electronic Frontier Foundation, told the Washington Post that in her testing, Atlas memorized queries about "sexual and reproductive health services via Planned Parenthood Direct" — and even the name of a real doctor. Such searches have been used to prosecute people in states where abortion access is restricted.
- OpenAI says it has improved its systems and that Atlas isn't intended to remember details about a user's medical care.
- In agent mode, Atlas could be tricked into booking a hotel room, deleting files or sending messages to someone in a user's contacts, if a malicious website embedded hidden prompts into its design.
- Researchers at SquareX said Saturday that they were able to trick Atlas into visiting a malicious site disguised as the Binance cryptocurrency exchange login page.
Reality check: OpenAI says Atlas is not supposed to retain sensitive information such as government IDs, banking details, passwords, addresses, medical records, or financial data.
- Users can also tell Atlas not to remember certain websites and manually delete memories from its archive.
- OpenAI says it has controls in place to prevent agents from running code, downloading files or using autofill data to complete tasks. Some sensitive tasks will also require users to watch the agents' actions.
- But some security experts say it's too early to trust any AI browser, and that the risks outweigh the benefits of what they can currently accomplish.
The intrigue: OpenAI CISO Dane Stuckey said Wednesday in a lengthy social media post that his team has conducted red-teaming exercises, used novel model training tactics to incentivize ChatGPT to ignore malicious instructions, implemented unique guardrails and safety measures, and added new features to stop prompt injection attacks.
- But Stuckey also admitted that prompt injection attacks remain largely an "unsolved security problem" across all AI platforms, and adversaries are likely going to spend "significant time and resources" to fool ChatGPT.
- OpenAI published tips for staying ahead of prompt injection attacks on Instagram over the weekend.
- Researchers at Brave (which makes its own browser) published a report Tuesday detailing how AI browsers, including Perplexity's Comet browser, are also susceptible to prompt injections.
What to watch: Law enforcement is already demanding ChatGPT user data. A first-of-its-kind warrant obtained by Forbes last week shows that investigators are requesting users' ChatGPT histories as part of ongoing cases.
Editor's note: This story was corrected to reflect that SquareX said Saturday (not Thursday) that they were able to trick Atlas.
