back to blog

Perplexity Comet and OpenAI Atlas are an enterprise security nightmare

Read Time 5 mins | Written by: Cole

Perplexity Comet and OpenAI Atlas are an enterprise security nightmare

AI browsers are phenomenally fast. Ask Perplexity Comet to research a competitor and it scans dozens of sources in seconds. Ask OpenAI Atlas to handle your calendar and it works across multiple platforms simultaneously. For executives drowning in information, these tools are game-changers.

They're also security nightmares.

Recent research reveals that AI browsers can be compromised through malicious links and prompt injection attacks, giving attackers access to your emails, banking accounts, and corporate systems. The very features that make them productive—autonomous access to everything you're logged into—make them catastrophically vulnerable.

The browser security model has changed

Traditional web browsers operate within a security sandbox. They use protections like same-origin policy and cross-origin resource sharing to prevent malicious websites from accessing your data across different domains. AI browsers break this model entirely.

When an AI assistant follows malicious instructions embedded in webpage content, those traditional protections become useless. If you give the AI browser access to operate with full privileges across all authenticated sessions, it could have access to your:

  • Banking accounts
  • Corporate systems
  • Private emails
  • Cloud storage

The fundamental problem: users no longer engage directly with suspicious content. You never see the red flags. Human intuition—your first line of defense against phishing and scams—is completely excluded from the process.

Early security vulnerabilities in AI browsers

Perplexity Comet security issues

Security researchers at LayerX discovered a vulnerability called "CometJacking" in Perplexity Comet. The attack works through specially crafted URLs:

  1. An attacker creates a malicious link and sends it via email or embeds it in a website
  2. When you (or your agent) clicks the link, hidden commands instruct Comet's AI to access your sensitive data—emails, calendar entries, anything the browser has touched
  3. The AI then encodes this information to bypass security measures and sends it to attacker-controlled servers

One click. That's all it takes.

The prompt injection problem: Comet also falls victim to a different class of exploit. When you ask it to "summarize this webpage," the browser feeds webpage content directly to its language model without distinguishing between your instructions and untrusted content from the page itself.

Researchers showed how summarizing a Reddit post containing hidden malicious instructions could trigger an automatic account takeover.

Phishing susceptibility: Guardio Labs demonstrated that Comet would:

  • Willingly purchase items from obviously fraudulent websites that any human would immediately recognize as scams
  • Scan a clear phishing email, visit the malicious website, and prompt the user for banking credentials – without any warning

OpenAI Atlas security issues

OpenAI's Atlas browser faces its own serious security issues:

Memory corruption attacks: Researchers discovered that attackers can inject malicious instructions into ChatGPT's persistent memory using cross-site request forgery techniques. Once your ChatGPT memory is infected:

  • The malicious instructions persist across every device you use—home computers, work laptops, and any browser
  • When you later try to use ChatGPT for legitimate purposes, those tainted memories activate and execute the attacker's commands

Weak anti-phishing protections: Independent testing revealed alarming statistics:

  • Microsoft Edge blocked 53% of threats
  • Google Chrome blocked 47%
  • Atlas stopped only 5.8%

This makes Atlas users up to 90% more vulnerable to phishing attacks than users of traditional browsers.

Why traditional browser security doesn't apply

OpenAI's Chief Information Security Officer has acknowledged that prompt injection attacks remain an "unsolved security problem." AI browsers face a fundamental challenge: they cannot distinguish between legitimate user intent and malicious instructions—whether those instructions come from a crafted URL, hidden text on a webpage, or poisoned memory.

The security model that protected traditional browsers simply doesn't work here:

  • Browser sandboxing no longer applies—AI browsers need access outside the sandbox to be useful, breaking decades of security architecture
  • The AI becomes a single point of failure—compromise the AI's decision-making and you compromise everything it can access
  • Trust chains are fundamentally altered—in traditional browsing, you evaluate each website and decide whether to trust it. With AI browsers, the AI makes those decisions for you, and it can't distinguish malicious content from legitimate requests

AI browser security risks for enterprise

Beyond active security exploits, AI browsers create unprecedented privacy risks through their normal operation.

Comprehensive surveillance: Atlas observes every page you visit, what you read, how long you stay, and what you do next. This creates a single, comprehensive record of your intent and behavior.

Sensitive data exposure: In independent testing, Atlas memorized queries about sensitive health services, including the names of real doctors. This type of data has been used to prosecute people in states where certain medical procedures are restricted.

Through inference, these browsers can connect ordinary actions to build revealing narratives about users—linking searches, website visits, and activities to paint detailed pictures of:

  • Mental health status
  • Career plans and job dissatisfaction
  • Financial situations
  • Personal relationships

Enterprise risks: The business implications are serious:

  • Corporate credential exposure – employees using AI browsers while logged into company systems grant the AI, and potentially attackers, access to proprietary data, internal communications, and customer information
  • Cross-contamination – users who employ the same account for both work and personal browsing create pathways for data leakage between contexts
  • Compliance violations – many industries have strict data handling requirements that AI browser memory and data collection practices may violate

What leaders should do right now

Survey your organization to identify which employees are already using AI browsers—many early adopters install these tools without IT approval.

Establish clear policies on AI browser usage for work purposes—consider prohibiting them entirely until the security landscape matures.

Mandate isolation—if employees insist on using AI browsers, require that they never use them while logged into banking, corporate email, CRM systems, or any work-related accounts.

Wait and let it mature

The safest strategy may be to wait and let the technology mature:

  • First-generation products often have security issues that get resolved over time
  • Being an early adopter of AI browsers means accepting elevated risk
  • If you must experiment, restrict usage to non-sensitive, low-stakes contexts where compromise wouldn't cause significant damage

Technical safeguards

For organizations that choose to allow limited usage:

  • Implement tools that can detect and block malicious URLs
  • Mandate separate browsers for work versus experimentation
  • Disable agent mode and memory features for any work-related contexts
  • Monitor authentication patterns for unusual activity

Vendor accountability

Demand more from AI browser vendors:

  • Request independent security audits from reputable third-party researchers before deployment
  • Require clear liability frameworks that define responsibility when AI agents make mistakes or get manipulated into taking harmful actions
  • Evaluate how vendors respond to security disclosures–dismissing vulnerabilities as having "no security impact" should raise serious red flags

So, should you use an AI browser at work?

For most organizations, the answer is no—at least not yet.

The vulnerabilities researchers have discovered aren't edge cases or theoretical attacks. They're fundamental to how AI browsers work. Before deploying them in your organization, ask yourself: Is the productivity gain worth potentially exposing your corporate systems, customer data, and employee credentials to attacks that traditional security measures can't prevent?

Treat AI browsers as experimental tools, unsuitable for any context involving sensitive data or authenticated accounts. Your employees may already be using them. Find out now—and establish policies that protect your organization from this new threat.

Don't Miss
Another Update

Subscribe to be notified when
new content is published
Cole

Cole is Codingscape's Content Marketing Strategist & Copywriter.