Organizations considering the use of AI-powered browsers such as Comet or Atlas should proceed with caution. According to a recent report from Gartner, these tools introduce significant security risks that most enterprises are not yet prepared to manage.
In the report, Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts describe so-called “agentic browsers” as technologies that could fundamentally change how users interact with websites and automate online actions. However, they stress that these benefits come with serious cybersecurity concerns. For the foreseeable future, Gartner advises CISOs to block AI browsers altogether in order to reduce risk exposure.
One of the core issues is how much data these browsers can access and transmit. MJ Kaufmann, author and instructor at O’Reilly Media, points out that AI browser sidebars can unintentionally capture everything visible across open tabs. This may include internal tools, login credentials or confidential documents, which can then be sent to an external AI back end without the user fully realizing it.
AI browsers also differ fundamentally from traditional browsers in how they understand user activity. Alex Lisle, CTO of Reality Defender, explains that while standard browsers isolate websites into separate tabs, AI browsers have visibility across all open tabs and their contents. This broader context allows them to be more helpful, but it also means they collect and process far larger volumes of sensitive information.
Dan Pinto, CEO and co-founder of Fingerprint, adds another layer of concern. Because the AI assistant is embedded directly into the browsing experience, it can interpret web pages and act on instructions that are not visible to users. If those instructions are malicious, the assistant may still follow them. This could result in clicking harmful links, completing forms or sending personal data without the user’s awareness.
Challenging Traditional Browser Security Models
Gartner’s concern that AI browsers may transmit active web content, open tabs and browsing history to cloud services is shared by many security leaders. Chris Anderson, CEO of ByteNova, notes that browsers often contain highly sensitive information at any given moment, from financial systems to patient records. Once such data is leaked, it cannot simply be reset or recovered.
The issue is compounded by the shift from passive assistance to autonomous action. As AI browsers adopt agentic behaviors and protocols such as the Model Context Protocol, they begin to challenge long-standing assumptions about browser security.
Randolph Barr, CISO of Cequence Security, observes that AI-native browsers introduce system-level capabilities that traditional browsers have deliberately avoided for decades. This change undermines established security boundaries that organizations rely on.
Barr also highlights the risks associated with personal device usage. Employees often experiment with new technologies at home before bringing them into the workplace through BYOD policies, browser synchronization or remote work setups. As users grow comfortable with AI browsers in their personal lives, those habits can quickly spill over into enterprise environments.
Another concern is how easily AI browsers can be identified by attackers. According to Barr, these browsers exhibit distinct fingerprints across APIs, extensions, DOM behavior and network patterns. With minimal effort, attackers can detect AI browsers and, using AI-driven classification at scale, target users operating in these higher-risk environments.
In his view, AI browsers are advancing faster than the safeguards designed to protect users and organizations. For them to be viable in regulated or sensitive contexts, transparency around system capabilities, independent audits and granular control over embedded features are essential. Gartner’s warning, he argues, helps expose these gaps and may push the industry toward more secure and transparent designs before widespread enterprise adoption.
The Limits of Assessing AI Back Ends
Gartner suggests that some risks could be mitigated by evaluating the AI services that power these browsers and determining whether their security controls meet organizational standards. In practice, however, many experts see this as unrealistic.
Will Tran, vice president of research at Spin.AI, argues that proprietary AI models function as black boxes. Vendors rarely allow customers to audit training data, internal logic or prompt handling, and in some cases even the vendors themselves may not fully understand their own models’ behavior.
Akhil Verghese, co-founder and CEO of Krazimo, echoes this skepticism. He notes that AI browsers provide little visibility into what happens to data before it reaches the underlying AI provider, and that terms of service can change over time. Expecting individuals or organizations to continuously monitor these details is not practical.
Why Training Alone Falls Short
Even if an organization decides to allow an AI browser, Gartner recommends educating employees that any on-screen information could be sent to the AI back end. Users should avoid keeping highly sensitive data open while using AI features such as summaries or autonomous actions.
Erich Kron, CISO advisor at KnowBe4, agrees that awareness is essential but emphasizes that one-time training is not enough. Employees need regular reminders, otherwise day-to-day work pressures will cause them to forget the risks.
Still, education may not fully prevent data exposure. Chris Hutchins, CEO of Hutchins Data Strategy Consultants, argues that the productivity gains promised by AI automation make it unrealistic to expect employees to consistently change their behavior, especially when they do not perceive the data they handle as particularly sensitive. This creates a shadow IT problem where security teams lack visibility into how data is being used and where it is going.
Lionel Litty, CISO and chief security architect at Menlo Security, concludes that even when organizations trust an AI browser vendor, strict technical controls are non-negotiable. These include limiting accessible sites, enforcing robust data loss prevention policies, scanning downloads and actively defending against browser vulnerabilities. AI browsers can easily be redirected toward malicious destinations, and basic URL filtering is no longer sufficient.
For now, Gartner’s position is clear. Until security controls, transparency and governance catch up with the technology, AI browsers remain a risk most enterprises should avoid.
We have helped 20+ companies in industries like Finance, Transportation, Health, Tourism, Events, Education, Sports.