AI Support Bot Sparks Outrage After Fabricating Company Policy

A recent incident involving the AI-powered code editor Cursor has reignited concerns about the reliability of artificial intelligence in customer support roles. On Monday, a developer noticed that switching between devices while using Cursor would unexpectedly log them out—a frustrating issue for programmers who rely on seamless multi-device workflows.

Seeking answers, the user contacted support and received a response from an agent named “Sam,” who explained that the behavior was due to a new policy limiting one device per subscription, citing security reasons. The explanation sounded legitimate—except it wasn’t true. “Sam” was not a human, but an AI support bot, and no such policy existed.

The fabricated response triggered a wave of user backlash. Discussions erupted on Hacker News and Reddit, with many developers expressing outrage over what they believed was a deliberate restriction on functionality. Some even announced subscription cancellations in protest.

The Incident Unfolds

The controversy began when a Reddit user, posting under the name BrokenToasterOven, highlighted the issue with multi-device session logouts, calling it a “significant UX regression.” The user’s subsequent support inquiry was answered with the now-infamous AI-generated policy.

Because the message was phrased with authority and wasn’t marked as AI-generated, users assumed it reflected an actual policy change. Outrage quickly spread through developer communities, with multiple users terminating their subscriptions. “Multi-device workflows are table stakes for devs,” one commenter noted. Others echoed similar sentiments: “I literally just cancelled my sub… We’re purging it completely.”

Three hours later, a Cursor representative stepped in to clarify: “Hey! We have no such policy,” they wrote in a Reddit thread. “Unfortunately, this is an incorrect response from a front-line AI support bot.”

A Familiar AI Failure

This isn’t the first time an AI “hallucination”—a term for when a language model confidently generates false information—has caused real-world damage. In a widely publicized incident in early 2024, Air Canada was forced to honor a refund policy invented by its chatbot after a tribunal ruled the airline accountable for the misinformation.

Unlike Air Canada, Cursor acted quickly to take responsibility. Cofounder Michael Truell issued an apology on Hacker News, explaining that the logout issue stemmed from a backend change intended to improve session security. The affected user was refunded, and the company said AI-generated responses would now be clearly labeled as such.

“We use AI-assisted responses as the first filter for email support,” Truell noted, adding that transparency is now a priority.

Trust, Transparency, and AI

Despite the fix, the incident raised important questions about trust and disclosure. Many users felt misled by the bot’s human-sounding name and unlabeled status. “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive,” one user wrote.

For Cursor, a company promoting AI-powered productivity tools for developers, the irony was not lost on the community. “There is a certain amount of irony that people try really hard to say hallucinations are not a big problem anymore,” one Hacker News commenter wrote, “and then a company that would benefit from that narrative gets directly hurt by it.”

The episode underscores a growing need for businesses to implement stronger safeguards, clearer labeling, and human oversight when deploying AI in customer-facing roles—especially when trust is on the line.
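What those safeguards might look like is a design question each team has to answer for itself, but as a rough illustration, here is a minimal, hypothetical Python sketch of the pattern the incident points toward: every AI-drafted reply carries an explicit disclosure label, and drafts that assert a company policy or fall below a confidence floor are escalated to a human instead of being sent. None of the names, fields, or thresholds here reflect Cursor’s actual system.

```python
from dataclasses import dataclass

# Hypothetical disclosure text; any real deployment would word this to fit its own product.
AI_DISCLOSURE = (
    "This reply was drafted by an automated assistant and may contain errors. "
    "A human agent will review it on request."
)

@dataclass
class DraftReply:
    body: str
    confidence: float   # model-reported confidence, 0.0 to 1.0 (assumed to be available)
    cites_policy: bool  # whether the draft asserts a company policy

def route_reply(draft: DraftReply, confidence_floor: float = 0.8) -> dict:
    """Label AI drafts and escalate risky ones to a human instead of sending them."""
    # Policy claims are exactly where hallucinations do the most damage,
    # so they are always escalated, as are low-confidence drafts.
    if draft.cites_policy or draft.confidence < confidence_floor:
        return {"action": "escalate_to_human", "draft": draft.body}
    # Everything that does go out carries an explicit AI label.
    return {"action": "send", "body": f"{draft.body}\n\n---\n{AI_DISCLOSURE}"}

if __name__ == "__main__":
    risky = DraftReply(
        body="Logouts are expected under our new one-device-per-subscription policy.",
        confidence=0.92,
        cites_policy=True,
    )
    print(route_reply(risky))  # escalates rather than sending the fabricated policy
```

Even a simple gate like this would have routed the fabricated “one device per subscription” answer to a human reviewer before it ever reached a customer.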
