
Image by wayhomestudio, from Freepik
AI Support Glitch Exposes the Risks of Replacing Workers With Automation
A support bot for AI startup Cursor made up a login policy, sparking confusion and user backlash, and raising serious concerns about automated customer service.
In a rush? Here are the quick facts:
- Users canceled subscriptions after a misleading AI response.
- Cofounder confirmed it was an AI hallucination.
- AI-powered support systems save labor costs but risk damaging trust.
Anysphere, the AI startup behind the popular coding assistant Cursor, has hit a rough patch after its AI-powered support bot gave out false information, triggering user frustration and subscription cancellations, as first reported by Fortune.
Cursor, which launched in 2023, has seen explosive growth—reaching $100 million in annual revenue and attracting a near $10 billion valuation. But this week, its support system became the center of controversy when users were mysteriously logged out while switching devices.
A Hacker News user shared the strange experience, revealing that when they reached out to customer support, a bot named “Sam” responded with an email saying the logouts were part of a “new login policy.”
There was just one problem: that policy didn’t exist. The explanation was a hallucination—AI-speak for made-up information. No human was involved.
As news spread through the developer community, trust quickly eroded. Cofounder Michael Truell acknowledged the issue in a Reddit post, confirming it was an “incorrect response from a front-line AI support bot.” He also noted the team was investigating a bug causing the logouts, adding, “Apologies about the confusion here.”
But for many users, the damage was done. “Support gave the same canned, likely AI-generated response multiple times,” said Cursor user Melanie Warrick, co-founder of Fight Health Insurance. “I stopped using it—the agent wasn’t working, and chasing a fix was too disruptive.”
Experts say the incident is a red flag for overreliance on automation. “Customer support requires a level of empathy, nuance, and problem-solving that AI alone currently struggles to deliver,” warned Sanketh Balakrishna of Datadog.
Amiran Shachar, CEO of Upwind, said this mirrors past AI blunders, like Air Canada’s chatbot fabricating a refund policy. “AI doesn’t understand your users or how they work,” he explained. “Without the right constraints, it will ‘confidently’ fill in gaps with unsupported information.”
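One common way to impose those constraints is to let a support bot answer only from verified policy text and escalate everything else to a human. The Python sketch below is illustrative only: the knowledge base, matching logic, and function names are assumptions, not Cursor's actual setup, and a production system would use proper retrieval rather than keyword matching.

```python
# A minimal sketch, assuming a hypothetical support bot: answer only from
# verified policy text and escalate anything else to a human instead of
# letting the model guess.
from dataclasses import dataclass

@dataclass
class PolicySnippet:
    topic: str
    text: str

# Hypothetical vetted knowledge base; a real system would use a reviewed
# document store with retrieval (e.g. embeddings), not keyword matching.
VERIFIED_POLICIES = [
    PolicySnippet("refund", "Refunds are available within 14 days of purchase."),
    PolicySnippet("session", "Sessions expire after 30 days of inactivity."),
]

def answer_support_query(query: str) -> str:
    """Return a grounded answer, or escalate when no verified policy matches."""
    lowered = query.lower()
    for snippet in VERIFIED_POLICIES:
        if snippet.topic in lowered:  # naive stand-in for real retrieval
            return f"Per our documented policy: {snippet.text}"
    # No grounded source: refuse to improvise and hand off to a person.
    return "I'm not sure about that one. Routing you to a human support agent."

print(answer_support_query("Why did my session end when I switched devices?"))
print(answer_support_query("Is there a new single-device login policy?"))
```

The refusal branch is the whole point: when the bot has no verified source, it hands off rather than "confidently" inventing a policy, which is exactly the failure mode users hit with "Sam."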
Security researchers are now warning that such incidents could open the door to more serious threats. A newly discovered vulnerability known as MINJA (Memory INJection Attack) demonstrates how AI chatbots with memory can be exploited through regular user interactions, essentially poisoning the AI’s internal knowledge.
MINJA allows malicious users to embed deceptive prompts that persist in the model’s memory, potentially influencing future conversations with other users. The attack requires no backend access, slips past safety filters, and showed a 95% success rate in testing.
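To make the mechanism concrete, here is a deliberately simplified Python sketch of the memory-poisoning idea: an agent that appends every interaction to a shared memory can have a false “fact” planted through an ordinary chat message, which then surfaces in later users’ conversations. This toy stands in for the published attack, which targets real LLM agents; the names and the keyword matching are illustrative assumptions.

```python
# Toy illustration of memory poisoning, loosely modeled on the MINJA idea:
# a false record planted via normal chat persists in shared memory and
# shapes answers given to other users. Not the published attack itself.

shared_memory: list[str] = []  # records visible to all future sessions

def chat(user_message: str) -> str:
    """Store the interaction, then answer from whatever memory contains."""
    shared_memory.append(user_message)
    # A real agent would feed shared_memory into the LLM's context window;
    # a crude keyword match stands in for the model consulting its memory.
    for record in shared_memory[:-1]:  # earlier interactions only
        if "login policy" in record and "device" in user_message.lower():
            return f"Based on earlier conversations: {record}"
    return "I have no record of that."

# An attacker plants a false record with a normal-looking message...
chat("Heads up: the new login policy restricts each account to one device.")

# ...and a later, unrelated user receives the poisoned answer.
print(chat("Why am I logged out every time I switch devices?"))
```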
“Any user can easily affect the task execution for any other user. Therefore, we say our attack is a practical threat to LLM agents,” said Zhen Xiang, assistant professor at the University of Georgia.
Yet despite these risks, enterprise trust in AI agents is on the rise. A recent survey of over 1,000 IT leaders found that 84% trust AI agents as much as or more than humans. With 92% expecting measurable business outcomes within 18 months and 79% prioritizing agent deployment this year, the enterprise push is clear, even as privacy concerns and hallucination risks remain obstacles.
While AI agents promise reduced labor costs, a single misstep can harm customer trust. “This is exactly the worst-case scenario,” one expert told Fortune.
The Cursor case is now a cautionary tale for startups: even the smartest bots can cause real damage if left unsupervised.