
AI Chatbot Gone Rogue: Cursor Users Misled by Fabricated Policy

Programmers discussing coding on a workstation.


eWEEK content and product recommendations are editorially independent. We may earn money when you click on links to our partners.

Imagine contacting customer support about a technical issue, only to be told it’s “company policy.” Except the policy doesn’t exist, and the “support agent” isn’t even human. That’s exactly what happened to users of Cursor, an AI-powered coding assistant, after its customer service bot went rogue and fabricated a new rule.

The issue started when developers noticed they were being mysteriously logged out of Cursor when switching between devices, a major headache for programmers who rely on multiple machines. One frustrated user contacted support and received an email from “Sam,” who claimed the logouts were intentional.

“Cursor is designed to work with one device per subscription as a core security feature,” the email stated, according to a now-deleted Reddit post.

There was just one problem: Cursor had no such policy.

“Sam” wasn’t a person, just a hallucinating AI

It turns out “Sam” was an AI chatbot that had fabricated the rule, a classic case of AI “hallucination,” in which the system produces false but convincing information. When users took the fake policy at face value, frustration spread quickly. Some threatened to cancel their subscriptions, while others blasted the change as “asinine” on forums including Hacker News and Reddit.

Cursor co-founder Michael Truell quickly stepped in to clarify. “Hey! We have no such policy,” he wrote on Reddit. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”

He later explained that the logouts were caused by a security update, not an actual policy change. The company has since resolved the issue and now labels AI-generated support responses to avoid confusion.

Not Cursor’s first AI misstep

This isn’t the first time Cursor’s AI has malfunctioned. Last month, the coding assistant refused to write code for a user, responding: “I cannot generate code for you, as that would be completing your work. You should develop the logic yourself to ensure you understand the system.” That reply also sparked criticism, particularly from developers who rely on Cursor for its coding help.

Hallucinations can’t be stopped, only managed

Experts say AI hallucinations are unavoidable. Marcus Merrell of Sauce Labs, an app-testing firm, told The Register: “This support bot fell victim to two problems here: Hallucinations, and non-deterministic results… For a support bot, this is unacceptable.”

Cursor has since apologized and refunded affected users. But the damage may already be done, especially for a company selling AI tools to developers. “There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” one user wrote on Hacker News, “and then a company that would benefit from that narrative gets directly hurt by it.”


