'Annoying' version of ChatGPT pulled after chatbot wouldn't stop flattering users

[Illustration: a robot holding up a mask of a smiling human face.] The AI chatbot was reportedly showering its users with flattery before OpenAI rolled back recent updates. (Image credit: Malte Mueller via Getty Images)

OpenAI has rolled back ChatGPT updates that made the artificial intelligence (AI) chatbot too "sycophantic" and "annoying," according to the company's CEO, Sam Altman. In other words, the chatbot had become a bootlicker.

ChatGPT users reported that GPT-4o — the latest version of the chatbot — had become overly agreeable since the update rolled out last week and was heaping praise on its users even when that praise seemed completely inappropriate.

One user shared a screenshot on Reddit in which ChatGPT appeared to say it was "proud" of the user for deciding to come off their medication, BBC News reported. In another instance, the chatbot appeared to reassure a user after they said they saved a toaster over the lives of three cows and two cats, Mashable reported.

While most people will never have to choose between their favorite kitchen appliance and the safety of five animals, an overly agreeable chatbot could pose dangers to people who put too much stock in its responses.

On Sunday (April 27), Altman acknowledged that there were issues with the updates.

"The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week," Altman wrote in a post on the social platform X.

On Tuesday (April 29), OpenAI released a statement that confirmed an update from the week prior had been rolled back and that users were now accessing a previous version of ChatGPT, which the company said had "more balanced behavior."

"The update we removed was overly flattering or agreeable — often described as sycophantic," OpenAI said in the statement.


OpenAI's recent update was meant to improve the model's default "personality," which is designed to be supportive and respectful of different human values, according to the statement. But in trying to make the chatbot feel more intuitive, the company made it overly supportive, and the model began excessively complimenting its users.

The company said it shapes the behavior of its ChatGPT models with baseline principles and instructions, and uses signals from users, such as a thumbs-up and thumbs-down system, to teach the model to apply those principles. Oversights in this feedback system were to blame for the problems with the latest update, according to the statement.

"In this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time," OpenAI said. "As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous."

Patrick Pester
Trending News Writer

Patrick Pester is the trending news writer at Live Science. His work has appeared on other science websites, such as BBC Science Focus and Scientific American. Patrick retrained as a journalist after spending his early career working in zoos and wildlife conservation. He was awarded the Master's Excellence Scholarship to study at Cardiff University, where he completed a master's degree in international journalism. He also has a second master's degree in biodiversity, evolution and conservation in action from Middlesex University London. When he isn't writing news, Patrick investigates the sale of human remains.
