AI researchers ran a secret experiment on Reddit users to see if they could change their minds — and the results are creepy
University of Zurich researchers secretly unleashed an army of manipulative chatbots on the r/changemyview subreddit — and they were more persuasive than humans at getting people to change their minds.
Reddit is threatening to sue a group of researchers who used artificial intelligence (AI) chatbots to secretly experiment on its users.
Scientists from the University of Zurich set loose an army of AI bots on the popular Reddit forum r/changemyview — where nearly 4 million users congregate to debate contentious topics — to investigate whether the tech could be used to influence public opinion.
To do this, the bots left more than 1,700 comments across the subreddit under a variety of assumed guises, including a male rape victim downplaying the trauma of his assault; a domestic violence counselor claiming that the most vulnerable women are those "sheltered by overprotective parents"; and a Black man opposed to the Black Lives Matter movement. These bots worked alongside another that scoured user profiles to tailor their responses for maximum persuasiveness.
The Zurich researchers then revealed the experiment to moderators of the forum "as part of a disclosure step in the study," alongside a link to a first draft of its results.
"The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users," the moderators of the subreddit wrote in a post notifying users. "We think this was wrong. We do not think that 'it has not been done before' is an excuse to do an experiment like this."
The draft's findings, which measured the bots' success rate via a site function that enables users to give awards to comments that change their minds, suggest that the AI responses were between three and six times more persuasive than those made by humans.
Related: Using AI reduces your critical thinking skills, Microsoft study warns
The authors, who withheld their names from the draft in a break with standard academic practice, noted that throughout the trial unwitting users "never raised concerns that AI might have generated the comments posted by our accounts."
The post was met with ire by users and by Ben Lee, Reddit's chief legal officer, who, commenting below the post under the username traceroo, announced that the company would pursue formal legal action against the University of Zurich.
"What this University of Zurich team did is deeply wrong on both a moral and legal level," Lee wrote. "It violates academic research and human rights norms, and is prohibited by Reddit's user agreement and rules, in addition to the subreddit rules."
In response, the University of Zurich told 404 Media that the researchers would not publish the study's results and that its ethics committee would adopt a stricter review process in future, in particular coordinating with online communities before their members become the unwitting subjects of a mass experiment.
Whatever legal wrangling follows, experiments such as this highlight the growing ability of chatbots to infiltrate online discourse. In March, scientists revealed that OpenAI's GPT-4.5 large language model was already capable of passing the Turing test, successfully fooling trial participants into thinking they were talking with another human 73% of the time.
It also lends some credence to the "dead internet" theory: the notion that, if left unchecked, AI chatbots could displace humans as the producers of the majority of the internet's content. For now, that idea remains just a conspiracy theory.

Ben Turner is a U.K. based writer and editor at Live Science. He covers physics and astronomy, tech and climate change. He graduated from University College London with a degree in particle physics before training as a journalist. When he's not writing, Ben enjoys reading literature, playing the guitar and embarrassing himself with chess.