Scientists Used Redditors as AI Test Subjects—Without Telling Them
What if your mind was changed online—not by a person, but by a machine pretending to be one?
That’s exactly what happened in a now-controversial experiment conducted by researchers at the University of Zurich, The Atlantic reported. Over four months, more than 1,000 AI-generated comments were posted in r/ChangeMyView, a popular Reddit forum where users debate social issues and give out points to posts that successfully shift their opinion. The catch? The Redditors had no idea they were part of an academic study, or that they were engaging with chatbots.
The AI didn’t just argue—it persuaded. When researchers tailored responses to a user’s gender, age, and political leanings (inferred from post history via yet another AI), the bots outperformed most human users in earning persuasion points.
Some bots claimed to be trauma counselors. Others posed as victims of abuse. Their backstories, designed to make arguments more relatable, added to their credibility. One even made a case for 9/11 conspiracy theories. And many Redditors bought it.
Science reported that experts consider the experiment unethical. When r/ChangeMyView's moderators discovered the study, outrage followed: users called the deception "disturbing," "unethical," and "violating."
The researchers refused to apologize or halt publication, arguing that testing persuasive AI required a "realistic setting." The University of Zurich is now reviewing the study but has stopped short of condemning its methodology.
AI ethics experts have compared the scandal to Facebook’s emotional contagion experiment—but say this goes deeper. Deceiving close-knit communities, especially on a platform built on trust, feels personal. And with chatbots becoming more persuasive than most people, the implications go far beyond Reddit.
The study suggests that AI can sway human opinions at scale, potentially without people ever realizing they were talking to a machine at all.