Chatbots can chip away at belief in conspiracy theories

Illustration: Sarah Grillo/Axios
AI chatbots' persuasive power can be leveraged to help combat beliefs in conspiracy theories — not just fuel them.
Why it matters: Belief in conspiracy theories can drive dangerous health choices, deepen political divides and fracture families and friendships.
- Researchers are trying to develop tools to help pull people out of conspiracy rabbit holes.
What they found: Conversing with a chatbot about a conspiracy theory can reduce a person's belief in that theory by about 20% on average, researchers report in a new study.
- A reduction — though not as much — was seen even in those whose conspiracy beliefs were "deeply entrenched and important to their identities," the researchers wrote in Science.
- "It's not a miracle cure," said Thomas Costello, a cognitive psychologist at American University and co-author of the study, adding that it's positive evidence this type of intervention could work.
- "It's like uncovering the top of the rabbit hole so you can see the light."
How it works: More than 2,100 participants were asked to tell an AI system called DebunkBot, running on GPT-4, about a conspiracy theory they found credible or compelling, explain why, and then present evidence they believed supported it.
- They were then asked to rate their confidence in the theory on a 100-point scale and indicate how important the theory was to their view of the world.
- Next, they engaged in three rounds of conversation with the AI system, which had been instructed to persuade each user against the theory they had described.
- The AI was given the user's responses in advance, so it knew what the person believed and could tailor its counterarguments (a rough sketch of this setup appears after this list).
- At the end of the interaction, users were asked to rate their belief in the theory again.
- The researchers followed up with about 1,000 participants two months later and found the belief-weakening effect persisted, though more experiments are needed to replicate that durability, Costello said.
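The study's actual prompts and code aren't reproduced here, but a minimal sketch suggests how such a setup could look, assuming the OpenAI chat-completions API; the prompt wording, the `debunk_dialogue` helper and the flow are illustrative assumptions, not the researchers' published implementation:

```python
# Illustrative sketch of a DebunkBot-style exchange, assuming the OpenAI
# chat-completions API. Prompt text and function names are hypothetical;
# this is not the study's published code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def debunk_dialogue(theory: str, evidence: str, user_replies: list[str]) -> list[str]:
    """Run a short persuasion dialogue against a participant's stated theory."""
    # The participant's own statement goes into the system prompt up front,
    # so the model can tailor its counterarguments, mirroring the study's design.
    messages = [
        {
            "role": "system",
            "content": (
                f"The user believes this conspiracy theory: {theory}\n"
                f"Their supporting evidence: {evidence}\n"
                "Respectfully persuade the user against this theory using "
                "accurate, factual counterarguments."
            ),
        },
        {"role": "user", "content": f"I believe this because: {evidence}"},
    ]
    bot_turns = []
    for turn in range(3):  # three rounds of conversation, as in the study
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        answer = resp.choices[0].message.content
        bot_turns.append(answer)
        messages.append({"role": "assistant", "content": answer})
        # In the live experiment the participant typed a fresh rebuttal each
        # round; here they're passed in as a list to keep the sketch self-contained.
        if turn < len(user_replies):
            messages.append({"role": "user", "content": user_replies[turn]})
    return bot_turns
```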
What they're saying: While chatbots are known for spouting hallucinations and inaccuracies, the study suggests "GPT can stand up for truth that people don't think that LLMs can stand up for," says Kurt Gray, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill who wasn't involved in the study.
Yes, but: It's unclear how practical the intervention is.
- On the one hand, unlike other prebunking and debunking tools, generative AI is automated and could be scaled up to reach many people.
- But on the other, "it is quite unlikely that all, or even many, entrenched believers will choose to engage with AI chatbots," the study authors write.
- The researchers propose chatbots could be deployed to try to engage with internet users when they search for terms related to conspiracy theories.
- But "the very presence of these chatbots will inevitably become the focus of new conspiracy theories, which will likely scare conspiracy-minded people away," Robbie Sutton, a University of Kent professor of social psychology who studies conspiracy theories, told Axios in an email.
The bottom line: Sophisticated chatbots have a dual potential.
- "For every 'good' effect of interventions like this, we can imagine 'bad' effects," Sutton said.
- "For every champion of democracy and rationality who would use this technology, there is an extremist, a despot or corporation who would love to convince us that legitimate worries and injustices are just conspiracy theories."
