Understanding the impact of AI on misinformation

Illustration: Aïda Amer/Axios
Researchers from Indiana University are leading a federally funded effort to understand the role AI plays in making the messages we receive online more influential.
Why it matters: Experts concerned about the threat of AI-driven misinformation have been issuing warnings about the tech's ability to influence or deceive the public since ChatGPT hit the mainstream in November 2022.
- The issue is growing more urgent as the 2024 presidential race plays out largely online, with conspiracy theories as a central narrative overwhelming susceptible voters with content.
The big picture: Some estimates suggest that AI-generated content could soon account for 99% or more of all information online.
Driving the news: IU is leading a team of experts in areas like informatics, psychology, communications and folklore to study the interplay between AI, social media and online misinformation.
- The work is being funded through a $7.5 million grant from the U.S. Department of Defense.
- It is one of 30 projects supported by the department's Multidisciplinary University Research Initiative, which provides funding for defense-related research projects.
What they're saying: "The deluge of misinformation and radicalizing messages poses significant societal threat," said Yong-Yeol Ahn, a professor in the IU Luddy School of Informatics, Computing and Engineering.
- "Now, with AI, you're introducing the potential ability to mine data about individual people and quickly generate targeted messages that appeal to them — applying big data to individuals — which could cause even greater disruptions than we've already experienced."
Zoom in: Ahn, lead investigator on the project, told Axios the effort will take five years to complete and includes six experts from IU.
- Joining the Hoosiers are a media expert at Boston University, a psychologist at Stanford University and a computational folklorist at the University of California, Berkeley.
Between the lines: Ahn said the research centers on a sociological concept called "resonance," which describes people's receptiveness to certain messages and the idea that opinions are shaped more strongly by material that resonates with them.
- AI's ability to rapidly generate content means it can amplify the power of messages by tailoring the information to audiences on an individual level.
- It can accomplish this through emotional content or narratives that play on existing beliefs or cognitive biases.
The other side: Ahn said resonance can also bridge gaps between groups, and AI has the power to cut through the noise of misinformation by serving as an impartial fact-checker when wielded responsibly.
- "There is a real possibility that AI can actually help," Ahn told Axios. "When people have very different beliefs and political alignments … we can be quick to misinterpret. But AI can mediate that conversation by toning down or reframing a message."
Zoom out: This effort joins other recent attempts to crack down on AI misinformation as the election heats up.
- Among them are a bipartisan coalition with support from Prince Harry and Meghan Markle's Archewell Foundation to help U.S. voters brace for election deepfakes.
The bottom line: Our relationship with AI is growing increasingly complicated, so we should gather information about it as quickly as it can gather information about us.
Go deeper: Why AI needs a little Hoosier hospitality
