AI is advancing too quickly for research to keep up

Illustration: Aïda Amer/Axios
AI is evolving faster than the systems designed to evaluate it, meaning much of the scientific research you read may already be out of date by the time it's published.
Why it matters: If AI's going to change the world, those charged with thinking about it most critically will have to learn to keep up — or risk presenting misinformation themselves.
Case in point: A recent study from Oxford University found that AI often gave wrong health advice, mostly due to how users asked questions.
- But as New York Times tech columnist Kevin Roose pointed out, the study was based on users who worked with only three specific models — OpenAI's GPT-4o, Meta's Llama 3 and Cohere's Command R+.
- OpenAI has since upgraded ChatGPT to GPT-5.2. Llama 4 came out in 2025. And Command R+ is an under-the-radar model that gets far less attention than Claude or Gemini.
- Similarly, a study led by a Brown University researcher found that using AI for therapy may breach ethical standards, but the tests were run by prompting LLMs such as GPT-3, Llama 3.1 and Claude 3 Haiku, models that are now outdated.
What they're saying: "It's always possible that you publish, you get a result and then next week, someone else comes out with a system that outperforms your result or invalidates your result," Mark Finlayson, associate professor of computer science at Florida International University, tells Axios.
- New AI research can have "a very short shelf life," he says.
Zoom out: This also creates a power imbalance where AI companies benefit from academic research — without doing the research themselves.
- "That's not what they're spending their time on," Finlayson says of AI companies and research. "They're spending their time on producing models that respond to observed problems."
This is why peer-reviewed studies are needed, argues Julia Powles, a UCLA law professor and executive director of the UCLA Institute for Technology, Law & Policy.
- "The only checks on AI system development are internal to the firms themselves," she tells Axios. "This has made the industry paradigmatically reckless."
- "It also leads to enormous power asymmetries with those who seek to study, regulate, oversee, and seek redress for their practices."
Reality check: The AI research publication process is slow. Like most academic work, it takes time for study, writing and peer review.
- This can lead to publication lag, where study findings are partially outdated by the time they appear in print, as seen throughout the COVID-19 pandemic.
The other side: AI has faced plenty of scrutiny, and companies seem eager to update their models in response to critiques.
- For example, OpenAI has faced several lawsuits alleging ChatGPT contributed to multiple suicides and psychological injuries. But CEO Sam Altman has been open about fixing the company's models in response to those criticisms.
- Elon Musk suggested that issues with xAI's Grok chatbot would be addressed after it repeatedly used antisemitic language.
The bottom line: Systems built to evaluate slow-moving science struggle to keep up with AI's breakneck speed.
