Exclusive: OpenAI tests ChatGPT's human learning impact

OpenAI released a new framework to measure how ChatGPT affects long-term human learning.
Why it matters: It might feel like chatbots are rotting our brains, but no longitudinal studies have shown the real effects of generative AI on learning.
The big picture: Classroom AI use is growing, but educators and parents worry the tools' efficiency comes with real tradeoffs.
- Limited studies show AI tutoring offers gains in short-term recall, but there's little insight into the tech's lasting effects.
Zoom in: OpenAI's new framework provides the infrastructure to measure just that — how student use of ChatGPT improves or erodes deeper cognitive skills.
- The Learning Outcomes Measurement Suite is designed to track how AI use affects persistence, motivation and creative problem-solving.
- It monitors model behavior, how learners interact with it and which cognitive outcomes change over time.
Zoom out: "What really matters is whether the gains and associated productive behaviors remain durable," OpenAI says in its report on the framework.
- The company says it will evaluate the framework through large-scale studies before making it broadly available.
State of play: A few years ago, ChatGPT use was banned in most classrooms.
- Now schools are integrating AI tools from OpenAI, Anthropic, Google, Khan Academy and others. The goal is to help students learn to use the technology likely to shape their future careers.
- That shift has left schools, and even individual teachers, to set their own rules about how students can use AI in and out of class.
What they're saying: "I really did not want to be in the game of proving that my students were using AI," Maya Barzilai, adjunct professor at Barnard College and prompt engineer at LinkedIn, told Axios. "It's a losing battle."
- At the start of the semester, Barzilai worked with students to set AI policies. They're not allowed to use AI on homework. If they have a study strategy involving an LLM, she's open to discussing it. She just wants transparency.
- Because she works in AI, Barzilai says she can usually tell when students are using it. She also tells them, "Please consider your learning goals, and if using an LLM is really going to support that. Really think about why you're in this class."
What's next: OpenAI plans to publish additional research as validation studies continue and eventually release the measurement suite as a public resource for schools and universities.
The bottom line: The framework signals accountability, but without published long-term results, educators and students are left wondering if every prompt brings us one step closer to brain rot.
