Axios AI+

July 02, 2025
Megan and Scott back again. A few weeks ago in San Francisco, Megan saw someone wearing a T-shirt that said RAG Against the Machine, and she wants to know if you know where she can buy one.
Today's AI+ is 1,175 words, a 4.5-minute read.
1 big thing: AI's great brain-rot experiment
Generative AI critics and advocates are both racing to gather evidence that the new technology stunts (or boosts) human thinking powers — but the data simply isn't there yet.
Why it matters: For every utopian who predicts a golden era of AI-powered learning, there's a skeptic who's convinced AI will usher in a new dark age.
Driving the news: A study titled "Your Brain on ChatGPT" out of MIT last month raised hopes that we might be able to stop guessing which side of this debate is right.
- The study aimed to measure the "cognitive cost" of using genAI by looking at three groups tasked with writing brief essays — either on their own, with Google search, or with ChatGPT.
- It found, very roughly speaking, that the more help subjects had with their writing, the less brain activity, or "neural connectivity," they experienced as they worked.
Yes, but: This is a preprint study, meaning it hasn't been peer-reviewed.
- It has faced criticism for its design, small size, and its reliance on electroencephalogram (EEG) analysis. And its conclusions are laced with cautions and caveats.
- On their own website, the MIT authors beg journalists not to say that their study demonstrates AI is "making us dumber": "Please do not use words like 'stupid', 'dumb', 'brain rot', 'harm', 'damage'. ... It does a huge disservice to this work, as we did not use this vocabulary in the paper."
Between the lines: Students who learn to write well typically also learn to think more sharply. So it seems like common sense to assume that letting students outsource their writing to a chatbot will dull their minds.
- Sometimes good research will confirm this sort of assumption! But sometimes we get surprised.
- Other recent studies have taken narrow or inconclusive stabs at teasing out other dimensions of the "AI rots our brains" thesis — like whether using AI leads to cultural homogeneity, or how AI-assisted learning compares with human teaching.
- Earlier this year, a University of Pennsylvania/Wharton School study found that people researching a topic by asking an AI chatbot "tend to develop shallower knowledge than when they learn through standard web search."
The big picture: As AI is rushed into service across society, the world is hungry for scientists to explain how a tool that transforms learning and creation will affect the human brain.
- High-speed change makes us crave high-speed answers. But good research takes time — and costs money.
Generative AI is simply too new for us to have any sort of useful or trustworthy scientific data on its impact on cognition, learning, memory, problem-solving or creativity. (Forget "intelligence," which lacks any scientific clarity.)
- Society is nevertheless charging ahead with a vast uncontrolled experiment on human subjects — as we have almost always done with previous new waves of technology, from railroads and automobiles to the internet and social media.
Our thought bubble: As tantalizing but risky new tools have come into view, our species has always chosen the "f--k around and find out" door.
- Since even fears that AI might destroy humanity haven't been enough to slow down its research and deployment, it seems absurd to think we would tap the brakes just to curtail cognitive debt.
Flashback: Readers with still-functional memories may recall the furor around an Atlantic cover story by Nicholas Carr from 2008 titled "Is Google Making Us Stupid?"
- Back then, the fear was that overreliance on screens and search engines to provide us with quick answers might stunt our ability to acquire and retain knowledge.
- But now, in the ChatGPT era, reliance on Google search is being framed by studies like MIT's and Wharton's as a superior alternative to AI's convenient — and sometimes made-up — answers.
2. AI is becoming the new HR, survey finds
Managers are trusting AI to help make high-stakes decisions about firing, promoting and granting raises to their direct reports, according to a new survey from Resume Builder.
Why it matters: AI-based decision-making in HR could open companies up to discrimination and other types of lawsuits, experts tell Axios.
The big picture: Employers are increasingly pushing workers to incorporate genAI into their workflows, and gaining AI skills has been linked to better pay and increased job choices.
- But genAI training and policies at work are still rare, and the tools are changing so fast that it's hard to keep up.
- Using AI to assess people's careers is risky, especially when the tools are prone to hallucinations and poorly understood.
What they did: The study was conducted online late last month with 1,342 U.S. full-time manager-level employees responding.
What they found: 65% of managers say they use AI at work, and 94% of those managers say they look to the tools "to make decisions about the people who report to them," per the report.
- Over half of those managers said they used AI tools to assess whether a direct report should be promoted, given a raise, laid off or fired.
- A little over half of the managers using AI in personnel matters said they used ChatGPT. Others used Microsoft's Copilot, Google's Gemini or other AI tools.
- A majority of these managers said they were confident that AI was "fair and unbiased," and a surprising number of managers (20%) said they let AI make decisions without human input.
- Only one-third of the managers who are using AI for these decisions say that they've received formal training on what the tools can and cannot do.
Managers are looking for new ways to implement AI, probably under pressure from their organizations, Stacie Haller, chief career adviser at Resume Builder, told Axios.
- "Everybody's sort of trying things out. But to me, it raises a huge red flag when you're talking about people's careers," Haller said.
- "If somebody's making a decision to fire you based on AI, I'm imagining there could be lawsuits. I mean, people who felt they were fired unfairly [sued] before AI."
- "I think they're ahead of their skis on this," she added.
Yes, but: It's not clear from the data exactly how managers are using AI in these personnel decisions.
- They could be using it to organize data for performance reviews. Or they could be asking ChatGPT, "Who should I lay off next?"
Zoom in: AI can help synthesize employee feedback or highlight patterns across team assessments, Lynda Gratton, professor of management practice at London Business School, told Axios via email.
- But there could be issues with the quality of the data going into the model, she says.
- And even if it is accurate, Gratton said, "it replicates any bias already in the system."
3. Training data
- Anthropic, fueled by the coding prowess of its Claude model, recently hit a $4 billion annual revenue rate, per sources. (The Information)
- Grammarly is acquiring Superhuman, the once-trendy email app, as part of its push to build an AI-powered productivity platform. (Reuters)
- Cloudflare launched an effort to allow publishers to charge AI bots for crawling their sites for training data. (TechCrunch)
- Elon Musk's X will soon start using AI (with human review) to write Community Notes that annotate users' posts. (Bloomberg)
4. + This
Out: Replacing your therapist with ChatGPT.
In: Replacing your psychedelic guide with ChatGPT.
Thanks to Matt Piper for copy editing this newsletter.