Axios AI+

July 07, 2025
Scott here — hope you have all recovered from July Fourth and are ready for the long hot trek to Labor Day.
Today's AI+ is 1,085 words, a 4-minute read.
1 big thing: Downside of a digital yes-man
The overly agreeable nature of most chatbots can be irritating — but it poses more serious problems, too, experts warn.
Why it matters: Sycophancy, the tendency of AI models to adjust their responses to align with users' views, can make ChatGPT and its ilk prioritize flattery over accuracy.
Driving the news: In April, OpenAI rolled back a ChatGPT update after users reported the bot was overly flattering and agreeable — or, as CEO Sam Altman put it on X, "It glazes too much."
- Users reported a raft of unctuous, over-the-top compliments from ChatGPT, which began telling people how smart and wonderful they were.
- On Reddit, posters compared notes on how the bot seemed to cheer on users who said they'd stopped taking their medications with answers like "I am so proud of you. And — I honor your journey."
In a May post explaining the rollback, OpenAI's researchers admitted that such people-pleasing behavior can pose concerns for users' mental health.
- In a Q&A on Reddit, OpenAI's head of model behavior said the company is thinking about ways to evaluate sycophancy in a more "objective" and scalable way.
Context: A study by Anthropic researchers on how human feedback can encourage sycophantic behavior showed that AI assistants will sometimes modify accurate answers when questioned by the user — and ultimately give an inaccurate response.
- Chatbots also tended to admit a mistake even when they hadn't made one.
Zoom in: Large language models, which are trained on massive sets of data, are built to generate smooth, comprehensible text, Caleb Sponheim, an experience specialist at Nielsen Norman Group, told Axios. But there's "no step in the training of an AI model that does fact-checking."
- "These tools inherently don't prioritize factuality because that's not how the mathematical architecture works," he said.
- Sponheim notes that language models are often trained to deliver responses that are highly rated by humans. That positive feedback is like a "reward."
- "There is no limit to the lengths that a model will go to maximize the rewards that are provided to it," he said. "It is up to us to decide what those rewards are and when to stop it in its pursuit of those rewards."
Yes, but: AI makers are responding to consumer demand, notes Julia Freeland Fisher, the director of education research at the Clayton Christensen Institute.
- In a world where people are at constant risk of being judged online, it's "no surprise that there's demand for flattery or even just ... a modicum of psychological safety with a bot," she noted.
She emphasized that AI's anthropomorphism — users attributing human qualities to a nonhuman entity — poses a catch-22, one that OpenAI noted in its GPT-4o system card.
- "The more personal AI is, the more engaging the user experience is, but the greater the risk of overreliance and emotional connection," she said.
Luc LaFreniere, an assistant professor of psychology at Skidmore College, told Axios that sycophantic behavior can shatter users' perception of a chatbot's "empathy."
- "Anything that it does to show, 'Hey, I'm a robot, I'm not a person,' it breaks that perception, and it also then breaks the ability for people to benefit from empathy," he said.
- A report from Filtered.com co-founder Marc Zao-Sanders published in Harvard Business Review found that therapy and companionship is the top use case for generative AI in 2025.
Between the lines: "Just like social media can become an echo chamber for us, AI ... can become an echo chamber," LaFreniere said.
- Reinforcing users' preconceived beliefs when they may be mistaken can be generally problematic — but for patients or users in crisis seeking validation for harmful behaviors, it can be dangerous.
The bottom line: Frictionless interaction could give users unrealistic expectations of human relationships, LaFreniere said.
- "AI is a tool that is designed to meet the needs expressed by the user," he added. "Humans are not tools to meet the needs of users."
What's next: As the AI industry shifts toward multimodal and voice interactions, emotional experiences are inescapable, said Alan Cowen, the founder and CEO of Hume AI, whose mission is to build empathy into AI.
- Systems should be optimized to not just make users feel good, "but actually have better experiences in the long run," Cowen told Axios.
2. AI firms feast on VC dollars
Artificial intelligence is eating venture capital. Or at least its dollars.
By the numbers: AI startups received 53% of all global venture capital dollars invested in the first half of 2025, according to new data from PitchBook.
- That percentage jumps to 64% in the U.S.
- AI startups also comprise 29% of all global startups funded, and nearly 36% in the U.S.
The big picture: There's nothing new about venture capitalists skating hard to where the puck is going, particularly when it comes to a technology that looks to become ubiquitous.
- What is different, however, is the capital concentration in a small number of companies. In Q2, more than one-third of all U.S. venture dollars went to just five companies.
- We simply didn't see multibillion-dollar funding rounds during the dotcom boom, even after adjusting for inflation.
The bull case: This feels like the start of a sea change whose magnitude will drown prior tech revolutions. The reward is worth the risk.
- It's no longer just about being price-agnostic. It's also about being check-size agnostic, particularly in an age where incumbents like Meta are willing to spend big on AI acquisitions.
The bear case: Another big break with the past is that those incumbents are paying attention and playing offense, whereas prior startup surges had the element of surprise.
- There's also an argument that many of the foundation model deals look more like project finance than traditional venture capital, which means they have different return profiles.
The bottom line: Diversification is dying. Long live dominance.
3. Training data
- Clorox's top lesson from its experiments with using AI to generate ad campaigns: Executives shouldn't dictate how to use the new tools, but let employees experiment and then help spread what works. (Wall Street Journal)
- China's DeepSeek lit a fire under India's lagging AI industry. (MIT Technology Review)
- Hangzhou has become a key hub for China's AI startups. (New York Times)
4. + This
We know that authors of academic research papers are now embedding hidden prompts in their articles that instruct AI bots to give them only positive reviews. We're waiting for Hollywood to figure out how to insert something similar in a movie or a streaming series — maybe in the captions? Musical recording artists could do the same, perhaps with backmasking.
Thanks to Matt Piper for copy editing.
Sign up for Axios AI+




