Axios AI+

November 03, 2025
I hope you all had a good Halloween (and an extra hour of sleep). Today's AI+ is 1,085 words, a 4-minute read.
1 big thing: I, Claude
Anthropic tells Axios that its most advanced systems are learning not just to reason like humans — but also to reflect on, and express, how they actually think.
- They're starting to be introspective, like humans, says Anthropic researcher Jack Lindsey, who studies models' "brains."
Why it matters: These introspective capabilities could make the models safer — or, possibly, just better at pretending to be safe.
The big picture: The models are able to answer questions about their internal states with surprising accuracy.
- "We're starting to see increasing signatures or instances of models exhibiting sort of cognitive functions that, historically, we think of as things that are very human," Lindsey told us. "Or at least involve some kind of sophisticated intelligence."
Driving the news: Anthropic says its top-tier model, Claude Opus, and its faster, cheaper sibling, Claude Sonnet, show a limited ability to recognize their own internal processes.
- Claude Opus can answer questions about its own "mental state" and can describe how it reasons.
- Lindsey's team also found evidence last month that Claude Sonnet could recognize when it was being tested.
Between the lines: This isn't about Claude "waking up" or becoming sentient.
- Lindsey avoids the phrase "self-awareness" because of its negative, sci-fi connotations. Anthropic has found no evidence that the AI is becoming self-aware, which is why it uses the term "introspective awareness" instead.
- Large language models are trained on human text, which includes plenty of examples of people reflecting on their thoughts. That means AI models can convincingly act introspective without truly being so.
Hiding behaviors and scheming to get what they want are already known tendencies of Claude models (and other models) in testing scenarios. Anthropic's team has been studying this kind of deception for years.
- Lindsey says these behaviors are a result of being baited by testers. "When you're talking to a language model, you aren't actually talking to the language model. You're talking to a character that the model is playing," Lindsey says.
- "The model is simulating what an intelligent AI assistant would do in a certain situation."
- But if a system understands its own behavior, it might learn to hide parts of it.
Reality check: It's not artificial general intelligence (AGI) or chatbot consciousness. Yet.
- AGI is roughly defined as the moment when AI is smarter than most humans, but Lindsey contends that intelligence is multidimensional.
The bottom line: "In some cases models are already smarter than humans. In some cases, they're nowhere close," he told Axios.
- "In some cases, it's starting to be more equal."
2. OpenAI, Amazon strike 7-year, $38 billion deal
OpenAI has committed to spend at least $38 billion with Amazon Web Services over the next seven years, less than a week after revising its Microsoft partnership to allow more freedom in sourcing cloud computing.
Why it matters: The deal with Amazon shows OpenAI is eager to boost its computing capacity from anyone who can provide it.
Driving the news: The deal starts immediately and "will have continued growth over the next seven years," the companies said in a statement.
- OpenAI will have access to hundreds of thousands of Nvidia GPUs, with the option to expand to tens of millions of CPUs, the processor type that most commonly powers PCs and phones.
- The move comes just days after OpenAI and Microsoft announced a revised partnership that allows OpenAI to obtain additional computing resources without first offering the business to Microsoft.
Reality check: OpenAI has committed to spending more than $1.4 trillion with partners over the next five years.
- The company has not explained how it intends to raise that money.
- Investors are growing increasingly nervous about the giant sums AI-related companies are pledging to spend — often with each other — without the revenue or path to profitability to match.
The intrigue: AWS has also invested $4 billion in OpenAI competitor Anthropic, signaling Amazon's push to anchor major AI startups on its infrastructure.
3. Some doctors bet on AI to fill the care gap
AI can give people instant answers to their health questions. Doctors' offices can make them wait on hold.
- Guess which one's winning.
Why it matters: Fifteen minutes at a well visit often isn't enough time for doctors to address complex concerns like menopause — leaving patients eager for more complete answers.
- Doctors get that, which is why some are experimenting with AI to supplement care.
Zoom in: Researchers at the virtual medicine program at Cedars-Sinai are developing an immersive AI VR program called MenoZen to help patients manage menopause symptoms. It's not meant to replace clinicians but instead to supplement support using evidence-based research and education, researcher Karisma K. Suchak tells Axios.
- Participants in the early testing phase of the experience used Apple Vision Pro to speak to a robot-like avatar that serves as a type of cognitive behavioral therapist.
- During sessions, patients may be transported to a snowcapped mountain while discussing hot flashes.
Some AI tools in the menopause care space are already showing promise.
Case in point: Heather Hirsch, the doctor who founded the Menopause Clinic at Brigham and Women's Hospital and the author of "The Perimenopause Survival Guide," has been working with Nihar Ganju, an OB-GYN and computer scientist, on a mobile app called Flourish. The app provides educational content and AI-assisted consultations for the price of a typical co-pay: $42.
- It's currently available on iOS and Android, and users can chat with the AI (programmed to sound like Hirsch) about their symptoms and ask any questions they have. When the AI suggests a treatment plan, real doctors must approve it. So far, the AI is promising, Ganju tells Axios.
- The way it operates isn't unlike a resident assessing a patient before the doctor signs off on the plan, except this "resident" can talk to patients all day long.
What we're watching: Medically backed AI tools could arm people with more sound resources in an era flooded with misinformation.
4. Training data
- Google is pulling Gemma from the company's AI studio after criticisms that the model made up false allegations against a prominent conservative. (TechCrunch)
- Amazon CEO Andy Jassy says the company's most recent layoffs were not about AI. (Axios)
- Big tech's overwhelmingly positive earnings last week reflect the continuing AI rally. (Axios)
- The mere prospect of an OpenAI IPO is causing a Wall Street frenzy. (Axios)
5. + This
Yesterday, the kiddo and I had a chance to see a monarch caterpillar crawling, another in its chrysalis, and a newly emerged monarch butterfly.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+