Axios AM

June 09, 2025
☕ Good Monday morning. Smart Brevity™ count: 1,940 words ... 7½ mins. Thanks to Noah Bressner for orchestrating. Copy edited by Bryan McBournie.
🌐 Situational awareness: Russia launched 479 drones in a massive overnight attack on Ukraine. Israel captured the Gaza-bound aid flotilla carrying activist Greta Thunberg. U.S.-China trade negotiators are meeting in London.
1 big thing: The scariest AI reality
The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.
- Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do.
Why it matters: With companies pouring hundreds of billions of dollars into willing superhuman intelligence into existence as fast as possible, and Washington doing nothing to slow or police them, it seems worth dissecting this Great Unknown.
- None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it.
Two years ago, Axios managing editor for tech Scott Rosenberg wrote a story, "AI's scariest mystery," saying it's common knowledge among AI developers that they can't always explain or predict their systems' behavior. And that's more true than ever.
- Yet there's no sign that the government or companies or general public will demand any deeper understanding — or scrutiny — of building a technology with capabilities beyond human understanding. They're convinced the race to beat China to the most advanced LLMs warrants the risk of the Great Unknown.
🏛️ The House, despite knowing so little about AI, tucked language into President Trump's "Big, Beautiful Bill" that would prohibit states and localities from enacting any AI regulations for 10 years. The Senate is considering limitations on the provision.
- Neither the AI companies nor Congress knows how powerful AI will be a year from now, much less a decade from now.
🖼️ The big picture: Our purpose with this column isn't to be alarmist or "doomers." It's to clinically explain why the inner workings of superhuman intelligence models are a black box, even to the technology's creators. We'll also show, in their own words, how CEOs and founders of the largest AI companies all agree it's a black box.
- Let's start with a basic overview of how LLMs work, to better explain the Great Unknown:
LLMs — including OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini — aren't traditional software systems that follow clear, human-written instructions, the way Microsoft Word does. Word does precisely what it's engineered to do.
- Instead, LLMs are giant neural networks — like a brain — that ingest massive amounts of information (much of the internet) to learn to generate answers. The engineers know what they're setting in motion, and what data sources they draw on. But an LLM's size — the sheer inhuman number of variables behind each choice of "best next word" it makes — means even the experts can't explain exactly why it says anything in particular. The toy sketch below shows that next-word loop in miniature.
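To make that loop concrete, here is a deliberately toy Python sketch of next-word sampling. Everything in it is invented for illustration: the hand-written probability table and the names `next_token_probs` and `generate` stand in for billions of learned parameters, and no real model works from a lookup table like this.

```python
import random

def next_token_probs(context):
    # Invented stand-in: a real LLM computes this distribution over its
    # whole vocabulary from the entire context, using billions of learned
    # parameters rather than a hand-written table.
    table = {("the",): {"cat": 0.5, "dog": 0.3, "sky": 0.2}}
    return table.get(tuple(context[-1:]), {"the": 1.0})

def generate(prompt, steps=3):
    tokens = prompt.split()
    for _ in range(steps):
        probs = next_token_probs(tokens)
        # Sample the "best next word" from the distribution. This single
        # step, repeated, is how an LLM produces a whole answer.
        words, weights = zip(*probs.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g., "the cat the dog"
```

The point of the toy: each output word is one draw from a probability distribution that no one wrote by hand. In a real model, that distribution emerges from the training run rather than from legible rules, which is why even the builders can't trace a given answer back to a reason.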
We asked ChatGPT to explain this (and a human at OpenAI confirmed its accuracy): "We can observe what an LLM outputs, but the process by which it decides on a response is largely opaque. As OpenAI's researchers bluntly put it, 'we have not yet developed human-understandable explanations for why the model generates particular outputs.'"
- "In fact," ChatGPT continued, "OpenAI admitted that when they tweaked their model architecture in GPT-4, 'more research is needed' to understand why certain versions started hallucinating more than earlier versions — a surprising, unintended behavior even its creators couldn't fully diagnose."
Anthropic — which just released Claude 4, the latest model of its LLM, with great fanfare — admitted it was unsure why Claude, when given access to fictional emails during safety testing, threatened to blackmail an engineer over a supposed extramarital affair. This was part of responsible safety testing — but Anthropic can't fully explain the irresponsible action.
- Again, sit with that: The company doesn't know why its machine went rogue and malicious. And, in truth, the creators don't really know how smart or independent the LLMs could grow. Anthropic even said Claude 4 is powerful enough to pose a greater risk of being used to develop nuclear or chemical weapons.
Column continues below.
2. 🤖 Part 2: Black-box lingo

OpenAI's Sam Altman and others toss around the tame word "interpretability" to describe the challenge. "We certainly have not solved interpretability," Altman told a summit in Geneva last year, Jim and Mike continue.
- Anthropic CEO Dario Amodei, in an essay in April called "The Urgency of Interpretability," warned: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology."
- In a statement for this story, Anthropic said: "We have a dedicated research team focused on solving this issue, and they've made significant strides in moving the industry's understanding of the inner workings of AI forward." (Read a paper Anthropic published last year, "Mapping the Mind of a Large Language Model.")
Elon Musk has warned for years that AI presents a civilizational risk. In other words, he literally thinks it could destroy humanity, and has said as much. Yet Musk is pouring billions into his own LLM called Grok.
- "I think AI is a significant existential threat," Musk said in Riyadh, Saudi Arabia, last fall. There's a 10%-20% chance "that it goes bad."
Reality check: Apple published a paper last week, "The Illusion of Thinking," concluding that even the most advanced AI reasoning models don't really "think," and can fail when stress-tested.
But a new report by AI researchers, including former OpenAI employees, called "AI 2027," explains how the Great Unknown could, in theory, turn catastrophic in less than two years. The report is wholly speculative, though built on current data about how fast the models are improving. It's being widely read inside the AI companies.
- It captures the belief — or fear — that LLMs could one day think for themselves and start to act on their own. Our purpose isn't to alarm or sound doomy. Rather, you should know what the people building these models talk about incessantly.
- You can dismiss it as hype or hysteria. But researchers at all these companies worry LLMs, because we don't fully understand them, could outsmart their human creators and go rogue.
The safe-landing theory: Google's Sundar Pichai — and really all of the big AI company CEOs — argue that humans will learn to better understand how these machines work and find clever, if yet unknown, ways to control them and "improve lives." The companies all have big research and safety teams, and a huge incentive to tame the technologies if they ever want to realize their full value.
- After all, no one will trust a machine that makes stuff up or threatens them. But, as of today, they do both — and no one knows why.
3. 🚨 Chaos in LA

Tensions between law enforcement and protesters opposing federal immigration sweeps spiraled in Los Angeles yesterday.
- Protesters blocked off a major freeway and set self-driving cars on fire as law enforcement used tear gas, rubber bullets and flash bangs to control the crowd. (AP)

President Trump has already federalized 2,000 California National Guard soldiers, and is threatening to deploy 500 active-duty Marines.
- The National Guard troops deployed downtown by the federal government "appeared to largely refrain from engaging with the demonstrators," The New York Times reports.
The clashes between police and protesters were centered in several blocks of downtown — a small slice of the city.

California Gov. Gavin Newsom demanded that Trump remove the guard members in a letter yesterday afternoon, calling their deployment "unlawful" and a "serious breach of state sovereignty," Axios' Rebecca Falconer writes.
- Newsom later dared border czar Tom Homan to arrest him in a fiery interview with MSNBC last night — a dramatic moment that underscored the tensions between local officials and the Trump administration.

David Hume Kennerly, legendary presidential photographer (and devoted Axios AM reader), shares this image of California National Guard troops arriving at the Federal Building in LA at 8:15 a.m. PT yesterday.
- He used "my Trusty Canon R5 camera with a 100-500mm lens."
Get the latest ... 16 more photos on one page.
4. 👀 Silicon Valley's not crying for Musk
Few tears will be shed in Silicon Valley or at Big Tech firms over Elon Musk's precipitous fall from White House grace, Axios' Scott Rosenberg writes.
- Why it matters: Musk's brief alliance with President Trump warped the usual dynamics of the relationship between America's most valuable industry and its center of political power.
Between the lines: Musk himself is widely admired in tech's corridors of power for Tesla's and SpaceX's innovations — but also widely disliked for his unfulfillable promises, erratic behavior and social media addiction.
- Now that Musk is suddenly on the outs with Trump, a lot of tech leaders are quietly crossing their fingers that they can get back to dealmaking and policy-setting without worrying about a key competitor whispering in the president's ear.
The big picture: Tech leaders see huge opportunities in Washington and government work right now.
- AI is exploding, defense tech is booming, and crypto firms are champing at the bit.
- Plenty of CEOs resented what they saw as the Biden administration's hostility to deals, dedication to strict regulation and aggressive stance on antitrust.
5. 💼 Women execs losing ground
It's a brutal time for women executives — and others who don't neatly fit the stereotypical ideal of a leader, Axios' Emily Peck writes.
- Why it matters: The zeal for diversity that defined the past decade has faded. Backlash from the White House has made firms even less willing to take risks on so-called "non-traditional" candidates — including women, people of color and LGBTQ+ people.
The big picture: For years, executive recruiters were asked to find diverse slates to fill the top spots inside U.S. companies — moving up the numbers, if only slightly, inside these firms.
- That's not happening anymore, says Lindsay Trout, a talent consultant at executive search firm Egon Zehnder, who finds candidates at the C-suite and board level for large companies.
Get Axios Markets for the full story.
6. 🥊 Trump Mad Libs
This headline from Friday's Financial Times crisply encapsulates the collision of business, tech, media and politics — the steady conflation of traditional power centers that is accelerating in the Trump era.
- The story reports: "The Trump family media company is seeking to launch a bitcoin exchange traded fund [ETF] in its latest push to capitalize on surging enthusiasm for digital currencies." (Gift link)
7. 🎤 Hamilton reunion

The original cast of "Hamilton" — including Lin-Manuel Miranda — marked the musical's 10th anniversary by performing a medley of its biggest songs last night at the 78th annual Tony Awards, at Radio City Music Hall in Manhattan.
- Video on X ... The winners.
8. 🎾 1 for the road: Instant tennis classic

Carlos Alcaraz came from behind to beat top-ranked Jannik Sinner for the French Open title in a 5-hour, 29-minute marathon match yesterday in Paris.
- Why it matters: The match — already being labeled one of the greatest ever played — solidified a new era in men's tennis: Alcaraz, 22, and Sinner, 23, have won the last six major titles, taking three each.
Go deeper ... Highlights.
📬 Thanks for reading! Please invite your friends to join AM.
Sign up for Axios AM