Axios AI+

April 30, 2026
Ina here, feeling like a real builder: I took a Codex for Journalists class yesterday and got an error message saying I was over my rate limit. Today's AI+ is 1,144 words, a 4.5-minute read.
1 big thing: Musk casts himself as AI's good guy
Elon Musk portrayed himself in court this week as a leading advocate for AI safety — in contrast to what he described as the profit-consumed OpenAI that he's suing.
Why it matters: Musk's self-portrait as a guardian of AI safety clashed with OpenAI's counterargument: that Musk was fine with a for-profit OpenAI when he thought he could control it.
- How the debate over Musk's motivations is resolved could be key to the outcome of the lawsuit the richest man in the world is waging against OpenAI.
The big picture: Under questioning from his own lawyer, Steven Molo, Musk argued that the only way to keep AI from "killing us all" was to keep it out of the hands of anyone trying to make money on it.
- He later acknowledged that his own AI company, xAI, is a for-profit.
- Musk was able to avoid elaborating since SpaceX recently acquired xAI and the rocket company is in an SEC quiet period ahead of a planned public offering.
Musk began by outlining his views about the risks of AI, repeating an oft-told story about how OpenAI wouldn't exist if Google co-founder Larry Page hadn't called Musk a "speciesist" — meaning that Musk cares more about the human species than a potentially sentient AI.
- He said he talked to "anyone and everyone" about AI safety. "It's a buzzkill," he remembers his brother telling him.
- The path to safety, he said, is for the people building artificial general intelligence (AGI) to be "unencumbered by having to create financial returns."
The other side: OpenAI lead counsel William Savitt drew a different picture in his cross-examination of Musk.
- Instead of attacking Musk's concerns about the dangers of AGI, Savitt made the case that Musk was at least as concerned with profiting from AGI as the team at OpenAI, if not more so.
- Through hours of questioning, Savitt implied that Musk's safety concerns seemed to sharpen whenever someone else had the wheel.
- Savitt also challenged Musk's picture of himself as "the paladin of safety and regulation."
Yes, but: What hasn't yet been mentioned in this week's trial is the propensity of Musk's Grok chatbot to post racist messages, create nonconsensual images of adults and generate explicit images of children.
- OpenAI and Microsoft might be waiting to bring up Grok's behavior or might be avoiding it since chatbot behavior is so legally murky.
- Savitt hinted at Grok's troubles, suggesting that the chatbot had been trained on racist and sexist content. To which Musk replied, "Just because you may read something that is racist or sexist doesn't mean you'll become racist or sexist."
Zoom in: Savitt addressed Musk's concern about OpenAI's dedication to safety by asking him what he knew about the company's safety protocols.
- Musk responded that because the company sought to make a profit, it couldn't be safe. When pushed, Musk seemed to morph into his internet troll persona.
- Asked if he knew anything about OpenAI's "safety card," Musk smiled and replied: "Safety card? Why would it be a card?"
- "Safety card" is an informal way to refer to a system card, which documents a model's capabilities, limitations and safety evaluations. xAI calls its equivalent "model cards."
What's next: Musk's cross-examination continues tomorrow in Oakland, California.
2. AI's endless game of thrones
The AI industry has entered an era of perpetual upheaval where market leaders are crowned — and dethroned — every few months.
- Today's hottest company could be eclipsed by summer and the laggard could revolutionize the world.
Why it matters: As AI changes everything, keeping up with who's dominant and who's falling behind is becoming an existential question for investors, big businesses and regular users trying to secure their own futures.
- The wrong call can mean spending millions of dollars on a model that could be outdated by the end of the quarter — or spending hours learning a tool that will soon be obsolete.
The big picture: OpenAI looked unstoppable through last fall thanks to its first-mover advantage with ChatGPT.
- Then Google became the AI lab to beat as its Gemini models outperformed OpenAI's, allowing Alphabet to take market share from its competitor's consumer-facing business and win over investors with a cash moat.
- By spring, Anthropic had taken total control of the AI narrative, overtaking OpenAI in enterprise revenue after its coding tool went viral.
- Last week, OpenAI released GPT-5.5, which quickly ranked among the top models on key benchmarks. The company's Codex coding model has rapidly closed the gap with Anthropic's Claude Code.
- This week, the Wall Street Journal reported that OpenAI missed its own internal revenue and user targets just months ago — a reminder of how quickly the leader can become the laggard, and how quickly the laggard can climb back to the top.
3. Exclusive: Citi moves into agentic AI
Citi is rolling out a new internal AI platform that lets employees create agents, tapping into top models within one secure system that can scale those agents across the firm.
Why it matters: The AI race is playing out on Wall Street as much as it is in Silicon Valley, and banks are racing to offer the best AI models to employees without compromising on safety.
Driving the news: Citi's new platform, called Arc, acts as a centralized "operating system" for AI agents, CTO David Griffiths tells Axios.
- It lays the groundwork for the bank's biggest push into agentic AI, or the use of multiple autonomous agents to orchestrate and complete a task together. Arc will go to developers first, with plans to extend it to the broader bank over time.
4. Startup JuliaHub raises $65M to rival Simulink
Former Snowflake CEO Bob Muglia, a longtime Microsoft server boss, is backing JuliaHub, a startup that sees a role for AI agents in designing complex products such as cars and airplanes.
Why it matters: JuliaHub is betting AI plus Julia, the open-source technical computing language, can challenge Simulink, MathWorks' decades-old tool for modeling and simulating complex systems.
5. Training data
- Investors are ready for CFOs to start explaining the return on their AI spending. (Axios)
- Families who say chatbots harmed their children urged Congress to pass strict safeguards. (Axios)
- Senators demand answers from tech companies where employees with ties to China could access cutting-edge U.S. AI systems. (Axios)
- Anthropic is weighing a new funding round that could value it above rival OpenAI, at upward of $900 billion. (Bloomberg)
- Google's Gemini can now generate a wider range of file types, including Microsoft Office. (Engadget)
6. + This
For her thesis, Alanna Okun created Loose Ends, a video game about knitting. She also knitted an amazing cover to go on the machine it ran on.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.