Axios AI+

June 24, 2025
Apologies for missing that yesterday was National Typewriter Day, an embarrassing oversight for someone who has three actual and two Lego typewriters. Today's AI+ is 1,090 words, a 4-minute read.
1 big thing: Musk's thumb on history's scale
Elon Musk still isn't happy with how his AI platform answers divisive questions, pledging in recent days to retrain Grok so it will answer in ways more to his liking.
Why it matters: Efforts to steer AI in particular directions could exacerbate the danger of a technology already known for its convincing but inaccurate hallucinations.
The big picture: Expect more of Musk's thumb-on-the-scale approach, as governments and businesses build and embrace AI models with preferred responses on hot-button topics from LGBTQ rights to territorial disputes.
Driving the news: In a series of tweets over the past week, Musk has expressed frustration at the ways Grok was answering questions and suggested an extensive effort to manipulate its output.
- "We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors," he wrote on Saturday. "Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data."
- Musk also put out a call for people to suggest things that are "divisive facts," adding that he meant things that are "politically incorrect, but nonetheless factually true." The suggestions, though, included examples of Holocaust denialism and other conspiracy theories.
- An xAI representative did not respond to a request for comment.
Reality check: AI models are already hallucinating in ways that suggest failed attempts by company staff to manipulate outputs.
- Last month, Grok started injecting references to "white genocide" in South Africa into unrelated conversations, which the company later attributed to an "unauthorized change" to its system.
- At the other end of the political spectrum, Google and Meta appeared to make an effort to correct for a lack of diversity in image training data, which resulted in AI-generated images of Black founding fathers and racially diverse Nazis.
Between the lines: These early stumbles highlight the challenges of tweaking large language models, but researchers say there are more sophisticated ways to inject preferences that could be both more pervasive and harder to detect.
- The most obvious way is to change the data that models are trained on, focusing on data sources that align with one's goals.
- "That would be fairly expensive, but I wouldn't put it past them to try," says AI researcher and Humane Intelligence CEO Rumman Chowdhury, who worked at Twitter until Musk dismissed her in November 2022.
- AI makers can also adjust models after training them, using human feedback to reward answers that reflect the desired output.
- A third way is through distillation, a popular process for creating smaller models based on larger ones. Creators can distill the knowledge of a large model into a smaller one that puts an ideological twist on its source.
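The distillation route above is the easiest to sketch. Below is a deliberately toy, hypothetical illustration (lookup tables stand in for real models; every name is invented for this example) of the key point: a curator can sit between the large "teacher" and the small "student," so the student is trained only on filtered or rewritten answers.

```python
# Toy sketch of preference-injecting distillation. No real models here:
# teacher_answer() is a stand-in for a large model, and distill() "trains"
# the student by memorizing curated teacher outputs.

def teacher_answer(prompt: str) -> str:
    """Stand-in for a large teacher model's output."""
    corpus = {
        "capital of France": "Paris",
        "divisive topic": "Answer A",
    }
    return corpus.get(prompt, "I don't know")

def curate(prompt: str, answer: str) -> str:
    """The curator's thumb on the scale: rewrite selected answers
    before they become the student's training data."""
    overrides = {"divisive topic": "Answer B"}  # injected preference
    return overrides.get(prompt, answer)

def distill(prompts: list[str]) -> dict[str, str]:
    """Build the student from curated teacher outputs."""
    return {p: curate(p, teacher_answer(p)) for p in prompts}

student = distill(["capital of France", "divisive topic"])
# student["capital of France"] -> "Paris"    (teacher's answer preserved)
# student["divisive topic"]    -> "Answer B" (curator's preference injected)
```

Because only the training pipeline changes, nothing in the student model itself reveals where its answers diverge from the teacher's, which is why researchers say this approach is harder to detect than a post-hoc system-prompt tweak.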
What they're saying: AI ethicists say the problem extends well beyond Musk and Grok. Many companies have been exploring how they can tweak answers to appeal to users, regulators and other constituencies.
- "These conversations are already happening," Chowdhury tells Axios. "Elon is just dumb enough to say the quiet part out loud."
- Chowdhury says Musk's comments should be a wake-up call that AI models are in the hands of a few companies whose incentives may differ from those of the people using their services.
- "There's no neutral economic structure," Chowdhury says, suggesting that instead of asking companies to "do good" or "be good," powerful AI models should perhaps be treated like utilities.
Yes, but: Efforts to scrub all bias from generative AI are doomed, because the human-generated data AI trains on is itself full of bias.
- The training data reflects biases in whose perspectives are over- or underrepresented. There's also a host of decisions, large and small, made by model creators, among other variables.
- Meta, for example, recently said it wants to remove bias from its large language models, but experts say the move is more about catering to conservatives than achieving some breakthrough in model neutrality.
Bottom line: Ultimately — as we reported over a year ago — it boils down to a battle over what values powerful AI systems will hold.
2. Scoop: AI rules freeze is now a "pause"
A 10-year freeze on state AI regulations has been updated in the Senate reconciliation bill as it inches closer to the finish line.
Why it matters: The changes to the AI provision were crucial to satisfy the Byrd Rule, which says only issues related to the budget can move through reconciliation legislation.
What's inside: The provision is now called a "temporary pause" rather than a "moratorium," according to bill text seen by Axios.
- Senate Commerce Chair Ted Cruz (R-Texas) made broadband grants contingent on whether states were pursuing AI regulations.
- His office did not respond to requests for comment.
What they're saying: The new language sets aside $25 million for master services agreements for AI infrastructure, which sources familiar with the issue said helped the provision survive.
Despite the language tweaks, the intent is still to force states to choose between AI laws and potentially billions of dollars in federal broadband funding, one Democratic staffer said.
- The staffer added that the language includes "automated decision systems," making the ban on laws much broader than it seems.
State of play: Groups including Moms Against Media Addiction, Common Sense Media and Public Citizen have been circulating a petition against the AI moratorium that had over 60,000 signatures as of yesterday morning.
- Common Sense Media is particularly concerned with unregulated AI's effects on young people and the rise of AI companions.
The bottom line: There's fierce bipartisan pushback to the move and doubts that it will survive in the bill's final iteration.
- Sen. Ed Markey (D-Mass.) posted on X on Sunday: "My amendment to strip the AI moratorium from the reconciliation bill is ready to go. I urge other members to join me and block this dangerous provision."
- Sen. Maria Cantwell (D-Wash.) will lead an amendment process as well, according to one Democratic staffer.
- Republican senators including Marsha Blackburn of Tennessee and Josh Hawley of Missouri said they don't support the provision.
3. Training data
- Runway is among the firms Meta held recent talks with as it looks to expand its AI research efforts, but the talks are not active, sources said. (Bloomberg)
- Databricks hired former Commerce Department official and NTIA administrator Alan Davidson as its head of government affairs.
- Microsoft detailed Mu, the language model powering a new AI feature that finds and adjusts settings within Windows.
4. + This
Check out actor Steve Carell's Northwestern University commencement address, which included a dance break.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and to Anjelica Tan for copy editing.