Axios AI+

January 08, 2025
OK, I think I've accepted that it's 2025, but I'm definitely not ready for it. Today's AI+ is 1,314 words, a 5-minute read.
1 big thing: AI's new no-rules world
Tech's rightward lurch toward anything-goes rules for the online world comes at a formative moment for AI.
Why it matters: The first debates over generative AI after ChatGPT rocketed to fame two years ago focused on "guardrails" — rules to protect humanity from runaway superintelligence and everyday users from bias and privacy violations.
- Now, the norms for AI will emerge in a political and cultural environment that's hostile to regulation and disdainful of limits.
Catch up quick: Mark Zuckerberg announced yesterday that Meta is abandoning its fact-checking program and loosening its speech rules.
- Zuckerberg said the time was right for Facebook and Instagram to end "censorship."
- Following the model of Elon Musk's X and taking inspiration from what Zuckerberg called the "cultural tipping point" of Donald Trump's election to a second term, the CEO of the world's largest social networks made clear that he believes he is in tune with a new laissez-faire vibe.
- "We're going to catch less bad stuff," he said, "but we'll also reduce the number of innocent people's posts and accounts that we accidentally take down."
For AI firms, catching "bad stuff" isn't about monitoring user posts but about anticipating problematic user questions or prompts (like "Tell me how to make a bomb") and finding ways to avoid troubling AI answers (like ethnic slurs or libel about public figures).
- What AI creators will never have, as Axios' Ina Fried flagged last year, is the option of somehow creating a "values-free" AI.
- Every chatbot's conversations will display values that some user somewhere could find objectionable, and every firm will face challenges in how their AIs answer thorny questions about race, gender, religion, politics and more.
The big picture: Facebook and Instagram are mature social media platforms with billions of users, but AI is a whole new world that's still emerging.
- The early years of each Silicon Valley platform shift — the inception of the personal computer era, the internet age, or the smartphone boom — have always featured a brief burst of creativity before the ground cools.
AI is at that moment right now.
- The next 3-4 years will reveal winners and losers in the business, but they will also set social customs and workplace expectations around AI: How, where and when we use it; what's OK and what's unacceptable to ask AI to do; and what kinds of haywire responses demand fixes from the AI's owners and makers.
At this crossroads, social media's broad retreat from content moderation suggests where the tech industry's collective head is at.
- Those hoping to build AI with strong ethical safeguards, bias protections or safety limits should expect an uphill battle.
- The odds are good that if something can be done with AI, it will be done.
Between the lines: Companies like Meta don't bear legal responsibility in the U.S. for the things users say on their platforms, thanks to a liability protection law known as Section 230.
- But no court has ever ruled on the legal status of AI-produced speech under Section 230, and senators from both parties have proposed a bill clarifying that it's not protected.
- That would leave AI makers like Meta, OpenAI and Google potentially liable for the things their AI models and chatbots say.
What we're watching: At this point, the only force that could stop the acceleration of regulation-free AI would be widespread public revulsion and rejection in the wake of an AI-fueled disaster.
- But any Three Mile Island-style tech meltdown would come with awful collateral damage.
2. How Zuckerberg pivoted on content limits
Facebook's content-moderation retreat looks like part of a plan to win over Donald Trump as he takes power again. But the field Zuckerberg is abandoning is one he never wanted to play on in the first place.
State of play: The founders of social media giants like Facebook, Instagram, Twitter and TikTok didn't expect to end up in what the industry came to call the "content moderation" business — and what many critics, and now Zuckerberg himself, denounce as "censorship."
- Policing online speech costs a fortune to do right. It's impossible to make everyone happy. You're bound to make mistakes. And users' wishes keep changing.
- The whole effort is a distraction from what's always been Facebook/Meta's top priority — boosting engagement to sell more ads.
Catch up quick: After taking blame for spreading misinformation during the 2016 election and violating users' privacy during the Cambridge Analytica scandal, Facebook was under enormous pressure to clean up its act, and the company made big investments in expanding its moderation efforts.
- In 2016, Facebook also started a program using third-party fact-checking organizations from a variety of political perspectives to help it identify and limit the spread of potentially dangerous misinformation.
The fact-checking program has drawn fire throughout its existence.
- The kinds of topics it confronted — controversies over climate science, COVID-19 and vaccines, charges of election fraud — are often both matters of fact or science and flashpoints for partisan rage.
Between the lines: Facebook tried to solve some of its content moderation headaches by setting up the independent Oversight Board and handing it hundreds of millions of dollars beginning in 2019 to build a kind of Supreme Court for user complaints.
- But Meta's announcements yesterday didn't even mention the board.
Zoom out: Zuckerberg calls Meta's new approach a "back-to-our-roots" embrace of free expression. But there's never been any medium where absolute free speech reigned.
- Platform owners are legally obligated to obey the laws of the countries where they operate.
- In the U.S. that means dealing with laws governing what Zuckerberg describes as "legitimately bad stuff" like "drugs, terrorism, child exploitation."
What we're watching: When Elon Musk rewrote Twitter's old content rules for X, the platform's never-decorous conversations deteriorated further.
- We don't yet know how Zuckerberg's version of "more free speech" will play out, but if Meta's platforms get nastier and uglier, too, advertisers could be spooked — and users who aren't on the MAGA side of the fence could flee.
Our thought bubble: Decades of human experience online shows that running any kind of community platform is like gardening — if you let the weeds go wild, the flowers will choke.
- This isn't Mill's marketplace of ideas; it's a world where bad speech drives out good.
3. Explosion suspect used ChatGPT to plan blast
The suspect in the Tesla Cybertruck blast in Las Vegas on New Year's Day used AI to plan the explosion, authorities said yesterday.
The big picture: Matthew Alan Livelsberger prompted ChatGPT to get information on how to carry out his plot, including how many explosives he would need and what pistol would set them off, the Las Vegas Metropolitan Police Department said during a news conference.
- Authorities did not share what responses the technology generated.
- OpenAI spokesperson Liz Bourgeois said in an email statement that the company is "committed to seeing AI tools used responsibly" and that its "models are designed to refuse harmful instructions."
- Bourgeois added, "In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities."
Our thought bubble: Odds are high that the results the suspect received from ChatGPT are similar to what he could have found on popular search engines, forums and social media sites.
4. Training data
- OpenAI rival Anthropic is raising funds at a $60 billion valuation, per sources. The company was valued at $18 billion last year. (Wall Street Journal)
- Posters are using AI to transform real-life gore videos into cartoon form to evade detection and banning on platforms including Instagram, TikTok and YouTube. (404 Media)
- Frothy quantum computing stocks took a dive after comments from Nvidia's Jensen Huang suggested "useful" quantum computers are 20 years off. (Axios)
5. + This
This snowplow map from the city of Wichita allows you to avoid all that mess on social media and instead keep tabs on Plowabunga, Wolfgang Amadeus Snowzart, Snowba Fett, Aaron Brrrr and the rest of a fabulously named fleet of plows.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing it.