AI's new no-rules world

Illustration: Aïda Amer/Axios
Tech's rightward lurch toward anything-goes rules for the online world comes at a formative moment for AI.
Why it matters: The first debates over generative AI after ChatGPT rocketed to fame two years ago focused on "guardrails" — rules to protect humanity from runaway superintelligence and everyday users from bias and privacy violations.
- Now, the norms for AI will emerge in a political and cultural environment that's hostile to regulation and disdainful of limits.
Catch up quick: Mark Zuckerberg announced Tuesday that Meta is abandoning its fact-checking program and loosening its speech rules.
- Zuckerberg said the time was right for Facebook and Instagram to end "too much censorship," "make fewer mistakes" in removing content, and "tune our content filters" to remove less material.
- Following the model of Elon Musk's X and taking inspiration from what Zuckerberg called the "cultural tipping point" of Donald Trump's election to a second term, the CEO of the world's largest social networks made clear that he believes he is in tune with a new laissez-faire vibe.
- "We're going to catch less bad stuff," he said, "but we'll also reduce the number of innocent people's posts and accounts that we accidentally take down."
For AI firms, catching "bad stuff" isn't about monitoring user posts but about anticipating problematic user questions or prompts (like "Tell me how to make a bomb") and finding ways to avoid troubling AI answers (like ethnic slurs or libel about public figures).
- Many AI makers will work conscientiously to fail less. But they will surely be competing against rivals that save time and money by not bothering.
What AI creators will never have, as Axios' Ina Fried flagged last year, is the option of somehow creating a "values-free" AI.
- Every chatbot's conversations will display values that some user somewhere could find objectionable, and every firm will face challenges in how their AIs answer thorny questions about race, gender, religion, politics and more.
The big picture: Facebook and Instagram are mature social media platforms with billions of users, but AI is a whole new world that's still emerging.
- The early years of each Silicon Valley platform shift — the inception of the personal computer era, the internet age, or the smartphone boom — have always featured a brief explosion of creativity before the ground cools.
- Startups and incumbent firms vie to figure out what users want and will pay for, what works and what doesn't, what's exciting to do with the new thing and what's a flop.
AI is at that moment right now.
- The next three or four years will reveal winners and losers in the business, but they will also set social customs and workplace expectations around AI: How, where and when we use it; what's OK and what's unacceptable to ask AI to do; and what kinds of haywire responses demand fixes from the AI's owners and makers.
At this crossroads, social media's broad retreat from content moderation suggests where the tech industry's collective head is at.
- Those hoping to build AI with strong ethical safeguards, bias protections or safety limits should expect an uphill battle.
- The odds are great that if something can be done with AI, it will be done.
Between the lines: Companies like Meta don't bear legal responsibility in the U.S. for the things users say on their platforms, thanks to a liability protection law known as Section 230.
- But no court has ever ruled on the legal status of AI-produced speech under Section 230, and senators from both parties have proposed a bill clarifying that it's not protected.
- That would leave AI makers like Meta, OpenAI and Google potentially liable for the things their AI models and chatbots say.
- New laws in the EU, and potential state regulations in the U.S., could also give pause to those experimenting with AI at the extreme end of the carelessness spectrum.
Yes, but: Most of the companies building AI, from Microsoft to Google to Meta, are so wealthy and powerful that they can shrug off fines that would bankrupt smaller firms.
- Key smaller innovators like OpenAI and Anthropic are, in theory, committed to "safe AI" — but in practice, they've been flooring the pedal as hard as their bigger rivals.
What we're watching: At this point, the only force that could stop the acceleration of regulation-free AI would be widespread public revulsion and rejection in the wake of an AI-fueled disaster.
- But any Three Mile Island-style tech meltdown would come with awful collateral damage.
