Where Silicon Valley antitrust chatter is heading
Most regulatory discussion around artificial intelligence so far has focused on risk, privacy and the potential for catastrophe, but a more traditional tech policy debate is slowly bubbling up: competition.
Driving the news: At a lunch roundtable discussion Ashley and Maria hosted in San Francisco last week during the Axios AI+ Summit, apprehension was palpable that the biggest, best-resourced AI firms could wield unfair power over both the market and its regulation.
Why it matters: It took years for U.S. regulators to try to apply traditional antitrust law to tech platform dominance, with federal cases ongoing and proposals from Congress struggling to pass.
- For AI, there's a growing sense that what works for Big Tech isn't going to work for the rest of the industry.
- Emerging schools of thought, from Silicon Valley to Washington, hold that anti-monopoly ideals will be key to successful and safe AI development.
Context: "For all of the interest in regulating AI, there has been comparatively little discussion of AI's industrial organization and market structure," Ganesh Sitaraman and Tejas Narechania of the Vanderbilt Policy Accelerator wrote in a recent paper.
- "This is surprising because critical layers in the AI technology stack are already highly concentrated."
- The authors argue throughout the paper that policymakers should use "antimonopoly tools" to regulate harms from concentration in AI.
- Such tools, per the authors, include public utility regulation, structural separation of companies, nondiscrimination and interoperability requirements, changes to procurement, and public access to data and cloud resources.
What they're saying: Some venture capitalists have already taken issue with President Biden's executive order on AI, saying it could restrict open-source AI by imposing on some smaller companies the same requirements as bigger companies with far more resources.
- "The EO overlooks our primary concern: ensuring AI remains open and competitive," reads a letter from Andreessen Horowitz, Y Combinator, Meta, Databricks, Shopify, Hugging Face and others.
- "Reporting regimes like the one set forth in the EO threaten to undermine the next generation of AI innovation."
Of note: Former Biden administration antitrust staffer Tim Wu wrote in the New York Times last week that certain AI requirements may entrench incumbents.
- "The strictest regulation of A.I. would result in having only companies like Google, Microsoft, Apple and their closest partners competing in this area. It may not be a coincidence that those companies and their partners have been the strongest advocates of A.I. regulation."
- FTC chair Lina Khan has been vocal about big firms' dominance in the AI space, recently stopping by Y Combinator to discuss it and suggesting open-source software could be key to competition.
The intrigue: That open vs. closed software debate is increasingly emerging as a key fault line around AI and competition, exemplified by Meta's choice to release its Llama 2 model as partially open source and available for public use, while much of OpenAI's code remains proprietary.
- Access to compute power also creates haves and have-nots in the AI startup landscape: only the biggest, best-funded companies can afford the resources it takes to train powerful AI models.
The bottom line: Concerns about competition and AI are the next chapter in an ongoing debate over the role the biggest firms should play in releasing, controlling and monetizing the newest wave of technology, and how people use and abuse it.
- Tactics that could lead to regulatory capture — "self-preferencing," using size and resources to extend into new markets, keeping algorithms secret and heavy lobbying — are the same, even if the tech itself is different.