Dec 16, 2023 - Business

Why Silicon Valley doesn't agree on AI's future


Illustration: Annelise Capossela/Axios

The divide among venture capitalists over the development of artificial intelligence is becoming increasingly crystallized.

Why it matters: "It's the new internet," VCs say, regardless of which camp they fall into.

Driving the news: On Thursday, venture firm Andreessen Horowitz announced it plans to start donating to members of Congress who are aligned with its pro-technology aims.

  • "We are non-partisan, one-issue voters: If a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them," writes co-founder Ben Horowitz.

The big picture: A number of prominent tech investors and entrepreneurs have trumpeted the need to ensure that AI is developed with societal guardrails in mind. But others have loudly pushed back.

  • And especially on the idea that we need new, technology-specific regulations.

Zoom in: The key disagreement is around how constrained AI's development should be.

  • Existential risk: Whether AI could one day become so powerful that it poses a threat to humanity, or whether that's just an exaggerated science-fiction idea.
  • Safety: Some view safety as paramount, both to ensure that AI systems aren't built in ways that entrench societal biases and to protect against the above existential risk. Others tend to see safety debates as mostly censorship, and a drag on tech development.
  • Regulation: The disagreement is largely around whether we need AI-centric regulations that will control and restrict the technology itself. Some folks see this as an extension of censorship and stifling of innovation.

Between the lines: Nothing illustrated the divide more than the recent dramatic firing and re-hiring of Sam Altman as CEO of OpenAI.

  • The episode stemmed from mounting tension between Altman and some of the other board members over prioritizing business expansion, and culminated in a redesigned board that now includes an observer seat for Microsoft, OpenAI's biggest investor.
  • Several of OpenAI's investors were at the center of the push to get him re-hired.
  • "To be clear, Khosla Ventures wants [Altman] back... but will back him in whatever he does next," investor Vinod Khosla posted on X amid the tumult.

Meanwhile: Responsible Innovation Labs, a self-regulation effort established by venture firms like General Catalyst, announced AI guidelines developed with feedback from the U.S. Commerce Department, drawing criticism and pushback from the pro-development crowd.

And that brings us to the culture war: With Effective Altruism becoming the poster child for extreme concern over unrestrained AI development, a counter-movement has emerged: e/acc, or "effective accelerationism."

  • Its adherents accuse the other camp of being Luddites who want to imperil humanity by denying it the benefits of AI and advanced technology more broadly.
  • They're hosting private meetups, communicating via internet memes, and vocally supporting techno-optimism.

Yes, but: They're all venture capitalists.

  • Their incentives are more aligned with one another than any cultural or investment feud may make it seem.
  • And signaling their views and values is always, in part, about marketing their firms to prospective portfolio companies and allies.

Editor's note: The story has been corrected to reflect that the Commerce Department worked with Responsible Innovation Labs on AI guidelines but is not part of RIL.
