Axios AI+

A floating, translucent blue 3D render of the human brain.

November 08, 2023

Ina here, getting ready for Axios' first AI+ Summit in San Francisco this afternoon, where we'll hear from industry leaders across technology, entertainment and business on the opportunities and risks of the AI revolution — including Meta vice president of generative AI Ahmad Al-Dahle, filmmaker and author Justine Bateman, C3.ai CEO Tom Siebel and other experts.

Register to livestream the event here.

Today's AI+ is 1,288 words, a 5-minute read.

1 big thing: Experts don't trust tech CEOs on AI

Data: Axios-Generation Lab-Syracuse University; Chart: Axios Visuals

Dishonest, untrustworthy and disingenuous — that's how a majority of experts surveyed from leading universities view AI companies' CEOs and executives, Axios' Margaret Talev and Ryan Heath report.

What's happening: 56% of computer science professors at top U.S. research universities surveyed by Axios, Generation Lab and Syracuse University described the corporate leaders as "extremely disingenuous" or "somewhat disingenuous" in their calls for regulation of AI.

Why it matters: The latest Axios-Generation Lab-Syracuse University AI Experts Survey shows how deep the divide has grown between those who make and sell AI and those who study and advance it.

The big picture: Some critics of Big Tech have argued that leading AI companies like Google, Microsoft and Microsoft-funded OpenAI support regulation as a way to lock out upstart challengers who'd have a harder time meeting government requirements.

  • Our survey suggests that this perspective is shared by many computer science professors at top U.S. research universities.

Context: U.S. policymakers rely on help from tech companies and their leaders to shape the rules for protecting individuals' safety, freedoms and livelihoods in the AI era.

  • Top tech executives have been meeting in closed-door sessions with U.S. senators in an unusual push for their own regulation.

The intrigue: Survey respondents weighed in on several other provocative ideas.

  • 55% favor or lean toward the idea of the federal government creating a national AI stockpile of chips through the Defense Production Act to avert future shortages.
  • 85% said they believe AI can be at least somewhat effective in predicting criminal behavior — but only 9% said they believe it can be highly effective.
  • One in four say AI will become so advanced at medical diagnoses that it will generally outperform doctors.

By the numbers: Asked to prioritize just one dimension of AI regulation, respondents put "misinformation" at the top (34%), followed by "national security" (20%), while "job protection" (5%) and "elections" (4%) came last.

  • 62% said misinformation is the biggest challenge in maintaining the credibility and authenticity of news in an environment that includes AI-generated articles.
  • 95% assessed AI's current deepfake capability as "advanced" when it comes to video and audio content, with 27% saying it's "highly advanced, indistinguishable from real content" and 68% saying it's "moderately advanced, with some imperfections."

Yes, but: 72% of respondents were "extremely optimistic" or "somewhat optimistic" about "where we will land with AI in the end."

What they're saying: "You have the people that can look under the hood at what these companies are churning out into society at a historic scale, and that's the conclusion they've come out with — that they're worried about the intentions of the men running the machines," said Cyrus Beschloss, CEO of Generation Lab.

How it works: The survey includes responses from 216 professors of computer science at 67 of the top 100 U.S. programs.

More on our survey's methodology ... dive into the results.

2. Behind the Curtain: AI architects' greatest fear

Illustration of a curtain with a tassel in the shape of the Axios logo

Illustration: Sarah Grillo/Axios

Brace yourself: You will soon need to wonder if what you see — not just what you read — is real across every social media platform, Axios' Jim Vandehei and Mike Allen write in their "Behind the Curtain" column.

Why it matters: OpenAI and other creators of artificial intelligence technologies are close to releasing tools that make the easy, almost magical creation of fake videos ubiquitous.

One leading AI architect told us that in private tests, they can no longer distinguish fake from real — something they didn't expect would be possible so soon.

  • This technology will be available to everyone — including bad actors internationally — as soon as early 2024.
  • Making matters worse, this will hit just as the biggest social platforms have cut the staff who police fake content. Most have also weakened their policies for curbing misinformation.

The big picture: Just as the 2024 presidential race hits high gear, more people will have more tools to create more misinformation or fake content on more platforms — with less policing.

  • A former top national security official told us that Russia's Vladimir Putin sees these tools as an easy, low-cost, scalable way to help tear Americans apart.
  • U.S. intelligence shows Russia actively tried in 2020 to help re-elect then-President Trump. Top U.S. and European officials fear Putin will push for a 2024 win by Trump, who wants to curtail U.S. aid to Ukraine.

Yes, the White House and some congressional leaders want regulations to help distinguish real videos from fake ones. The top idea: mandating watermarks so it's clear which videos are AI-generated.

  • But researchers have tried that, and the tech doesn't work yet (see the toy sketch after this list for why such marks are so easy to break).
  • In any case, deciding which content is "AI-generated" is rapidly becoming impossible, as the tech industry rolls AI into every product used to create and edit media.
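
For a sense of why watermarks are brittle, here is a toy sketch in Python (our illustration, not any vendor's actual scheme): it hides a bit pattern in the least-significant bits of an image, then shows a simulated lossy re-encode erasing it. It assumes only the numpy package; real watermarking schemes are more robust than this, but they face the same cat-and-mouse problem.

```python
# Toy watermark demo (illustrative only): embed a secret bit pattern in the
# least-significant bits of an image, then simulate a lossy re-encode and
# check how much of the mark survives. Requires numpy.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)       # stand-in video frame
watermark = rng.integers(0, 2, size=image.shape, dtype=np.uint8)  # secret bits to embed

# Embed: overwrite each pixel's lowest bit with one watermark bit.
marked = (image & 0xFE) | watermark

def recovered_fraction(img: np.ndarray) -> float:
    """Fraction of watermark bits still readable from the lowest bits."""
    return float(np.mean((img & 1) == watermark))

print(f"right after embedding: {recovered_fraction(marked):.0%} recovered")

# "Re-encode": lossy compression quantizes pixel values, which scrambles
# exactly the low-order bits that carried the mark.
reencoded = ((marked // 4) * 4).astype(np.uint8)
print(f"after lossy re-encode: {recovered_fraction(reencoded):.0%} recovered")  # ~50%, i.e. chance
```

Chance-level recovery (about 50%) means the mark is effectively gone, which is the kind of fragility researchers keep running into.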

"Of course, it's a worry," said Reid Hoffman, co-creator of LinkedIn and forceful defender of AI.

Sam Altman, co-founder and CEO of OpenAI, told us: "This is an important near-term risk for the industry to address. We need a combination of responsible model deployment and public awareness."

Reality check: The best self-policing in the world won't stop the faucet of fake. The sludge will flow. Fast. Furiously.

  • It could get so bad that some AI architects told us they're pushing to speed up the release of powerful new versions so the public can deal with the consequences — and adapt — long before the election.

A senior White House official told us officials' biggest concern is the use of this technology and other AI capabilities to dupe voters, scam consumers on a massive scale and carry out cyberattacks.

3. Figma, with Adobe deal in limbo, adds AI smarts

A screenshot of new AI features in Figma's FigJam collaboration app

Image: Figma

Design software maker Figma on Tuesday announced new generative AI capabilities for FigJam, its tool for collaborative idea generation.

Why it matters: The rush to incorporate generative AI into software is in high gear, while Figma is in a state of limbo — awaiting regulatory approval for a deal to sell itself to Adobe for $20 billion.

"Obviously, we're very focused on making this deal happen, but we're not standing still and the team is running faster than ever," Figma CEO Dylan Field told Axios.

Details: Figma is using AI to help generate new ideas as well as to sort and summarize the suggestions within a FigJam document (a rough sketch of the pattern follows below).

  • The AI features are in beta and currently free for all customers.
  • Figma is using OpenAI's GPT-4 for now, but says that could change.
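
For the curious, here is a minimal sketch of that general pattern, not Figma's actual code: send a batch of brainstorm notes to GPT-4 and ask it to group and summarize them. It assumes the openai Python package (v1+), an OPENAI_API_KEY in the environment, and a hypothetical list of sticky-note strings.

```python
# Minimal sketch of "sort and summarize" over brainstorm notes with GPT-4.
# Not Figma's implementation; the sticky-note contents below are made up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sticky_notes = [
    "Add dark mode",
    "Onboarding is confusing for new users",
    "Dark theme please!",
    "Tutorial videos for first-time setup",
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Group these brainstorm notes into themes and summarize each theme in one sentence.",
        },
        {"role": "user", "content": "\n".join(f"- {note}" for note in sticky_notes)},
    ],
)

print(response.choices[0].message.content)
```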

Between the lines: Field says many companies are essentially putting "lipstick on a pig" when it comes to using new AI features.

  • "It's like oh, yeah, like same product, but like now with AI," Field said. "For us, we really try to take a holistic view."
  • Field said the AI features being added to FigJam, as well as those coming to other products, are the result of internal work and hackathons exploring how AI could be most useful in supporting the ways people already use Figma.

4. Training data

  • Meta announced it would require political advertisers to disclose any time they used images or videos that were created or altered using AI or digital tools. (Meta Blog)
  • Meanwhile, Microsoft announced measures to protect elections, including new tech that allows political campaigns to digitally sign and authenticate media to prevent the spread of misinformation. (The Microsoft Blog)
  • A former Facebook employee testified before Congress that Mark Zuckerberg and other execs ignored warnings about social media harms to teens for years. (CNN)
  • The Eric Schmidt-backed Special Competitive Studies Project and the Johns Hopkins Applied Physics Laboratory on Tuesday released a new framework for regulators to help understand possible harms from and benefits of AI. (Axios)

5. + This

Check out this coast-to-coast layup from freshman MiLaysia Fulwiley during South Carolina's women's hoops opener in Paris on Monday.

Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter.