Expert survey: Don't trust tech CEOs on AI
Dishonest, untrustworthy and disingenuous — that's how a majority of experts surveyed from leading universities view AI companies' CEOs and executives.
What's happening: 56% of computer science professors at top U.S. research universities surveyed by Axios, Generation Lab, and Syracuse University described the corporate leaders as "extremely disingenuous" or "somewhat disingenuous" in their calls for regulation of AI.
Why it matters: The latest Axios-Generation Lab-Syracuse University AI Experts Survey shows how deep the divide has grown between those who make and sell AI and those who study and advance it.
The big picture: Some critics of Big Tech have argued that leading AI companies like Google, Microsoft and Microsoft-funded OpenAI support regulation as a way to lock out upstart challengers who'd have a harder time meeting government requirements.
- Our survey suggests that many computer science professors at top U.S. research universities share this view.
Context: U.S. policymakers are relying on help from tech companies and their leaders to shape the rules for protecting individuals' safety, freedoms and livelihoods in the AI era.
- Top tech executives have been meeting in closed-door sessions with U.S. senators in an unusual push for their own regulation.
- But there's a lack of consensus on transparency, concerns about financial self-interest and reputation issues around controversial figures such as Elon Musk and Mark Zuckerberg.
The intrigue: Survey respondents weighed in on several other provocative ideas.
- 55% favor or lean toward the idea of the federal government creating a national AI stockpile of computing chips through the Defense Production Act to avert future chip shortages.
- 85% said they believe AI can be at least somewhat effective in predicting criminal behavior, but only 9% said it can be highly effective.
- One in four say AI will become so advanced at medical diagnosis that it will generally outperform doctors.
By the numbers: Asked to prioritize just one dimension of AI regulation, respondents ranked "misinformation" as their top concern (34%), followed by "national security" (20%), while "job protection" (5%) and "elections" (4%) came last.
- 62% said misinformation is the biggest challenge to maintaining the credibility and authenticity of news in an environment that includes AI-generated articles.
- 95% assessed AI's current deepfake capability as "advanced" when it comes to video and audio content, with 27% saying "highly advanced, indistinguishable from real content" and 68% saying "moderately advanced, with some imperfections."
Yes, but: 72% of respondents were "extremely optimistic" or "somewhat optimistic" about "where we will land with AI in the end."
What they're saying: "You have the people that can look under the hood at what these companies are churning out into society at a historic scale, and that's the conclusion they've come out with — that they're worried about the intentions of the men running the machines," said Cyrus Beschloss, CEO of Generation Lab. "We should take that super-seriously."
How it works: The survey includes responses from 216 professors of computer science at 67 of the top 100 U.S. programs, as defined by SCImago Journal rankings.
- A survey of experts does not necessarily reflect the views of the population at large. It is different from a poll, which looks at a random sample of 1,000 or more U.S. adults and carries an estimated margin of error.
- The computer science professors surveyed are not a representative sample of the wider population. While experts' views may sometimes track with the general population's, they may diverge when shaped more by an understanding of technology than by expertise in politics, media or other realms.
- Experts from domains beyond computer science were not included in this survey, but they bring important perspectives to debates over AI as well.
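The "estimated margin of error" mentioned above for a conventional poll can be sketched with the standard formula for a simple random sample. This is a general illustration, not part of this survey's methodology; the 1.96 z-score (95% confidence) and the conservative p = 0.5 assumption are our choices for the example.

```python
import math

def margin_of_error(n: int, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample of size n,
    using the most conservative assumption, p = 0.5."""
    return z * math.sqrt(0.5 * 0.5 / n)

# For a typical poll of 1,000 adults:
print(round(margin_of_error(1000) * 100, 1))  # roughly 3.1 percentage points
```

This is why polls of 1,000 or more respondents commonly cite a margin of error around plus or minus 3 points; an expert survey like this one, drawn from a non-random pool, does not carry a comparable figure.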
Methodology: This Axios-Generation Lab-Syracuse University AI Experts Survey was conducted Oct. 25-30, 2023, with an online survey distributed by email.
- A listing of the participating institutions and additional details about the methodology may be found at the survey site.