Axios AI+

June 03, 2024
I'm headed to New York today for Axios' AI+ Summit on Wednesday. You can join me there via livestream — register here. Also, congrats to Harvey on finishing elementary school!
Today's AI+ is 1,209 words, a 4.5-minute read.
1 big thing: AI safety's partisan battlefield
Making AI safe, once a consensus goal for the industry, has become an ideological battleground.
Why it matters: Like "election integrity" in politics, everyone says they support "AI safety" — but now the term means something different depending on who's saying it.
Driving the news: The noisy departure of the head of OpenAI's "superalignment" team, charged with limiting any harm from advanced AI, reignited a long-running Silicon Valley debate on AI safety.
- Critics say the industry's push to popularize AI is eclipsing its promises to develop the technology responsibly.
- OpenAI CEO Sam Altman has long argued, and now most AI makers agree, that the best way to surface and defuse AI's many potential misuses is to put it into the general public's hands.
Zoom out: Safe AI has multiple meanings that cover a range of dangers.
- No one wants AI going off on its own and plotting to wipe out humankind.
- Few of us want AI spreading harmful information or misinformation — like accurate instructions for making bioweapons or inaccurate labels for toxic mushrooms.
- Most of us don't want AI discriminating against people based on traits like their skin color or their gender.
- Most of us would like AI to provide a fact-based record of historical and current events.
The phrase "AI safety" first came into use a decade ago with the rise of concern among researchers about AI's "existential risks." Many feared an advanced AI would develop its own agenda hostile to humanity (like "maximize paper clip output"), become deceptive over time and end up destroying civilization.
- That was something to maybe try to avoid — even if the doomsday scenarios were vague and far-fetched. So the original AI safety agenda aimed at avoiding any kind of paper clip apocalypse.
As AI began moving from the lab to our laptops, a different sort of risk emerged: Ethics specialists and social researchers sounded alarms about the prevalence of bias in AI algorithms.
- With AI going to work in law enforcement, credit-risk management and employment screening, your loan, job or even your freedom could be imperiled by AI programs that misread your skin color or gender — or penalize you for them.
The rise of ChatGPT and generative AI in 2022 brought a new kind of safety risk to the fore.
- Suddenly, AI trained on essentially the entire internet was moving into our lives to answer our questions and invent pictures and stories.
- The internet is full of both wonders and horrors. ChatGPT and its competitors reflected both.
- If you wanted to stop your AI from telling lies about QAnon, Barack Obama's birthplace or COVID-19 vaccines' safety, you had to do something.
Enter "guardrails." Retraining the foundation models that drive the AI revolution so they're grounded in fact would take many months and many millions of dollars.
- Silicon Valley firms racing to deploy and profit from genAI weren't willing to do that. So they added patchy fixes to combat bias, lies and hate speech.
- The unpredictable, "black box" nature of genAI meant that these guardrails would only be partially effective.
Case in point: You might want to make sure your image generator didn't only portray professionals with white skin.
- But if you turned up the knobs on your guardrails too high, you might end up with an all-Black portrait of the U.S.'s founding fathers.
To the right, such overzealous guardrails became proof that the AI created by tech giants and leaders like OpenAI and Google had become "politically correct" or "woke" and could not be trusted.
- Elon Musk has led the effort to rebrand AI safety to mean removing the guardrails that restrict AI speech in order to prevent antisemitism, racism and other offenses.
- Musk and his allies see such efforts as symptoms of a "woke mind virus" that seeks to censor the truth.
AI "should not be taught to lie," Musk said last month in a talk at the Milken Institute. "It should not be taught to say things that are not true. Even if those things are politically incorrect."
- Musk's AI project, xAI, is following in the tracks of his effort to reshape Twitter, now X, as a "free speech zone" that's more tolerant of fringe and extremist views and less concerned about avoiding offense or harm to users and society.
- If you believe that censorship is a greater danger than hate speech, you can call such an approach a form of "safety." (Musk has not hesitated to limit the speech of users on X when they criticize him or his companies.)
Our thought bubble: The U.S. public is sharply divided on so many issues of fact today — from the inflation rate to the outcome of the 2020 election — that expecting AI to determine or report "the truth" seems hopelessly naive.
What's next: The struggle over AI safety will play out around the globe, as governments in China, India and other nations adapt the technology to nationalist or authoritarian agendas using the rhetoric of risk reduction.
Go deeper: There's no such thing as "values-free" AI
2. Exclusive: AI isn't a daily habit yet for teens
Young Americans are quickly embracing generative AI as a tool, but few have made it a part of their daily lives, according to new data shared exclusively with Axios by Common Sense Media, Hopelab and the Harvard Graduate School of Education's Center for Digital Thriving.
Why it matters: Since the rise of the web 30 years ago, young users have typically adopted and shaped each new dominant tech platform.
By the numbers: The survey of 1,274 U.S.-based teens and young adults aged 14-22, conducted in October and November 2023, found that only 4% of respondents said they use AI tools daily or almost daily.
- 41% said they've never used AI, and another 8% said they don't know what AI tools are.
- The two most common uses for AI, the survey found, were getting information (53%) and brainstorming (51%).
- 40% of white respondents said they used the technology for help with schoolwork. That number was 62% for Black respondents and 48% for Latinos.
The big picture: A 41% plurality said they expect AI to have both positive and negative impacts over the next 10 years.
- But LGBTQ+ respondents were significantly more likely to expect mostly negative impacts (28%) than cisgender/straight respondents (17%).
The survey also asked an open-ended question about respondents' thoughts on AI.
Young people want adults to know that "the world is changing," "we are the future," and "AI is the future." Some are concerned, saying, "AI is very creepy," and "AI concerns me," while others are optimistic, sharing sentiments like, "I really cannot wait to see how it evolves in the future."
—Teen and Young Adult Perspectives on Generative AI
Go deeper: Read the full report (with full methodology)
3. Training data
- Nvidia unveiled a new AI chip architecture called "Rubin" and announced plans to upgrade its industry-leading AI chips annually. It also showed off a demo of a new game-assistant chatbot. (CNBC, The Verge)
- Donald Trump joined TikTok, which he tried to ban in 2020. (Axios)
4. + This
For Melinda French Gates, that one shot when the baby looks away was the perfect photo to share on Instagram — if, that is, she wanted to protect a famous grandchild from a future of online facial recognition or being used as training data for Meta's AI.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.
Sign up for Axios AI+