Axios AI+

July 05, 2024
Whether you're working today or enjoying an extended holiday weekend, we hope you had a festive — and safe — Fourth of July.
Today's AI+ is 825 words, a 3-minute read.
1 big thing: Fear of AI has a cost
Miriam Vogel — president and CEO of EqualAI — believes that the only thing we have to fear about AI is the fear of AI itself.
Why it matters: Despite enthusiasm among investors, tech firms and some early adopters, the vast majority of people are still afraid to use AI, Vogel tells Axios in our latest Human Intelligence interview.
- That means the technology could end up "only benefiting a small, homogenous group."
Catch up quick: Under President Obama, Vogel served as senior policy adviser in the White House and associate deputy attorney general at the Justice Department.
- She's now the chair of the National AI Advisory Committee in addition to her role at EqualAI, an organization dedicated to responsible and inclusive development of artificial intelligence.
The big picture: Vogel says EqualAI has always been "AI net positive." When EqualAI launched six years ago, the mission was to warn AI enthusiasts of the technology's potential liabilities and hazards.
- Now she says she's more focused on stemming growing fears around generative AI, while still helping people understand the risks.
Between the lines: When she talks to average people about AI, Vogel says discussions are often permeated with fear:
- "I'm going to be replaced."
- "I won't understand it."
- "My kids won't have a place in this world."
- "Humans will not survive."
The worst thing about these fears, Vogel says, is that they stop people from engaging with or even using AI.
- Vogel says that when she asks friends and family if they've tried ChatGPT, most of them say no — and if they have tried it, most of them have only used it once.
- The more people avoid using AI, the more likely these fears are to become self-fulfilling prophecies.
- Vogel says her goal and the goal of EqualAI is to "get more people to engage with eyes open" to make sure AI's benefits reach a wide population.
Case in point: AI literacy programs are essential for getting more people to engage with the technology, Vogel says.
- "It's too often that I'll ask an audience where they're using AI today and they have no idea that they've used it 15 times before they walked in the room."
- AI literacy means "critical thinking," she says: "It's not just computer science. It's understanding false images and misinformation as well."
- Such programs are tough for the federal government to promote since states control American education, schools aren't versed in the subject and the information is all coming from the tech industry, Vogel says — but Washington can establish standards to make sure "our kids are AI-ready," and not doing so is "malpractice."
Yes, but: Many people still don't regularly use generative AI because they've yet to find a good reason to do so.
- Meanwhile, the companies building AI are regularly making mistakes or shipping faulty products that give credence to people's fears.
The bottom line: While fears of being replaced or left behind are real, Vogel insists that we're not powerless against them.
- "We have this moment to make sure that more communities are participating and benefiting from AI — and if we do, our AI will be better."
- "We've been through inflection points before," she says, and each time, "humans have prevailed."
2. Google finds AI is speeding up its carbon emissions
Google is bullish about AI's power to fight global warming, but it's also candid about the energy-thirsty tech driving up the company's own emissions for now.
Why it matters: This climate yin-yang is on display in the tech giant's latest environmental report.
- Google's corporate emissions rose another 13% last year and are up 48% compared to their 2019 baseline.
- That's partly because data centers serving AI and other applications are using more power.
State of play: Last year's CO2 growth reflects the "challenge of reducing emissions while compute intensity increases and we grow our technical infrastructure investment to support this AI transition," the report states.
- But it also highlights ways the company's moving to make AI infrastructure far more efficient.
- And outside its own operations, the report touts Google's AI products that cut emissions, such as tools that cities use to improve traffic.
What's next: Google faces a tough climb to reach its 2030 net-zero goal.
3. Training data
- Hackers breached OpenAI's internal messaging systems last year, per sources. The company chose to tell employees and the board, but not the public. (The New York Times)
- Judy Garland, James Dean and other deceased actors will voice audiobooks through AI-generated voice-overs from ElevenLabs, under agreements with their estates. (CNN)
- Meta's Threads is a year old and boasts more than 175 million monthly active users, but still lacks the scale and influence of X, formerly Twitter. (Axios)
- Cities, states and school districts are passing sweeping bans on cellphones in schools, aiming to get kids to pay attention during class and socialize with their peers. (Axios)
4. + This
The AI smart home device in the trailer for the upcoming thriller "Afraid" is giving some real "I'm sorry, Dave, I'm afraid I can't do that" energy.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.