Axios AI+

October 07, 2024
Sorry, it's too hot in San Francisco to write an intro.
Today's AI+ is 1,100 words, a 4-minute read.
1 big thing: AI is only one piece of the power puzzle
AI's huge power appetite is well known, and data center growth is driving up power needs — but nailing down how much AI is driving the demand for new data centers turns out to be tricky.
Why it matters: It's a key question as policymakers and other stakeholders weigh AI's benefits against its carbon footprint.
The big picture: Data centers are one reason U.S. electricity use is rising after about 15 years of flat demand.
- The International Energy Agency projects data centers will be 6% of U.S. power demand by 2026, up from 4% in 2022.
- Further out, Barclays researchers see data centers accounting for more than 9% of demand in 2030, up from 3.5% today. McKinsey analysts put it even higher, at 11%-12% in 2030.
State of play: Right now, a constellation of other workloads accounts for far more data center power use than AI does.
- Think streaming services, storage and databases, payment processing and various business management systems, to name a few.
- Rhodium Group director Jeffery Jones estimates AI is around 5%-10% of U.S. data center power use today.
What's next: Energy use for training and using large language models like ChatGPT is growing fast from a small base — and spawning huge new data centers.
- "We're two steps into an ultramarathon, and I mean a really long ultramarathon," Jones tells Axios, noting that generative AI only arrived in earnest about two years ago.
Zoom in: Jones estimates that in 2025, "legacy" uses will account for 80% of U.S. power demand from data centers, while AI will be 20%. By 2035, he projects, it's 50-50.
- Absent generative AI, he says, traditional data centers would still grow, but at a fairly steady 1.5% to 2% annually.
The intrigue: For now, there's a range of views on this question in wonk-world.
- Electricity analyst Rob Gramlich estimates half the growth in data center energy suck will be from AI in the next three to five years.
- Goldman Sachs research shows that soaring growth from a small base could still leave non-AI uses playing the largest role in the medium term.
- It sees non-AI data center energy use rising from 142 terawatt-hours (TWh) in 2023 to 304 TWh in 2030, and AI-related use jumping from 4 TWh last year to 93 TWh in 2030.
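A quick back-of-envelope calculation, using only the Goldman figures above, shows how both things can be true at once: AI drives a big chunk of the growth while non-AI uses still dominate the total.

```python
# Goldman Sachs projections cited above, in terawatt-hours (TWh).
non_ai_2023, non_ai_2030 = 142, 304
ai_2023, ai_2030 = 4, 93

non_ai_growth = non_ai_2030 - non_ai_2023   # 162 TWh of added demand
ai_growth = ai_2030 - ai_2023               # 89 TWh of added demand

# AI's share of the projected 2023-2030 growth in data center energy use.
ai_share_of_growth = ai_growth / (ai_growth + non_ai_growth)
print(f"{ai_share_of_growth:.0%}")  # 35%

# AI's share of total projected 2030 data center energy use.
ai_share_of_total = ai_2030 / (ai_2030 + non_ai_2030)
print(f"{ai_share_of_total:.0%}")  # 23%
```

Under these numbers, AI accounts for roughly a third of the projected growth but less than a quarter of total 2030 demand, consistent with Goldman's point that non-AI uses keep playing the largest role in the medium term.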
Reality check: It's hard to predict how computing efficiency gains will stack up against rising AI use.
- And there's no bright line between what's AI versus other advanced forms of machine learning. Complicating things further, traditional data centers handle some generative AI use.
- This is also a local story. Clusters of huge data centers for AI can bring big localized challenges for utilities and grid regulators.
The bottom line: AI is turbo-charging the demand for power, but it's not the only factor.
2. AI is outrunning labeling, detection tools
Humans are not great at detecting AI-generated text, images and video right now, and as AI models improve, detection will only get harder.
Why it matters: If we can't separate AI-made content from human-created material, governments and businesses will find themselves increasingly vulnerable to new kinds of attacks — both narrowly targeted operations and broader misinformation offensives.
- Bad actors succeed when the public believes it's impossible to know what's true and what isn't.
What they're saying: "We are now well past the point where humans can reliably distinguish between synthetically generated text, audio and images," Syracuse University research professor Jason Davis tells Axios.
- Davis focuses on detecting misinformation in his role leading the Semantic Forensics program, funded by DARPA.
- "While the capabilities in synthetic video generation are still a few steps behind these other media types, it is moving very quickly and I expect these capabilities to be on par with other media types in a matter of months, rather than years," he says.
- Even when we are able to detect AI-generated content, Davis says, we do it "after the fact in reactive mode" when the damage is already done.
Case in point: Commercially available tools to detect AI-generated content don't work very well, despite what their makers claim.
- Watermarking technology favored by big tech companies is particularly unsuitable for preventing the spread of misinformation.
- No one has figured out how to create watermarks that can't be circumvented by determined bad actors.
- For watermarks to succeed, everyone creating generative AI tools must agree to implement them.
- Criminals can also use off-the-shelf generative AI tools that don't produce watermarks.
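To see why a determined actor can strip a watermark, consider a toy version of one common approach to text watermarking: seed a "green list" of favored words from the previous word, have the generator prefer green words, and detect the watermark by counting how often that preference shows up. This is a deliberately simplified sketch, not any vendor's actual scheme; the vocabulary, functions and thresholds here are all invented for illustration.

```python
import hashlib
import random

# Tiny stand-in vocabulary; a real system works over a model's full token set.
VOCAB = ["the", "a", "model", "data", "power", "grid", "watermark", "text",
         "energy", "center", "demand", "growth", "use", "tool", "signal", "noise"]

def green_list(prev_word, frac=0.5):
    """Deterministically pick a 'green' half of the vocabulary,
    seeded by a hash of the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * frac)])

def generate(n_words, seed=0):
    """Toy 'model' that always picks a green-listed word."""
    rng = random.Random(seed)
    words = ["the"]
    for _ in range(n_words):
        words.append(rng.choice(sorted(green_list(words[-1]))))
    return words

def green_fraction(words):
    """Detector: what fraction of words fall on the previous word's green list?"""
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_list(prev))
    return hits / (len(words) - 1)

watermarked = generate(200)
print(green_fraction(watermarked))  # 1.0 in this toy: clearly watermarked

# A crude "paraphraser" that swaps every word for a random one erases the signal.
rng = random.Random(1)
paraphrased = [rng.choice(VOCAB) for _ in watermarked]
print(green_fraction(paraphrased))  # hovers around 0.5, i.e. chance level
```

The detector works only as long as the text keeps the generator's statistical fingerprint; rewording, translating, or regenerating the content with an unwatermarked tool pushes the green fraction back toward chance, which is the core circumvention problem noted above.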
Zoom in: Instead of focusing on detection tools, tech companies have focused on labeling AI content — but labeling is fundamentally flawed, since so much content today is already a collaboration between human and machine.
- In July, Meta changed the wording on its labeling from "Made with AI" to "AI info" after complaints that its algorithm labeled content as AI-generated when photographers had just made minor edits with retouching tools.
- "Like others across the industry, we've found that our labels based on these indicators weren't always aligned with people's expectations and didn't always provide enough context," Meta announced in an update to a blog post.
Zoom out: Adobe focuses its labeling efforts on provenance, which means making it clear where a piece of content was generated and what happened to it along the way.
- In 2019, the company launched the Content Authenticity Initiative, which focuses on free provenance tools like Content Credentials.
- Adobe calls Content Credentials a "nutrition label" for digital content that includes the date and time the content was created and edited and signals "whether and how AI may have been used."
- "Content Credentials aren't meant to be a detection tool for whether an image is fake or not," Adobe spokesperson Andrew Cha tells Axios. "It is used as a tool to surface provenance information to consumers so that they can make an informed decision about the trustworthiness of the content."
Between the lines: Lawmakers have introduced legislation around the use and disclosure of synthetic material in political ads.
- For disclosure rules to work, they must "have teeth," Subramaniam Vincent, director of journalism and media ethics at Santa Clara University's Markkula Center for Applied Ethics, tells Axios. And "they need real and timely enforcement."
3. Training data
- Apple Intelligence will arrive on the iPhone on Oct. 28, along with iOS 18.1, Mark Gurman reports. (Bloomberg)
- Disaster deepfakes are hindering rescue and recovery efforts and spreading false political narratives about Hurricane Helene. (Axios)
- Venture capitalist Ben Horowitz, who along with his partner Marc Andreessen endorsed Trump in July, will make a "significant" personal donation to the Kamala Harris campaign, Dan Primack scooped. (Axios)
4. + This
For those who want to look busy without actually being busy, Amazon sells this device.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.