Axios AI+

May 10, 2024
Ryan here. I've had an amazing year getting to understand, use and explain AI, and helping to build what I think is the most well-rounded AI newsletter out there. And like many of you, I've caught the AI bug: so I'm heading off to work at an AI startup.
Thank you for reading and for all your feedback and stories. You'll continue to be in good hands here with Ina.
Today's AI+ is 1,148 words, a 4-minute read.
1 big thing: Top cyber official calls AI a threat multiplier
Generative AI is not just teaching cyber bad guys new tricks — it's also making it easier for anyone to become a bad guy, Cybersecurity and Infrastructure Security Agency chief Jen Easterly tells Ina in the latest in our Human Intelligence interview series.
Why it matters: Cybercriminals with AI at their disposal will be able to do more of everything: from phishing and spamming, to acts of blackmail and terrorism, to campaigns of misinformation and election sabotage.
- "I think it'll make people who are less sophisticated actually better at doing some of the bad things that they want to do," Easterly told Axios in an interview on the sidelines of the RSA Conference Tuesday.
- "AI will exacerbate the threats of cyberattacks — more sophisticated spear phishing, voice cloning, deepfakes, foreign malign influence and disinformation," said Easterly.
Context: The fast-moving nature of AI adds fresh layers of risk and uncertainty.
- "I look at AI: how fast it's moving, how unpredictable it is, how powerful it is," Easterly said. "A powerful tool will create a powerful weapon, we can just sort of make that assumption."
Driving the news: This week, CISA unveiled a "secure by design" pledge with dozens of tech companies, including Microsoft, Cisco, IBM, Scale AI and others.
- The pledge incorporates a variety of best practices around boosting the strength of default passwords, adopting multifactor authentication and reducing entire classes of vulnerabilities.
- Although the pledge itself isn't binding, Easterly noted it includes reporting requirements, and just having companies be accountable can be a positive force. "Yes, it's voluntary, but there is virtue in radical transparency."
Catch up quick: Before assuming her role leading CISA in 2021, Easterly served in the military, worked in counterterrorism during the Obama administration, and then was a top cybersecurity executive at Morgan Stanley.
- CISA, part of the Department of Homeland Security, has taken on a broad cybersecurity role assigned by Congress in 2018. Its first director, Christopher Krebs, was fired by former President Trump following Trump's loss in the 2020 election, after Krebs affirmed the election's integrity.
Zoom in: Easterly and her colleagues have spent a lot of time with election officials preparing for various cyber threats, including those fueled by AI.
- For 2024, Easterly said she feels pretty good about the ability of the election apparatus itself to withstand any attacks.
- "Election infrastructure is more secure than ever before," Easterly said. "[AI] won't fundamentally introduce new threats into this election."
- However, Easterly said she is concerned with how generative AI could supercharge existing efforts to sow distrust.
Easterly is concerned that new generative AI tools are joining an already-fraught security landscape.
- For four decades, we've seen what happens when "you have an internet full of malware, software full of vulnerabilities and social media full of disinformation," Easterly said.
Yes, but: AI may also have benefits if properly harnessed by those looking to make systems more secure, including finding vulnerabilities before software is released or identifying new techniques to protect older systems still in use.
- "AI could be powerful to help us deal with legacy technology, which is the scourge of the security community," Easterly said.
- "I genuinely am an optimist," she said. "My journey started when I was a lieutenant colonel in the army in Iraq and we were using technology to be able to to help the troops on the ground to locate bomb makers' technology. So I've seen the power of technology to save lives."
What's next: Just improving existing practices around patching, secure passwords and other security hygiene is probably the best defense against attacks, AI-infused or not, she said.
2. Reddit wants to control use of its public content
Reddit wants anyone looking to use its public data to make a deal with the company.
- "We're going to stay open, but to crawl Reddit or have access to reading content, you need to have some sort of agreement," CEO Steve Huffman told reporters Wednesday afternoon.
Why it matters: Publicly available data is becoming increasingly integral to building certain kinds of new AI products, such as ChatGPT or Claude.
Zoom out: Platforms and publishers that host a large amount of that content are racing to protect themselves from having their data siphoned off without adequate compensation, Axios' Sara Fischer has reported.
Zoom in: Reddit published its first-ever Public Content Policy yesterday.
- The intent is to lay out how the company thinks about its user-generated content and outline boundaries of its use by external platforms for AI and other purposes.
- "Reddit believes in an open internet, but not the misuse of public content," the policy states.
What they're saying: Commercial entities should have to pay for data access through "bespoke" arrangements that resemble M&A deals, Huffman said.
- Businesses would also have to agree not to use Reddit data or content to do things like build a Reddit competitor, construct user identities for background checks, archive user content that's been deleted and train AI that is used to generate spam.
- For researchers or platforms like the Internet Archive, data access may be free, but there will be guardrails, Huffman said.
Between the lines: Reddit isn't against having its content used for training AI — but it must be done "on clear terms," according to Huffman.
- "We're only doing agreements with people that we believe will be collaborative partners."
The intrigue: Huffman said he's not yet ready to name the bad actors he sees as handling data unethically.
- "I look forward to that day. ... And I will happily tell our friends of the FTC who those people are."
Hope's thought bubble: This is as much a message to businesses seeking to rely on Reddit's data as it is to people who are worried about how their posts and information will be used in an age of AI.
What we're watching: Though Reddit would "rather do deals than not," Huffman doesn't expect revenue from commercial agreements to be the company's "largest business model."
- "This doesn't make or break Reddit," said Huffman.
- In its first earnings report as a publicly traded company, Reddit said that revenue from advertising grew 39% year-over-year to $222.7 million and made up about 92% of its overall sales.
- The line item for the commercial data category, currently under "Other," went from close to nothing to $20 million in Q1, Huffman noted.
3. Training data
- Apple apologized for its iPad ad featuring a hydraulic press crushing a piano, analog cameras and other cultural artifacts. (Ad Age, Axios)
- Generative AI voice startup ElevenLabs is teasing a new tool that creates music based on prompts. (VentureBeat)
- Sources say the server chips powering Apple's upcoming AI features will come from inside the company. (Bloomberg)
- TikTok is joining the Coalition for Content Provenance and Authenticity and will automatically label AI content uploaded to the platform. (ABC News)
4. + This
Adore Me lets you use generative AI to create your own lingerie design. Here's what it created with the prompt "hydraulic press, piano, trumpet, iPad."
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.
Sign up for Axios AI+