October 10, 2023
It's Tuesday, Pro readers.
- We'll be back in your inbox when the House speaker drama is over or when there's other breaking news you need to know.
💭 Reminder: We want your Pro Policy feedback! Complete our 5-minute survey for a chance to win a $100 gift card and let us know what enhancements you'd like.
1 big thing: Next up for AI in the EU: Liability
Illustration: Tiffany Herring/Axios
While policymakers in the United States work on the beginning stages of AI regulation, Europe is already thinking about how companies will be held liable for harms, Ashley writes in her column today.
Driving the news: As the EU AI Act nears completion, the bloc is considering updated and new directives that would significantly impact how AI companies operate and are held financially accountable for their services and products, along with how they insure their products.
Why it matters: The U.S. has barely begun to grapple with how AI will be regulated, even as global companies prepare for a possible new liability regime beyond the EU AI Act and higher insurance costs in Europe.
- It's yet another example of the EU pushing forward on tech policy, setting rules and norms as others slowly catch up.
- The U.S., UK and EU all want to be aligned on priorities for governing AI, and there's more cohesion than in previous tech policy debates, like around the GDPR. But the EU is still inarguably leading.
Details: The EU is considering two directives to supplement the AI Act and create a unified approach to how people can seek redress from companies for harms caused by AI.
- The AI Product Liability Directive would amend an existing set of EU rules around consumer protection, updating the current liability framework so it's easier to bring claims against AI and software companies for digital harms.
- On Monday, committees in the EU Parliament adopted a negotiating position on revisions to that directive, which will now be negotiated with the EU Council. Negotiations will likely conclude next February.
- The AI Liability Directive aims to make it easier to bring claims of harm against AI systems and uses of AI, enabling courts to compel AI providers to disclose evidence.
What they're saying: "It's extremely complex, and I think even policymakers struggle to make the case for it," Mathilde Adjutor, senior policy manager for CCIA Europe, told Ashley.
- "We're seeing this big extension of scope that can have economic impact through insurability costs of tech and software companies."
- CCIA and ITI, trade groups representing tech companies worldwide, are involved in discussions around the two directives and are pushing for fewer burdens and costs on companies.
- "The combined application of this new liability regime with the proposed AI Liability Directive, and also the EU AI Act, may affect the competitiveness of the European innovation ecosystem," Guido Lobrano, ITI's director general for Europe, said in a statement.
Of note: The product liability directive, Adjutor said, is a "safety net for liability and for consumers to get compensation from companies" when a product causes damage, without the need to prove fault.
The big picture: "The idea at very high level is to make it a little easier for consumers to pursue claims that they have been harmed by an AI system and change the standards and burdens," Vivek Mohan, co-chair of the Gibson Dunn law firm's AI practice, told Ashley.
- "The AI Act has gotten a lot more attention, because as a regulation, it'd be self-executing, like GDPR, with penalties and fines imposed by regulators."
- But the new liability regimes are focused on consumer law and are "directives" rather than regulations, so each member state of the EU may handle them differently.
Meanwhile, in the U.S., conversations are only just beginning around who is liable when an AI system discriminates or harms people.
- Some lawmakers have proposed entire new agencies to regulate AI, while others maintain current rules can be applied to AI, whether it's being used in health care, banking, housing or other sectors.
- Legal experts are also unsure whether Section 230 of the Communications Decency Act will protect online platforms from liability for false information produced by AI products.
What we're watching: Private litigation against tech companies is becoming more prevalent in the U.S., though it's still tough for plaintiffs to win cases without updated laws for the digital age.
- Europe is likely to set an early example for civil complaints against AI harms and how they are settled.
2. What we're hearing: Net neutrality
Illustration: Gabriella Turrisi/Axios
With FCC Chair Jessica Rosenworcel reviving plans to restore rules aimed at ensuring internet companies treat all traffic equally, Maria's been keeping tabs on what key players in the debate are saying.
"Title II regulation has really nothing at all to do with net neutrality. We have a free and open internet today. We're going to vote on the proposal next week. My guess is we'll go to a final vote over at the agency sometime in the late spring, we'll be off to the races in the courts and ultimately this will get overturned, so we're just wasting time and there's other stuff we should be doing."— FCC Commissioner Brendan Carr, at a Semafor event today
"There are provisions in the Infrastructure Act to ensure that there is the right kind of behavior. Let's execute well around that law and those structures that are in place. Let's not put our time and attention into addressing something that isn't a problem."— AT&T CEO John Stankey at the same event
"Actually, things haven't been fine.... Many disadvantaged communities, including those in rural areas and communities of color, have consistently been excluded from broadband expansion efforts. Some ISPs have even resorted to requiring customers to watch advertisements before granting access to the internet."— Fight for the Future Director Evan Greer, in an op-ed Friday
3. Catch me up: California, China and more
Illustration: Shoshana Gordon/Axios
Here's what caught our eye over the long weekend.
🌎 G7 on AI: Leaders of the G7 are aiming to draw up a voluntary international code of conduct for AI as early as this autumn, Japanese Prime Minister Fumio Kishida said Monday, per the Yomiuri Shimbun.
💻 California law: Gov. Gavin Newsom on Sunday signed into law a measure that would hold social media companies liable for failing to combat the spread of child sexual abuse materials, the Los Angeles Times reported.
- The law takes effect Jan. 1, 2025.
🇨🇳 China CODEL: Senate Majority Leader Chuck Schumer said there had been "serious engagement" during a meeting between Chinese President Xi Jinping and his bipartisan CODEL in Beijing, Reuters reported today.
📱 Disinfo: Over the weekend, disinformation about Israel and Gaza flooded social media sites, NPR reports.
📡 Digital discrimination: Last week, NTIA in a filing urged the FCC to adopt strong rules against digital discrimination by taking into account intentional discrimination by an internet provider, as well as the actual effects on communities from a company's practices.
✅ Thank you for reading Axios Pro Policy, and thanks to editor Mackenzie Weinger and copy editor Brad Bonhall.
- Do you know someone who needs this newsletter? Have them sign up here.