Next up for AI in the EU: Liability

- Ashley Gold, author of Axios Pro: Tech Policy

Illustration: Tiffany Herring/Axios
While policymakers in the United States are still working through the early stages of AI regulation, Europe is already thinking about how companies will be held liable for harms.
Driving the news: As the EU AI Act nears completion, the bloc is weighing updated and new directives that would significantly affect how AI companies operate, how they are held financially accountable for their products and services, and how they insure those products.
Why it matters: The U.S. has barely begun to grapple with how AI will be governed, while global companies are already preparing for a possible new liability regime beyond the EU AI Act and higher insurance costs in Europe.
- It's yet another example of the EU pushing forward on tech policy, setting rules and norms as others slowly catch up.
- The U.S., U.K. and EU all want to be aligned on priorities for governing AI, and there's more cohesion than in previous tech policy debates, such as the one over GDPR. But the EU is still inarguably leading.
Details: The EU is considering two directives to supplement the AI Act and create a unified approach to how people can seek redress from companies for harms they believe AI has caused.
- The updated Product Liability Directive would amend an existing set of EU consumer protection rules, modernizing the liability framework so it's easier to bring claims against AI and software companies for digital harms.
- On Monday, committees in the European Parliament adopted a negotiating position on revisions to that directive, which will now be negotiated with the EU Council. Negotiations will likely close next February.
- The AI Liability Directive aims to make it easier to bring claims for harm caused by AI systems and their use, enabling courts to compel AI providers to disclose evidence.
What they're saying: "It's extremely complex, and I think even policymakers struggle to make the case for it," Mathilde Adjutor, senior policy manager for CCIA Europe, told Axios.
- "We're seeing this big extension of scope that can have economic impact through insurability costs of tech and software companies."
- CCIA and ITI, trade groups whose members include tech companies worldwide, are involved in discussions around the two directives and are pushing for fewer burdens and costs on companies.
- "The combined application of this new liability regime with the proposed AI Liability Directive, and also the EU AI Act, may affect the competitiveness of the European innovation ecosystem," Guido Lobrano, ITI's director general for Europe, said in a statement.
Of note: The product liability directive, Adjutor said, is a "safety net for liability and for consumers to get compensation from companies" when a product causes damage, without the need to prove fault.
The big picture: "The idea at very high level is to make it a little easier for consumers to pursue claims that they have been harmed by an AI system and change the standards and burdens," Vivek Mohan, co-chair of the Gibson Dunn law firm's AI practice, told Axios.
- "The AI Act has gotten a lot more attention, because as a regulation, it'd be self-executing, like GDPR, with penalties and fines imposed by regulators."
- But the new liability regimes are focused on consumer law and are "directives" rather than regulations, meaning each EU member state must transpose them into national law and may handle them differently.
Meanwhile, in the U.S., conversations are only just beginning around who is liable when an AI system discriminates or harms people.
- Some lawmakers have proposed creating entirely new agencies to regulate AI, while others maintain that existing rules can be applied to AI, whether it's being used in health care, banking, housing or other sectors.
- Legal experts are also unclear on whether Section 230 of the Communications Decency Act will shield online platforms from liability for false information produced by AI products.
What we're watching: Private litigation against tech companies is becoming more prevalent in the U.S., though it's still tough for plaintiffs to win cases without laws updated for the digital age.
- Europe is likely to set an early example for how civil complaints over AI harms are brought and resolved.