
AI companies and the tech industry worldwide are working to understand the European Union's finalized AI Act, the world's first comprehensive legal framework for artificial intelligence.
Why it matters: How the EU regulates AI will have ripple effects on the rest of the world, dictating company behavior and setting benchmarks for the technology.
Driving the news: EU policymakers reached a "political agreement" on the AI Act over the weekend.
- Once again, other countries will be trailing the EU, with their lawmakers determined to make their own legislation more flexible and more welcome to industry while still ensuring safety.
- The U.S. and UK have been watching Europe closely.
Background: The agreement finalized Friday after a 36-hour negotiating marathon won't come into force until 2025, as Axios' Ryan Heath reported. Some key details:
- The EU law bans several uses of AI, including bulk scraping of facial images and most emotion recognition systems in workplace and educational settings, with safety exceptions. It also bans controversial "social scoring" systems.
- Foundation model providers will need to submit detailed summaries of the training data they used to build their models.
- Companies violating the rules could face fines ranging from 1.5% to 7% of global sales.
- Operators of systems creating manipulated media will have to disclose that to users.
- Providers of other "high risk" AI, especially in essential public services, will be subject to reporting requirements, including disclosure to public databases and human rights impact assessments.
What they're saying: Industry groups are wary, despite not yet seeing the full final text.
- "We … remain concerned that a two-tier approach to foundation models will instill significant legal uncertainty in the market," Marco Leto Barone, senior manager for Europe policy at the Information Technology Industry Council, said in a release.
- Others said negotiations moved too quickly: "Regrettably speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy," Daniel Friedlaender of the Computer and Communications Industry Association Europe said in a release.
- "The negative impact could be felt far beyond the AI sector alone."
- CCIA European policy manager Boniface de Champris said the act might "even end up chasing away the European champions that the EU so desperately wants to empower."
The other side: Technical details of the deal are still being worked out, but a "political agreement" is in place, with formal passage expected by the spring, per an EU official. The official said the act puts up guardrails where necessary while stimulating and reinforcing the use and development of AI in Europe.
- The EU official said to expect more details on how general purpose AI models can be used in the final text, and that a long lead time between passage and enforcement gives member states and the technology industry time to prepare.
State of play: Iverna McGowan, director of the Center for Democracy and Technology's Europe office, told Axios that so far, it appears the agreement bans the use of "emotional recognition" for workplaces and educational settings, but not for migration and immigration.
- "We know that there's not an absolute ban on biometric surveillance as had been proposed by the European Parliament position," McGowan said.
- But any exceptions "will be really important to examine."
- The EU official said individual member states had strong requirements for security and military uses of AI, which led to the agreement's provisions on biometric surveillance, allowing law enforcement use of biometric identification systems in certain circumstances.
What's next: The EU Parliament and Council have to formally adopt the final agreement before it becomes law.
- Further down the line, the EU is considering how liability will be determined for AI products and services with new directives being considered, as Axios previously reported.
