Dec 11, 2023 - Technology

There's a big catch in the EU's landmark new AI law

Illustration: Annelise Capossela/Axios

The European Union's comprehensive AI regulations, finalized Friday after a 36-hour negotiating marathon, come with a catch: The EU is stuck in a legal void until 2025, when the rules come into force.

Why it matters: As the first global power to pass comprehensive AI legislation, the EU is once again setting what could become worldwide regulatory standards — much as it did on digital privacy rules — but the transition could be bumpy.

Until the law takes effect in 2025, the EU will urge companies to begin voluntarily following the rules. But there are no penalties if they don't.

  • The hiatus leaves plenty of room for the U.S. or others to undercut the EU's plans before they go into effect by, for instance, implementing less restrictive rules before Europe's kick in.
  • Senate Majority Leader Chuck Schumer has expressed concern that EU-style laws enacted by the U.S. would put American firms at a disadvantage competing with China.

The big picture: European policymakers began work on their AI Act before ChatGPT's Nov. 2022 arrival and the explosion in the generative AI market during 2023.

  • The EU approach categorizes AI uses according to four risk levels, with increasingly stringent restrictions matched to greater potential risks.

Details: The EU law bans several uses of AI, including bulk scraping of facial images and most emotion recognition systems in workplace and educational settings. There are safety exceptions — such as using AI to detect a driver falling asleep.

  • The new law also bans controversial "social scoring" systems — efforts to evaluate the compliance or trustworthiness of citizens.
  • It restricts facial recognition technology in law enforcement to a handful of acceptable uses, including identification of victims of terrorism, human trafficking and kidnapping.
  • Foundation model providers will need to submit detailed summaries of the training data they used to build their models.
  • Companies violating the rules could face fines ranging from 1.5% to 7% of global sales.
  • Operators of systems creating manipulated media will have to disclose that to users.
  • Providers of other "high-risk" AI, especially in essential public services, will be subject to reporting requirements, including disclosure to public databases and fundamental rights impact assessments.
  • AI uses covered by those requirements include education, employment, elections, critical infrastructure, and border control.
  • EU national governments, with France leading the charge, demanded and won exemptions from some aspects of the law for military or defense uses of AI.

Context: The main dividing line in negotiations was between national governments demanding national security exemptions and parliamentarians defending civil liberties.

Meanwhile: The EU is taking a softer approach than the U.S. or U.K. in assessing whether Microsoft's relationship with OpenAI violates antitrust rules.

  • A European Commission spokesperson tells Axios that "the Commission has been following very closely the situation of control over OpenAI," but said it would take a "change of control on a lasting basis" to justify a Commission investigation.

What they're saying: EU officials are celebrating, with European industry commissioner Thierry Breton framing the law as "a launchpad for EU startups and researchers."

  • Amnesty International accused the EU of authorizing "dystopian digital surveillance."

Go deeper: Next up for AI in the EU: Liability