
The EU has published a set of ethical guidelines for "trustworthy AI": a long wishlist of idealistic principles, many of them still technically out of reach, meant to keep the powerful technology's harms at bay.
Why it matters: It's an early, earnest attempt to get countries to buy into general ethical principles. But without an enforcement mechanism, it is unlikely to result in safe AI.
The big picture: Up to now, AI development has largely been a free-for-all.
- The big players have taken off on their own, ending up in familiar places: The U.S., favoring a hands-off approach, has left responsibility with Big Tech; Beijing imposes its own norms on Chinese companies; Europe has hewn a middle path.
- But there's a growing realization that current unsupervised AI development is headed into dangerous territory. This has prodded experts to work toward shared norms for AI.
The state of play: The driver's seat of global AI policy is still empty, awaiting the country or organization that will set the rules for others to follow.
- Much rides on who takes control. A world with the U.S. at the helm would look very different from one with China chauffeuring.
- "Everyone wants to dictate what's happening," says Amy Webb, an NYU professor and founder of the Future Today Institute.
With today's announcement, the EU has scored the first-mover advantage, says Chris Padilla, IBM's VP for government and regulatory affairs, as it did with GDPR, the landmark privacy law that has changed the way Big Tech does business in Europe.
- The new guidelines are a far cry from GDPR's strict rules.
- But that doesn't mean they're useless, argues OpenAI policy director Jack Clark, who helped build an equivalent set of recommendations for the OECD. Governments will likely use these guidelines as templates for their own national policies, he tells Axios.
- Before regulations kick in, international standards and industry self-policing are especially important, says Charlotte Stanton, Silicon Valley director for the Carnegie Endowment for International Peace.
What's next: The EU guidelines will soon have stiff competition.
- In the coming months, the OECD will release its own recommendations, which are expected to share many of the EU guidelines' principles.
- U.S. and European officials are also considering Webb's proposal for an international body to oversee Big Tech companies, she tells Axios. A spokesperson for the White House Office of Science and Technology Policy would not confirm that it's considering the proposal.
- AI pioneer Yoshua Bengio is pushing for binding ethical regulations as part of a new organization bringing together governments, nonprofits, companies and experts.
- In an interview with Nature, Bengio says, "Self-regulation is not going to work. Do you think that voluntary taxation works? It doesn't."
- Webb says much the same: "Developers are not incentivized at all to follow these guidelines," she tells Axios, emphasizing that building in ethics would slow companies' AI systems and perhaps make them less capable.
- What's more, the growing buffet of ethics proposals may cause problems, Webb argues. A firm deciding between several international guidelines, its home country's national policy, and recommendations from universities and nonprofits might end up doing nothing.
Go deeper:
- AI's uneasy coming of age (Axios)
- Europe’s silver bullet in global AI battle: Ethics (Politico)
- China wants to shape the global future of artificial intelligence (MIT Tech Review)