
Illustration: Sarah Grillo/Axios
While Google's AI ethics outreach efforts are mired in controversy, Microsoft has managed to engender significantly less animosity through a more systematic approach.
Driving the news: Google appointed a controversial outside advisory board, drew an onslaught of protest and disbanded the group a week later, succeeding only in antagonizing people of many different perspectives.
Microsoft's approach: The company began by soliciting a wide range of input, laid out its principles in a book, and is now incorporating those principles into its product development process.
- CEO Satya Nadella penned an op-ed back in 2016 discussing shared responsibilities around AI.
- A few months later, at the company's Build developer conference, he laid out the potential for an Orwellian future if AI isn't handled right.
- That summer, Microsoft created Aether, an internal committee that advises on and evaluates AI ethics issues. The group, which includes more than 100 Microsoft employees, is led at the executive level by President Brad Smith and AI and research head Harry Shum.
- Microsoft has stood fast against internal and external critics and defended its work with the U.S. government, including the military, while still pledging to evaluate each project to make sure it meets the company's ethical standards.
- On some of the thorniest issues, such as facial recognition, the company has also called on legislators to create rules of the road.
- Most recently, Microsoft has moved to make sure ethical considerations are incorporated into product release cycles in the same way that the company added security and privacy reviews in the past.
Google's approach: The company appears to have proceeded more case by case, even though it, too, has published AI guidelines.
- For example, the company agreed to take part in Project Maven, a U.S. military project applying AI to drone imagery, only to back out of the contract amid an employee outcry.
- Similarly, the company appointed an outside advisory committee only to disband it a week later amid protests, particularly over the inclusion of the Heritage Foundation's president, whose views were widely perceived as anti-transgender, anti-gay and anti-immigrant.
- The process of assembling the committee was itself flawed, some insiders say, with many of the company's own experts never consulted on its formation.
- Meanwhile, Google actually has an internal committee to advise on AI issues, but it has kept a far lower profile than Microsoft's Aether. (Bloomberg ran a story reminding people that it exists.)
The bottom line: It's not clear that Google's positions are any more controversial than Microsoft's, but Google's haphazard execution has hampered its AI ethics effort. By stating its principles and sticking to them, even when taking some unpopular stances, Microsoft has displayed more political savvy.