Photo: Swayne B. Hall / AP
The Information Technology Industry Council — a D.C.-based group representing the likes of IBM, Microsoft, Google, Amazon, Facebook and Apple — is releasing principles today for developing ethical artificial intelligence systems.
Why it matters: The tech industry is trying to get ahead of growing anxieties about the societal impact of AI technologies, and the principles are an acknowledgement on companies' part that their data-hungry products are causing sweeping changes in the way we work and live. The companies hope that pledging to handle this power responsibly will win points with critics in Washington, and that showing they can police themselves will help stave off government regulation on this front.
Why now: ITI President Dean Garfield said the industry has learned painful lessons by staying on the sidelines of past debates about technology-driven societal shifts. That's something the industry wants to avoid this time. "Sometimes our instinct is to just put our heads down and do our work, to develop, design and innovate," he told Axios. "But there's a recognition that our ability to innovate is going to be affected by how society perceives it."
The principles include:
- Ensure the responsible design and deployment of AI systems, including taking "steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design."
- Promote the responsible use of data and test for potentially harmful bias in the deployment of AI systems.
- Commit to mitigating bias, inequity and other potential harms in automated decision-making systems.
- Commit to finding a "reasonable accountability framework" to address concerns about liability issues created when autonomous decision-making replaces decisions made by humans.
Other efforts: Last week, Intel laid out its own public policy principles for artificial intelligence, including setting aside R&D funds for testing the technologies and for creating new employment opportunities as AI changes the way people work. The biggest tech companies (as well as smaller AI firms) have also started the Partnership on AI, a nonprofit aimed at developing industry best practices.