- Kim Hart
- Oct 24
Tech companies pledge to use artificial intelligence responsibly
Photo: Swayne B. Hall / AP
The Information Technology Industry Council — a DC-based group representing the likes of IBM, Microsoft, Google, Amazon, Facebook and Apple — is today releasing principles for developing ethical artificial intelligence systems.
Why it matters: The tech industry is trying to get ahead of growing anxieties about the societal impact of AI technologies, and the pledge is an acknowledgment by the companies that their data-hungry products are causing sweeping changes in the way we work and live. The companies hope that promising to handle this power responsibly will win points with critics in Washington, and that showing they can police themselves will help stave off government regulation on this front.
Why now: ITI President Dean Garfield said the industry has learned painful lessons by staying on the sidelines of past debates about technology-driven societal shifts, and wants to avoid repeating that mistake. "Sometimes our instinct is to just put our heads down and do our work, to develop, design and innovate," he told Axios. "But there's a recognition that our ability to innovate is going to be affected by how society perceives it."
The principles include:
- Ensure the responsible design and deployment of AI systems, including taking "steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design."
- Promote the responsible use of data and test for potentially harmful bias in the deployment of AI systems.
- Commit to mitigating bias, inequity and other potential harms in automated decision-making systems.
- Commit to finding a "reasonable accountability framework" to address concerns about liability issues created when autonomous decision-making replaces decisions made by humans.