Exclusive: Practical steps for companies to do AI right
EqualAI, a non-profit working with tech companies and the World Economic Forum to highlight and reduce AI harms, has shared exclusively with Axios a collection of the most effective techniques used by participants in its AI governance program, including executives from PepsiCo, Salesforce, Verizon and AWS.
Why it matters: "Responsible AI" has become a go-to slogan for organizations signaling that they're taking AI, and AI safety, seriously. But in the rush to look responsible, and in today's regulatory void, many are confused about what the concept means in practice.
- Uncertainty and delays around AI legislation and litigation have increased the urgency for interim guidance and action on the responsible use of AI.
- "It's not someone else's problem. It's for every company," Miriam Vogel, president and CEO of EqualAI, told Axios. "So many people feel uncomfortable or that they don't belong [in AI debates], but you absolutely must play," she urged.
Details: The most notable suggestions in the papers include:
- Designating "one senior executive who is ultimately responsible for AI governance" who is kept accountable by a committee.
- Involving non-tech employees in the design and implementation of AI features used by an organization, including by offering bonuses for participation, as part of ensuring "human input and oversight into all stages of AI decision-making."
- Seeking out external stakeholders who can provide feedback on how your organization is deploying AI.
- A simple definition of responsible AI as AI that is "safe, inclusive, and effective for all possible end users," and that mitigates the risks of "any unintended use case."
Yes, but: More than 40 other organizations, frameworks and policy papers already occupy this space, ranging from the OECD AI principles to the Partnership on AI to the policies of large tech companies. This guidance could complement those approaches — or simply add to the clutter.
What they're saying: "If you wait a year or two, it's too late," Vogel said, noting the rapid advance of generative AI.
- The National Institute of Standards and Technology's AI risk management framework "helps a lot of the way on operationalization, but not all of the way," per Vogel.