Tech industry teams up to set AI security standards

Illustration: Sarah Grillo/Axios
A group of the top tech companies unveiled a new coalition Thursday that will develop cybersecurity and safety standards for the artificial intelligence tools they're developing.
Why it matters: The promise of the group is that each company will follow the same rigorous security standards across its projects to keep malicious hackers at bay.
- Companies have historically approached cybersecurity differently in their product development — with some firms pursuing weaker security practices than others.
Driving the news: Google announced the new Coalition for Secure AI during the Aspen Security Forum being held in Colorado this week.
Zoom in: The new coalition will start its work by developing standards for software supply chain security for AI systems, compiling resources to measure the risk of these tools and pulling together a framework to help defenders determine the best use cases for AI in their work.
- Founding members include Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, Nvidia, OpenAI, PayPal and Wiz.
- The coalition is operating under OASIS Open, an international standards and open-source consortium.
What they're saying: "This is the industry coming together and doing this as a coalition — it's not an executive order, it's not a regulation," Heather Adkins, vice president of security engineering at Google, said at an Aspen Security Forum event.
- "I've seen many of these coalitions in the field... it's the reason why we are making such good progress on many other areas of cybersecurity," she added.
Reality check: Many of the participating companies had either already developed their own standards for securing AI or were working on them.
- Last summer, Google released its own framework for securing AI, which encouraged other companies to review the security measures in place for their AI models and incorporate automation into their cyber defenses.
The intrigue: This is the first time the industry has come together to work on these AI security issues.
- The coalition plans to disseminate open-source methodologies, frameworks and tools to help companies that are looking for ways to securely implement AI into their workflows.
What's next: The coalition is actively accepting new members, Adkins said at the Aspen event.
