
Microsoft President Brad Smith said Thursday that the latest AI technologies require guardrails that can't be established by tech companies alone.
- In a blog post, Smith called for greater dialogue with governments and other stakeholders, but stopped short of calling for specific regulation.
Why it matters: Few laws today govern how businesses or governments can use AI technologies, though lawmakers in Europe have begun discussions on a wide-ranging AI Act.
Microsoft, like other tech companies, has its own internal process for vetting the ethics of various AI projects, but Smith said "our own efforts and those of other like-minded organizations won’t be enough."
- "This transformative moment for AI calls for a wider lens on the impacts of the technology — both positive and negative — and a much broader dialog among stakeholders," Smith said.
The big picture: Smith called for attention to three specific areas:
- The need for responsible and ethical AI.
- AI's impact on national security and economic competitiveness.
- Ensuring that AI technology serves society broadly, not narrowly.
Yes, but: Although Smith argued that "these issues are too important to be left to technologists alone," he said it would be equally wrong to exclude the companies pioneering these technologies from the regulatory process: "There’s no way to anticipate, much less address, these advances without involving tech companies in the process."