Exclusive: Progressives press Biden to issue an AI executive order
- Maria Curi, author of Axios Pro: Tech Policy

Illustration: Brendan Lynch/Axios
An influential progressive think tank wants the Biden administration to put teeth in its Blueprint for an AI Bill of Rights through enforceable executive action.
Why it matters: The blueprint, released in October 2022, lays out principles for how the country should deploy the technology — from preventing algorithmic discrimination to protecting data privacy. But the document is non-binding, and AI continues to flourish unregulated.
What's happening: In a report shared with Axios, the Center for American Progress is calling on the White House to issue an executive order and take additional steps, including:
- Create a White House Council on Artificial Intelligence.
- Require all AI tools deployed by federal agencies or contractors to be assessed under NIST’s AI Risk Management Framework.
- Prepare a national plan to address economic impacts from AI, especially job losses.
- Task the White House Competition Council with ensuring fair competition in the AI market.
What they're saying: Alondra Nelson led the development of the Blueprint for an AI Bill of Rights while serving as acting director of the White House Office of Science and Technology Policy.
- Now, as a distinguished senior fellow at CAP, she's pushing for an executive order.
- "The brisk pace of AI development does not mean that there is nothing that can be done to steer emerging technologies onto a path that benefits society," Nelson wrote in an April 11 blog post.
- "One of the real powers of a new AI executive order is the U.S. government can choose how to implement and live its values in its own usage of AI," said Adam Conner, CAP's vice president for technology policy.
Yes, but: Lasting AI regulations will require congressional action.
- CAP will encourage lawmakers to push the administration to use the authorities it already has, as well as to pursue needed legislation, Conner said.
What we're watching: The American Data Privacy and Protection Act already includes language on artificial intelligence and could offer a feasible pathway to guardrails.
- The bill offers a preliminary set of rules and the basic structure for a national system of AI guardrails, but lawmakers need to update the provisions to make them more effective and workable, according to BSA The Software Alliance.
- The group, which leads advocacy for the global software industry, has been speaking with the offices of Energy and Commerce leaders Cathy McMorris Rodgers and Frank Pallone for months about making the updates and intends to engage with Senate Majority Leader Chuck Schumer on his efforts, Craig Albright, BSA's vice president for U.S. government relations, told Axios.
- BSA's proposed updates include setting clear thresholds for when companies should be required to conduct design evaluations or impact assessments of AI systems and tools used in consequential decisions.
- The group also proposes defining what counts as a "consequential decision" and creating an enforcement system that requires companies to certify with a designated, existing federal agency that they have met these obligations.
Worthy of your time: Before you go, check out GWU policy fellow Anna Lenhart's effort to round up all the different legislative proposals to address generative AI.
- It suggests Congress is more ready to tackle the technology than we give it credit for.