Axios AI+

February 7, 2024

Ina here, still watching the Tracy Chapman/Luke Combs performance from the Grammys and fondly remembering being right at the stage to see Chapman perform at the Fillmore.

Today's AI+ is 1,092 words, a 4-minute read (three more minutes if you also watch another epic Chapman duet, but so worth it).

1 big thing: Nobody knows how to audit AI

Illustration of a repeating pattern of robot hands holding magnifying glasses.

Illustration: Aïda Amer/Axios

Some legislators and experts are pushing independent auditing of AI systems to minimize risks and build trust, Ryan reports.

Why it matters: Consumers don't trust Big Tech to self-regulate, and government standards may come slowly or never.

The big picture: Failure to manage risk and articulate values early in the development of an AI system can lead to problems ranging from biased outcomes driven by unrepresentative data to lawsuits alleging stolen intellectual property.

Driving the news: Sen. John Hickenlooper (D-Colo.) announced in a speech on Monday that he will push for the auditing of AI systems, because AI models are using our data "in ways we never imagined and certainly never consented to."

  • "We need qualified third parties to effectively audit generative AI systems," Hickenlooper said, "We cannot rely on self-reporting alone. We should trust but verify" claims of compliance with federal laws and regulations, he said.

Catch up quick: The National Institute of Standards and Technology (NIST) developed an AI Risk Management Framework to help organizations think about and measure AI risks, but it does not certify or validate AI products.

  • President Biden's executive order on AI mandated that NIST expand its support for generative AI developers and "create guidance and benchmarks for evaluating and auditing AI capabilities," especially in risky areas such as cybersecurity and bioweapons.

What's happening: A growing range of companies provide services that evaluate whether AI models are complying with local regulations or promises made by their developers — but some AI companies remain committed to their own internal risk research and processes.

  • NIST is only the "tip of the spear" in AI safety, Hickenlooper said. He now wants to establish criteria and a path to certification for third-party auditors.

The "Big Four" accounting firms — Deloitte, EY, KPMG and PwC — sense business opportunities in applying audit methodologies to AI systems, Nicola Morini Bianzino, EY's global chief technology officer, tells Axios.

  • Morini Bianzino cautions that AI audits might "look more like risk management for a financial institution, as opposed to audit as a certifying mark. Because, honestly, I don't know technically how we would do that."
  • Laura Newinski, KPMG's COO, tells Axios the firm is developing AI auditing services and "attestation about whether data sets are accurate and follow certain standards."

Established players such as IBM and startups such as Credo AI provide AI governance dashboards that tell clients in real time where AI models could be causing problems — around data privacy, for example.

  • Anthropic believes NIST should focus on "building a robust and standardized benchmark for generative AI systems" that all private AI companies can adhere to.

Market leader OpenAI announced in October that it's creating a "risk-informed development policy" and has invited experts to apply to join its OpenAI Red Teaming Network.

Yes, but: An AI audit industry without clear standards could be a recipe for confusion, both for corporate customers and consumers using AI.

  • "A clear baseline for AI auditing standards can prevent a race-to-the-bottom scenario, where companies just hire the cheapest third-party auditors to check off requirements," Hickenlooper believes.

2. Homeland Security wants you

DHS secretary Alejandro Mayorkas speaking at the Axios AI+ Summit in 2023. Photo: Axios

On the same day that House Republicans failed to impeach Alejandro Mayorkas, the Homeland Security Secretary was in Silicon Valley trying to recruit AI talent to his agency.

Why it matters: With AI expertise in short supply, the agency is looking to recruit at least 50 experts in the field this year as part of a new "AI Corps" modeled on the U.S. Digital Service.

Details: The Department of Homeland Security held a recruiting event in Mountain View, Calif., on Tuesday, as it looks to take advantage of more flexible federal hiring practices put in place for AI-related jobs.

  • Speaking in front of a small crowd, Mayorkas reiterated the pitch he made at the Axios AI+ Summit — contending that his agency has a unique vantage point on balancing the benefits of AI against its potential risks to privacy, while also combating malicious use of the technology.
  • "We want to lead the federal government in harnessing AI to advance our mission," Mayorkas said at the event.
  • At the same time, Mayorkas said it is critical that the agency take its responsibilities seriously, especially around privacy and civil liberties. "It is incredibly important that we build confidence in how we are using AI," Mayorkas said. "Trust is earned."

Of note: One of the top questions among attendees was about the ability to work remotely. DHS officials stressed that they are "incredibly" open to that, understanding that not everyone with AI skills lives in Baltimore or Virginia.

The big picture: Mayorkas said the government can't afford not to embrace the benefits of AI.

  • "The potential of AI is extraordinary so we need to tap it," Mayorkas said. "There is an underlying impatience on my part to demonstrate [that the] government can do everything the private sector can."

Yes, but: Mayorkas and DHS CIO Eric Hysen acknowledged the federal government remains a bureaucracy, some progress notwithstanding.

  • "I saw more paper in my first few weeks than I had seen in my entire career to date," Hysen said of his arrival in the federal government.

Between the lines: Mayorkas, who has been on the receiving end of D.C.'s polarized politics, noted at the Mountain View event that there is a truly breathtaking amount of polarization around AI as well.

  • "I speak to somebody and they say, 'You know what, AI is going to cure cancer on Wednesday.' Then I will talk to somebody and they'll say, 'It really doesn't matter — AI is going to end civilization on Tuesday.' "

3. Training data

  • Apple won't face a lawsuit from AliveCor over the heart-monitoring tech in the Apple Watch, a judge in Oakland ruled. AliveCor says it hasn't stopped litigating its other patent claims against the company. (Bloomberg)
  • Google says small commercial spyware companies are a big problem. (Axios)
  • You no longer need an invite to join Bluesky, the trendy X alternative. (The Washington Post)
  • Stop the social media notifications that you never signed up for. (Axios)
  • WeWork's last, best hope might be a purchase by Adam Neumann, the founder and CEO who was ousted in 2019. (Axios)
  • OpenAI will add digital watermarks to images created on the ChatGPT website and through the API for the DALL-E 3 model. The company says the watermarks won't affect latency or image quality. (The Verge)
  • Trading places: Elizabeth Kelly is the new director of the U.S. AI Safety Institute at the Commerce Department's National Institute of Standards and Technology. Kelly has been serving as a special assistant to the president for economic policy.

4. + This

The National Highway Traffic Safety Administration asks that you please not wear your Vision Pro goggles while driving your Tesla.

Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter.