
FTC consumer chief fires warning on false AI claims

Jun 10, 2024
The FTC's Samuel Levine speaking at a Capitol hearing on Feb. 1, 2022. Photo: Bill O'Leary/Pool/AFP via Getty Images

The FTC is closely watching what companies say they can do with AI and is ready to hold them accountable for phony promises, its top consumer protection official tells Axios.

The big picture: Samuel Levine, director of the agency's Bureau of Consumer Protection, said AI-focused companies and other firms starting to adopt AI all need to follow the same set of rules, and that regulatory action will follow unsubstantiated claims.

  • "If companies are making claims about their use of AI, or the capabilities of their AI, those claims have to be truthful and substantiated, or we are prepared to bring action," Levine told Axios. "Those are not just empty words."

State of play: Something like a new "Bureau of AI" is not necessarily needed, Levine said, because AI, and the claims companies make about it, is spreading to markets throughout the economy that the FTC already oversees.

  • That gives the FTC a unique window into how AI can reshape markets across the country, from tech to pharmaceuticals to retail, he said.

"We expect companies to be truthful, and we have the track record to show that we're prepared to go to court to hold them accountable," he said.

  • "The law hasn't changed, and obligations of companies using these tools have not changed," because of AI, Levine said, adding that the FTC is especially concerned about AI "turbocharging" fraud.

Friction point: Levine said the FTC Act is a "viable tool" to protect the public from AI harm, allowing the agency to go after false advertising, discrimination and other conduct, but more resources are needed.

  • The FTC needs more authority to refund consumers for fraud, he said, noting the Supreme Court curtailed the agency's ability to obtain monetary remedies in 2021.
  • "Congress giving us back that tool to get money back to people who get scammed, and making sure we have the resources to take on what we fear will be a real scourge of AI-related fraud, is essential to us doing our job," he said.

Driving the news: The FTC recently warned companies over phony promises about AI.

  • Last year, the FTC banned Rite Aid from using AI facial recognition for five years after the agency determined it had been deployed without proper safeguards. The agency also permanently enjoined the firm Automators AI for claiming it could make people quick money with the technology.
  • Last month, the ACLU filed a complaint with the FTC, asking the agency to investigate hiring technology vendor Aon. The group alleges Aon used deceptive marketing when it said its tech was "bias-free" and could "improve diversity."
  • Levine said while he couldn't comment specifically on the ACLU's request, the agency is ready for such cases.

The bottom line: Levine has a message for the private sector: "I think some tech companies might be hoping for a repeat of the FTC's approach to Web 2.0 in the early 2000s, where it said, self-regulation seems to be working."

  • "That's not the position of this FTC. We are not just closing our eyes and hoping self-regulation is going to protect the public," he said.
  • "We're not standing back and saying we're not going to enforce the law because the technology is really new," Levine said. "We think if AI is going to be deployed successfully ... we need to be active."