Oct 26, 2023 - Technology

The fight against deepfakes expands to hardware

A photo of the rear of a Leica M11-P camera showing an image of the New York skyline, with the photo's provenance embedded in the metadata

Image: Leica

As AI photo editing apps become more accessible and pervasive, software and hardware makers are building tools to help consumers verify the authenticity of an image starting from the moment of capture.

Driving the news: Leica announced Wednesday that its new M11-P camera will be the first with the ability to apply Content Credentials from the moment an image is captured.

Why it matters: Adobe, Microsoft and others are adding metadata called Content Credentials to note when AI has been used to create or alter an image. But extending content verification all the way to the camera is seen as a critical step in the battle against deepfakes.

  • Qualcomm said Tuesday that its latest high-end smartphone chip, the Snapdragon 8 Gen 3, has built-in support for similar labeling of images — both those captured in the camera and those generated through AI — using technology from Truepic.
  • Google announced Wednesday the availability of its "About this image" feature, which offers information on how a photo was captured and altered, as well as when the image first appeared in Google's search engine. That can be helpful in breaking news situations, where old images are often recirculated.
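The capture-to-verification idea behind these announcements can be sketched as a toy signing scheme: the device computes a cryptographic tag over the image at capture, and any later alteration breaks verification. This is only an illustration — real Content Credentials follow the C2PA specification and use public-key certificates and signed manifests, not the shared secret assumed below.

```python
import hashlib
import hmac

# Toy stand-in for a device signing key. Real Content Credentials use
# public-key certificates rather than a shared secret (assumption for brevity).
DEVICE_KEY = b"camera-secret-key"

def sign_at_capture(image_bytes: bytes) -> str:
    """Compute a provenance tag over the image at the moment of capture."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """Check that the image still matches the tag recorded at capture."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"raw sensor data"
tag = sign_at_capture(original)
print(verify(original, tag))          # the unmodified image verifies
print(verify(b"edited pixels", tag))  # any alteration breaks verification
```

The point of the design is that trust is established once, on-device at capture, and every downstream editor or platform can check the image against its tag without trusting each other.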

Yes, but: Leica is a high-end camera maker and the M11-P costs nearly $10,000. Most user-generated images come from smartphones.

  • The announcement by Qualcomm and Truepic could affect far more people, but their approach requires consumers, phone makers and app developers to all agree to use the image-verification capability.
  • Ideally, app makers would build content authentication into all iOS and Android camera apps, something Qualcomm senior vice president Alex Katouzian told Axios he believes will happen in the coming years. "They're going to do the right thing, I believe," he said.

Be smart: This isn't a perfect solution. The key, Adobe general counsel and chief trust officer Dana Rao said, is getting enough images authenticated that people are suspicious when they see a photo without authentication.

  • While Canon, Nikon and Sony have not yet incorporated the technology into their cameras, Rao noted that they are all members of the Content Authenticity Initiative, which backs the Content Credentials standard being used by Leica.
  • Rao also noted that momentum for the initiative has picked up with the White House calling for the labeling of AI-generated images and videos.

Between the lines: Content authentication becomes more necessary as AI-powered photo editing tools grow more accessible and more pervasive. A signature feature of Google's new Pixel smartphones, for example, is a "Magic Editor" that lets you easily move people and objects in a photo. Adobe and others are touting similar capabilities.

What they're saying: "We believe deploying the provenance open standard on-device is one of the most significant breakthroughs toward a more authentic internet and will be the model moving forward," Truepic CEO Jeff McGregor said in a statement.

Zoom in: Widespread use of such authentication tools would be useful, for example, in the current Israel-Gaza conflict.

  • "The problem isn't that deepfakes are everywhere," Rao said. "It's that doubt is everywhere. People no longer know what to believe."

Disclosure: Some reporting for this article took place at Qualcomm's Snapdragon Summit in Maui, where I am moderating an AI-related panel on Thursday. Qualcomm paid for my travel-related costs.
