AI's detection gap opens new vulnerabilities

Illustration: Lindsey Bailey/Axios
Humans are not great at detecting AI-generated text, images and video right now, and as AI models improve, detection will only get harder.
Why it matters: If we can't separate AI-made content from human-created material, governments and businesses will find themselves increasingly vulnerable to new kinds of attacks — both narrowly targeted operations and broader misinformation offensives.
- Bad actors succeed when the public believes it's impossible to know what's true and what isn't.
What they're saying: "We are now well past the point where humans can reliably distinguish between synthetically generated text, audio and images," Syracuse University research professor Jason Davis tells Axios.
- Davis focuses on detecting misinformation in his role leading the Semantic Forensics program, funded by DARPA.
- "While the capabilities in synthetic video generation are still a few steps behind these other media types, it is moving very quickly and I expect these capabilities to be on par with other media types in a matter of months, rather than years," he says.
- Even when we are able to detect AI-generated content, Davis says, we do it "after the fact in reactive mode" when the damage is already done.
Case in point: Commercially available tools to detect AI-generated content don't work very well, despite what their makers claim.
- Watermarking technology, favored by big tech companies, is ill-suited to preventing the spread of misinformation.
- No one has figured out how to create watermarks that can't be circumvented by determined bad actors (a toy sketch of how one watermarking scheme works, and why it's fragile, follows this list).
- For watermarks to succeed, everyone creating generative AI tools must agree to implement them.
- Criminals can also use off-the-shelf generative AI tools that don't produce watermarks.
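To make that concrete, here is a toy sketch of one statistical watermarking scheme for text, loosely based on the "green list" approach academic researchers proposed in 2023: the generator quietly favors a pseudo-random half of the vocabulary at each step, and a detector checks whether suspiciously many tokens landed in that half. The tiny vocabulary, hash choice and scoring below are simplified assumptions for illustration, not any vendor's actual watermark.

```python
import hashlib
import random

# Toy "green list" text watermark, loosely modeled on a scheme from
# 2023 academic research. The tiny vocabulary, the hash, the 50/50
# split and the scoring are all simplified assumptions for illustration.

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token: str) -> set[str]:
    """Deterministically pick half the vocabulary as 'green', seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that fall in the green list seeded by their predecessor."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

# A watermarking generator biases sampling toward green tokens, so its
# output scores well above the ~0.5 expected by chance; a detector
# flags text whose green fraction is statistically too high.
if __name__ == "__main__":
    print(f"green fraction: {green_fraction('the cat sat on the mat'.split()):.2f}")
```

The sketch also shows the fragility: paraphrasing reshuffles which token follows which, pushing the green fraction back toward chance, which is one reason determined actors can strip this kind of watermark.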
Zoom in: Instead of focusing on detection tools, tech companies have focused on labeling AI content — but labeling is fundamentally flawed, since so much content today is already a collaboration between human and machine.
- In July, Meta changed the wording on its labeling from "Made with AI" to "AI info" after complaints that its algorithm labeled content as AI-generated when photographers had just made minor edits with retouching tools.
- "Like others across the industry, we've found that our labels based on these indicators weren't always aligned with people's expectations and didn't always provide enough context," Meta announced in an update to a blog post.
Zoom out: Adobe focuses its labeling efforts on provenance, which means making it clear where a piece of content was generated and what happened to it along the way.
- In 2019, the company launched the Content Authenticity Initiative, which focuses on free provenance tools like Content Credentials.
- Adobe calls Content Credentials a "nutrition label" for digital content: it records the date and time the content was created and edited and signals "whether and how AI may have been used" (a simplified sketch of such a record follows this list).
- "Content Credentials aren't meant to be a detection tool for whether an image is fake or not," Adobe spokesperson Andrew Cha tells Axios. "It is used as a tool to surface provenance information to consumers so that they can make an informed decision about the trustworthiness of the content."
Between the lines: Lawmakers have introduced legislation governing the use and disclosure of synthetic material in political ads.
- For disclosure rules to work, they must "have teeth," Subramaniam Vincent, director of journalism and media ethics at Santa Clara University's Markkula Center for Applied Ethics, tells Axios. And "they need real and timely enforcement."
- While there are no federal laws governing the use of AI in campaign ads, about 20 states have passed their own laws on AI use and disclosure.
Yes, but: Indiana is one of those states, and its law says AI-generated content in campaign materials must be disclosed. That didn't deter Republican Sen. Mike Braun from running a campaign ad depicting his opponent standing at a podium, surrounded by supporters holding signs that read "No Gas Stoves!"
- Politico reports that the ad was digitally altered and that gas stoves were not discussed at the event.
- Braun did not apologize for the ad, per Axios Indianapolis, but has replaced it with a version that doesn't include the manipulated image. "No campaign is going to be perfect," Braun said.
- The campaign ad, including the "No Gas Stoves!" image and a label that says the video was digitally altered or artificially generated, is still posted on Braun's Facebook page.
