Exclusive: Fraudsters create 200+ AI slop websites in one operation
Illustration: Sarah Grillo/Axios
Researchers have uncovered a network of more than 200 AI slop websites operated by a single group and spun up using basic AI prompts, according to new research shared first with Axios.
Why it matters: The operators left their AI content-generation prompts exposed inside the sites' JavaScript code — giving a rare look into how AI is used to supercharge scams.
The big picture: These operations often serve one of two purposes.
- Create a network of phishing websites to trick unsuspecting internet users into sharing sensitive data and payments.
- Collect money from advertisers who are duped into paying to place ads on these sites.
Zoom in: Researchers at DoubleVerify, a cybersecurity firm focused on digital media, found a coordinated network of more than 200 "made-for-advertising" websites spun up using templated prompts in a large language model.
- The operation, which DoubleVerify called "AutoBait," created a range of unsophisticated websites featuring slideshows with attention-grabbing photos and headlines designed primarily to capture digital advertising revenue.
- The articles often appeared nonsensical, reinforcing that the goal was not reader loyalty, but rapid ad impressions.
How it works: Fraudsters gave an LLM very specific instructions for creating these fake websites, according to the prompt left exposed in their JavaScript code.
- For instance, the model was told to front-load the first few slides with "the most sensational or shocking points — anything that stops someone mid-scroll."
- Headlines needed to be "ultra-literal" and the body text needed to "inject real emotion (fear, anger, shock, relief) into every paragraph."
- The LLM was also told to generate "ultra-realistic" photos that look like they were "casually taken on a smartphone by a real person." The photos needed to be "unfiltered, unstaged, and emotionally authentic."
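The report describes prompts embedded directly in the sites' client-side JavaScript. As a minimal, hypothetical sketch of how that exposure happens — the variable names and template structure are illustrative, though the quoted phrasing comes from the prompts DoubleVerify reported — a generation prompt hardcoded in browser code is visible to anyone who views the page source:

```javascript
// Hypothetical sketch: a content-generation prompt template hardcoded
// in client-side JavaScript. Quoted phrases are from the exposed prompts
// described in the DoubleVerify report; everything else is illustrative.
const SLIDE_PROMPT_TEMPLATE = [
  "Write a {slideCount}-slide article about {topic}.",
  "Front-load the first few slides with the most sensational or shocking",
  "points -- anything that stops someone mid-scroll.",
  "Headlines must be ultra-literal.",
  "Inject real emotion (fear, anger, shock, relief) into every paragraph.",
].join(" ");

// Fill in per-page values. Because this runs in the browser, the full
// prompt ships to every visitor -- which is how researchers recovered it.
function buildPrompt(topic, slideCount) {
  return SLIDE_PROMPT_TEMPLATE
    .replace("{topic}", topic)
    .replace("{slideCount}", String(slideCount));
}

console.log(buildPrompt("celebrity health scares", 12));
```

Server-side generation would have kept the prompts private; leaving them in shipped JavaScript is the operational sloppiness that gave researchers their window into the scheme.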
Between the lines: Many of the domains previously hosted legitimate content, but the owners let their registration lapse.
- This allows scammers to capitalize on existing domain history and traffic, Gilit Saporta, senior director of fraud and quality analysis at DoubleVerify, told Axios.
- One site was tied to a financial blogger who died.
The intrigue: Scam websites like these existed before AI — but AI tools have dramatically lowered the cost and time needed to scale such an operation.
- "AI is supercharging so many types of attacks, and we know that fraudulent actors are always going to be early adopters of any new, cool trick that they can use," Saporta said.
Threat level: DoubleVerify found that the actors behind "AutoBait" spent less than $2.25 to generate each article page.
- Each site was covered in multiple ad banners, providing opportunities for fraudsters to quickly cover their costs, per the report.
- That means legitimate brands can unknowingly fund the very networks built to manipulate users and exploit the ad ecosystem.
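The economics above can be sketched with back-of-the-envelope math. Only the under-$2.25-per-page cost comes from the report; the CPM rate and banner count below are assumptions chosen purely to illustrate how quickly a page can break even:

```javascript
// Break-even sketch. COST_PER_PAGE is the reported upper bound;
// ASSUMED_CPM and BANNERS_PER_PAGE are hypothetical illustration values.
const COST_PER_PAGE = 2.25;  // reported: < $2.25 to generate each page
const ASSUMED_CPM = 1.0;     // assumption: $1 per 1,000 ad impressions
const BANNERS_PER_PAGE = 5;  // assumption: ad slots per page

// Each page view serves one impression per banner.
const revenuePerView = BANNERS_PER_PAGE * ASSUMED_CPM / 1000;
const viewsToBreakEven = Math.ceil(COST_PER_PAGE / revenuePerView);
console.log(viewsToBreakEven); // 450 page views
```

Under those assumed numbers, a few hundred page views covers the generation cost of an article — everything after that is profit, which is why the model scales so aggressively.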
What to watch: Companies, including DoubleVerify, have released tools designed to help detect AI slop websites based on a set of classifications and characteristics.
Go deeper: Platforms struggle to combat spam economy
