
Meta headquarters in Menlo Park, California on August 5. Photo: Tayfun Coskun/Anadolu Agency via Getty Images
Meta, the parent company of Facebook and Instagram, announced a new policy Wednesday requiring political advertisers to disclose when they have used AI or other digital methods to create their ads.
Why it matters: Experts are increasingly concerned that generative AI could influence elections by fueling misinformation.
- About half of Americans expect misinformation spread by AI to impact who wins next year's presidential election, according to a recent Axios-Morning Consult AI Poll.
- The policy will take effect globally next year, Meta said.
- Advertisers who fail to disclose such alterations will have their ads rejected, "and repeated failure to disclose may result in penalties," the company added.
State of play: Starting next year, advertisers will need to disclose when a social issue, electoral, or political ad contains a "photorealistic image or video, or realistic sounding audio" that was digitally created or altered, Meta announced in a blog post Wednesday.
- Examples include ads that depict people saying or doing things they never said or did, or that show synthetic people or events that never happened.
Details: Advertisers aren't required to disclose changes that are "inconsequential or immaterial" to the claims or issues raised in the ad, such as adjusting image size or sharpness.
- Meta will also attach its own notice to an ad when the advertiser discloses that its content was digitally created or altered.
The big picture: The new policy builds on other measures by Meta to improve transparency around ads.
- Political advertisers on Meta are currently required to complete an authorization process and include a "paid for by" disclaimer on their ads, per the New York Times.
- Google announced a similar policy earlier this year, requiring election advertisers to disclose when ads contain synthetic or altered content.