Google Mandates Disclosure for Digitally Altered Election Ads

Key Points:

  • Advertisers must select a checkbox in the “altered or synthetic content” section of their campaign settings.
  • Google will generate in-ad disclosures for feeds and Shorts on mobile and for in-stream ads on computers and TV; other formats must include a prominent disclosure to users.
  • The rise of generative AI and deepfakes has increased concerns about their potential misuse in creating misleading content.
  • OpenAI disrupted AI-driven covert influence operations, and Meta requires disclosure of AI-altered political ads on its platforms.

Google announced on Monday that it will require advertisers to disclose election ads featuring digitally altered content that depicts real or realistic-looking people or events. This move is part of Google’s ongoing efforts to combat election misinformation.

The new disclosure requirements fall under Google’s political content policy. Advertisers must now select a checkbox in the “altered or synthetic content” section of their campaign settings. This update addresses growing concerns about the misuse of generative AI, which can quickly create text, images, and videos in response to prompts. The rise of deepfakes, content manipulated to convincingly misrepresent individuals, has further blurred the line between genuine and fake media.

Google will automatically generate an in-ad disclosure for feeds and Shorts on mobile phones and for in-stream ads on computers and televisions. For other ad formats, advertisers must provide a “prominent disclosure” to users. According to Google, the “acceptable disclosure language” will vary based on the ad’s context.

This policy update comes amid increasing instances of AI-generated misinformation. For example, during India’s general election in April, fake videos of Bollywood actors criticizing Prime Minister Narendra Modi went viral. These AI-generated videos urged people to vote for the opposition Congress party.

In a related effort to curb AI misuse, Sam Altman-led OpenAI reported in May that it had disrupted five covert influence operations that attempted to use its AI models for deceptive activity online aimed at manipulating public opinion or influencing political outcomes.

Similarly, Meta Platforms announced last year that it would require advertisers to disclose if AI or other digital tools were used to alter or create political, social, or election-related ads on Facebook and Instagram.

