Google to Require ‘Prominent’ Disclosures for AI-Generated Election Ads
2023-09-07 03:56

Alphabet Inc.’s Google will soon require that all election advertisers disclose when their messages have been altered or created by artificial intelligence tools.

The policy update, which takes effect in mid-November, requires election advertisers across Google’s platforms to alert viewers when their ads contain images, video or audio made with generative AI, software that can create or edit content given a simple prompt. Advertisers must include prominent language such as “This audio was computer generated” or “This image does not depict real events” on altered election ads, the company said in a notice to advertisers. The policy does not apply to minor fixes, such as image resizing or brightening.

The update will improve Google’s transparency measures for election ads, the company said, especially given the growing prevalence of AI tools — including Google’s — that can produce synthetic content. “It'll help further support responsible political advertising and provide voters with the information they need to make informed decisions,” said Michael Aciman, a Google spokesperson.

Google’s new policy doesn’t apply to videos uploaded to YouTube that aren’t paid advertising, even if they are uploaded by political campaigns, the company said. Meta Platforms Inc., which owns Instagram and Facebook, and X, formerly known as Twitter, don't have specific disclosure rules for AI-generated ads.

Like other digital advertising services, Google has been under pressure to tackle misinformation across its platforms, including false claims about elections and voting that could undermine trust and participation in the democratic process. In 2018, Google began requiring election advertisers to go through an identity verification process, and a year later it added targeting restrictions for election ads and expanded its policy to cover ads about state-level candidates and officeholders, political parties and ballot initiatives. The company also touts its ads transparency center, where the public can look up who purchased election ads, how much they spent and how many impressions the ads received across the company’s platforms, which include its search engine and the video platform YouTube.

Still, the misinformation problem has persisted, especially on YouTube. In 2020, even as it enforced a separate policy for election ads, YouTube allowed regular videos spreading false claims of widespread election fraud under a rule that permitted commentary on the outcome of an election; those videos were reportedly viewed more than 137 million times during the week of Nov. 3. YouTube changed its rules only after the so-called safe harbor deadline passed on Dec. 8, 2020, the date by which all state-level election challenges, such as recounts and audits, were supposed to be completed.

And in June of this year, YouTube announced that it would stop removing content that advances false claims of widespread election fraud in 2020 and other past US presidential elections.

Google said YouTube’s community guidelines, which prohibit digitally manipulated content that may pose a serious risk of public harm, apply to all video content uploaded to the platform. The company also said it has enforced its political ads policies in previous years; in 2022, it blocked or removed 5.2 billion ads for violating its policies, including 142 million for violating Google’s misrepresentation policies.
