The Federal Election Commission on Thursday took a small step toward regulating so-called deepfake material in political ads, agreeing to seek public comment on whether existing federal rules against fraudulent campaign advertising apply to ads that use artificial intelligence technology.
But it's unclear whether the six-member commission, evenly divided along party lines, will move forward with formal regulations once the two-month public comment window closes, likely in October.
The unanimous vote to allow public comment comes after the commissioners deadlocked on the issue in June. On Thursday, one Republican commissioner, Allen Dickerson, reiterated his argument that the agency lacks the authority from Congress to weigh in on the matter. He said the agency's existing rules center on candidates misrepresenting themselves and do not extend to regulating false claims made by a politician's rivals.
The decision to open public comment came after the left-leaning watchdog group Public Citizen filed a second petition -- following June's stalemate -- that once again urged the commission to ban candidates and political parties from targeting their rivals with "deliberately deceptive" ads generated with AI.
And, as CNN previously reported, dozens of Democratic lawmakers backed Public Citizen's move and called on the agency to consider cracking down on deepfakes.
Concerns have grown that the soaring use of powerful AI technology is outpacing efforts to regulate it on the campaign trail.
It has already begun to crop up in the 2024 presidential contest. In June, for instance, Florida Gov. Ron DeSantis' presidential campaign released a video on social media that appeared to use images generated by artificial intelligence to depict Donald Trump, the front-runner for the GOP nomination, hugging Dr. Anthony Fauci, a bête noire of the former president's.
"If we don't get ahead of this challenge, the election and people's trust in elections will be damaged," Lisa Gilbert, Public Citizen's executive vice president, told CNN this week. "This is not a partisan problem."
A group of Democratic lawmakers in Congress has introduced legislation to require disclaimers on political ads that use images or videos generated by AI, but the bills have not attracted Republican support so far.
State lawmakers are moving to regulate the rapidly evolving technology.
Roughly a dozen states have moved to target the use of nonconsensual deepfakes in pornography, said Matthew Ferraro, an attorney at WilmerHale, who is tracking the state activity. Meanwhile, laws in four states -- California, Minnesota, Texas and Washington state -- take aim at deepfakes in political ads, he said.
The Minnesota and Washington state laws were enacted this year.
"The state legislatures are moving with alacrity," Ferraro said. "Pornography was the focus when it was the most present danger, but now the growth of deepfakes targeting candidates has expanded that focus to include concerns about elections."