Eyeing the prospect of candidate “deepfakes” in the 2024 elections, the Federal Election Commission has joined the debate on artificial intelligence (AI), voting unanimously at its August 10 meeting to advance a petition for rulemaking on deceptive campaign ads.

The rapid acceleration of generative AI has raised questions about how the technology could be deployed to mislead voters, for example, by creating video or audio of a candidate saying something damaging they never in fact uttered. With these questions in mind, the Commission voted to ask the public for comment on whether the agency should initiate a formal rulemaking to ban “deliberately deceptive Artificial Intelligence campaign ads,” often referred to as “deepfakes.”

Existing laws prohibit candidates and their representatives from fraudulently misrepresenting themselves as acting on behalf of another candidate or party in a damaging way, and bar any person from fraudulently misrepresenting themselves as acting on behalf of a candidate or party for the purpose of raising money. The law also prohibits conspiring to engage in such misrepresentation schemes.

Historically, these provisions have been used to crack down on sham websites that look and feel like a candidate’s website to the average observer, but in fact route donations to bad actors. If adopted, the proposal would update the FEC’s regulations to define AI-generated deepfakes as a form of prohibited misrepresentation.

Commissioners from both sides of the aisle have expressed concern about the potential impact of generative AI on the 2024 elections, but the proposal is not without skeptics. The main point of discussion at the August meeting was whether the Commission has the authority to adopt a regulation prohibiting a person from making intentionally deceptive ads, whether generated by AI or not.

Republican Commissioner Allen Dickerson noted that the proposal raises significant First Amendment concerns and may exceed the agency’s authority. He pointed out that while current law prohibits someone from pretending to be a candidate or from holding themselves out as working for a candidate or party, it does not prohibit an opponent from creating an AI-generated depiction of a candidate saying something damaging, unless the ad purports to come from the candidate, their campaign, or a surrogate. In Commissioner Dickerson’s view, the FEC should not act without further guidance from Congress. Nonetheless, he joined other Commissioners in seeking comment on his concerns, as well as on other issues that may be implicated by a rulemaking in this area.

The 60-day comment period is open through October 16, 2023. If you have questions about the rulemaking, or wish to submit comments, connect with Venable’s Political Law Practice.