Artificial Intelligence

As campaigns explore new ways to harness artificial intelligence, regulators are rushing to keep pace ahead of the 2024 elections. The explosion in generative AI has put pressure on lawmakers and advertising platforms alike to stay ahead of deepfakes, voice clones, and other AI-generated political advertising that may deceive voters or spread misinformation, all while balancing the promise of "friendly" applications that make campaign tools more efficient and affordable.

But regulating AI in political communications poses unique challenges. What qualifies as deceptive advertising? Can deceptive uses of AI be banned, given the First Amendment’s special protections for political expression? Who is regulating AI-generated political ads, and who is responsible for enforcing any controls? Do advertising platforms have a role in policing the content?

Venable’s Political Law Practice Group is monitoring ongoing efforts to regulate AI in political advertising at the federal, state, and industry levels. The following highlights some of these efforts and the emerging trends.

Eyeing the prospect of candidate “deepfakes” in the 2024 elections, the Federal Election Commission has joined the debate on artificial intelligence (AI), voting unanimously at its August 10 meeting to move forward with a rulemaking on deceptive campaign ads.

The rapid acceleration of generative AI has raised questions about how the technology could be deployed to mislead voters, for example, by creating video or audio of a candidate saying something damaging they never in fact uttered. With these questions in mind, the Commission voted to ask the public for comment on whether the agency should initiate a formal rulemaking to ban “deliberately deceptive Artificial Intelligence campaign ads,” often referred to as “deepfakes.”