Is a phone call that uses artificial intelligence to imitate a real person "an artificial or prerecorded voice," subject to the restrictions of the Telephone Consumer Protection Act? The Federal Communications Commission unanimously answered yes in a recent declaratory ruling, foreclosing creative arguments that a "voice clone" is a live call rather than an artificial voice subject to the more than 30-year-old law. The decision, which comes just weeks after thousands of New Hampshire voters reportedly received robocalls impersonating President Biden's voice urging them not to vote in the state's primary, has important implications for use of the burgeoning technology in the 2024 elections.
As campaigns and their supporters experiment with new uses for AI technology, the FCC's declaratory ruling immediately extends the existing protections of the TCPA to AI-generated calls, such as those pretending to be a candidate, surrogate, or other voice trusted by the recipients. The ruling immediately requires callers that use AI technologies to simulate human voices to obtain the prior express consent or prior express written consent of recipients before calls are placed to residential or wireless numbers, unless an emergency purpose or a TCPA exemption applies. AI-generated calls will also need to provide certain identifying information about the party responsible for placing the calls and offer certain opt-out rights.
The FCC's ruling is not limited to AI-generated content of a political nature, but it puts those using the technology for advocacy ahead of the 2024 elections on notice, specifically citing reports of the fake Biden robocalls. "The use of generative AI has brought a fresh threat to voter suppression schemes and the campaign season with the heightened believability of fake robocalls," wrote one commissioner.
Existing exceptions to the TCPA for political and nonprofit calls will continue to apply to such calls using AI. For example, nonprofit organizations need not obtain consent for AI-generated calls placed to residential lines, provided the caller makes no more than three such calls to a particular residential line within any 30-day period. But nonprofits that send AI-generated content to cell phones at any frequency must have the prior consent of the recipients and may not take the position that the simulation of a human voice avoids the need for consent to receive prerecorded messages.
The FCC acknowledged the opportunities AI technologies offer and the fact that not all calls employing AI are deceptive or fraudulent. Nevertheless, the ruling will require those placing calls that use AI to proceed with greater caution to avoid investigations, litigation, and penalties, and it adds to the growing web of regulation from federal, state, and local regulators that organizations must contend with when incorporating AI into their advocacy.
Have questions about your organization’s use of robocalls or text messages? Connect with Venable’s Nonprofit and Political Law Practices.