Some voters in New Hampshire recently received an unusual phone call purporting to be from President Joe Biden. The caller advised residents to abstain from voting in last week’s primary election and instead “save your votes” for the general election in November.
This suggestion was perplexing, since voters have the right to participate in both elections. The reason behind the unusual call soon became apparent: it was an AI-generated robocall designed to mimic President Biden’s voice. You can listen to one of these calls here, provided by The Telegraph.
This incident serves as a real-world illustration of the potential misuse of AI technology by malicious actors. Recognizing the implications, the Federal Communications Commission (FCC) is now considering taking action against AI-generated calls.
FCC Chairwoman Jessica Rosenworcel announced a proposal that would have the FCC classify calls produced by artificial intelligence as “artificial” voices under the Telephone Consumer Protection Act (TCPA). If adopted, this classification would make AI-generated robocalls illegal.
The TCPA is the law the FCC most frequently uses to curb unwanted telemarketing calls; it prohibits artificial or prerecorded voice messages as well as automatic telephone dialing systems.
Rosenworcel emphasized the growing threat of AI-generated voice cloning and images, which can deceive consumers into believing fraudulent activities are legitimate.
She stated, “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.” By deeming such calls illegal under existing law, the FCC aims to give state Attorneys General offices additional tools to combat scams and protect consumers.
The timing of Rosenworcel’s statement suggests that the Biden robocalls have raised concerns about the potential misuse of AI-generated voices in telemarketing scams and even election fraud. While AI companies have taken some preventive measures, such as ElevenLabs suspending the user responsible for the Biden robocalls, challenges persist.
ElevenLabs stated its dedication to preventing the misuse of audio AI tools and said it takes incidents of misuse seriously. However, as recent incidents like the nonconsensual AI-generated pornographic images of Taylor Swift illustrate, opinions on the ethical use of AI products continue to differ across the industry.