Asian Spectator

FCC bans robocalls using deepfake voice clones – but AI-generated disinformation still looms over elections

  • Written by Joan Donovan, Assistant Professor of Journalism and Emerging Media Studies, Boston University

The Federal Communications Commission on Feb. 8, 2024, outlawed robocalls[1] that use voices generated by artificial intelligence.

The 1991 Telephone Consumer Protection Act[2] bans artificial voices in robocalls. The FCC’s Feb. 8 ruling[3] declares that AI-generated voices, including clones of real people’s voices, are artificial and therefore banned by law.

The move follows on the heels of a robocall on Jan. 21, 2024, from what sounded like President Joe Biden. The call had Biden’s voice[4] urging voters inclined to support Biden and the Democratic Party not to participate in New Hampshire’s Jan. 23 GOP primary election. The call falsely implied[5] that a registered Democrat could vote in the Republican primary and that a voter who voted in the primary would be ineligible to vote in the general election in November.

The call, two days before the primary, appears to have been an artificial intelligence deepfake[6]. It also appears to have been an attempt to discourage voting[7].

The FCC and the New Hampshire attorney general’s office are investigating the call. On Feb. 6, 2024, New Hampshire Attorney General John Formella identified two Texas companies[8], Life Corp. and Lingo Telecom, as the source and transmitter, respectively, of the call.

Injecting confusion

Robocalls in elections are nothing new and not illegal[9]; many are simply efforts to get out the vote. But they have also been used in voter suppression[10] campaigns. What compounds the problem in this case is the use of AI to clone Biden’s voice.

In a media ecosystem full of noise, scrambled signals such as deepfake robocalls make it virtually impossible to tell facts from fakes.


Recently, a number of companies have popped up online offering impersonation as a service[11]. For users like you and me, it’s as easy as selecting a politician, celebrity or executive like Joe Biden, Donald Trump or Elon Musk from a menu and typing a script of what you want them to appear to say, and the website creates the deepfake automatically.

Though the audio and video output is usually choppy and stilted, when the audio is delivered via a robocall it’s very believable. You could easily think you are hearing a recording of Joe Biden, but really it’s machine-made misinformation.

Context is key

I’m a media and disinformation scholar[12]. In 2019, information scientist Brit Paris[13] and I studied how generative adversarial networks[14] – what most people today think of as AI – would transform the ways institutions assess evidence and make decisions when judging realistic-looking audio and video manipulation. What we found was that no single piece of media is reliable on its face; rather, context matters for making an interpretation.

When it comes to AI-enhanced disinformation, the believability of deepfakes hinges on where you see or hear them or who shares them. Without a valid and confirmed source vouching for it as a fact, a deepfake might be interesting or funny but will never pass muster in a courtroom. However, deepfakes can still be damaging when used in efforts to suppress the vote or shape public opinion on divisive issues.

AI-enhanced disinformation campaigns are difficult to counter because unmasking the source requires tracking the trail of metadata, which is the data about a piece of media. How this is done varies, depending on the method of distribution: robocalls, social media, email, text message or websites. Right now, research on audio and video manipulation is more difficult because many big tech companies have shut down access to their application programming interfaces, which make it possible for researchers to collect data about social media, and the companies have laid off their trust and safety teams[15].

Timely, accurate, local knowledge

In many ways, AI-enhanced disinformation such as the New Hampshire robocall poses the same problems as every other form of disinformation. People who use AI to disrupt elections are likely to do what they can to hide their tracks, which is why it’s necessary for the public to remain skeptical about claims that do not come from verified sources, such as local TV news or social media accounts of reputable news organizations.

It’s also important for the public to understand what new audio and visual manipulation technology is capable of. Now that the technology has become widely available, and with a pivotal election year ahead, the fake Biden robocall is only the latest of what is likely to be a series of AI-enhanced disinformation campaigns, even though these calls are now explicitly illegal.

I believe society needs to learn to venerate what I call TALK: timely, accurate, local knowledge. It’s important to design social media systems that value timely, accurate, local knowledge over disruption and divisiveness.

It’s also important to make it more difficult for disinformers to profit from undermining democracy. For example, the malicious use of technology to suppress voter turnout should be vigorously investigated by federal and state law enforcement authorities.

While deepfakes may catch people by surprise, they should not catch us off guard, no matter how slow the truth is compared with the speed of disinformation.

This is an updated version of an article originally published on Jan. 23, 2024.

References

  1. ^ outlawed robocalls (apnews.com)
  2. ^ Telephone Consumer Protection Act (www.congress.gov)
  3. ^ Feb. 8 ruling (docs.fcc.gov)
  4. ^ call had Biden’s voice (soundcloud.com)
  5. ^ falsely implied (www.nytimes.com)
  6. ^ an artificial intelligence deepfake (apnews.com)
  7. ^ an attempt to discourage voting (www.doj.nh.gov)
  8. ^ identified two Texas companies (www.doj.nh.gov)
  9. ^ not illegal (www.fcc.gov)
  10. ^ voter suppression (www.thedailybeast.com)
  11. ^ offering impersonation as a service (www.ftc.gov)
  12. ^ media and disinformation scholar (scholar.google.com)
  13. ^ Brit Paris (scholar.google.com)
  14. ^ studied how generative adversarial networks (datasociety.net)
  15. ^ laid off their trust and safety teams (www.cnbc.com)

Authors: Joan Donovan, Assistant Professor of Journalism and Emerging Media Studies, Boston University

Read more https://theconversation.com/fcc-bans-robocalls-using-deepfake-voice-clones-but-ai-generated-disinformation-still-looms-over-elections-223160

