TikTok

TikTok accounts spread falsehoods using voices generated by artificial intelligence. Image: The New York Times.

In a slickly produced TikTok video, you can hear former President Barack Obama — or a voice so similar to him it’ll give you goosebumps — defending himself against an explosive new conspiracy theory about the sudden death of his former chef.

“Although I cannot understand the basis of the allegations made against me, I urge everyone to remember the importance of unity, understanding and not rushing to judgment.”

In fact, the voice was not that of the former president. It was a convincing fake, created with sophisticated new artificial intelligence tools that can clone real voices and turn them into digital puppets with just a few mouse clicks.

AI voice generation technology has gained traction and widespread recognition since companies like ElevenLabs launched a range of new tools late last year. Since then, audio deepfakes have quickly become a new weapon on the battlefield of online misinformation, threatening to accelerate the spread of political falsehoods ahead of the 2024 election by letting creators put conspiracy theories into the mouths of celebrities, news anchors and politicians.

The fake audio adds to the threats already posed by artificial intelligence: deepfake videos, ChatGPT text that reads as if written by humans, and images from services like Midjourney.

Agencies tasked with monitoring disinformation have noted that the number of videos featuring artificial intelligence voices has increased as content producers and disinformation peddlers adopt the new tools. Social platforms like TikTok have struggled to detect and label this content.

NewsGuard, a company that monitors online misinformation, discovered the video that sounded like Obama. According to a report the group published in September, the video was posted by one of seventeen TikTok accounts that NewsGuard identified as promoting unfounded claims with fake audio. The accounts mostly posted videos of celebrity gossip narrated by AI voices, but they also spread the unfounded claim that Obama is gay and the conspiracy theory that Oprah Winfrey was involved in the slave trade. The channels had collectively received hundreds of millions of views and comments, suggesting that some viewers believed the claims.

TikTok requires labels identifying realistic AI-generated content as fake, but these did not appear in the videos reported by NewsGuard. TikTok said it had removed or stopped recommending several accounts and videos for violating policies against impersonating news organizations and spreading harmful misinformation. The video featuring the AI-generated imitation of Obama’s voice was also removed because it violated TikTok’s synthetic media policy: it contained highly realistic content that was not labeled as manipulated or fake.

“TikTok is the first platform to offer creators a tool for labeling AI-generated content and a founding member of a new industry code of best practices that promotes responsible use of synthetic media,” said Jamie Favazza, a spokeswoman for TikTok, referring to a framework recently launched by the nonprofit Partnership on AI.

Although NewsGuard’s report focused on TikTok, a platform that is increasingly becoming a news source, similar content was shared on YouTube, Instagram and Facebook.

Platforms like TikTok allow AI-generated content featuring public figures, including news anchors, as long as it does not spread misinformation. Parody videos showing AI-generated conversations between politicians, celebrities or business leaders, some of them dead, have proliferated since the tools became widely available. Manipulated audio adds a new layer to misleading videos on platforms that have already hosted fake versions of Tom Cruise, Elon Musk and news anchors like Gayle King and Norah O’Donnell. TikTok and other platforms have recently faced a wave of misleading ads featuring deepfakes of celebrities like Cruise and the YouTube star MrBeast.

The power of these technologies could profoundly impact audiences. “We know that audio and video may stick in our minds more than text,” said Claire Leibowicz, head of artificial intelligence and media integrity at the Partnership on AI, an organization that has worked with tech companies and media outlets on a set of recommendations for creating, sharing and distributing AI-generated content.

Last month, TikTok announced that it was launching a label that users can select to indicate that their videos use artificial intelligence. In April, the app began requiring users to disclose manipulated media showing realistic scenes, and it banned deepfakes of young people and private individuals. David Rand, a professor of management science at the Massachusetts Institute of Technology whom TikTok consulted on the wording of the new labels, said the labels are of limited use against misinformation, because people who set out to mislead are not going to put a label on their content.

TikTok also announced last month that it was testing automated tools to detect and label AI-generated media, which Rand said would be more useful, at least in the short term.

YouTube bans the use of artificial intelligence in political ads and requires other advertisers to label their ads when they use AI. In 2020, Meta, the company that owns Facebook, added a label to its fact-checking toolkit describing whether a video is “manipulated.” X, formerly known as Twitter, requires that misleading content be “altered, manipulated or fabricated in a materially misleading manner” before it violates its policies. The company did not respond to requests for comment.

Obama’s artificial intelligence voice was created with tools from ElevenLabs, a company that burst onto the international stage late last year with a free AI text-to-speech tool capable of producing realistic audio in seconds. The tool also allowed users to upload a recording of a person’s voice and create a digital copy of it.

After the tool’s launch, users of 4chan, the right-wing message board, circulated a fake recording of the actress Emma Watson reading a lengthy antisemitic rant.

ElevenLabs, a 27-employee company based in New York City, responded to the abuse by making the voice cloning feature available only to paying users. The company has also launched an AI detection tool to identify AI content generated by its services.

“More than 99 percent of our platform’s users create interesting, innovative and useful content, but we are aware that there are cases of abuse, and we have continually developed and released safeguards to curb it,” a representative for ElevenLabs said in an emailed statement.

Leibowicz of the Partnership on AI said synthetic audio presents a distinctive challenge for listeners compared with visual manipulations.

“If it’s a podcast, would you need a label every five seconds?” Leibowicz asked. “How do you deliver a consistent message across a long stretch of audio?”

(From The New York Times)