Videos viewed thousands of times on Facebook tout the benefits of weight-loss or diabetes products by exploiting the images of star TV presenters from American networks such as CBS and CNN.
These videos, however, are created entirely with artificial intelligence (AI) software. Such deepfakes, increasingly common and increasingly realistic digital manipulations, threaten the reputation of traditional media outlets.
Some presenters and journalists whose identities were hijacked have responded directly to the circulation of these manipulated videos on social media.
“I have never heard of or used this product! Don’t let these AI-generated clips fool you,” CBS anchor Gayle King posted on her Instagram page in October.
Other videos even use and distort the words of billionaire and Tesla boss Elon Musk for commercial purposes.
These deepfakes, which promote dubious products and all manner of investment schemes, often link to e-commerce platforms and short-lived websites that vanish just days after the clips are shared on social media.
Since 2020, Meta – the parent company of Facebook and Instagram – has banned the distribution of these videos on its platforms, with the exception of parodies and certain satirical content. These clips, many examples of which were analyzed and verified by AFP, nevertheless continue to circulate freely on the Internet.
Voice cloning
“There is a resurgence of these types of videos, which use a voice sample of just two minutes to reproduce a person’s voice in an entirely new, fictional sequence with synchronized mouth movements,” explains Hany Farid, a professor specializing in digital technology at the University of California, Berkeley.
Audiovisual personalities are an easy target for training artificial intelligence software due to their constant on-screen presence.
Andrea Hickerson, dean of the University of Mississippi School of Journalism, called this a troubling trend because the public has built up familiarity with the impersonated public figures.
“It’s really dangerous because people don’t expect disinformation to be expressed in this way,” she told AFP.
The content is then presented “like in traditional media”.
“Crisis of Confidence”
Content manipulated by AI is also playing a growing role in financial fraud, which, according to the US Federal Trade Commission (FTC), cost Americans around $3.8 billion in 2022.
These scams have reportedly targeted people in several countries, including Canada and Australia, costing some victims tens or even hundreds of thousands of dollars.
“Frauds are becoming increasingly complex as criminals combine traditional fraud methods with cryptocurrency scams and artificial intelligence programs,” said attorney Chase Carlson in a blog post published earlier this year.
Americans are also increasingly concerned about the use of AI, particularly in politics.
According to a survey released in September by the media outlet Axios and the polling firm Morning Consult, more than 50% of Americans expect AI-generated falsehoods to influence the 2024 presidential election.
AFP has already analyzed fake videos in which American President Joe Biden appears to announce a general mobilization or in which former Secretary of State Hillary Clinton declares her support for the Republican Governor of Florida, Ron DeSantis, in the next presidential election.
Only a third of Americans trust the news media “a great deal” or “a fair amount,” according to a Gallup poll conducted in October, matching the record low set in 2016.
For Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University, the spread of this content, which is sometimes easy to detect because of its poor quality, risks fomenting a “crisis of confidence” among the public toward the media and institutions.
“Quality information is always available, and with a good dose of skepticism we can separate fact from fiction,” the expert emphasizes, urging caution before sharing any kind of content online.