Meta denounces a rise in online propaganda campaigns from China

The American group Meta on Thursday denounced a rise in misleading online propaganda campaigns originating in China, tied to the upcoming 2024 elections in the United States and elsewhere in the world.

The multinational, whose portfolio includes Facebook and Instagram, said it had dismantled five coordinated influence networks based in China this year.

“Foreign threat actors are trying to influence voters online ahead of next year’s elections, and we must remain vigilant,” said Ben Nimmo, the American giant’s head of global threat intelligence, during the presentation of its latest security report.

Meta said it removed 4,789 fake Facebook accounts that were part of a campaign focused on US domestic politics and US-China relations.

These accounts praised China, attacked critical voices, and copied and pasted real online posts from US politicians that could stoke partisan divisions, the report said.

“As the election campaign gains momentum, we should expect foreign influence operations to seek to exploit real political groups and debates rather than create original content,” Nimmo said.

“We expect China-based influence operations to begin targeting these debates as relations with China become an election issue,” he added.

Meta traces these networks to China but does not attribute them to the Chinese government itself.

Russia remains the most prolific source of such networks, which, according to Meta, focus particularly on the war in Ukraine.

According to the report, websites linked to Russia-based campaigns have recently begun using the war between Hamas and Israel to tarnish the image of the United States.

Meta’s team of security experts believes there will be attempts to influence upcoming elections through fake “leaks” of supposedly hacked material.

“We hope that people will think carefully before sharing political content online,” Nimmo said. “It is important for political groups to be aware that heightened partisan tensions can be exploited by foreign threat actors.”

According to the report, these propaganda campaigns extend beyond Meta’s platforms to other social networks, blogs, forums and websites.

Artificial intelligence (AI) tools such as ChatGPT are being used to produce convincing fake content for propaganda campaigns, explained Nathaniel Gleicher, Meta’s head of security policy, during the presentation.

“Threat actors can use AI to create larger volumes of compelling content, even if they don’t have the cultural or linguistic knowledge to speak to their audience,” said Gleicher.

“Given the number of elections expected around the world in 2024, this means we must all prepare for a greater volume of such content, and our defenses must evolve to meet this challenge.”