Political advertising: Meta requires transparency in the use of artificial intelligence

To keep voters from being misled by deceptive messages, Meta (Facebook, Instagram) will require political campaigns to be transparent about their use of artificial intelligence (AI) in advertising, an issue that is raising growing concern ahead of the 2024 US presidential election.


“Advertisers must disclose whenever an election, political or social-issue ad contains a photorealistic image or video, or realistic-sounding audio, that has been digitally created or altered to depict a real person saying or doing something they did not say or do,” the social media giant announced in a statement on Wednesday.

This new regulation will apply worldwide next year.

This also covers ads that depict “a realistic-looking person that does not exist or a realistic-looking event that did not happen,” or “a realistic event that allegedly occurred, but that is not a faithful image, video or audio recording of the event.”

In all three cases, Meta will add a disclosure notice to the ad.

Advertisers are not required to disclose digital edits that have no bearing on the message, such as cropping or color-correcting a photo.

Distinguishing real from AI

The rise of generative AI, which can produce text, images and sound from a simple request in everyday language, is making it easier to create all kinds of content, including “deepfakes”: photos or videos manipulated for deceptive purposes.

From Washington to Brussels, authorities are trying to regulate this new technology, largely because of the challenges it poses to democracy.

US President Joe Biden signed an executive order at the end of October setting out rules and guidelines for companies in the industry on the security and use of their AI tools.

The 80-year-old Democrat mentioned that he had seen a deepfake video of himself. “I wondered, when could I have said that?” he said, alarmed by the idea that ill-intentioned people could defraud families by impersonating their relatives.

In particular, the White House wants companies to develop tools that make AI-generated content easy to identify.

Microsoft also unveiled a series of “election protection” initiatives on Wednesday, including a tool that lets political candidates digitally watermark their content in order to authenticate it.

Disinformation

The IT group will also set up a team to help political campaign managers better understand the use of AI, as well as a center to help “democratic governments around the world implement secure and resilient electoral processes.”

“Over the next 14 months, more than two billion people around the world will have the opportunity to vote in national elections,” Microsoft President Brad Smith said in a press release.

Meta is already in the crosshairs of authorities on several fronts, from personal data protection to child safety.

Since the Cambridge Analytica scandal, in which Facebook users' data was exploited to benefit Donald Trump's campaign in the US and the pro-Brexit camp in the UK in 2016, the Californian company has taken numerous measures to combat disinformation on its platforms.

“As always, we remove content that violates our rules, whether created by AI or by a human,” the company reminded on Wednesday.

“Our independent fact-checking partners review and evaluate viral misinformation, and we will not allow an ad to run if it is determined to be false, altered, partially false, or lacking context.”

AFP is one of dozens of media companies that Meta pays worldwide as part of its content verification program.