Social networks: Meta recognizes images generated by artificial intelligence

The American giant Meta wants to identify “in the coming months” every image generated by artificial intelligence (AI) published on its social networks, a decision made against the backdrop of the fight against disinformation at the start of a year packed with elections.

“In the coming months, we will label images that users post on Facebook, Instagram and Threads when we can detect industry-standard signals showing they were generated by AI,” Nick Clegg, Meta's president of global affairs, announced on Tuesday in a blog post.

While Meta already applies these labels to images created with its own Meta AI tool, launched in December, “we want to be able to do the same with content created with tools from other companies” such as Google, OpenAI, Microsoft, Adobe, Midjourney or Shutterstock, he added.
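
Meta has not published its detection code, but one such industry-standard signal is the IPTC “digital source type” value that generator tools can embed in an image's XMP metadata. As a purely illustrative sketch (the file name and the raw byte search are assumptions, not Meta's actual pipeline), a first-pass check in Python could look like this:

```python
# Illustrative first-pass check for one industry-standard provenance
# signal: the IPTC "digital source type" URI that generator tools can
# embed in an image's XMP metadata. The raw byte search is a deliberate
# shortcut; real pipelines parse the metadata (and C2PA manifests) properly.

AI_SOURCE_TYPE = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def has_iptc_ai_marker(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC AI-source URI.

    XMP is stored as a plain XML packet inside JPEG/PNG files, so a byte
    search is a crude but workable first pass.
    """
    with open(path, "rb") as f:
        return AI_SOURCE_TYPE in f.read()

print(has_iptc_ai_marker("photo.jpg"))  # hypothetical file name
```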

“We are building this capability now and will begin applying labels in all languages supported by each application in the coming months,” the executive emphasized.

The announcement comes as the rise of generative AI raises fears that these tools could be used to sow political chaos, notably through disinformation, ahead of several key elections this year, including in the United States.

Beyond these elections, many experts and regulators warn that the development of generative AI programs is accompanied by a flood of degrading content, such as fake pornographic images (“deepfakes”) of famous women, a phenomenon that also targets people who are not in the public eye.

For example, a fake image of American superstar Taylor Swift was viewed 47 million times on X (formerly Twitter) at the end of January before being deleted. According to American media, the post remained online for about 17 hours.

Digital “watermark”

Nick Clegg admits that this large-scale labeling will “not completely eliminate” the risk of fake images, but argues that it would “certainly minimize” their spread “within what the technology currently allows.”

How does this work in practice? In addition to placing visible marks on AI-generated images, Meta also relies on “watermarking,” a digital marking technique that “consists of inserting, through AI, an invisible mark inside the generated image” so that social networks can recognize it, Gaëtan Le Guelvouit, an expert in digital watermarking at the b<>com technology research institute, explains to AFP.
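
For intuition only, here is a toy Python version of that idea: hiding a short marker in the least significant bits of a generated image. Everything here (the marker string, the blue-channel choice) is an assumption for illustration; production watermarks are learned, spread across the whole image and designed to survive compression, none of which holds for this sketch.

```python
# Toy illustration of the invisible-watermark idea described above:
# hide a short marker in the least significant bit (LSB) of each pixel's
# blue channel, leaving the image visually unchanged.

from PIL import Image  # pip install Pillow

MARKER = "AI-GEN"  # hypothetical payload identifying the generator

def embed_lsb(in_path: str, out_path: str, payload: str = MARKER) -> None:
    """Write the payload bits into the blue-channel LSBs, left to right."""
    img = Image.open(in_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in payload.encode("ascii"))
    pixels = img.load()
    width, height = img.size
    assert len(bits) <= width * height, "image too small for the payload"
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = (r, g, (b & ~1) | int(bit))  # overwrite blue LSB
    img.save(out_path, "PNG")  # lossless format, so the bits survive
```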

“Every time an image is posted on one of their social networks, some processing already takes place: compression, resizing, and so on. It doesn't cost much to add a small detection module to that pipeline. They have the means to do it,” Le Guelvouit adds.
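
Continuing the toy sketch above, and under the same illustrative assumptions, the small detection module he describes could be a read-back step run before any lossy processing:

```python
def extract_lsb(path: str, n_chars: int = len(MARKER)) -> str:
    """Read back n_chars hidden by embed_lsb from the sketch above."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, _ = img.size
    bits = [str(pixels[i % width, i // width][2] & 1) for i in range(n_chars * 8)]
    return "".join(
        chr(int("".join(bits[i:i + 8]), 2)) for i in range(0, len(bits), 8)
    )

def on_upload(path: str) -> bool:
    """Hypothetical upload hook: check for the marker before compression
    or resizing destroys the fragile LSB bits."""
    return extract_lsb(path) == MARKER
```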

“It's not perfect, the technology isn't quite there yet, but it is the most advanced attempt yet by a platform to provide meaningful transparency to billions of people around the world,” Nick Clegg told AFP.

“I really hope that by doing this and taking the lead, we encourage the rest of the industry to work together and try to develop the common (technical) standards that we need,” the Meta executive added, saying the company is ready to “share” its technology openly “as widely as possible.”

“According to our research, this is not an easy task, but it is likely an important element for increasing trust in generative AI and in technology platforms,” Duncan Stewart, director of technology, media and telecommunications research at Deloitte, tells AFP.

“It is important that companies work together and define or agree on common technical standards. Isolated solutions run the risk of being inadequate,” he adds.

OpenAI, the creator of ChatGPT, also announced tools to combat disinformation in mid-January, emphasizing that its image generator DALL-E 3 contains “guardrails” to prevent users from generating images of real people, notably political candidates.