Google pauses Gemini's generation of images of people

Google announced on Thursday that it would pause the creation of images of people with its generative artificial intelligence (AI) tool Gemini, citing "current issues" with the feature.

The tech giant introduced the image generation feature in its AI software Gemini in early February, at the same time as Gemini's launch in Canada.

Since then, critics on social media have pointed out inaccuracies in some of the images it created, particularly regarding gender and diversity in historical subjects.

Google did not refer to specific images in its statement, but users on X (formerly Twitter) noted that Gemini appeared to underrepresent white people in the images it generated. Among other examples, several users posted a query for a German soldier from 1943, during the Nazi regime, which produced images of Asian or Black soldiers.

A former Google employee also wrote on X that it was difficult to get Google Gemini to acknowledge that white people exist.

"We are working to resolve current issues with Gemini's image generation feature. In the meantime, we will pause the generation of images of people and will release an improved version of this feature soon," said Jack Krawczyk, the Google product director responsible for Gemini, in a statement sent to AFP.

"Gemini's AI image generation does produce a wide variety of people. And that's generally a good thing, because people around the world use it. But in this case, we missed the mark."

In a reaction posted on X, Elon Musk attacked the AI tools of Google (Gemini) and OpenAI (ChatGPT), which he describes as racist and woke.

When unveiling Gemini to journalists in Canada in early February, Google emphasized that much of its work on fundamental research and the responsible use of AI is done in Canada.

Not a first

This isn't the first time Google's artificial intelligence has been singled out over diversity issues.

About ten years ago, Google's Photos app labelled some Black people in a photo as gorillas. The Californian giant had to apologize.

Google isn't the only company criticized for producing images that contain stereotypes. Notably, a Washington Post investigation found that the Stable Diffusion image generator depicted food aid recipients mostly as dark-skinned people, even though in the United States the majority of recipients are white.

Since the end of 2022 and the success of ChatGPT, generative AI capable of producing all types of content (text, sound, images or video) in everyday language upon simple request has been generating great enthusiasm. Tech giants are racing to provide AI tools to organizations and individuals.

The technology has become increasingly impressive: OpenAI's Sora software, unveiled in mid-February, is now capable of creating one-minute videos. This has also prompted the introduction of protective measures.

Last week, twenty of the most advanced companies in the field, including Meta, Microsoft, Google and OpenAI, pledged to develop new techniques to identify disinformation content created with AI.

With information from Agence France-Presse