
Google explains Gemini's “embarrassing” AI images of diverse Nazis

Google has issued an explanation for the “embarrassing and inaccurate” images generated by its Gemini AI tool. In a blog post on Friday, Google said its model produced “inaccurate historical” images due to tuning issues. The Verge and others caught Gemini earlier this week generating images of racially diverse Nazis and US founding fathers.

“Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Google senior vice president Prabhakar Raghavan wrote in the post. “And second, over time, the model became much more cautious than we intended, refusing to respond to certain prompts entirely and misinterpreting some very innocuous prompts as sensitive.”

Gemini's results for the prompt “create an image of a 19th century U.S. senator.” Screenshot by Adi Robertson

These issues caused Gemini AI to “overcompensate in some cases,” as we saw with the images of racially diverse Nazis. They also caused Gemini to become “over-conservative,” refusing to create specific images of “a Black person” or “a white person” when asked to do so.

In the blog post, Raghavan says Google is “sorry the feature didn't work well.” He also notes that Google wants Gemini to “work well for everyone,” which means getting depictions of different types of people (including different ethnicities) when you ask for images of “football players” or “someone walking a dog.” But, he says:

However, if you prompt Gemini for images of a specific type of person – such as “a Black teacher in a classroom” or “a white veterinarian with a dog” – or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for.

Raghavan says Google will continue to test Gemini AI's image generation abilities and “work to improve it significantly” before re-enabling them. “As we've said from the beginning, hallucinations are a known challenge with all LLMs [large language models] – there are instances where the AI just gets things wrong,” Raghavan notes. “This is something that we're constantly working on improving.”