Google said Thursday it would “pause” its Gemini chatbot’s image generation tool after it was widely panned on social media for creating “diverse” images that were not historically or factually accurate – such as Black Vikings, Native American popes and female NHL players.
Users blasted Gemini as “absurdly woke” and “unusable” after requests to generate representative images for subjects such as America’s Founding Fathers resulted in bizarrely revisionist pictures.
“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a statement posted on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”
Examples included an AI image of a Black man who appeared to represent George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman dressed in papal attire even though all 266 popes throughout history were white men.
In one shocking example uncovered by The Verge, Gemini even generated “diverse” representations of Nazi-era German soldiers, including an Asian woman and a Black man decked out in 1943 military garb.
Google had earlier admitted that the chatbot’s erratic behavior needed to be fixed.
“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
The Post has reached out to Google for further comment.
It was a significant misstep for Google, which had just rebranded its main AI chatbot product under the Gemini name earlier this month and introduced heavily touted new features – including image generation.
The blunder also came days after OpenAI, which operates the popular ChatGPT, introduced a new AI tool called Sora that creates videos based on users’ text prompts.
Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to pinpoint why it was inventing diverse versions of historical figures and events.
When asked by The Post to provide its trust and safety guidelines, Gemini acknowledged that they were not “publicly disclosed due to technical complexities and intellectual property considerations.”
The chatbot also admitted it was aware of “criticisms that Gemini might have prioritized forced diversity in its image generation, leading to historically inaccurate portrayals.”
“The algorithms behind image generation models are complex and still under development,” Gemini said. “They may struggle to understand the nuances of historical context and cultural representation, leading to inaccurate outputs.”