Google has apologized for the botched introduction of a new artificial intelligence image-generator, admitting that in some situations, the tool would “overcompensate” by seeking a varied range of people even when such a range made no sense.
On Friday, Google provided a partial explanation for why its tool inserted people of color into historical contexts where they would not ordinarily appear.
This comes a day after Google announced that it was temporarily halting the Gemini chatbot’s ability to generate images with humans.
The pause came in response to a social media backlash from users who accused the tool of anti-white bias after it generated racially diverse sets of images in response to text prompts.
“Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well,” said Prabhakar Raghavan, a senior vice president who runs Google’s search engine.
Google deployed the new image-generating feature to its Gemini chatbot, which was previously known as Bard, around three weeks ago.
The feature was built on an earlier Google research experiment called Imagen 2, and Google has long acknowledged the risks of deploying such tools.
In a 2022 technical document, the researchers who built Imagen warned that generative AI techniques might be exploited for harassing or propagating misinformation “and raise many concerns regarding social and cultural exclusion and bias.”
These factors influenced Google’s decision not to share “a public demo” of Imagen or its underlying technology, the researchers stated at the time.
Since then, pressure to publicly release generative AI products has mounted amid a competitive scramble among tech companies seeking to capitalize on the technology, a race triggered by the debut of OpenAI’s chatbot ChatGPT.