Google has temporarily stopped its AI tool Gemini from producing images of people after backlash over historically inaccurate depictions of race.
The AI tool generated historically inaccurate images of people of color. For example, it depicted White U.S. founding fathers as Black.
Google admits Gemini is "missing the mark."
"Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn't work well," said Prabhakar Raghavan, Google senior vice president. "We’ve acknowledged the mistake and temporarily paused image generation of people in Gemini while we work on an improved version."
The feature had been available for only three weeks. Raghavan said the model overcompensated in some cases and was overly conservative in others when generating its depictions of people.
"If you prompt Gemini for images of a specific type of person — such as 'a Black teacher in a classroom,' or 'a white veterinarian with a dog' — or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for," Raghavan said. "So what went wrong? In short, two things.
"First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive."
Google has not said when the feature will be re-released.