Has Google really done its homework?
Did Nazi This Coming
In February, Google’s Gemini-powered AI image generator made headlines for all the wrong reasons.
At the time, the tech giant was forced to apologize after the tool gladly generated images of racially diverse Nazi-era German soldiers, seemingly overcorrecting for the tech’s ongoing racial bias problems.
“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” Google wrote in a statement at the time. “We’re working to improve these kinds of depictions immediately.”
But despite those promises, the fix seemingly took months, with Google first failing to implement effective guardrails and then shutting the feature down entirely, admitting that it had "got it wrong."
Now, Google has announced that its AI image generator will come back online — and we can’t wait to poke around and find its potential flaws this time around.
Imagen Heap
In a blog post, Gemini Experiences senior director Dave Citron promised that “we’ve upgraded our creative image generation capabilities” with a new model dubbed Imagen 3. The model comes with “built-in safeguards” and allegedly adheres “to our product design principles.”
As detailed in a not-yet-peer-reviewed paper, Google DeepMind researchers used a “multi-stage filtering process” to “ensure quality and safety standards” with Imagen 3.
"This process begins by removing unsafe, violent, or low-quality images," the paper reads. "We then eliminate AI-generated images to prevent the model from learning artifacts or biases commonly found in such images."
Researchers used “safety datasets” to ensure that the model won’t generate explicit, violent, hateful, or oversexualized images.
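The paper only describes that pipeline at a high level, but conceptually it amounts to a staged filter over the training data: first screen out unsafe and low-quality images, then screen out images that appear to be AI-generated. Here's a minimal, purely illustrative Python sketch of that idea; every field and classifier below is a hypothetical stand-in, not anything taken from Google's actual system.

```python
# Purely illustrative sketch of a multi-stage training-data filter in the
# spirit of what the Imagen 3 paper describes. Every classifier and score
# here is a toy stand-in, not a real Google/DeepMind API.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class ImageRecord:
    path: str
    caption: str
    quality_score: float      # assumed output of some aesthetic/quality model
    synthetic_score: float    # assumed output of an AI-image detector


def is_unsafe(img: ImageRecord) -> bool:
    # Toy stand-in for a real safety classifier.
    banned_terms = {"violent", "gore", "explicit"}
    return any(term in img.caption.lower() for term in banned_terms)


def filter_dataset(images: Iterable[ImageRecord]) -> List[ImageRecord]:
    # Stage 1: drop unsafe, violent, or low-quality images.
    stage1 = [img for img in images
              if not is_unsafe(img) and img.quality_score >= 0.5]
    # Stage 2: drop images that look AI-generated, so the model doesn't
    # learn artifacts or biases common in synthetic images.
    return [img for img in stage1 if img.synthetic_score < 0.5]


if __name__ == "__main__":
    sample = [
        ImageRecord("a.jpg", "a sunny beach", 0.9, 0.1),
        ImageRecord("b.jpg", "explicit content", 0.8, 0.1),
        ImageRecord("c.jpg", "a painted landscape", 0.9, 0.9),
    ]
    print([img.path for img in filter_dataset(sample)])  # -> ['a.jpg']
```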
“We don’t support the generation of photorealistic, identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes,” Google wrote in its blog post.
But has the company really done its homework this time around? It remains to be seen whether Google’s Imagen 3 will spit out images of racially diverse Nazis or horrific clowns.
“Of course, as with any generative AI tool, not every image Gemini creates will be perfect,” Citron wrote, “but we’ll continue to listen to feedback from early users as we keep improving.”
More on Gemini AI: Google Blocked Gemini From Generating Images of Humans, But It Still Does Clowns