
Google Takes Down Gemini’s AI Image Generator After It Refuses to Create Pictures of White People


Google has temporarily taken Gemini’s AI image generator offline.

Why?

Because Gemini was exposed as being woke and historically inaccurate.

The AI bot appeared to be unable to create images of white people.

For example, when asked to create an image of a historical pope, the bot created an image of a black man and an Asian woman.

When asked to create images of Nazis, it included Japanese and black people in the mix.

Historically speaking, Nazi Germany was mostly white, of course.

Such outputs confirmed the fear we’ve all had: That tech giants are coding wokeness into their algorithms.



Well, now Gemini’s image generator is offline.

Google is claiming that it is fixing the AI bot.

But this raises some questions:


Why was Gemini so inaccurate in its historical and “realistic” portrayals?

Why did it feel the need to make every person it generated a person of color, yet refuse to generate white people?

Fox Business confirms that Google has paused Gemini image generation:

Google will pause the image generation feature of its artificial intelligence (AI) tool, Gemini, after the model refused to create images of White people, Reuters reported.

The Alphabet-owned company apologized Wednesday after users on social media flagged that Gemini’s image generator was creating inaccurate historical images that sometimes replaced White people with images of Black, Native American and Asian people.

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” Google had said on Wednesday.

Gemini, formerly known as Google Bard, is one of many multimodal large language models (LLMs) currently available to the public. As is the case with all LLMs, the human-like responses offered by these AIs can change from user to user. Based on contextual information, the language and tone of the prompter, and training data used to create the AI responses, each answer can be different even if the question is the same.
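
A quick illustration for the technically curious: a large language model picks each next word by sampling from a probability distribution, which is why the same question can come back with different answers. Here is a toy Python sketch of that idea. The words and probabilities below are invented purely for illustration; this is not Google’s actual code.

import random

# Toy sketch of LLM token sampling (illustrative only, not Gemini's code).
# A real model scores its entire vocabulary at each step; these three words
# and their probabilities are made-up stand-ins.
next_word_probs = {"pope": 0.5, "pontiff": 0.3, "bishop": 0.2}

def sample_word(probs):
    """Draw one word at random, weighted by its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# The same "prompt" can yield a different completion on every run.
for _ in range(3):
    print(sample_word(next_word_probs))

Run it a few times and different words come out, even though nothing about the prompt changed.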

I don’t know about you, but this also makes me wonder about the bias in Google’s search algorithm.

If its image generation bot was so biased, what else don’t we know about its products?


As AI advances, this sort of bias is very alarming.

Google claims that this only happened because it was trying to prevent other problems, like violent or explicit images.

But apparently, the AI dug itself into a hole.

Here’s the thing, though:

Aren’t products tested before they’re released?

It’s unlikely that the AI generator suddenly started pushing such outputs.

What’s more likely is that the bot was reviewed and approved… and that the company didn’t even realize something was wrong until the external blowback.

NBC News has more details on Google’s explanation:

Google acknowledged Wednesday that Gemini was offering inaccuracies and a day later it paused image generations that include people.

Google said Friday the intent had been to avoid falling into “some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images.” It also said that Gemini is targeted to a worldwide audience, so the diversity of people depicted is important.

But prompts for a specific type of person or people in a particular historical context “should absolutely get a response that accurately reflects what you ask for,” Google said.

“Over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive,” the company said.

Google did not give a timeline for turning back on the ability to generate images of people, and it said the process of building a fix “will include extensive testing.”

So, what do you think?

Do you think Google is telling the truth and that this is just an unfortunate incident?

Or do you think this reverse racism is a feature rather than a glitch?


Let us know your thoughts in the comments below!



 
