Google Accused of Anti-White Bias After AI Tool Shows Racially Diverse Founding Fathers and Nazi Soldiers

Google CEO Sundar Pichai said recent “problematic” text and image responses from the company’s Gemini artificial intelligence (AI) chatbot were “completely unacceptable.” File photo: Photosince, Shutterstock.com, licensed.

SANTA CLARA, CA – Google CEO Sundar Pichai has addressed the recent controversy surrounding the company’s AI app, Gemini, acknowledging the “unacceptable responses” it generated around race and vowing to make structural changes to rectify the issue.

Last week, Google suspended Gemini’s ability to generate images of people after the app produced offensive and embarrassing results. In some cases it declined to depict white people, instead inserting images of women or people of color when prompted to create images of Vikings, Nazis, and the Pope. Gemini also generated problematic text responses, including one that equated Elon Musk’s influence with that of Adolf Hitler.

These developments sparked criticism, particularly from conservatives, who accused Google of anti-white bias. Other companies offering AI tools, such as OpenAI, have faced the opposite criticism: generating predominantly white images for professional roles and depicting Black individuals in stereotypical ones.

Pichai acknowledged the offense caused by Gemini’s responses and vowed to address the issues. He stated that progress has already been made in improving the app’s guardrails, with substantial enhancements visible across a wide range of prompts.

I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.

Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.

We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.

– Google CEO Sundar Pichai, employee memo

The Gemini controversy has provided ammunition for right-wing critics who often accuse tech companies of liberal bias. However, the problem stems not from inherent bias within the models themselves but from technical errors made during fine-tuning and in the software guardrails that govern the models’ behavior.
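To illustrate the distinction, consider an entirely hypothetical example (not Google’s actual code): a guardrail can be as simple as a layer that rewrites user prompts before they reach the image model. A blanket rule that always injects a diversity instruction will misfire on historically specific prompts, while a context-aware rule would not. A minimal sketch in Python, with every name and rule below assumed for illustration:

# A hypothetical prompt-rewriting guardrail layered on top of an image model.
# Illustrative only; this does not reflect Google's actual implementation.

HISTORICAL_TERMS = {"viking", "nazi", "pope", "founding fathers"}  # assumed list

def naive_guardrail(prompt: str) -> str:
    # A blanket rule: always append a diversity instruction.
    return prompt + ", depicted as a racially diverse group"

def context_aware_guardrail(prompt: str) -> str:
    # Skip the rewrite when the prompt is historically specific.
    if any(term in prompt.lower() for term in HISTORICAL_TERMS):
        return prompt
    return prompt + ", depicted as a racially diverse group"

print(naive_guardrail("a portrait of Vikings"))          # misfires on a historical prompt
print(context_aware_guardrail("a portrait of Vikings"))  # leaves the prompt intact

The point is only that this rewriting layer sits outside the model itself, so a bug here is a fixable engineering error rather than evidence of bias baked into the model.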

This is a challenge faced by all companies developing consumer AI products, not just Google. The rapid pace of progress in generative AI has pushed companies like Google to accelerate product development, which can lead to errors like these.

It is important to note that nobody at Google intended for Gemini to produce inappropriate depictions or draw morally equivalent comparisons. The aim was to reduce bias; the attempt simply went wrong.

Pichai’s note to staff demonstrates that Google is actively working to address the technical problems associated with Gemini. However, the reputational damage caused by this incident may prove more difficult to repair.


