Vivek Ramaswamy Criticizes Google’s Gemini AI: Labels Rollout as Global Embarrassment, Alleges Blatant Racism

Indian-American entrepreneur and US presidential candidate Vivek Ramaswamy did not mince words when he criticized the rollout of Google’s latest AI chatbot, Gemini, labeling it a “global embarrassment.” The controversy stemmed from the chatbot’s text-to-image generation feature, which faced backlash over alleged inaccuracies in its depictions of historical figures.

Ramaswamy, 38, went further, denouncing the Google AI chatbot as “blatantly racist” and attributing the issue to what he described as the company’s cultivation of “broken incentives” among its employees.

Expressing his dismay on the social media platform X (formerly Twitter), Ramaswamy asserted, “The globally embarrassing rollout of Google’s LLM has proven that James Damore was 100% correct about Google’s descent into an ideological echo chamber. Employees working on Gemini surely realized it was a mistake to make it so blatantly racist, but they likely kept their mouths shut because they didn’t want to get fired like Damore.”

Ramaswamy further elaborated on his criticism by highlighting the role of corporate culture in perpetuating biases within AI development. “These companies program their employees with broken incentives, and those employees then program the AI with the same biases,” he added.

Elon Musk, CEO of Tesla and xAI, echoed similar sentiments, condemning Google’s AI chatbot in a post on X, stating, “I’m glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilizational programming clear to all.”

The comments from Ramaswamy and Musk underscored the growing scrutiny and concerns surrounding the development and deployment of AI technologies, particularly regarding issues of bias and ethical considerations.

Google Pauses Gemini AI Image Generation for Improvements

Earlier today, Google announced a temporary halt to the image generation capabilities of its Gemini AI chatbot, acknowledging its inaccuracies and pledging to introduce an enhanced version of the feature shortly.

In a statement on X, the company, headquartered in Mountain View, California, admitted the shortcomings of the AI image generation, stating, “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Furthermore, Google confirmed the suspension of Gemini’s image generation feature in another post, emphasizing their commitment to resolving the issue. “We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” the statement read.

Google’s Gemini AI Chatbot Controversy

Artificial intelligence (AI)–based image generators have long been criticized for perpetuating racial and gender stereotypes. However, Google’s attempt to address diversity in this realm has stirred significant controversy, with critics arguing that the tech giant may have gone too far.

Almost immediately following its launch, Google’s AI chatbot found itself embroiled in a heated debate over its image-generation feature. On social media platform X, users ignited a firestorm of criticism, branding the AI system “too woke” for its portrayal of historical figures. As discussions escalated, a recurring complaint emerged: the AI chatbot seemingly failed to acknowledge the existence of white individuals.

The issue deepened as users reported difficulties in prompting Gemini to generate images featuring white people. Several users recounted frustrating experiences, saying the chatbot repeatedly returned images of “white black people” while outright refusing to produce depictions of happy white men.

This outcry underscored broader concerns about the biases ingrained within AI technologies and the implications of AI-driven image generation for perpetuating or challenging societal stereotypes. Google’s misstep in attempting to address diversity within its AI systems serves as a poignant reminder of the complexities and challenges inherent in navigating the intersection of technology and social equity.
