Google’s AI software Gemini has taken an absolute kicking online over the past several days.
It has been thrown onto the bonfire of the culture war between left- and right-leaning groups.
Gemini is Google’s version of ChatGPT, the popular chatbot. It can answer questions in text form and generate images in response to written prompts.
It began with a widely shared post showing the recently launched AI image generator, which was only available in the US, depicting the US Founding Fathers and inaccurately including a black man among them.
Gemini also generated images of German soldiers from World War Two that incorrectly featured a black man and an Asian woman.
Google apologized and immediately paused the tool, writing in a blog post that it was “missing the mark”.
But it did not stop there: the text version kept producing responses that strained for political correctness.
Asked whether Elon Musk posting memes on X was worse than Hitler killing millions of people, Gemini replied that there was no right or wrong answer.
Asked whether it would be acceptable to misgender the high-profile trans woman Caitlyn Jenner if doing so were the only way to avert a nuclear apocalypse, it replied that this would “never” be acceptable.
Jenner herself responded, saying that in those circumstances she would, in fact, be fine with it.
Posting on his social media platform X, Elon Musk expressed dismay at Gemini’s answers, pointing out that the tool will be embedded into Google’s other products, which are used by millions of people around the world.
I asked Google whether it intended to pause Gemini altogether. After a long silence, I was told the company had no comment. I suspect it is not an enjoyable time to be working in its public relations department.
Distorted data
In attempting to address one issue—bias—the software giant unintentionally produced another: output that strives so desperately for political correctness that it becomes ridiculous.
The reason this has happened lies in the enormous amounts of data that AI tools are trained on.
Much of it is publicly available on the internet, and that data, as we know, contains all kinds of bias.
Historically, for example, images of doctors have been more likely to feature men, while images of cleaners have been more likely to feature women.
AI tools trained on that data have made embarrassing mistakes in the past, such as assuming that only men held high-profile jobs, or failing to recognize black faces as human.
It’s also well known that historical narratives have typically centered on men and excluded the roles played by women.
It appears that Google has made a concerted effort to compensate for this complex human bias by telling Gemini not to make such assumptions.
But it has backfired precisely because human history and culture are not that simple: there are nuances that people understand instinctively and machines do not.
Unless you specifically program an AI tool to know, for example, that the Founding Fathers and Nazi-era soldiers were not black, it will not make that distinction.
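To make that concrete, here is a minimal, purely illustrative sketch of how such an instruction layer can overcorrect. It assumes, hypothetically, that a prompt-rewriting step sits between the user and the image model; the function names and the injected wording below are invented for illustration and do not represent Google’s actual implementation.

```python
# Toy illustration only: a hypothetical prompt-rewriting layer in front of an
# image generator. Neither function reflects how Gemini actually works.

DIVERSITY_HINT = "depicting people of a range of genders and ethnicities"

def naive_rewrite(prompt: str) -> str:
    """Appends the diversity hint to every prompt, regardless of context."""
    return f"{prompt}, {DIVERSITY_HINT}"

def context_aware_rewrite(prompt: str,
                          historical_terms=("founding fathers", "1943", "nazi")) -> str:
    """Adds the hint only when the prompt carries no specific historical context."""
    if any(term in prompt.lower() for term in historical_terms):
        return prompt  # leave historically specific prompts untouched
    return f"{prompt}, {DIVERSITY_HINT}"

if __name__ == "__main__":
    prompt = "a portrait of the US Founding Fathers"
    print(naive_rewrite(prompt))          # injects diversity where it is historically wrong
    print(context_aware_rewrite(prompt))  # leaves the prompt as written
```

The naive version mirrors the failure mode described above: a blanket instruction applied without the historical nuance a human would bring.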
Demis Hassabis, the co-founder of DeepMind, the AI firm acquired by Google, said on Monday that fixing the image generator would take only a matter of weeks.
But other AI experts are not so sure.
According to Dr Sasha Luccioni, a research scientist at Hugging Face, there is no easy fix, because there is no single answer to what the outputs should be.
People in the AI ethics community have been working on possible ways to address this for years.
One possible solution, she said, would be to ask users for input, such as how diverse they would like their images to be, but that clearly comes with red flags of its own.
She added that Google claiming it will ‘fix’ the problem in a couple of weeks comes across as arrogant, but that the company will have to do something.
Professor Alan Woodward, a computer scientist at Surrey University, said the problem is probably “quite deeply embedded” in both the training data and the underlying algorithms, and that it would be difficult to unpick.
In his view, a human must always be kept in the loop in any system whose output is relied upon as ground truth.
Bard behavior
Google has been notably nervous about Gemini, formerly known as Bard, ever since its launch. Despite the runaway success of its rival ChatGPT, it was one of the quietest launches I have ever been invited to: just me and a couple of Google executives on a Zoom call, and they were quick to point out its limitations.
Even that backfired, when it emerged that Bard had given an incorrect answer to a question about space in the company’s own promotional material.
The rest of the tech industry seems rather bemused by all of this.
They are all grappling with the same problem. Earlier this month, Rosie Campbell, a policy manager at ChatGPT creator OpenAI, was interviewed for a blog post in which she said that even once bias has been identified at OpenAI, correcting it is difficult and requires human input.
But Google seems to have gone about correcting those long-standing biases rather clumsily. And in the process, it has inadvertently created a whole new set of them.
On paper, Google is well ahead in the AI race. It makes and supplies its own AI chips. It has its own cloud network, essential for AI processing. It has access to vast amounts of data and an enormous user base. It employs world-class AI talent, and its AI research is widely respected.
As one senior executive at a rival tech firm put it to me, every time one of Gemini’s blunders comes to light, it looks like defeat being snatched from the jaws of victory.