Microsoft has been using Copilot in Bing search results for more than a year. Since its launch, it has proven to be an excellent everyday tool: it not only answers any question we ask, but also shows the links the information was drawn from in case we want to dig deeper.
The integration of Copilot into Bing took everyone by surprise, including Google, which has shown itself to be a step behind Microsoft in everything related to AI. Earlier this month, Google announced new features for its generative AI, Gemini, one of which involves integration into search results and is, for now, available only in the United States as a trial.
And it is just as well, because the results this integration offers leave much to be desired: it spits out all kinds of nonsense, something common in AI when suitable filters are not in place or when things are rushed, as Google seems to be doing now that AI has become a priority in the world of technology.
Lack of supervision
This new Google feature works in the same way as the one integrated into Bing, answering a question based on information gathered from the Internet. The problem is that Google does not seem to be monitoring or putting into context the content it serves up as results. One example is the question of how to make cheese stick to pizza. The solution it proposes is to use non-toxic glue, an answer a user once posted on Reddit in response to that same question.
Another example is the answer to whether it is advisable to leave a dog locked in a hot car (it is not specified whether the heat comes from the sun or the ambient temperature). Google's AI says yes. In this case, the answer is taken from a Beatles song, specifically one titled "It's Okay to Leave a Dog in a Hot Car."
It’s not the first time
When Google launched its AI chatbot, it ran into problems with safety and ethics, areas where it fell notably short. Earlier this year, Google was forced to temporarily disable Gemini's image generation after it produced historically inaccurate results that made no sense.
“Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis”. It calls it “inaccuracies in some historical image generation depictions.” But that’s only part of the issue. Please, stop making a joke out of AI ethics, fundamental rights impact… https://t.co/RM70kD1qGB https://t.co/93r5iQXiPe
The sheer number of errors Google's AI is producing shows, once again, that the company is doing everything it can to chase Microsoft at any cost, as if the AI were actually capable of thinking, when it is not. Any AI bases its responses on the content it has been trained on.
If that information is wrong, and the model is unable to check it against other sources, its answers will always be far from reality. While OpenAI's AI continues to advance by leaps and bounds, Google's AI still has a very long way to go if the company really wants to become a worldwide benchmark, something that, at this point, seems impossible.
Google defends itself
As expected, Google has released a statement saying it is working to improve the search experience, acknowledging that some of the information displayed may not be entirely accurate. According to the company, it is taking note of these and other examples published on the Internet to prevent them from happening again in the future.