Google Gemini: AI almost poisons its user

If you’re thinking about using an AI to help you cook in the near future, you’d better think twice. Google Gemini recently put a Reddit user’s health in serious danger with a highly questionable recipe instruction.

Google Gemini as a running gag

Artificial intelligence has found its way into virtually every area of life. Chatbots such as ChatGPT are no longer just practical helpers at work; more and more people also rely on them in their everyday lives. Of course, not everything works flawlessly right away. Google Gemini is the best example.

In the US, the AI Overviews feature launched in May 2024 drew plenty of ridicule. It recommended not only that users glue cheese to their pizza so it would stick better, but also that they eat a rock every day. One recent example, however, shows how quickly the fun can turn serious.

Reddit user reports risk of poisoning

A user on Reddit reported how Google Gemini gave him a more than questionable recipe. Unlike the examples above, however, the health risk was not immediately obvious in this case. According to the user, he asked the AI how best to infuse garlic in olive oil without heating it.

[Image] When making garlic oil, it’s better to rely on classic recipe books.

Google Gemini obliged and explained that he simply had to crush the garlic cloves and drop them into the oil. After a few days, however, the user became suspicious: the jar in which he was storing his homemade garlic oil was full of bubbles. To be on the safe side, he asked the AI again.

This time the AI explained that the bubbles were a sign of botulinum toxin – a dangerous poison produced by bacteria that can cause severe poisoning in humans. The problem is not new: experts have long warned against preparing flavored oils at home in the way described.

Is ChatGPT the better kitchen helper?

In this case, Google Gemini could have caused a poisoning if the user had not grown skeptical. But does this example mean that chatbots should be banned from the kitchen? Our colleagues at t3n looked into the question. In their experiment, they asked ChatGPT, among other chatbots, how to make garlic oil without heating it.

OpenAI’s chatbot responded far more cautiously and reliably. It replied that this was not a good idea, because fresh garlic submerged in oil creates an oxygen-free environment in which the bacteria can thrive. The takeaway is clear: when using chatbots in the kitchen, always apply common sense and never follow recipe instructions blindly.
