News

GPT-4o: OpenAI presents Omni model with voice function

The tech world has been waiting for weeks for a sign of life from the next language model, GPT-5, but OpenAI surprised us with something completely different at an impromptu live presentation – GPT-4o.

OpenAI calls GPT-4o an omni-model

As part of its spontaneous presentation, AI pioneer OpenAI unveiled a new model, GPT-4o. The company itself describes it as an “omni-model”. But what is behind it? What is an omni model? The term “omni” comes from Latin and means “all”. And that is exactly what GPT-4o is supposed to offer – at least when it comes to communication.

The new language model is intended to combine “audio, text and video” natively. Put simply, GPT-4o should allow users not only to interact via mouse and keyboard but also to hold a spoken conversation with it. According to OpenAI CTO Mira Murati, the new model will soon find its way into ChatGPT. For users, the omni model primarily means lower latency and a smoother workflow.

After all, several separate models – for example for transcribing speech, generating a reply and converting it back to audio – no longer have to hand data off to each other, as GPT-4o combines everything in one. However, it is not only the users who should feel the benefits. OpenAI itself will also profit from significantly lower costs, as the unified approach should save computing resources.

Google counterpart just around the corner

The possibilities that GPT-4o should be able to offer sound really exciting. Smartphone users in particular are likely to discover many new use cases with the new omni model. For example, a math problem can be photographed with the smartphone camera and then processed by the assistant.
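For developers, this kind of image input is exposed through OpenAI's chat API. As a rough illustration – a minimal sketch that only builds the request payload, assuming the multimodal message format of OpenAI's Chat Completions API; the function name and the image bytes are purely hypothetical placeholders – a photographed math problem could be attached to a request like this:

```python
import base64

def build_math_photo_request(image_bytes: bytes, question: str) -> dict:
    """Pair a photographed math problem (as JPEG bytes) with a text
    question, in the message layout used by OpenAI's chat API."""
    # Images are passed inline as a base64-encoded data URL.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                # A content list can mix text parts and image parts.
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
                    },
                ],
            }
        ],
    }

# Placeholder bytes stand in for a real photo taken with the phone camera.
request = build_math_photo_request(
    b"\xff\xd8fake-jpeg-bytes",
    "Solve the equation in this photo step by step.",
)
```

In a real app, `request` would be sent via the OpenAI client library, which then returns the assistant's worked answer as text (or, with GPT-4o's voice features, as audio).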

According to OpenAI, GPT-4o should even be able to recognize human faces, interpret their facial expressions and pick up on emotions. But the innovations are not limited to smartphones. Those who use ChatGPT on the desktop will also benefit from new features – although, apart from the option to hide a control bar, little of this was actually shown.

The timing of the spontaneous live event is probably no coincidence, however. After all, Google opens its Google I/O today, May 14, 2024, and the tech company's presentations will naturally also focus on AI. What exactly we will get to see at the search engine giant's in-house trade fair is still uncertain.
