OpenAI Launches New, Advanced GPT-4o Model: Soon Available For Everyone

OpenAI revealed a new, more advanced ChatGPT model called GPT-4o during the live OpenAI Spring Update event Monday night. The new GPT-4o is faster than GPT-4 and can simultaneously process text, audio, and images—it essentially can “see” through a device’s camera.

During the 26-minute live stream, OpenAI's team demonstrated the new model's features and capabilities in real time. They also shared details about upcoming updates for previous versions, such as a new desktop app.

“The big news today is that we are launching our new flagship model and we are calling it ChatGPT-4o,” said Mira Murati, Chief Technology Officer of OpenAI.

Murati explained that the new model and the new features for previous versions will soon be available for free to everyone, including non-registered users. “We’ll be rolling them out over the next few weeks,” said Murati.

During the first demonstration, OpenAI’s team showed how GPT-4o can hold conversations using a female voice, be interrupted, answer fast and creatively—with a sense of humor—and even recognize emotions and give advice.

Later, the team presented different scenarios where GPT-4o can be used. They showed how it can use the “vision” feature to help solve linear equations written on paper, understand code and graphics via screen sharing through the desktop version, and provide simultaneous translation in real-life conversations. GPT-4o can speak in multiple languages and in different tones and styles of voice.

Finally, the team showed how it can recognize human emotions, and also make mistakes. Throughout the demos, GPT-4o glitched a few times, mistaking one of the presenters for a wooden surface and assuming a math problem had already been presented when the team member hadn’t shown anything yet.

But GPT-4o quickly apologized and charmed the audience with its responses. “Oh, stop it, you are making me blush!” it replied when one of the research leaders, Barret Zoph, told GPT-4o how much they admired it.

Response To GPT-4o

OpenAI shared multiple short videos on X to demonstrate the different functions of the new GPT-4o model.

Several people on X associated it with the movie Her, in which the protagonist Theodore Twombly, played by Joaquin Phoenix, falls in love with an AI virtual assistant voiced by Scarlett Johansson. “This is insane, when I saw these clips, I immediately remembered this movie!” commented one user. “This is literally Scarlett Johansson. How?!” wrote another.

While most users were impressed by the new features, others shared criticisms and concerns. “Dear @openai, your demo was super impressive and scary but the Italian accent was terrible :),” posted Massimo Banzi, co-founder of the Arduino project.

“Really excellent but I would say that I think you may lose a lot of paid users if it’s only the limit cap that is different to the free version,” said another user.

In the coming days, the new model and updates will be available to more people to test and experience the technology firsthand.
