Google Gemini Launches New Models And Features To Compete With ChatGPT

Google launched its most efficient AI model yet, Gemini 1.5 Flash, and previewed a new AI agent, Project Astra, on Tuesday at Google I/O, the company’s annual developer conference. During the two-hour keynote, Google’s team walked through the new models’ capabilities and the AI-powered features coming to its existing services and devices.

The new Gemini 1.5 Flash is designed to be faster, cheaper, and more efficient than the larger Gemini 1.5 Pro. According to Google, 1.5 Flash can quickly process large amounts of data, summarize conversations, and caption videos and images. Both 1.5 Flash and 1.5 Pro will be available to users of Vertex AI and Google AI Studio.
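
For developers curious what access looks like, here is a minimal sketch of calling Gemini 1.5 Flash through Google AI Studio’s Python SDK (google-generativeai). The model identifier, API key placeholder, and sample transcript are illustrative, not taken from Google’s announcement.

```python
# Minimal sketch: summarizing a short conversation with Gemini 1.5 Flash
# via the google-generativeai SDK. Prompt and transcript are made up.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key generated in Google AI Studio

model = genai.GenerativeModel("gemini-1.5-flash")

transcript = """Alice: Can we move the launch review to Thursday?
Bob: Thursday works, but marketing needs the deck by Wednesday noon."""

response = model.generate_content(
    "Summarize this conversation in one sentence:\n" + transcript
)
print(response.text)
```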

Project Astra, on the other hand, was described as an “advanced seeing and talking responsive agent.” This new AI agent can process multimodal information, understand context, and interact with humans.

Google shared a two-minute demo to show how the AI assistant works. In the video, a Google employee in London uses her smartphone’s camera to ask Project Astra to describe her surroundings, explain the code a coworker is working on in the office, recognize her geographical location, and come up with a creative name for a band. The AI agent answers correctly and creatively in what sounds like a “natural” conversation.

Project Astra even remembers where the employee left her glasses. These are no ordinary glasses, however, but what appears to be a Google Glass-style pair of smart glasses with the AI assistant built in. Google didn’t share details about the glasses, hinting only that they are part of a future product.

After the video presentation, Demis Hassabis, CEO of Google DeepMind, said, “It’s easy to envision a future where you can have an expert assistant by your side, through your phone or new exciting form factors like glasses.”

Project Astra was announced only a few hours after OpenAI launched GPT-4o, its new flagship model that powers an upgraded version of ChatGPT. The two products offer similar features: real-time conversations, “vision” through device cameras, and simultaneous processing of text, images, and audio.

Users on X have already started comparing the two virtual assistants, highlighting the advantages and disadvantages of each. “Astra has slightly longer latency,” one user said, adding that it has “strong text-to-speech, but hasn’t shown as much emotional range as GPT4o.”

Project Astra is still in an early stage, and Google expects to release it later this year, while GPT-4o will be available for all users within the next few weeks.
