
Google’s new Gemini 2.0 AI model is about to be everywhere

Gemini 2.0 logo. Image: Google DeepMind

Less than a year after debuting Gemini 1.5, Google’s DeepMind division was back Wednesday to reveal its next-generation model, Gemini 2.0. The new model offers native image and audio output, and “will enable us to build new AI agents that bring us closer to our vision of a universal assistant,” the company wrote in its announcement blog post.

As of Wednesday, Gemini 2.0 is available at all subscription tiers, including free. Since it is Google’s new flagship AI model, you can expect to see it begin powering AI features across the company’s ecosystem in the coming months. As with OpenAI’s o1 model, this initial release is not the full-fledged version of Gemini 2.0, but rather a smaller, less capable “experimental preview” that will be upgraded inside Google Gemini over the coming months.


“Effectively,” Google DeepMind CEO Demis Hassabis told The Verge, “it’s as good as the current Pro model is. So you can think of it as one whole tier better, for the same cost efficiency and performance efficiency and speed. We’re really happy with that.”

Google is also releasing a lightweight version of the model, dubbed Gemini 2.0 Flash, for developers.
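For developers, the Flash model is reachable through the Gemini API. Below is a minimal sketch of what a call might look like using the google-generativeai Python SDK; the model identifier (“gemini-2.0-flash-exp”) and the environment variable name are assumptions, so check Google’s AI developer documentation for the values that apply to your account.

```python
# Minimal sketch: prompting Gemini 2.0 Flash via the google-generativeai Python SDK.
# The model name "gemini-2.0-flash-exp" and the GOOGLE_API_KEY variable are assumptions;
# consult Google's developer docs for the identifiers current for your account.
import os

import google.generativeai as genai

# Authenticate with an API key generated in Google AI Studio.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Point the client at the experimental Gemini 2.0 Flash model.
model = genai.GenerativeModel("gemini-2.0-flash-exp")

# Send a plain text prompt and print the model's text reply.
response = model.generate_content("In two sentences, what is new in Gemini 2.0?")
print(response.text)
```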

With the release of a more capable Gemini model, Google advances its AI agent agenda, which would see smaller, purpose-built models taking autonomous action on the user’s behalf. Gemini 2.0 is expected to significantly boost Google’s efforts to roll out Project Astra, which combines Gemini Live’s conversational abilities with real-time video and image analysis to provide users with information about their surroundings through a smart glasses interface.

Google also announced on Wednesday the release of Project Mariner, the company’s answer to Anthropic’s Computer Use feature. This Chrome extension can command a web browser the way a human user does, issuing keystrokes and mouse clicks on the user’s behalf. The company is also rolling out an AI coding assistant called Jules that can help developers find and improve clunky code, as well as a “Deep Research” feature that can research a subject online and generate a detailed report on it.

Deep Research, which seems to serve the same function as Perplexity AI and ChatGPT Search, is currently available to English-language Gemini Advanced subscribers. The system works by first generating a “multi-step research plan,” which it submits to the user for approval before implementing it.

Once you sign off on the plan, the research agent will conduct a search on the given subject and then hop down any relevant rabbit holes it finds. When it’s done searching, the AI will regurgitate a report on what it has found, including key findings and citation links to where it found its information. You can select Deep Research from the chatbot’s drop-down model selection menu at the top of the Gemini home page.

Andrew Tarantola
Meta and Google made AI news this week. Here were the biggest announcements

From Meta's AI-empowered AR glasses to its new Natural Voice Interactions feature to Google's AlphaChip breakthrough and ChromaLock's chatbot-on-a-graphing calculator mod, this week has been packed with jaw-dropping developments in the AI space. Here are a few of the biggest headlines.

Google taught an AI to design computer chips
Deciding how and where all the bits and bobs go into today's leading-edge computer chips is a massive undertaking, often requiring agonizingly precise work before fabrication can even begin. Or it did, at least, before Google released its AlphaChip AI this week. Similar to AlphaFold, which generates potential protein structures for drug discovery, AlphaChip uses reinforcement learning to generate new chip designs in a matter of hours, rather than months. The company has reportedly been using the AI to design layouts for the past three generations of Google’s Tensor Processing Units (TPUs), and is now sharing the technology with companies like MediaTek, which builds chipsets for mobile phones and other handheld devices.

Read more
How to use Gemini AI to create presentations in Google Slides

The only thing people enjoy less than sitting through a slideshow presentation is making a slideshow presentation. But with the integration of Gemini AI into Google Slides, that process is about to get a whole lot easier.

In this guide, we'll explore everything you need to seamlessly incorporate Gemini AI into your workflow. Whether you're looking to enhance your design elements, streamline content generation, or simply save yourself some time, Gemini AI offers a suite of features that can transform the way you build your presentations.
How to integrate Gemini into Google Slides
As with the integrations for Docs and Sheets, Gemini AI is not available for use with Slides at the free tier. You’ll need a $20-per-month subscription to the Google One AI Premium Plan to gain access; alternatively, a work or school account with the Gemini for Google Workspace add-on will also work.

Read more
How to use Gemini AI to master Google Sheets

Applying AI in your spreadsheet workflows can save you a lot of time, and with Gemini AI integrated into Google Sheets, you can take your data management to the next level. In this guide, we'll walk you through everything you need to seamlessly integrate Gemini AI into Google Sheets.

Discover how Gemini can enhance your ability to analyze data, automate repetitive tasks, and optimize your entire spreadsheet experience, making your work more efficient and insightful than ever.
How to integrate Gemini into Google Sheets
As with adding Gemini functionality to Docs and the rest of the Workspace suite, you can’t do it on the free tier. You’ll have to subscribe to the $20-per-month Google One AI Premium Plan to gain access, or you can use a work or school account if it has the Gemini for Google Workspace add-on.

Read more