
GPT-4o: What the latest ChatGPT update can do and when you can get it

OpenAI developer using GPT-4o. Image: OpenAI

GPT-4o is the latest and greatest large language model (LLM) AI released by OpenAI, and it brings with it heaps of new features for free and paid users alike. It’s a multimodal AI that enhances ChatGPT with faster responses, greater comprehension, and a number of new abilities that will continue to roll out in the weeks to come.

With increasing competition from Meta’s Llama 3 and Google’s Gemini, OpenAI’s latest release is looking to stay ahead of the game. Here’s why it’s so exciting.

Availability and price

If you’ve been using the free version of ChatGPT for a while and jealously eyed the features that ChatGPT Plus users have been enjoying, there’s great news! You too can now play around with image detection and file uploads, browse custom GPTs in the GPT Store, use Memory to retain details across a conversation so you don’t need to repeat yourself, and analyze data and perform complicated calculations.

That’s all alongside intelligence on par with the standard GPT-4 model, even though GPT-4o was trained from the ground up as a multimodal AI. This is possible because GPT-4o is computationally far cheaper to run, which makes it viable for OpenAI to offer to a much wider user base.

However, free users will have a limited number of messages they can send to GPT-4o per day. When that threshold is reached, you’ll be bumped over to the GPT-3.5 model.
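For developers, the same model is exposed through OpenAI’s API under the identifier gpt-4o. As a minimal sketch, assuming the official openai Python package is installed and an OPENAI_API_KEY is set in your environment (the prompt text here is purely illustrative):

```python
# Minimal sketch: calling GPT-4o through OpenAI's Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # the model this article covers
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what does multimodal mean?"},
    ],
)

print(response.choices[0].message.content)
```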

It’s way faster

OpenAI's Mira Murati introduces GPT-4o. Image: OpenAI

GPT-4 was distinct from GPT-3.5 in a number of ways, and speed was one of them: GPT-4 was just way, way slower, even with its advances in recent months and the introduction of GPT-4 Turbo. GPT-4o, however, is almost instantaneous. That makes its text responses far swifter and more useful, with voice conversations occurring much closer to real time.

While response speed feels like more of a nice-to-have feature than a game-changing one, the fact that you can get responses in near real time makes GPT-4o a much more viable tool for tasks like translation and conversational help.

Advanced voice support

Although at its initial debut GPT-4o only works with text and images, it’s been built from the ground up to take voice commands and interact with users through audio. Where GPT-4 had to convert your voice into text, generate a text response, and then convert that text back into speech, GPT-4o can hear a voice and respond in kind. With its improved speed, it can respond far more conversationally, and it can understand unique aspects of voice like tone, pace, and mood.
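To make that contrast concrete, here’s a rough sketch of the older, cascaded pipeline using endpoints that exist in OpenAI’s public Python SDK; the file names and model choices are illustrative assumptions, and GPT-4o’s native voice mode replaces all three hops with a single audio-in, audio-out model:

```python
# Rough sketch of the cascaded voice pipeline GPT-4 relied on:
# transcribe speech -> generate a text reply -> synthesize audio.
# File names and model choices are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# 1. Speech-to-text: turn the user's recording into text.
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file
    )

# 2. Text reasoning: answer the transcribed question.
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": transcript.text}],
)

# 3. Text-to-speech: voice the reply for playback.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply.choices[0].message.content,
)
speech.write_to_file("answer.mp3")
```

Every hop in that chain adds latency and throws away information like tone and emphasis, which is exactly what a natively multimodal model avoids.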

GPT-4o can laugh, be sarcastic, catch itself when making a mistake, and adjust midstream, and you can interrupt it conversationally without that derailing its response. It can also understand different languages and translate on the fly, making it usable as a real-time translation tool. It can sing — or even duet with itself.

Two GPT-4os interacting and singing

This could be used for interview prep, singing coaching, running role-playing NPCs, telling dramatic bedtime stories with different voices and characters, creating voiced dialogue for a game project, telling jokes (and laughing in response to yours), and so much more.

Improved comprehension

GPT-4o understands you much better than its predecessors did, especially if you speak to it. It can read tone and intention far better, and if you want it to be relaxed and friendly, it’ll joke with you in an attempt to keep the conversation light.

When it’s analyzing code or text, it takes your intentions into consideration far more, making it better at giving you the response you want with less specific prompting. It’s also better at reading video and images, making it more capable of understanding the world around it.

Live demo of GPT-4o vision capabilities

In several demos, OpenAI showed users filming the room they’re in, with GPT-4o then describing it. In one video, the AI even described the room to another version of itself, which then responded based on that description alone.
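That same image comprehension is already usable through the API’s vision input. As a minimal sketch, assuming a publicly reachable image URL (the URL and prompt below are placeholders, not a real demo asset):

```python
# Minimal sketch: asking GPT-4o about an image via the chat API.
# The image URL and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the room in this photo."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/room.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```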

Native macOS desktop app

The ChatGPT desktop app open in a window next to some code. Image: OpenAI

Native AI in Windows is still restricted to the very limited Copilot (for now), but macOS users will soon be able to make full use of ChatGPT and its new GPT-4o model right from the desktop. With a new native desktop app, ChatGPT will be more readily available — and with a new user interface to boot — making it easier to use than ever before.

The app will be available for most ChatGPT Plus users in the coming days, and will be rolled out to free users in the coming weeks. A Windows version is promised for later this year.

It’s not all quite ready yet

At the time of writing, the only aspects of GPT-4o that are available to the public are the text and image modes. There’s no advanced voice support, no real-time video comprehension, and the macOS desktop app won’t be available to everyone for a few more days at least.

But it is all coming. These changes and other exciting upgrades for ChatGPT are just around the corner.

Jon Martindale