
How you can try OpenAI’s new o1-preview model for yourself

Despite months of rumored development, OpenAI’s release of Project Strawberry last week came as something of a surprise, with many analysts believing the model wouldn’t be ready for weeks at least, if not until later in the fall.

The new o1-preview model and its o1-mini counterpart are already available for use and evaluation. Here’s how to get access for yourself.


We're releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.

These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math. https://t.co/peKzzKX1bu

— OpenAI (@OpenAI) September 12, 2024

What is o1?

OpenAI has made no secret of its artificial general intelligence (AGI) aspirations, and Project Strawberry (now known as “o1”) is the company’s next step toward that goal. It’s the first in a new line of “reasoning” models, “designed to spend more time thinking before they respond,” per an OpenAI announcement post. That strategy enables the model to “reason through complex tasks and solve harder problems than previous models in science, coding, and math.”

The models reportedly reason in a human-like manner, allowing them to “refine their thinking process, try different strategies, and recognize their mistakes” as they gain experience through training. According to OpenAI, o1-preview performs on par with Ph.D. students on benchmark tests in physics, chemistry, and biology. o1 is also adept at coding and math problems, scoring 83% on an International Mathematics Olympiad (IMO) qualifying exam, where GPT-4o scored only 13%, and reaching the 89th percentile in Codeforces programming competitions against human opponents.

here is o1, a series of our most capable and aligned models yet: https://t.co/yzZGNN8HvD

o1 is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it. pic.twitter.com/Qs1HoSDOz1

— Sam Altman (@sama) September 12, 2024

o1-mini is a lightweight version of the standard o1-preview model. It is reportedly 80% less expensive to operate than its larger sibling, and OpenAI says it is particularly effective at code analysis and generation tasks.

Is o1-preview available to try?

Yes. The o1-preview and o1-mini models launched on September 12 for ChatGPT Plus and Team subscribers, with Enterprise and Edu users gaining access at the start of the following week.

How secure is o1 against bad actors? 

Very, it would seem. OpenAI reportedly developed an entirely new safety training program that leverages the model’s improved reasoning capabilities to help it adhere more closely to its safety and alignment guidelines. The company notes that on one of its jailbreak-resistance tests, scored out of 100, GPT-4o earned a 22 while the new o1 model scored an 84.

How do I get access to o1-preview?

As with many new generative AI features, the newly released o1-preview is currently available only to paying subscribers. If you want to try it for yourself, you’ll need a $20-per-month ChatGPT Plus subscription. Click the Upgrade Plan button at the bottom of the left-hand navigation pane and follow the onscreen prompts to enter your payment details.

Once your subscription is active, select either o1-preview or o1-mini from the model picker drop-down in the ChatGPT interface. Note that access is limited even for paying users, with weekly rate limits of 30 messages for o1-preview and 50 messages for o1-mini. OpenAI says it will eventually make o1-mini available to free-tier users, though the company has yet to set a date for that rollout.
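If you’d rather script against the models than chat with them, OpenAI also exposed o1-preview and o1-mini through its API at launch (reportedly restricted to higher developer usage tiers at first). Below is a minimal, hedged sketch using OpenAI’s official Python SDK; the prompt is an arbitrary example, and the launch-time restrictions noted in the comments reflect OpenAI’s stated limitations rather than anything covered in this article.

```python
# Minimal sketch: calling o1-preview through OpenAI's Python SDK (openai>=1.0).
# Assumes an OPENAI_API_KEY environment variable and an account with API access
# to the o1 models, which was reportedly limited to higher usage tiers at launch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # swap in "o1-mini" for the cheaper, coding-focused variant
    # At launch, the o1 models reportedly accepted only user/assistant messages:
    # no system prompt, streaming, or temperature overrides.
    messages=[
        {"role": "user", "content": "Reason step by step: how many Rs are in 'strawberry'?"}
    ],
)

print(response.choices[0].message.content)
```

Because these models “think” before they answer, expect noticeably higher latency and token usage than a comparable GPT-4o call.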

Andrew Tarantola