OpenAI has officially launched o1-pro, its most powerful and most expensive AI model to date, raising the bar for reasoning and advanced problem-solving. The premium model doesn't come cheap, and neither does the computing power driving it.
Now available to select developers through OpenAI’s API, o1-pro is designed to tackle tougher problems with improved accuracy. According to OpenAI, this version runs on significantly more computing power than the standard o1 model, allowing it to deliver more consistent and reliable answers. There’s a catch, though: only developers who have already spent at least $5 on OpenAI’s API services can gain access.
The pricing structure reflects just how resource-heavy o1-pro is. OpenAI charges a whopping $150 per million tokens for inputs — roughly equivalent to 750,000 words — and an eye-watering $600 per million tokens generated. This makes it twice as expensive as GPT-4.5 for inputs and ten times costlier than the regular o1 model for outputs.
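At those rates, even modest workloads add up quickly. As a rough illustration, here is a minimal Python sketch that estimates the cost of a single request from the per-million-token prices reported above (the rates come from this article; actual billing details should be checked against OpenAI's current pricing page):

```python
# Rough per-request cost estimator using the o1-pro rates reported above:
# $150 per 1M input tokens, $600 per 1M output tokens (USD).
INPUT_RATE = 150.00   # USD per million input tokens
OUTPUT_RATE = 600.00  # USD per million output (generated) tokens
TOKENS_PER_MILLION = 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one o1-pro API request."""
    input_cost = input_tokens / TOKENS_PER_MILLION * INPUT_RATE
    output_cost = output_tokens / TOKENS_PER_MILLION * OUTPUT_RATE
    return input_cost + output_cost

# Example: a 10,000-token prompt with a 2,000-token answer
# costs about $1.50 for input plus $1.20 for output.
print(f"${estimate_cost(10_000, 2_000):.2f}")  # prints "$2.70"
```

At the same token counts, the standard o1 model's output rate ($60 per million tokens, per the comparison above) would make the output portion roughly a tenth of the price, which is why the gap matters for high-volume applications.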
Yet, OpenAI is confident developers will see the value in paying these steep prices. “o1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems,” an OpenAI spokesperson told TechCrunch. “After getting many requests from our developer community, we’re excited to bring it to the API to offer even more reliable responses.”
However, initial reactions from users testing the model through ChatGPT Pro since December have been mixed. Despite being marketed as a reasoning powerhouse, the o1-pro model has stumbled on tasks like Sudoku puzzles and simple optical illusions — raising questions about its real-world advantages.
OpenAI’s internal benchmarks from late last year offer some insight. The o1-pro model performed only slightly better than the standard o1 on coding and math tasks, but it demonstrated better reliability in those tests, which could appeal to developers handling critical workloads.
What makes o1-pro even more significant is the massive cloud infrastructure supporting it. In a strategic move to meet the skyrocketing computing demands of its models, OpenAI recently sealed a five-year, $11.9 billion deal with CoreWeave, a cloud service provider renowned for its robust GPU infrastructure.
This partnership is expected to give OpenAI access to the specialized hardware needed to power o1-pro and future models. By securing extensive GPU resources, OpenAI positions itself to maintain a competitive edge in the AI arms race, especially as more companies build models requiring immense processing power.
The CoreWeave deal not only supports o1-pro’s expensive computing needs but also hints at OpenAI’s broader ambitions in the AI cloud computing space. By investing heavily in cloud infrastructure, OpenAI signals its intent to dominate high-performance AI services, offering developers faster, more scalable models — if they can afford the premium.
While o1-pro’s early performance leaves room for improvement, OpenAI’s gamble is clear: developers building complex systems — from advanced coding assistants to financial forecasting tools — will be willing to pay top dollar for better reasoning and reliability.
OpenAI’s o1-pro model represents both a technological leap and a pricing gamble. Backed by its multi-billion-dollar cloud deal, OpenAI is betting big on becoming the go-to platform for high-stakes AI applications. The question remains — will the market buy into it?