🌐
Vellum
vellum.ai › blog › analysis-openai-o1-vs-deepseek-r1
Analysis: OpenAI o1 vs DeepSeek R1
February 3, 2025 - OpenAI’s o1 model is nearly 2x ... can vary a lot depending on the task and context. DeepSeek is 27x cheaper for input tokens than OpenAI and 58x cheaper when cached....
🌐
Prompt Hub
prompthub.us › blog › deepseek-r-1-model-overview-and-how-it-ranks-against-openais-o1
PromptHub Blog: DeepSeek R-1 Model Overview and How it Ranks Against OpenAI's o1
They started in 2023, but have been making waves over the past month or so, and especially this past week with the release of their two latest reasoning models: DeepSeek-R1-Zero and the more advanced DeepSeek-R1, also known as DeepSeek Reasoner. They’ve released not only the models but also the code and evaluation prompts for public use, along with a detailed paper outlining their approach. Aside from creating 2 highly performant models that are on par with OpenAI’s o1 model, the paper has a lot of valuable information around reinforcement learning, chain of thought reasoning, prompt engineering with reasoning models, and more.
People also ask

Why Is DeepSeek-R1 So Good?
DeepSeek-R1 excels due to its efficient training methods and open-source approach. It matches the performance of leading models like OpenAI's o1 in tasks such as mathematics, coding, and reasoning, all achieved at a fraction of the development cost.
🌐
leanware.co
leanware.co › insights › deepseek-r1-vs-gpt-o1
DeepSeek vs GPT o1 | Features, Performance, & Pricing
What Makes DeepSeek Different?
DeepSeek distinguishes itself by developing high-performing AI models without relying on the most advanced chips. This approach challenges the prevailing dependence on high-end hardware in AI development.
🌐
leanware.co
leanware.co › insights › deepseek-r1-vs-gpt-o1
DeepSeek vs GPT o1 | Features, Performance, & Pricing
How Is DeepSeek Better?
DeepSeek's R1 model offers comparable performance to leading AI models while being more cost-effective. Its open-source nature allows for widespread use and customization, promoting innovation and accessibility in the AI community.
🌐
DataCamp
datacamp.com › blog › deepseek-r1
DeepSeek-R1: Features, o1 Comparison, Distilled Models & More | DataCamp
June 4, 2025 - On MATH-500, DeepSeek-R1 takes the lead with an impressive 97.3%, slightly surpassing OpenAI o1-1217 at 96.4%. This benchmark tests models on diverse high-school-level mathematical problems requiring detailed reasoning.
🌐
Leanware
leanware.co › insights › deepseek-r1-vs-gpt-o1
DeepSeek vs GPT o1 | Features, Performance, & Pricing
DeepSeek-R1 and OpenAI’s GPT-o1 take different paths in AI reasoning, making their comparison essential. DeepSeek-R1, an open-source model, relies on reinforcement learning-first training, while GPT-o1 follows a structured, step-by-step problem-solving approach within a proprietary system.
🌐
Galileo AI
galileo.ai › blog › deepseek r1 or openai o1? open source disruption meets proprietary power
DeepSeek R1 vs OpenAI O1: Which AI Model Should You Choose? | Galileo
August 1, 2025 - But they follow opposite philosophies. R1 pushes the open-source frontier with community-auditable code and self-hostable weights, while O1 doubles down on a polished, fully managed API.
🌐
Reddit
reddit.com › r/localllama › notes on deepseek r1: just how good it is compared to openai o1
r/LocalLLaMA on Reddit: Notes on Deepseek r1: Just how good it is compared to OpenAI o1
October 20, 2024 -

Finally, there is a model worthy of its hype, the first since Claude 3.6 Sonnet. DeepSeek has released something hardly anyone expected: a reasoning model on par with OpenAI’s o1 within a month of the v3 release, with an MIT license and 1/20th of o1’s cost.

This is easily the best release since GPT-4. It's wild; the general public seems excited about this, while the big AI labs are probably scrambling. It feels like things are about to speed up in the AI world. And it's all thanks to this new DeepSeek-R1 model and how they trained it. 

Some key details from the paper

  • Pure RL (GRPO) on v3-base to get r1-zero. (No Monte-Carlo Tree Search or Process Reward Modelling)

  • The model uses “Aha moments” as pivot tokens to reflect and reevaluate answers during CoT.

  • To overcome r1-zero’s readability issues, v3 was supervised fine-tuned (SFT’d) on cold-start data.

  • Distillation works: small models like Qwen and Llama trained on r1-generated data show significant improvements.

Here’s the overall r1-zero pipeline:

  • v3 base + RL (GRPO) → r1-zero

The r1 training pipeline:

  1. DeepSeek-V3 Base + SFT (Cold Start Data) → Checkpoint 1

  2. Checkpoint 1 + RL (GRPO + Language Consistency) → Checkpoint 2

  3. Checkpoint 2 used to Generate Data (Rejection Sampling)

  4. DeepSeek-V3 Base + SFT (Generated Data + Other Data) → Checkpoint 3

  5. Checkpoint 3 + RL (Reasoning + Preference Rewards) → DeepSeek-R1
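The five stages above can be sketched as a plain data flow. This is purely illustrative: the function names (`sft`, `rl`, `generate_data`) are hypothetical placeholders standing in for large-scale training runs, not anything from DeepSeek’s actual codebase.

```python
# Illustrative sketch of the R1 training pipeline described above.
# Every function here is a hypothetical placeholder; in reality each
# step is a full SFT or GRPO training run, not a single call.

def sft(base_model, data):
    """Supervised fine-tuning: produce a new checkpoint from a base model."""
    return f"{base_model}+SFT({data})"

def rl(checkpoint, rewards):
    """Reinforcement learning (e.g. GRPO) against the given reward signals."""
    return f"{checkpoint}+RL({rewards})"

def generate_data(checkpoint):
    """Rejection-sample reasoning traces from a checkpoint."""
    return f"samples_from({checkpoint})"

v3_base = "DeepSeek-V3-Base"

ckpt1 = sft(v3_base, "cold_start_data")                 # Stage 1
ckpt2 = rl(ckpt1, "GRPO + language consistency")        # Stage 2
reasoning_data = generate_data(ckpt2)                   # Stage 3: rejection sampling
ckpt3 = sft(v3_base, reasoning_data + " + other_data")  # Stage 4: SFT from the *base* again
r1 = rl(ckpt3, "reasoning + preference rewards")        # Stage 5 → DeepSeek-R1
```

Note that stage 4 restarts from the V3 base rather than continuing from Checkpoint 2; only the generated data carries forward.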

We know the benchmarks, but just how good is it?

Deepseek r1 vs OpenAI o1.

So, for this, I tested r1 and o1 side by side on complex reasoning, math, coding, and creative writing problems. These are questions that, until now, only o1 could solve, or that no model could.

Here’s what I found:

  • Reasoning: It is much better than any SOTA model before o1. It is better than o1-preview but a notch below o1, which the ARC-AGI benchmark also shows.

  • Mathematics: Same story here; r1 is a killer, but o1 is still better.

  • Coding: I didn’t get to play much, but on first look, it’s up there with o1, and the fact that it costs 20x less makes it the practical winner.

  • Writing: This is where R1 takes the lead. It gives the same vibes as early Opus. It’s free, less censored, has much more personality, is easy to steer, and is very creative compared to the rest, even o1-pro.

What interested me was how free the model’s outputs and thought traces sounded, akin to a human internal monologue. Perhaps this is due to less stringent RLHF than the US models undergo.

The fact that you can get r1 from v3 via pure RL was the most surprising.

For in-depth analysis, commentary, and remarks on the Deepseek r1, check out this blog post: Notes on Deepseek r1

What are your experiences with the new Deepseek r1? Did you find the model useful for your use cases?

🌐
Zignuts
zignuts.com › blog › deepseek-r1-vs-openai-o1-comparison
DeepSeek R1 vs OpenAI O1: AI Model Comparison (2025)
OpenAI o1 API: $15 per million input tokens and $60 per million output tokens. This significant cost difference makes DeepSeek-R1 an attractive option for budget-conscious developers and enterprises.
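At list prices, the gap is easy to quantify. A back-of-the-envelope sketch, using the o1 rates from the snippet above and R1 rates of roughly $0.55 per million input tokens (cited elsewhere on this page) and $2.19 per million output tokens (an assumed figure, not from this page):

```python
# Back-of-the-envelope API cost comparison per million tokens.
# o1 prices from the snippet above; R1 output price is an assumed list rate.
O1_INPUT, O1_OUTPUT = 15.00, 60.00  # USD per 1M tokens
R1_INPUT, R1_OUTPUT = 0.55, 2.19    # USD per 1M tokens (assumed)

def job_cost(in_price, out_price, in_tokens_m, out_tokens_m):
    """Total USD cost of a job sized in millions of tokens."""
    return in_price * in_tokens_m + out_price * out_tokens_m

# Example job: 10M input tokens, 2M output tokens.
o1 = job_cost(O1_INPUT, O1_OUTPUT, 10, 2)  # 150 + 120 = 270.00
r1 = job_cost(R1_INPUT, R1_OUTPUT, 10, 2)  # 5.50 + 4.38 = 9.88
print(f"o1: ${o1:.2f}, R1: ${r1:.2f}, ratio ~{o1 / r1:.0f}x")
```

Under these assumptions the overall ratio comes out around 27x, consistent with the "27x cheaper for input tokens" figure cited earlier on this page.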
🌐
Analytics Vidhya
analyticsvidhya.com › home › deepseek r1 vs openai o1: which one is faster, cheaper and smarter?
DeepSeek R1 vs OpenAI o1: Which One is Faster, Cheaper and Smarter?
April 4, 2025 - The DeepSeek R1 has arrived, and ... variant. With the full-fledged release of DeepSeek R1, it now stands on par with OpenAI o1 in both performance and flexibility....
🌐
WeAreDevelopers
wearedevelopers.com › en › magazine › 542 › deepseek-r1-vs-chatgpt-o1-how-do-they-compare-542-1738080695
DeepSeek R1 vs ChatGPT o1: How Do They Compare?
Most notably, R1 is missing the ability to generate images, meaning that while it might enable creativity, the type of creativity that it enables is limited, compared to o1. So we’ve discussed the similarities in the user interface, but we’ve ...
🌐
GeekyAnts
geekyants.com › blog › deepseek-r1-vs-openais-o1-the-open-source-disruptor-raising-the-bar
DeepSeek-R1 vs. OpenAI’s o1: The Open-Source Disruptor Raising the Bar - GeekyAnts
January 26, 2025 - What’s jaw-dropping is that DeepSeek-R1 not only talks the talk with transparency, but it also walks the walk in terms of performance. DeepSeek-R1 surpasses OpenAI’s o1 in critical benchmarks—including the math-heavy AIME, the MATH-500 dataset, and coding challenges on Codeforces.
🌐
KNIME
knime.com › blog › openai-o1-vs-deepseek-r1
OpenAI's o1 vs. DeepSeek-R1: Which Should You Choose? | KNIME
OpenAI’s o1 once again provides a very rich and detailed response, listing not only the steps to a successful analysis but also mentioning specific accounting indicators (e.g., EBIT, EBITDA), liquidity ratio formulas, and measures to monitor working capital management (e.g., DSO, DIO). The response also includes a final summary and is structured in a clear way. ... DeepSeek’s R1 provides clear and precise instructions, but they are notably less detailed.
🌐
DEV Community
dev.to › maximsaplin › deepseek-r1-vs-openai-o1-1ijm
Deepseek R1 vs OpenAI o1 - DEV Community
January 29, 2025 - If you are following LLM/Gen AI ... weight, lots of info on training process. It challenges OpenAI's reasoning models (o1/o1-mini) across many benchmarks at a fraction of a cost......
🌐
TextCortex
textcortex.com › home › blog posts › deepseek r1 vs openai-o1: which reasoning model is better?
DeepSeek R1 vs OpenAI-o1: Which Reasoning Model is Better?
DeepSeek R1 offers its users equal performance to the OpenAI-o1 model at much cheaper prices. DeepSeek R1 leverages a unique multi-stage training process to achieve advanced reasoning capabilities.
🌐
Reddit
reddit.com › r/singularity › notes on deepseek r1: just how good it is compared to o1
r/singularity on Reddit: Notes on Deepseek r1: Just how good it is compared to o1
July 30, 2024 -


🌐
Medium
medium.com › @cognidownunder › deepseek-r1-vs-openai-o1-the-ai-underdog-thats-eating-openai-s-lunch-7cb72eac8458
DeepSeek R1 vs OpenAI o1 : The AI Underdog That’s Eating OpenAI’s Lunch | by Cogni Down Under | Medium
January 22, 2025 - It’s not just good; it’s scary good. Let’s break down the numbers, because they’re nothing short of jaw-dropping: Math Whiz: R1 scored a mind-bending 97.3% on the MATH-500 benchmark.
🌐
Venturebeat
venturebeat.com › ai › beyond-benchmarks-how-deepseek-r1-and-o1-perform-on-real-world-tasks
Beyond benchmarks: How DeepSeek-R1 and o1 perform on real-world tasks | VentureBeat
August 24, 2025 - Both models are impressive but make errors when the prompts lack specificity. o1 is slightly better at reasoning tasks but R1’s transparency gives it an edge in cases (and there will be quite a few) where it makes mistakes.
🌐
Arbisoft
arbisoft.com › blogs › ai-face-off-deep-seek-r1-vs-open-ai-s-o1-which-one-is-smarter
AI Face-Off: DeepSeek R1 vs. OpenAI’s o1 – Which One is Smarter?
DeepSeek R1 walks you through the entire thought process, making sure you understand every logical step. o1, on the other hand, keeps it short and to the point—perfect for those who just want the answer, no fluff. Which approach do you prefer?
🌐
R&D World
rdworldonline.com › home › rd world posts › this week in ai research: latest insilico medicine drug enters the clinic, a $0.55/m token model r1 rivals openai’s $60 flagship, and more
DeepSeek-R1 RL model: 95% cost cut vs. OpenAI's o1
January 23, 2025 - “The model outperforms OpenAI’s frontier o1 model in benchmarks. They also explained their secret sauce in the open paper. Their secret – there is no secret. It is Reinforcement Learning.” · “Reinforcement learning is your best friend, and the best RL data comes from real experiments IMHO. The most valuable data we have is the clean, fully-connected data from 22 PCC nominations, 10 of which went clinical.” · “Read the R1 paper by DeepSeek and embrace RL.”
🌐
Medium
medium.com › @charugundlavipul › deepseek-r1-vs-openai-o1-a-comparative-analysis-of-reasoning-models-3c91010a7afb
DeepSeek R1 vs. OpenAI O1: A Comparative Analysis of Reasoning Models | by Vipul Charugundla | Medium
January 28, 2025 - DeepSeek R1 uses a Mixture-of-Experts (MoE) architecture, activating only the parameters needed for a specific input. This makes it highly efficient, handling large contexts and complex reasoning without heavy resource demands.
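A minimal sketch of the Mixture-of-Experts idea the snippet describes: a gate scores all experts per token, only the top-k are actually run, and the remaining parameters stay untouched for that input. The sizes and the simple top-k routing here are illustrative assumptions, not R1’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, D = 8, 2, 16  # illustrative sizes, not R1's real config

# One tiny linear "expert" per slot, plus a gating matrix.
experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((D, N_EXPERTS))

def moe_forward(x):
    """Route token x to its top-k experts; the others contribute nothing."""
    logits = x @ gate_w
    top = np.argsort(logits)[-TOP_K:]              # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                       # softmax over selected experts only
    y = sum(w * (x @ experts[i]) for i, w in zip(top, weights))
    return y, top

x = rng.standard_normal(D)
y, chosen = moe_forward(x)
# Only TOP_K of the N_EXPERTS expert matrices were touched for this token.
print(f"experts used: {len(chosen)} of {N_EXPERTS}")
```

The efficiency claim in the snippet follows directly: per-token compute scales with the k active experts, not with the model’s total parameter count.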
🌐
Notta
notta.ai › en › blog › deepseek-r1-vs-openai-gpt-o1
DeepSeek R1 vs. OpenAI GPT-o1: A Cost-Conscious Alternative to the $20 Subscription
Here's why R1's transparency is ... answers. While O1's like "trust me bro, this is how you solve it," R1's walking you through its thought process in real-time, showing where it got stuck and how it figured things out...