DeepSeek
deepseek.com › en
DeepSeek
Research pages: DeepSeek R1 · DeepSeek V3 · DeepSeek Coder V2 · DeepSeek VL · DeepSeek V2 · DeepSeek Coder · DeepSeek Math · DeepSeek LLM
arXiv
arxiv.org › abs › 2501.12948
[2501.12948] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
January 22, 2025 - DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities.
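For context on how that RL step works: the paper trains with Group Relative Policy Optimization (GRPO), which samples a group of G answers per question and normalizes their rewards within the group rather than learning a separate value model. A sketch of the objective, following the paper's notation:

\mathcal{J}_{\mathrm{GRPO}}(\theta) =
  \mathbb{E}_{q \sim P(Q),\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(O \mid q)}
  \frac{1}{G} \sum_{i=1}^{G}
  \Big[ \min\Big( \tfrac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)} A_i,\;
  \operatorname{clip}\Big( \tfrac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)},\, 1-\varepsilon,\, 1+\varepsilon \Big) A_i \Big)
  - \beta\, \mathbb{D}_{\mathrm{KL}}(\pi_\theta \,\|\, \pi_{\mathrm{ref}}) \Big],
\qquad
A_i = \frac{r_i - \operatorname{mean}(\{r_1, \dots, r_G\})}{\operatorname{std}(\{r_1, \dots, r_G\})}

Because the advantage A_i is computed from the group's own reward statistics, no critic network is needed, which is part of what makes the large-scale RL run tractable.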
Videos
10:07 · Dave Plummer explains Deepseek R1 - YouTube
08:33 · DeepSeek R1 Explained to your grandma - YouTube
15:10 · DeepSeek R1 Fully Tested - Insane Performance - YouTube
14:19 · Run DeepSeek R1 Privately on Your Computer - YouTube
03:12 · Run DeepSeek R1 Locally. Easiest Method - YouTube
DeepSeek-R1 Crash Course
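The local-deployment videos above typically run one of the distilled checkpoints through a local runner such as Ollama. A minimal sketch using the Ollama Python client (the model tag and prompt are illustrative; pick whichever distilled size your hardware fits):

# Assumes: `pip install ollama`, the Ollama daemon running,
# and the model pulled beforehand with `ollama pull deepseek-r1:7b`.
from ollama import chat

response = chat(
    model="deepseek-r1:7b",  # illustrative tag, not the only option
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)
# R1-style models emit their chain of thought inside <think>...</think>
# tags before the final answer, so expect both in the output.
print(response["message"]["content"])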
GitHub
github.com › deepseek-ai › DeepSeek-R1
GitHub - deepseek-ai/DeepSeek-R1
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.
Starred by 91.6K users
Forked by 11.8K users
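The six distilled models mentioned in the README are published under the deepseek-ai organization on Hugging Face. A minimal generation sketch with Transformers, assuming the smallest Qwen-based variant (repo id and sampling settings are illustrative; the README recommends temperatures around 0.6):

# Assumes: `pip install transformers torch accelerate` and enough
# GPU/CPU memory for the chosen checkpoint size.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # smallest distilled variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 23? Show your reasoning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))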
Poe
poe.com › DeepSeek-R1
DeepSeek-R1 - Poe
Top open-source reasoning LLM rivaling OpenAI's o1 model; delivers top-tier performance across math, code, and reasoning tasks at a fraction of the cost. Data you provide to this bot is not used for training and is sent only to Together AI, a US-based company.
ResearchGate
researchgate.net › publication › 398379274_Evaluation_of_the_accuracy_of_large_language_models_in_answering_bone_cancer-related_questions › fulltext › 6933204b0c98040d481b5823 › Evaluation-of-the-accuracy-of-large-language-models-in-answering-bone-cancer-related-questions.pdf (PDF)
Evaluation of the accuracy of large language models in answering bone cancer-related questions (Frontiers in Public Health, frontiersin.org)
3 weeks ago - Reports answer accuracy of 72.5% for DeepSeek-R1 versus 83.4% for OpenAI's o1 Pro (17).