🌐
Hugging Face
huggingface.co › deepseek-ai › DeepSeek-R1-0528-Qwen3-8B
deepseek-ai/DeepSeek-R1-0528-Qwen3-8B · Hugging Face
November 27, 2025 - This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking.
Top answer
1 of 5
Those benchmarks are a meme. Artificial Analysis uses benchmarks established by other research groups, which are often old and overtrained, so they aren't reliable. They carefully show or hide models on the default list to paint a picture of bigger models doing better, but when you enable Qwen3 8B and 32B with reasoning to be shown, this all falls apart. It's nice enough to brag about a model on LinkedIn, and they are somewhat useful - they seem to be independent, and the image and video arenas are great - but they're not capable of maintaining leak-proof expert benchmarks.

Look at math reasoning:
DeepSeek R1 0528 (May '25) - 94
Qwen3 14B (reasoning) - 86
Qwen3 8B (reasoning) - 83
DeepSeek R1 (Jan '25) - 82
DeepSeek R1 0528 Qwen3 8B - 79
Claude 3.7 Sonnet (thinking) - 72

Overall bench (Intelligence Index):
DeepSeek R1 (Jan '25) - 60
Qwen3 32B (reasoning) - 59

Do you believe it makes sense for Qwen3 8B to score above DeepSeek R1, or for Claude 3.7 Sonnet to be outclassed by DeepSeek R1 0528 Qwen3 8B by a big margin?

Another bench - LiveCodeBench:
Qwen3 14B (reasoning) - 52
Claude 3.7 Sonnet (thinking) - 47

Why are devs using Claude 3.7/4 in Windsurf/Cursor/Roo/Cline/Aider and not Qwen3 14B? Qwen3 14B is apparently a much better coder, lmao. I can't call it benchmark contamination, but it's definitely overfitting to benchmarks. For god's sake, when you let base Qwen 2.5 32B non-Instruct generate random tokens with a trash prompt, it will often generate MMLU-style question-and-answer pairs by itself. It's trained to do well on the benchmarks they test on.
2 of 5
I really don't trust Artificial Analysis rankings these days, since they just aggregate other people's old benchmarks. They still use SciCode or whatever, even though it's saturated well past the point of being useful - all models score 99% on it.
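For a quick sanity check of the argument, the math-reasoning scores quoted in the top answer can be dropped into a few lines of Python and re-sorted (the numbers are as reported in the comment, not independently verified):

```python
# Math-reasoning scores as quoted in the comment above
# (Artificial Analysis numbers as reported there; not independently verified).
math_scores = {
    "DeepSeek R1 0528 (May '25)": 94,
    "Qwen3 14B (reasoning)": 86,
    "Qwen3 8B (reasoning)": 83,
    "DeepSeek R1 (Jan '25)": 82,
    "DeepSeek R1 0528 Qwen3 8B": 79,
    "Claude 3.7 Sonnet (thinking)": 72,
}

# Sort the models best-first and print a small leaderboard.
for rank, (model, score) in enumerate(
    sorted(math_scores.items(), key=lambda kv: kv[1], reverse=True), start=1
):
    print(f"{rank}. {model}: {score}")

# Gap between the 8B distill and the full R1 0528 it was distilled from.
gap = math_scores["DeepSeek R1 0528 (May '25)"] - math_scores["DeepSeek R1 0528 Qwen3 8B"]
print(f"The distill trails the full model by {gap} points")
```

Even taking the quoted scores at face value, the 8B distill still trails the full R1 0528 by 15 points, which is part of what makes the "ties the 235B model on AIME" framing look cherry-picked.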
Discussions

Deepseek-r1-0528-qwen3-8b is much better than expected.
Agreed, the CoT is cleaner and it solved problems that the OG 8B couldn't. I hope they can do this for the 30/32/235B too. More on reddit.com
🌐 r/LocalLLaMA
May 30, 2025
DeepSeek-R1-0528-Qwen3-8B
The work that DeepSeek has done is great, but it's obvious that an 8B model cannot score that high on these tests organically (at least for now). It has already been trained on the AIME and other competitions, so these benchmarks alone don't represent any real-world usage. E.g., I saw someone say that Gemini 2.5 Flash is on par with or better than this 8B model due to how both scored on a certain test. I wish they were right, but these benchmarks should not be taken at face value. More on reddit.com
🌐 r/LocalLLaMA
April 11, 2025
Anyone have any experience with Deepseek-R1-0528-Qwen3-8B?
Works just fine out of the box in LM Studio. More on reddit.com
🌐 r/LocalLLaMA
April 17, 2025
DeepSeek’s new R1-0528-Qwen3-8B is the most intelligent 8B parameter model yet, but not by much: Alibaba’s own Qwen3 8B is just one point behind
More on reddit.com
🌐 r/LocalLLaMA
June 5, 2025
🌐
LM Studio
lmstudio.ai › models › deepseek › deepseek-r1-0528-qwen3-8b
deepseek/deepseek-r1-0528-qwen3-8b
May 29, 2025 - This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking.
🌐
Clarifai
clarifai.com › deepseek-ai › deepseek-chat › models › DeepSeek-R1-0528-Qwen3-8B
DeepSeek-R1-0528-Qwen3-8B model | Clarifai - The World's AI
DeepSeek-R1-0528 improves reasoning and logic via better computation and optimization, nearing the performance of top models like O3 and Gemini 2.5 Pro.
🌐
OpenRouter
openrouter.ai › deepseek › deepseek-r1-0528-qwen3-8b
DeepSeek R1 0528 Qwen3 8B - API, Providers, Stats | OpenRouter
May 29, 2025 - The distilled variant, DeepSeek-R1-0528-Qwen3-8B, transfers this chain-of-thought into an 8B-parameter form, beating standard Qwen3 8B by +10 pp and tying the 235B "thinking" giant on AIME 2024.
🌐
Artificial Analysis
artificialanalysis.ai › models › deepseek-r1-qwen3-8b
DeepSeek R1 0528 Qwen3 8B - Intelligence, Performance & Price Analysis
Analysis of DeepSeek's DeepSeek R1 0528 Qwen3 8B and comparison to other AI models across key metrics including quality, price, performance (tokens per second & time to first token), context window & more.
🌐
Unsloth
unsloth.ai › blog › deepseek-r1-0528
How to Run Deepseek-R1-0528 Locally
DeepSeek's R1-0528 model is the most powerful open-source model. Learn to run the model and the Qwen3-8B distill with Unsloth 1.78-bit Dynamic quants.
🌐
Ollama
ollama.com › library › deepseek-r1:8b-0528-qwen3-fp16
deepseek-r1:8b-0528-qwen3-fp16
DeepSeek-R1-0528-Qwen3-8B: ollama run deepseek-r1:8b
DeepSeek-R1-Distill-Qwen-1.5B: ollama run deepseek-r1:1.5b
DeepSeek-R1-Distill-Qwen-7B: ollama run deepseek-r1:7b
DeepSeek-R1-Distill-Qwen-14B: ollama run deepseek-r1:14b
DeepSeek-R1-Distill-Qwen-32B
🌐
Ollama
ollama.com › sam860 › deepseek-r1-0528-qwen3:8b
sam860/deepseek-r1-0528-qwen3:8b
DeepSeek-R1-0528-Qwen3-8B represents a significant upgrade to the DeepSeek R1 model series, built on the Qwen3 architecture. This version (0528) delivers enhanced reasoning and inference capabilities through algorithmic optimization and increased ...
🌐
Hugging Face
huggingface.co › deepseek-ai › DeepSeek-R1-0528-Qwen3-8B › discussions › 11
deepseek-ai/DeepSeek-R1-0528-Qwen3-8B · Tried it, but not good as expected.
May 30, 2025 - @SytanSD I had similar issues with the source Qwen3 8B model. It failed to answer simple questions that much smaller models like Llama 3.2 3B reliably got right, such as what's the third rock from the sun (Earth). So I suspect the primary issue is that DeepSeek built on Qwen3, whose models are so egregiously overfit to the standard LLM tests that they're riddled with pockets of profound ignorance, making them frustratingly unreliable across a spectrum of real-world tasks.
🌐
Hugging Face
huggingface.co › lmstudio-community › DeepSeek-R1-0528-Qwen3-8B-GGUF
lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-GGUF · Hugging Face
Model creator: deepseek-ai Original model: DeepSeek-R1-0528-Qwen3-8B GGUF quantization: provided by bartowski based on llama.cpp release b5524 LM Studio Model Page: https://lmstudio.ai/models/deepseek/deepseek-r1-0528-qwen3-8b
🌐
Apidog
apidog.com › blog › deepseek-r1-0528-qwen-8b-local-ollama-lm-studio
Running DeepSeek R1 0528 Qwen 8B Locally: Complete Guide with Ollama and LM Studio
August 17, 2025 - Setting up DeepSeek R1 0528 in LM Studio involves navigating to the model catalog and searching for "DeepSeek R1 0528" or "Deepseek-r1-0528-qwen3-8b." The catalog displays various quantization options, allowing users to select the version that best matches their hardware capabilities.
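Once the model is downloaded, LM Studio can also serve it over its OpenAI-compatible local API (by default at http://localhost:1234), so it can be queried from code. A minimal Python sketch, assuming the local server is running and that the model identifier matches the catalog name above (verify both in your install); the temperature of 0.6 follows the sampling settings reported by users in this thread:

```python
import json
from urllib import request

# LM Studio's default local OpenAI-compatible endpoint (an assumption; check
# the Developer tab of your install for the actual host and port).
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat payload with temperature 0.6."""
    return {
        # Catalog identifier; verify it matches what your LM Studio shows.
        "model": "deepseek/deepseek-r1-0528-qwen3-8b",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }

def ask(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = request.Request(
        BASE_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Build (but don't send) a request; ask(...) would POST it to the running server.
payload = build_request("What is the third planet from the sun?")
print(json.dumps(payload, indent=2))
```

The reasoning trace arrives inside `<think>` tags in the reply, so downstream code typically strips that span before using the answer.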
🌐
Ollama
ollama.com › library › deepseek-r1:8b
deepseek-r1:8b
DeepSeek-R1-0528-Qwen3-8B: ollama run deepseek-r1
DeepSeek-R1: ollama run deepseek-r1:671b
Note: to update the model from an older version, run ollama pull deepseek-r1
🌐
Reddit
reddit.com › r/localllama › deepseek-r1-0528-qwen3-8b is much better than expected.
r/LocalLLaMA on Reddit: Deepseek-r1-0528-qwen3-8b is much better than expected.
May 30, 2025

In the past, I tried creating agents with models smaller than 32B, but they often gave completely off-the-mark answers to commands or failed to generate the specified JSON structures correctly. However, this model has exceeded my expectations. I used to think of small models like the 8B ones as just tech demos, but it seems the situation is starting to change little by little.

First image – Structured question request
Second image – Answer

Tested: LM Studio, Q8, temp 0.6, top_k 0.95
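The structured-output check this post describes can be scripted: ask the model for JSON, then parse the reply and verify it has the requested shape. A minimal sketch (the example reply, keys, and fence-stripping behavior here are illustrative assumptions, not taken from the post):

```python
import json

def validate_reply(reply: str, required_keys: set) -> dict:
    """Parse a model reply that was asked to return JSON and check it
    contains the requested top-level keys; raise ValueError if not."""
    text = reply.strip()
    # Strip a Markdown code fence if the model wrapped its JSON in one.
    if text.startswith("```"):
        text = text.split("```")[1]
        if text.startswith("json"):
            text = text[len("json"):]
    data = json.loads(text)  # raises on malformed JSON
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"reply is valid JSON but missing keys: {missing}")
    return data

# Hypothetical model reply, for illustration only.
reply = '```json\n{"answer": "Paris", "confidence": 0.9}\n```'
parsed = validate_reply(reply, {"answer", "confidence"})
print(parsed["answer"])
```

This is the kind of check where, per the post, sub-32B models used to fail routinely (malformed JSON or missing fields) and where this distill now tends to pass.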

🌐
Artificial Analysis
artificialanalysis.ai › models › comparisons › deepseek-r1-vs-qwen3-8b-instruct
DeepSeek R1 0528 (May '25) vs Qwen3 8B (Non-reasoning): Model Comparison
Comparison between DeepSeek R1 0528 (May '25) and Qwen3 8B (Non-reasoning) across intelligence, price, speed, context window and more.
🌐
Galaxy
blog.galaxy.ai › compare › deepseek-r1-0528-qwen3-8b-vs-phi-3-5-mini-128k-instruct
DeepSeek R1 0528 Qwen3 8B vs Phi-3.5 Mini 128K Instruct (Comparative Analysis) | Galaxy.ai
DeepSeek R1 0528 Qwen3 8B by DeepSeek offers advanced reasoning, generates structured data. It can handle standard conversations with its 32.8K token context window. Very affordable at $0.02/M input and $0.10/M output tokens.
🌐
Routstr
routstr.com › models › deepseek › deepseek-r1-0528-qwen3-8b
Deepseek R1 0528 Qwen3 8B
May 29, 2025 - The future of AI access is permissionless, private, and decentralized