Ollama
ollama.com › library › deepseek-r1
deepseek-r1
DeepSeek-R1-0528-Qwen3-8B · ollama run deepseek-r1 · DeepSeek-R1 · ollama run deepseek-r1:671b · Note: to update the model from an older version, run ollama pull deepseek-r1 ·
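The `ollama run` / `ollama pull` commands in the listing above can also be driven programmatically. A minimal sketch, assuming a local Ollama server on its default port 11434 and that the `deepseek-r1:8b` tag has already been pulled (both assumptions, not confirmed by the listing):

```python
# Sketch of calling a local Ollama server's /api/generate endpoint.
# Assumes `ollama pull deepseek-r1:8b` has been run and the server is
# listening on the default port 11434 -- both are assumptions here.
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "deepseek-r1:8b",
                           host: str = "http://localhost:11434"):
    """Build an HTTP request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Why is the sky blue?")
# With a running server, this would print the model's reply:
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

The actual network call is left commented out so the sketch stands alone without a running server.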
Ollama
ollama.com › sam860 › deepseek-r1-0528-qwen3:8b
sam860/deepseek-r1-0528-qwen3:8b
DeepSeek-R1-0528-Qwen3-8B represents a significant upgrade to the DeepSeek R1 model series, built on the Qwen3 architecture. This version (0528) delivers enhanced reasoning and inference capabilities through algorithmic optimization and increased ...
Videos
DeepSeek R1 0528 Qwen3 8B - Small Upgraded Student Model - Install ... (17:15)
DeepSeek-R1 0528 for 100% Local Chat with Your Files | Financial ... (24:57)
r/LocalLLaMA on Reddit: DeepSeek-R1-0528-Qwen3-8B on iPhone 16 Pro
r/LocalLLaMA on Reddit: deepseek r1 0528 qwen 8b on android MNN chat
Run DeepSeek-R1-0528-Qwen3-8B Locally with Gaia (Easy Tutorial!) (10:25)
DeepSeek R1 0528 : 8B vs 671B (Live Test) - YouTube (41:46)
Ollama
ollama.com › dengcao › DeepSeek-R1-0528-Qwen3-8B
dengcao/DeepSeek-R1-0528-Qwen3-8B
This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking.
Ollama
ollama.com › search
deepseek · Ollama Search
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro · 75.2M Pulls 35 Tags Updated 5 months ago
Unsloth
docs.unsloth.ai › models › deepseek-r1-0528-how-to-run-locally
DeepSeek-R1-0528: How to Run Locally | Unsloth Documentation
Qwen3 GGUF: DeepSeek-R1-0528-Qwen3-8B-GGUF · All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run & fine-tune quantized DeepSeek LLMs with minimal accuracy loss. ... NEW: Huge improvements to tool calling and chat template fixes. New TQ1_0 dynamic 1.66-bit quant - 162GB in size. Ideal for 192GB RAM (including Mac) and Ollama users.
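A rough rule of thumb for sizing quants like these: weight memory is about parameters × bits-per-weight / 8 bytes, before KV cache and runtime overhead. The bit-widths below are nominal; real GGUF files (like the 162GB TQ1_0 above) come out larger because some tensors are kept at higher precision. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope weight memory for a quantized model:
# params (billions) * bits_per_weight / 8 gives GB of weights, before
# KV cache and runtime overhead. Nominal bit-widths only; real GGUF
# files are larger since some tensors stay at higher precision.
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8

print(f"671B @ 1.66-bit: ~{approx_weight_gb(671, 1.66):.0f} GB of weights")
print(f"8B distill @ 8-bit: ~{approx_weight_gb(8, 8):.0f} GB of weights")
```

This is why the 1.66-bit 671B quant lands in 192GB-RAM territory while the 8B distill fits comfortably on consumer hardware.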
Apidog
apidog.com › blog › deepseek-r1-0528-qwen-8b-local-ollama-lm-studio
Running DeepSeek R1 0528 Qwen 8B Locally: Complete Guide with Ollama and LM Studio
August 17, 2025 - Setting up DeepSeek R1 0528 in LM Studio involves navigating to the model catalog and searching for "DeepSeek R1 0528" or "Deepseek-r1-0528-qwen3-8b." The catalog displays various quantization options, allowing users to select the version that best matches their hardware capabilities.
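Once a quant is loaded, LM Studio's local server exposes an OpenAI-compatible endpoint (default `http://localhost:1234/v1`). A minimal sketch; the model identifier below is an assumption — use whatever name LM Studio shows for the loaded quant:

```python
# Sketch: building a chat request for LM Studio's OpenAI-compatible
# local server (default base URL http://localhost:1234/v1). The model
# identifier is an assumption -- use the name LM Studio displays.
import json
import urllib.request

def chat_request(messages, model="deepseek-r1-0528-qwen3-8b",
                 base_url="http://localhost:1234/v1"):
    payload = {"model": model, "messages": messages, "temperature": 0.6}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = chat_request([{"role": "user", "content": "Hello"}])
# urllib.request.urlopen(req) would return an OpenAI-style JSON response
# when the LM Studio server is running.
```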
Ollama
ollama.com › library › deepseek-r1:8b
deepseek-r1:8b
Reddit
reddit.com › r/localllama › deepseek-r1-0528-qwen3-8b is much better than expected.
r/LocalLLaMA on Reddit: Deepseek-r1-0528-qwen3-8b is much better than expected.
May 30, 2025 - In the past, I tried creating agents with models smaller than 32B, but they often gave completely off-the-mark answers to commands or failed to generate the specified JSON structures correctly. However, this model has exceeded my expectations. I used to think of small models like the 8B ones as just tech demos, but it seems the situation is starting to change little by little.
First image – Structured question request
Second image – Answer
Tested: LM Studio, Q8, temp 0.6, top_p 0.95
Top comment (68 points):
Agreed, the CoT is cleaner and solved problems that OG 8B couldn’t. I hope they can do this for also the 30/32/235B too
Second comment (46 points):
I asked it to make a web interface for my book creator tool. I gave it just the documents I created describing the project, and it made a working HTML interface on the first go. Not 100% perfect, but pretty bloody good for an 8B model. Dark mode works too, though some colours are similar enough that you can't see the text - easily fixed.
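The structured-output behavior the thread praises can be pushed further: Ollama's generate API accepts a `format` field that constrains decoding to valid JSON. A sketch, assuming a local server and the `deepseek-r1:8b` tag; the sampling values match those commonly recommended for this model (temperature 0.6, top_p 0.95):

```python
# Sketch: requesting JSON-only output from a local Ollama server via the
# "format" field of /api/generate. Model tag and prompt are illustrative.
import json
import urllib.request

def json_mode_request(prompt, model="deepseek-r1:8b",
                      host="http://localhost:11434"):
    payload = {
        "model": model,
        "prompt": prompt,
        "format": "json",    # constrain the output to valid JSON
        "stream": False,
        "options": {"temperature": 0.6, "top_p": 0.95},
    }
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = json_mode_request('List three colors as {"colors": [...]}.')
# With a running server, the reply's "response" field should parse as JSON.
```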
Ollama
ollama.com › library › deepseek-r1:8b-0528-qwen3-fp16
deepseek-r1:8b-0528-qwen3-fp16
DeepSeek-R1-0528-Qwen3-8B · ollama run deepseek-r1:8b · DeepSeek-R1-Distill-Qwen-1.5B · ollama run deepseek-r1:1.5b · DeepSeek-R1-Distill-Qwen-7B · ollama run deepseek-r1:7b · DeepSeek-R1-Distill-Qwen-14B · ollama run deepseek-r1:14b · DeepSeek-R1-Distill-Qwen-32B ·
DEV Community
dev.to › nodeshiftcloud › a-step-by-step-guide-to-install-deepseek-r1-0528-locally-with-ollama-vllm-or-transformers-k29
A Step-by-Step Guide to Install DeepSeek-R1-0528 Locally with Ollama, vLLM or Transformers - DEV Community
May 29, 2025 - With a reduced hallucination rate, enhanced function calling, and impressive coding performance, it competes at the bleeding edge with industry giants like OpenAI's O3 and Gemini 2.5 Pro. Its distilled version, DeepSeek-R1-0528-Qwen3-8B, even surpasses 30B+ parameter models while staying lightweight and efficient, making it one of the most promising reasoning LLMs available today.
Reddit
reddit.com › r/localllama › deepseek-r1-0528 official benchmarks released!!!
r/LocalLLaMA on Reddit: DeepSeek-R1-0528 Official Benchmarks Released!!!
May 29, 2025 - I made some dynamic quants for Qwen 3 distilled here https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF · I'm extremely surprised DeepSeek would provide smaller distilled versions - hats off to them! ... yesterday I asked if there would be versions to run locally on 32GB vRAM and I got a lot of downvotes. Pfui. Kudos to whom made this possible. ... For the (super) lazy, any chance of publishing these on ollama with the proper configs (temperature, context size, P, template).
GitHub
github.com › ollama › ollama › issues › 10905
Deepseek-R1 Qwen 3 8B Distill · Issue #10905 · ollama/ollama
May 29, 2025 - Closed issue #10905 (label: model request), opened by numinousmuses: https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
Reddit
reddit.com › r/localllama › deepseek-ai/deepseek-r1-0528-qwen3-8b · hugging face
r/LocalLLaMA on Reddit: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B · Hugging Face
February 22, 2025 - It sounded like you could just go to the DeepSeek page in HF and grab the GGUF from there. I looked into it and found that you can't do that, and that the only GGUFs available are through 3rd parties. Ollama also has their pages up if you google r1-0528 + the quantization annotation