🌐
Ollama
ollama.com › library › deepseek-r1:32b
deepseek-r1:32b
DeepSeek-R1 has received a minor version upgrade to DeepSeek-R1-0528 for the 8 billion parameter distilled model and the full 671 billion parameter model. In this update, DeepSeek R1 has significantly improved its reasoning and inference capabilities. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic.
🌐
Ollama
ollama.com › huihui_ai › deepseek-r1-abliterated:32b
huihui_ai/deepseek-r1-abliterated:32b
This is an uncensored version of deepseek-ai/deepseek-r1 created with abliteration (see remove-refusals-with-transformers to learn more).
🌐
GitHub
github.com › ollama › ollama
GitHub - ollama/ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
November 19, 2025 -
Model | Parameters | Size | Command
QwQ | 32B | 20GB | ollama run qwq
DeepSeek-R1 | 7B | 4.7GB | ollama run deepseek-r1
DeepSeek-R1 | 671B | 404GB | ollama run deepseek-r1:671b
Llama 4 | 109B | 67GB | ollama run llama4:scout
Llama 4 | 400B | 245GB | ollama run llama4:maverick
🌐
Ollama
ollama.com › hengwen › DeepSeek-R1-Distill-Qwen-32B:q4_k_m
hengwen/DeepSeek-R1-Distill-Qwen-32B:q4_k_m
DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen-2.5 series, which is originally licensed under the Apache 2.0 License, and are fine-tuned with 800k samples curated with DeepSeek-R1.
🌐
Ollama
ollama.com › library › deepseek-r1:32b › blobs › 6150cb382311
deepseek-r1:32b/model
deepseek-r1:32b · 71.3M Downloads · Updated 4 months ago · tools thinking 1.5b 7b 8b 14b 32b 70b 671b · deepseek-r1:32b ... / model · 6150cb382311 · 20GB · Metadata · general.architecture · qwen2 · general.file_type · Q4_K_M · qwen2.attention.head_count ·
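Side note: once the model is pulled, you can read the same metadata locally. Assuming a current Ollama install, this prints the architecture, parameter count, context length, and quantization for the tag on disk:

ollama show deepseek-r1:32b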
🌐
Unsloth
docs.unsloth.ai › get-started › unsloth-model-catalog
Unsloth Model Catalog | Unsloth Documentation
5 days ago - DeepSeek-R1 · R1-0528 · R1-0528-Qwen3-8B · R1 · R1 Zero · Distill Llama 3 8B · Distill Llama 3.3 70B · Distill Qwen 2.5 1.5B · Distill Qwen 2.5 7B · Distill Qwen 2.5 14B · Distill Qwen 2.5 32B ·
🌐
Ollama
ollama.com › search
deepseek · Ollama Search
November 19, 2025 - DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as o3 and Gemini 2.5 Pro · 74.8M Pulls · 35 Tags · Updated 5 months ago
🌐
Ollama
ollama.com › nvjob › DeepSeek-R1-32B-Cline
nvjob/DeepSeek-R1-32B-Cline
Below are the models created by fine-tuning several dense models widely used in the research community with reasoning data generated by DeepSeek-R1.
🌐
Ollama
ollama.com › rjmalagon › deepseek-r1-distill-qwen:32b-instruct-bf16
rjmalagon/deepseek-r1-distill-qwen:32b-instruct-bf16
deepseek-r1-distill-qwen:32b-instruct-bf16 · 84 Downloads · Updated 8 months ago · tools · cd17c8f4b497 · 66GB · model · arch qwen2 · parameters 32.8B · quantization BF16 · system · You are a helpful assistant.
🌐
Database Mart
databasemart.com › blog › deepseek-r1-32b-gpu-hosting
Best Server for DeepSeek-R1:32B Reasoning | H100 vs. RTX 4090 vs. A6000
As large language models (LLMs) like DeepSeek-R1:32B become increasingly popular for AI reasoning and inference, choosing the right GPU server is crucial for achieving optimal performance at a reasonable cost. In this article, we compare four powerful GPU dedicated servers—Nvidia H100, RTX 4090, A6000, and A5000—to determine the best option for running DeepSeek-R1:32B on Ollama efficiently.
🌐
Ollama
ollama.com › ishumilin › deepseek-r1-coder-tools:32b
ishumilin/deepseek-r1-coder-tools:32b
deepseek-r1-coder-tools:32b · 555.5K Downloads · Updated 8 months ago · tools 1.5b 7b 8b 14b 32b 70b · 5ca6ed1a6404 · 66GB · model · arch qwen2 · parameters 32.8B · quantization F16 · template ·
🌐
Reddit
reddit.com › r/ollama › got deepseek r1 running locally - full setup guide and my personal review (free openai o1 alternative that runs locally??)
r/ollama on Reddit: Got DeepSeek R1 running locally - Full setup guide and my personal review (Free OpenAI o1 alternative that runs locally??)
January 21, 2025 -

Edit: I double-checked the model card on Ollama (https://ollama.com/library/deepseek-r1), and it does mention DeepSeek R1 Distill Qwen 7B in the metadata. So this is actually a distilled model. But honestly, that still impresses me!

Just discovered DeepSeek R1 and I'm pretty hyped about it. For those who don't know, it's a new open-source AI model that matches OpenAI o1 and Claude 3.5 Sonnet in math, coding, and reasoning tasks.

You can check out Reddit to see what others are saying about DeepSeek R1 vs OpenAI o1 and Claude 3.5 Sonnet. For me it's really good - good enough to be compared with those top models.

And the best part? You can run it locally on your machine, with total privacy and 100% FREE!!

I've got it running locally and have been playing with it for a while. Here's my setup - super easy to follow:

(Just a note: while I'm using a Mac, this guide works exactly the same for Windows and Linux users! 👌)

1) Install Ollama

Quick intro to Ollama: It's a tool for running AI models locally on your machine. Grab it here: https://ollama.com/download
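
Once it's installed, it's worth confirming the CLI is on your PATH before moving on (a quick sanity check - the version number you see will differ):

ollama --version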

2) Next, you'll need to pull and run the DeepSeek R1 model locally.

Ollama offers different model sizes - basically, bigger models = smarter AI, but they need a better GPU. Here's the lineup:

1.5B version (smallest):
ollama run deepseek-r1:1.5b

8B version:
ollama run deepseek-r1:8b

14B version:
ollama run deepseek-r1:14b

32B version:
ollama run deepseek-r1:32b

70B version (biggest/smartest):
ollama run deepseek-r1:70b

Maybe start with a smaller model first to test the waters. Just open your terminal and run:

ollama run deepseek-r1:8b

Once it's pulled, the model will run locally on your machine. Simple as that!

Note: The bigger versions (like 32B and 70B) need some serious GPU power. Start small and work your way up based on your hardware!
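
If you're ever unsure what you've already downloaded, or which model is actually loaded in memory, Ollama has built-in commands for that (output layout may vary a bit between versions):

ollama list   # every model on disk, with its size
ollama ps     # models currently loaded, and whether they're running on CPU or GPU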

3) Set up Chatbox - a powerful client for AI models

Quick intro to Chatbox: a free, clean, and powerful desktop interface that works with most models. I've been building it as a side project for 2 years. It's privacy-focused (all data stays local) and super easy to set up - no Docker or complicated steps. Download here: https://chatboxai.app

In Chatbox, go to settings and switch the model provider to Ollama. Since you're running models locally, you can ignore the built-in cloud AI options - no license key or payment is needed!

Then set up the Ollama API host - the default setting is http://127.0.0.1:11434, which should work right out of the box. That's it! Just pick the model and hit save. Now you're all set and ready to chat with your locally running DeepSeek R1! 🚀
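
By the way, if you ever want to check the Ollama side without opening Chatbox, you can hit the same API directly - a minimal curl sketch, assuming you pulled the 8B tag from step 2:

curl http://127.0.0.1:11434/api/generate -d '{"model": "deepseek-r1:8b", "prompt": "Why is the sky blue?", "stream": false}'

If that comes back with a JSON response, Chatbox will work too - it talks to the exact same endpoint.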

Hope this helps! Let me know if you run into any issues.

---------------------

Here are a few tests I ran on my local DeepSeek R1 setup (loving Chatbox's artifact preview feature btw!) 👇

Explain TCP:

Honestly, this looks pretty good, especially considering it's just an 8B model!

Make a Pac-Man game:

It looks great, but I couldn't actually play it. I feel like there might be a few small bugs that could be fixed with some tweaking. (Just to clarify, this wasn't done on the local model - my Mac doesn't have enough space for the largest DeepSeek R1 70B model, so I used the cloud model instead.)

---------------------

Honestly, I’ve seen a lot of overhyped posts about models here lately, so I was a bit skeptical going into this. But after testing DeepSeek R1 myself, I think it’s actually really solid. It’s not some magic replacement for OpenAI or Claude, but it’s surprisingly capable for something that runs locally. The fact that it’s free and works offline is a huge plus.

What do you guys think? Curious to hear your honest thoughts.

🌐
Ollama
registry.ollama.ai › library › deepseek-r1:32b › blobs › 58174d17c07a
deepseek-r1:32b/template
deepseek-r1:32b · 71.9M Downloads · Updated 4 months ago · tools thinking 1.5b 7b 8b 14b 32b 70b 671b · deepseek-r1:32b ... / template · 58174d17c07a ·
🌐
Hugging Face
huggingface.co › unsloth › DeepSeek-R1-Distill-Qwen-32B-GGUF › discussions › 2
unsloth/DeepSeek-R1-Distill-Qwen-32B-GGUF · Use with ollama
You can use the model with ollama by running: ollama run hf.co/unsloth/DeepSeek-R1-Distill-Qwen-32B-GGUF:DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf
🌐
Docker
hub.docker.com › layers › octaspace › deepseek › 32b › images › sha256-ab6b4f09db684d91027f29f75f602b3fdd194a4fed95362b4b9e816ec8aa7828
Image Layer Details - octaspace/deepseek:32b
Welcome to the world's largest container registry built for developers and open source contributors to find, use, and share their container images. Build, push and pull.
🌐
DataCamp
datacamp.com › tutorial › deepseek-r1-ollama
How to Set Up and Run DeepSeek-R1 Locally With Ollama | DataCamp
January 30, 2025 - Ollama offers a range of DeepSeek R1 models, spanning from 1.5B parameters to the full 671B parameter model. The 671B model is the original DeepSeek-R1, while the smaller models are distilled versions based on Qwen and Llama architectures. If your hardware cannot support the 671B model, you ...
🌐
Ollama
ollama.com › Yinr › deepseek-r1-ablated:32b
Yinr/deepseek-r1-ablated:32b
deepseek-r1-ablated:32b · 448 Downloads · Updated 8 months ago · 32b · cd7828067a9b · 20GB · model · arch qwen2 · parameters 32.8B · quantization Q4_K_M · template · {{- if .System }}{{ .System }}{{ end }} {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice ·