🌐
Ollama
ollama.com › library › deepseek-r1:671b
deepseek-r1:671b
DeepSeek-R1 has received a minor version upgrade to DeepSeek-R1-0528 for the 8 billion parameter distilled model and the full 671 billion parameter model. In this update, DeepSeek R1 has significantly improved its reasoning and inference ...
🌐
DataCamp
datacamp.com › tutorial › deepseek-r1-ollama
How to Set Up and Run DeepSeek-R1 Locally With Ollama | DataCamp
January 30, 2025 - Ollama offers a range of DeepSeek R1 models, spanning from 1.5B parameters to the full 671B parameter model. The 671B model is the original DeepSeek-R1, while the smaller models are distilled versions based on Qwen and Llama architectures.
🌐
Ollama
ollama.com › SIGJNF › deepseek-r1-671b-1.58bit
SIGJNF/deepseek-r1-671b-1.58bit
Unsloth's DeepSeek-R1 1.58-bit: I just merged the split files and uploaded them here. This is the full 671b model, albeit dynamically quantized to 1.58 bits.
🌐
Ollama
ollama.com › Huzderu › deepseek-r1-671b-1.73bit
Huzderu/deepseek-r1-671b-1.73bit
Unsloth’s DeepSeek-R1 671B 1.73-bit dynamic quantization, merged GGUF files for Ollama.
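Unsloth distributes these dynamic quants as split GGUF files, which is why merged uploads like the two above exist. If you want to merge them yourself, a minimal sketch using llama.cpp's llama-gguf-split tool (the shard and output names below are examples, not the uploader's exact files):

    # merge the split GGUF parts: pass the first shard and an output name
    llama-gguf-split --merge DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf deepseek-r1-671b-1.58bit.gguf

    # point a Modelfile at the merged file and register it with Ollama
    echo "FROM ./deepseek-r1-671b-1.58bit.gguf" > Modelfile
    ollama create deepseek-r1-671b-1.58bit -f Modelfile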
🌐
Digital Spaceport
digitalspaceport.com › how-to-run-deepseek-r1-671b-fully-locally-on-2000-epyc-rig
How To Run Deepseek R1 671b Fully Locally On a $2000 EPYC Server – Digital Spaceport
Holy cow you got here! Nice job, I am impressed! Click new chat in the upper left of the window. Deepseek-r1:671b should be there already. Give it a hello. Nice job! In conclusion, we installed a fully functional bare metal Ollama + OpenWEBUI setup. I am SURE there are a lot of other great runners out there like llama.cpp, exo, and vLLM but those will be separate guides when I get a decent handle on working them.
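As a rough sketch of the stack that guide ends with, assuming Docker for the UI layer (the port and volume are Open WebUI's documented defaults):

    # pull and smoke-test the model from the Ollama CLI (the 671b tag is a ~404 GB download)
    ollama pull deepseek-r1:671b
    ollama run deepseek-r1:671b "hello"

    # run Open WebUI pointed at the local Ollama instance
    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main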
🌐
Reddit
reddit.com › r/ollama › deepseek-r1 671b - windows 11
r/ollama on Reddit: deepseek-r1 671B - windows 11

Hi guys,
I got 2 questions:

1. How can I change the folder where Ollama downloads and stores models? Currently it's the default path on C:, and I want to change it to D:.

ollama run deepseek-r1:671b

2. Will my PC be able to run it?

Processor: Intel(R) Core(TM) i7-14700KF, 3.40 GHz

Installed RAM: 48.0 GB (47.8 GB usable)

RTX 4080
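On question 1: Ollama reads its model directory from the OLLAMA_MODELS environment variable, so a minimal sketch for Windows (the path is an example; restart Ollama afterwards):

    # PowerShell or cmd: persist the variable for the current user
    setx OLLAMA_MODELS "D:\ollama\models"

On question 2: not for the 671b tag; its Q4_K_M weights alone are 404GB (see the library entry and GitHub issue below), far beyond 48GB of RAM plus the 4080's 16GB of VRAM. The distilled 8b/14b/32b tags are realistic for this machine.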

🌐
Ollama
ollama.com › emsi › deepseek-r1-671b-1.58bit
emsi/deepseek-r1-671b-1.58bit
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
🌐
Ollama
ollama.com › search
deepseek · Ollama Search
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro · 74.8M Pulls · 35 Tags · Updated 5 months ago
🌐
Reddit
reddit.com › r/singularity › running deepseek-r1 671b locally with olama.
r/singularity on Reddit: Running DeepSeek-R1 671b locally with olama.

Seems like it works but is far from straightforward and you really need a beast of a local setup to run this well.

🌐
ORI
ori.co › blog › how-to-run-deepseek-r1
How to run DeepSeek R1 on a cloud GPU with Ollama | Ori
Distill the reasoning capability from DeepSeek-R1 to small dense models. To equip more efficient smaller models with R1-like reasoning capabilities, DeepSeek directly fine-tuned open-source models like Qwen and Llama on the 800k samples curated with DeepSeek-R1. Distillation transfers the knowledge and capabilities of a large teacher model (in this case, DeepSeek R1 671B) to a smaller student model (such as Llama 3.3 70B).
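Those distilled students are what the smaller tags in the Ollama library resolve to, so hardware that cannot hold the 671B teacher can run them directly, for example:

    # distilled variants from the same library entry (base architecture varies by tag)
    ollama run deepseek-r1:8b
    ollama run deepseek-r1:70b   # the Llama 3.3 70B student mentioned above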
🌐
Ollama
ollama.com › search
deepseek-r1 · Ollama Search
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro · 74.4M Pulls · 35 Tags · Updated 5 months ago
🌐
GitHub
github.com › ollama › ollama › issues › 8667
deepseek-r1:671b Q4_K_M: error="model requires more system memory (446.3 GiB) than is available · Issue #8667 · ollama/ollama
January 29, 2025 - However, when I attempt to load the model using Ollama, I encounter an error indicating that the model requires more system memory (446.3 GiB) than is available (37.3 GiB). This is perplexing, given the MoE architecture’s supposed efficiency and the Q4 quantization. I came across a discussion where a user successfully ran the Q8_0 GGUF version of DeepSeek-R1 671B on a CPU-only system with 256GB of RAM.
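The error is expected behavior: MoE routing reduces compute per token, not resident memory, so all 671B parameters (404GB at Q4_K_M, plus KV cache and overhead, hence the 446.3 GiB estimate) must still fit somewhere. On a machine that is merely short of headroom, shrinking the context window trims the KV-cache share; a sketch with an illustrative value:

    # create a variant with a smaller context window to lower the memory estimate
    printf 'FROM deepseek-r1:671b\nPARAMETER num_ctx 2048\n' > Modelfile
    ollama create deepseek-r1-671b-smallctx -f Modelfile

With only 37.3 GiB available, though, no parameter will close a 400 GiB gap; the Q8_0-on-256GB report presumably leaned on mmap paging weights from disk, at a severe speed cost.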
🌐
Snowkylin
snowkylin.github.io › blogs › a-note-on-deepseek-r1.html
A Note on DeepSeek R1 Deployment
This is a (minimal) note on deploying DeepSeek R1 671B (the full version without distillation) locally with ollama.
🌐
Ollama
ollama.com › library › deepseek-r1:671b › blobs › 6e4c38e1172f
deepseek-r1:671b/license
deepseek-r1:671b · 48.2M Downloads Updated 1 week ago · thinking 1.5b 7b 8b 14b 32b 70b 671b · deepseek-r1:671b ... / license · 6e4c38e1172f · 1.1kB · MIT License · Copyright (c) 2023 DeepSeek · Permission is hereby granted, free of charge, to any person obtaining a copy ·
🌐
Ollama
ollama.com › library › deepseek-r1
deepseek-r1
deepseek-r1:671b · 404GB · 160K context window · Text · DeepSeek-R1 has received a minor version upgrade to DeepSeek-R1-0528 for the 8 billion parameter distilled model and the full 671 billion parameter model....
🌐
Ollama
ollama.com › library › deepseek-r1:671b › blobs › 9801e7fce27d
deepseek-r1:671b/model
deepseek-r1:671b · 64.3M Downloads · Updated 3 months ago · tools thinking 1.5b 7b 8b 14b 32b 70b 671b · deepseek-r1:671b ... / model · 9801e7fce27d · 404GB · Metadata · general.architecture: deepseek2 · general.file_type: Q4_K_M
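The same metadata can be read locally once a tag is pulled; a quick way to confirm the quantization and architecture of what you have:

    # print architecture, parameter count, quantization, and context length for a local tag
    ollama show deepseek-r1:671b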
🌐
Markaicode
markaicode.com › home › ollama › how to install deepseek-r1 671b with ollama v0.9.2: complete step-by-step guide 2025
How to Install DeepSeek-R1 671B with Ollama v0.9.2: Complete Step-by-Step Guide 2025 | Markaicode
June 23, 2025 - Your laptop just became smarter than most data centers. DeepSeek-R1 671B delivers reasoning performance approaching O3 and Gemini 2.5 Pro, and you can run it locally with Ollama v0.9.2.
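Whichever guide you follow, confirm the runtime version first, since 671b behavior and defaults have shifted across Ollama releases:

    # check the installed version before committing to a 404 GB pull
    ollama --version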
🌐
Reddit
reddit.com › r/ollama › how to deploy deepseek-r1:671b locally using ollama?
r/ollama on Reddit: How to deploy deepseek-r1:671b locally using Ollama?

I have 8 A100s, each with 40GB of video memory, and 1TB of RAM. How can I deploy deepseek-r1:671b locally? I cannot load the model using video memory alone. Is there any parameter Ollama can configure so the model loads using my 1TB of RAM? Thanks
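8 x 40GB is 320GB of VRAM, short of the ~404GB Q4_K_M footprint, but Ollama spills layers to system RAM automatically when VRAM runs out, and the num_gpu parameter pins how many layers go to the GPUs. A sketch with an illustrative layer count:

    # offload a fixed number of layers to the GPUs; the rest stays in the 1TB of RAM
    printf 'FROM deepseek-r1:671b\nPARAMETER num_gpu 48\n' > Modelfile
    ollama create deepseek-r1-671b-offload -f Modelfile
    ollama run deepseek-r1-671b-offload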

🌐
Reddit
reddit.com › r/ollama › deepseek r1 671b on my local pc
r/ollama on Reddit: Deepseek r1 671b on my local PC

Hello everyone,

Two days ago, I turned night into day, and in the end, I managed to get R1 running on my local PC. Yesterday, I uploaded a video on YouTube showing how I did it: https://www.youtube.com/watch?v=O3Lk3xSkAdk

I don't post here often, so I'm not sure if sharing the link is okay—I hope it is.

The video is in German, but with subtitles, everyone should be able to understand it.
Be careful if you want to try this yourself! ;)

Update:

For those who don't feel like watching the video: The "trick" was using Windows' pagefile. I set up three of them on three different SSDs, which gave me around 750GB of virtual memory in total.

Loading the model and answering a question took my PC about 90 minutes.
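For anyone reproducing the pagefile trick, a PowerShell sketch (run elevated; the drive letter and sizes are examples, sizes are in MB, and a reboot is required):

    # take over pagefile management from Windows
    Get-CimInstance Win32_ComputerSystem | Set-CimInstance -Property @{AutomaticManagedPagefile=$false}
    # add a fixed ~250 GB pagefile on one SSD; repeat per drive
    New-CimInstance -ClassName Win32_PageFileSetting -Property @{Name='D:\pagefile.sys'; InitialSize=256000; MaximumSize=256000}

Expect it to be extremely slow, as the 90-minute load-and-answer time above suggests.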

🌐
Ollama
ollama.com › huihui_ai › deepseek-r1:671b
huihui_ai/deepseek-r1:671b
deepseek-r1 · 671b · 50 Pulls · 3 Tags · Updated 4 weeks ago · 93f8490a2eb1 · 244GB · model · architecture: deepseek2 · parameters: 671B · quantization: Q2_K · params: { "stop": [ "<|begin▁of▁sentence|>", "<|end▁of▁sentence|>" ] } · 148B