🌐
Ollama
ollama.com › library › deepseek-r1:671b
deepseek-r1:671b
DeepSeek-R1 has received a minor version upgrade to DeepSeek-R1-0528 for the 8 billion parameter distilled model and the full 671 billion parameter model. In this update, DeepSeek R1 has significantly improved its reasoning and inference ...
🌐
DataCamp
datacamp.com › tutorial › deepseek-r1-ollama
How to Set Up and Run DeepSeek-R1 Locally With Ollama | DataCamp
January 30, 2025 - Ollama offers a range of DeepSeek R1 models, spanning from 1.5B parameters to the full 671B parameter model. The 671B model is the original DeepSeek-R1, while the smaller models are distilled versions based on Qwen and Llama architectures.
🌐
Ollama
ollama.com › SIGJNF › deepseek-r1-671b-1.58bit
SIGJNF/deepseek-r1-671b-1.58bit
Unsloth's DeepSeek-R1 1.58-bit, I just merged the thing and uploaded it here. This is the full 671b model, albeit dynamically quantized to 1.58bits.
🌐
Ollama
ollama.com › Huzderu › deepseek-r1-671b-1.73bit
Huzderu/deepseek-r1-671b-1.73bit
Unsloth’s DeepSeek-R1 671B 1.73-bit dynamic quantization, merged GGUF files for Ollama.
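For context, these merged uploads exist because Unsloth distributes its dynamic quantizations as split GGUF shards, which Ollama cannot load directly. A minimal sketch of the merge-and-import workflow, assuming llama.cpp's gguf-split tool is on PATH; the file names are hypothetical placeholders:

```
# Merge the split GGUF shards into one file (pass the first shard; names are placeholders).
llama-gguf-split --merge DeepSeek-R1-1.58bit-00001-of-00003.gguf deepseek-r1-671b-1.58bit.gguf

# Point a one-line Modelfile at the merged GGUF and register it with Ollama.
printf 'FROM ./deepseek-r1-671b-1.58bit.gguf\n' > Modelfile
ollama create deepseek-r1-671b-1.58bit -f Modelfile
```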
🌐
Reddit
reddit.com › r/ollama › deepseek-r1 671b - windows 11
r/ollama on Reddit: deepseek-r1 671B - windows 11
October 7, 2024 -

Hi guys,
I got 2 questions:

1. How can I change the folder where Ollama downloads / stores models? Currently it's the default path on C:, I guess; I want to change it to D:.

ollama run deepseek-r1:671b ->

2. Will my PC be able to run it ?

Processor: Intel(R) Core(TM) i7-14700KF 3.40 GHz

Installed RAM: 48.0 GB (47.8 GB usable)

RTX 4080
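Both questions have stock answers. Ollama reads the OLLAMA_MODELS environment variable to decide where models are stored, so on Windows something like the command below, followed by restarting the Ollama app, should do it (the D:\ path is an example):

```
setx OLLAMA_MODELS "D:\ollama\models"
```

As for question 2: the 671b tag is a 404GB Q4_K_M build that must fit across RAM and VRAM, so 48GB of system RAM plus a 16GB RTX 4080 falls far short of the full model; the distilled tags (for example, ollama run deepseek-r1:32b) are the realistic option on this hardware.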

🌐
Digital Spaceport
digitalspaceport.com › how-to-run-deepseek-r1-671b-fully-locally-on-2000-epyc-rig
How To Run Deepseek R1 671b Fully Locally On a $2000 EPYC Server – Digital Spaceport
Holy cow you got here! Nice job, I am impressed! Click new chat in the upper left of the window. Deepseek-r1:671b should be there already. Give it a hello. Nice job! In conclusion, we installed a fully functional bare metal Ollama + OpenWEBUI setup. I am SURE there are a lot of other great runners out there like llama.cpp, exo, and vLLM but those will be separate guides when I get a decent handle on working them.
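The guide's end state, a bare-metal Ollama plus Open WebUI install, condenses to a few documented commands; a sketch, assuming a Linux host with Docker already installed:

```
# Ollama's documented Linux install one-liner.
curl -fsSL https://ollama.com/install.sh | sh

# Pull the full model (~404GB; expect a long download).
ollama pull deepseek-r1:671b

# Open WebUI's documented Docker quick start, pointed at the host's Ollama.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```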
🌐
Ollama
ollama.com › search
deepseek · Ollama Search
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro · 74.9M Pulls 35 Tags Updated 5 months ago
🌐
Markaicode
markaicode.com › home › ollama › how to install deepseek-r1 671b with ollama v0.9.2: complete step-by-step guide 2025
How to Install DeepSeek-R1 671B with Ollama v0.9.2: Complete Step-by-Step Guide 2025 | Markaicode
June 23, 2025 - Your laptop just became smarter than most data centers. DeepSeek-R1 671B delivers reasoning performance approaching O3 and Gemini 2.5 Pro, and you can run it locally with Ollama v0.9.2.
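Since the guide targets a specific release, it is worth confirming the local install before committing to the 404GB pull; both commands are standard Ollama CLI:

```
ollama --version               # the guide assumes v0.9.2 or newer
ollama pull deepseek-r1:671b   # fetch the full model ahead of first run
```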
🌐
Ollama
ollama.com › search
deepseek-r1 · Ollama Search
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro · 74.4M Pulls 35 Tags Updated 5 months ago
🌐
Ollama
ollama.com › emsi › deepseek-r1-671b-1.58bit
emsi/deepseek-r1-671b-1.58bit
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
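Note that the 37B activated parameters cut per-token compute, not memory: every expert still has to be resident, which is why even the Q4_K_M build is ~404GB. A back-of-envelope check, assuming Q4_K_M averages roughly 4.8 bits per weight:

```
# total params x bits/weight / 8 bits/byte, in GB (integer shell arithmetic)
echo $(( 671 * 48 / 80 ))   # ~402 GB, in line with the 404GB Ollama lists
```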
🌐
Reddit
reddit.com › r/singularity › running deepseek-r1 671b locally with ollama.
r/singularity on Reddit: Running DeepSeek-R1 671b locally with Ollama.
September 4, 2024 -

Seems like it works but is far from straightforward and you really need a beast of a local setup to run this well.

🌐
Ollama
ollama.com › library › deepseek-r1
deepseek-r1
deepseek-r1:671b · 404GB · 160K ... 160K · Text · DeepSeek-R1 has received a minor version upgrade to DeepSeek-R1-0528 for the 8 billion parameter distilled model and the full 671 billion parameter model....
🌐
GitHub
github.com › ollama › ollama › issues › 8954
Deepseek-R1 model update · Issue #8954 · ollama/ollama
Yesterday a new model was added to the Ollama site library's DeepSeek R1 family: 671b-q4_K_M. Could you please tell the difference from 671b, updated 2 weeks ago? In the 671b-q4_K_M model, the run string ...
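The overlap the issue asks about is visible from the CLI: ollama show prints each tag's metadata, and per the model blob entry further down this page, the default 671b tag is itself Q4_K_M:

```
ollama show deepseek-r1:671b         # architecture, parameter count, quantization
ollama show deepseek-r1:671b-q4_K_M  # compare against the explicitly tagged build
```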
🌐
Ollama
ollama.com › library › deepseek-r1:671b › blobs › 6e4c38e1172f
deepseek-r1:671b/license
deepseek-r1:671b · 48.2M Downloads · Updated 1 week ago · thinking 1.5b 7b 8b 14b 32b 70b 671b · deepseek-r1:671b ... / license · 6e4c38e1172f · 1.1kB · MIT License · Copyright (c) 2023 DeepSeek · Permission is hereby granted, free of charge, to any person obtaining a copy ...
🌐
GitHub
github.com › ollama › ollama
GitHub - ollama/ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
| Model | Parameters | Size | Command |
| :-- | :-- | :-- | :-- |
| DeepSeek-R1 | … | … | ollama run deepseek-r1 |
| DeepSeek-R1 | 671B | 404GB | ollama run deepseek-r1:671b |
| Llama 4 | 109B | 67GB | ollama run llama4:scout |
| Llama 4 | 400B | 245GB | ollama run llama4:maverick |
| Llama 3.3 | 70B | 43GB | ollama run llama3.3 |
| Llama 3.2 | 3B | … | … |
🌐
GitHub
github.com › ollama › ollama › issues › 8667
deepseek-r1:671b Q4_K_M: error="model requires more system memory (446.3 GiB) than is available · Issue #8667 · ollama/ollama
January 29, 2025 - However, when I attempt to load the model using Ollama, I encounter an error indicating that the model requires more system memory (446.3 GiB) than is available (37.3 GiB). This is perplexing, given the MoE architecture’s supposed efficiency and the Q4 quantization. I came across a discussion where a user successfully ran the Q8_0 GGUF version of DeepSeek-R1 671B on a CPU-only system with 256GB of RAM.
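A minimal triage for this class of error, assuming a Linux host: confirm what memory Ollama can actually see, then fall back to a smaller tag if the full model cannot fit:

```
free -h                      # total vs. available system memory
ollama ps                    # what is loaded now, and how it is split CPU/GPU
ollama run deepseek-r1:70b   # distilled fallback if 671b exceeds available RAM
```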
🌐
Ollama
ollama.com › SIGJNF › deepseek-r1-671b-1.58bit › blobs › 369ca498f347
SIGJNF/deepseek-r1-671b-1.58bit/template
Unsloth's DeepSeek-R1 1.58-bit, I just merged the thing and uploaded it here. This is the full 671b model, albeit dynamically quantized to 1.58bits.
🌐
ORI
ori.co › blog › how-to-run-deepseek-r1
How to run DeepSeek R1 on a cloud GPU with Ollama | Ori
Distill the reasoning capability from DeepSeek-R1 to small dense models. To equip more efficient smaller models with R1-like reasoning capabilities, DeepSeek directly fine-tuned open-source models like Qwen and Llama using the 800k samples curated with DeepSeek-R1. Distillation transfers the knowledge and capabilities of a large teacher model (in this case DeepSeek R1 671B) to a smaller student model (such as the Llama 3.3 70B).
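For orientation, the teacher/student split maps directly onto the Ollama tags listed elsewhere on this page; the smaller tags are the distilled dense students, not truncated copies of the MoE teacher:

```
ollama run deepseek-r1:671b   # teacher: the full 671B MoE model
ollama run deepseek-r1:70b    # student: Llama-3.3-70B-based distill
ollama run deepseek-r1:7b     # student: Qwen-based 7B distill
```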
🌐
Ollama
ollama.com › library › deepseek-r1:671b › blobs › 9801e7fce27d
deepseek-r1:671b/model
deepseek-r1:671b · 64.3M Downloads · Updated 3 months ago · tools thinking 1.5b 7b 8b 14b 32b 70b 671b · deepseek-r1:671b ... / model · 9801e7fce27d · 404GB · Metadata · general.architecture · deepseek2 · general.file_type · Q4_K_M