🌐
GitHub
github.com › ollama › ollama
GitHub - ollama/ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
QwQ · 32B · 20GB · ollama run qwq
DeepSeek-R1 · 7B · 4.7GB · ollama run deepseek-r1
DeepSeek-R1 · 671B · 404GB · ollama run deepseek-r1:671b
Llama 4 · 109B · 67GB · ollama run llama4:scout
Llama 4 · 400B · 245GB · ollama run llama4:maverick
Starred by 158K users
Forked by 14K users
Languages   Go 52.7% | C 37.2% | TypeScript 5.8% | C++ 2.0% | Objective-C 0.9% | Shell 0.7%
🌐
Ollama
ollama.com › library › deepseek-r1
deepseek-r1
DeepSeek-R1-Distill-Qwen-32B · ollama run deepseek-r1:32b · DeepSeek-R1-Distill-Llama-70B · ollama run deepseek-r1:70b · The model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs.
🌐
GitHub
github.com › ollama › ollama › issues › 8655
GPU process at 1-3% when running Deepseek R1 32b · Issue #8655 · ollama/ollama
January 29, 2025 - What is the issue? I'm trying to run deepseek-r1:32b locally. It runs, but the GPU is barely used. When it processes a simple task like multiplying numbers, Task Manager shows the GPU at only 1-3% while the CPU is at 70%. I have ...
Published   Jan 29, 2025
🌐
Ollama
ollama.com › library › deepseek-r1:32b
deepseek-r1:32b
DeepSeek-R1 has received a minor version upgrade to DeepSeek-R1-0528 for the 8 billion parameter distilled model and the full 671 billion parameter model. In this update, DeepSeek R1 has significantly improved its reasoning and inference ...
🌐
GitHub
github.com › ollama › ollama › issues › 8725
“Error: invalid file magic” when running deepseek-r1:32b · Issue #8725 · ollama/ollama
🌐
GitHub
github.com › browser-use › web-ui › issues › 245
ollama with deepseek-r1:32b keeps failing · Issue #245 · browser-use/web-ui
Trying ollama with deepseek-r1:32b but it keeps failing! I disabled the Use Vision setting. Let me know if I need to provide more info to debug the problem! ` git log commit 037f8e5 (HEAD -> mai...
🌐
GitHub
github.com › phidatahq › phidata › issues › 1933
Tool doesn't support deepseek-r1:32b model · Issue #1933 · ...
January 30, 2025 - Hello everyone, as you know, deepseek-r1 is being widely discussed lately. Ollama supports this model with several variants like deepseek-r1:32b, deepseek-r1:7b, etc. When I tried to use the "d...
Published   Jan 30, 2025
🌐
Ollama
ollama.com › library › deepseek-r1 › tags
Tags · deepseek-r1
deepseek-r1 · 74.9M Downloads · Updated 5 months ago · tools thinking 1.5b 7b 8b 14b 32b 70b 671b · 35 models
Name · Size · Context · Input
deepseek-r1:latest · 6995872bfe4c · 5.2GB · 128K context window · Text input · 6 months ago
🌐
GitHub
github.com › open-webui › open-webui › discussions › 9643
The local deepseek r1 32b deployed by Ollama cannot respond properly. · open-webui/open-webui · Discussion #9643
Bug Report I deployed DeepSeek R1 32B locally using Ollama, from https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-GGUF. When I selected this model in OpenWebUI, the responses were comp...
Author   open-webui
🌐
GitHub
github.com › deepseek-ai › DeepSeek-R1
GitHub - deepseek-ai/DeepSeek-R1
To support the research community, ... and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models....
Starred by 91.6K users
Forked by 11.8K users
🌐
GitHub
github.com › topics › deepseek-r1
deepseek-r1 · GitHub Topics · GitHub
cuda inference pytorch transformer openai moe llama vlm kimi blackwell llm llm-serving llava deepseek llama3 deepseek-v3 deepseek-r1 qwen3 gpt-oss deepseek-v3-2 ... 🔥 MaxKB is an open-source platform for building enterprise-grade agents (a powerful, easy-to-use open-source enterprise-grade agent platform). · agent chatbot knowledgebase rag llm langchain pgvector ollama maxkb llama3 agentic-ai mcp-server deepseek-r1 qwen3
🌐
Ollama
ollama.com › library › deepseek-r1:32b › blobs › 6150cb382311
deepseek-r1:32b/model
deepseek-r1:32b · 71.3M Downloads · Updated 4 months ago · tools thinking 1.5b 7b 8b 14b 32b 70b 671b · deepseek-r1:32b ... / model · 6150cb382311 · 20GB · Metadata · general.architecture: qwen2 · general.file_type: Q4_K_M · qwen2.attention.head_count ·
🌐
Reddit
reddit.com › r/ollama › got deepseek r1 running locally - full setup guide and my personal review (free openai o1 alternative that runs locally??)
r/ollama on Reddit: Got DeepSeek R1 running locally - Full setup guide and my personal review (Free OpenAI o1 alternative that runs locally??)
January 21, 2025 -

Edit: I double-checked the model card on Ollama (https://ollama.com/library/deepseek-r1), and it does mention DeepSeek R1 Distill Qwen 7B in the metadata. So this is actually a distilled model. But honestly, that still impresses me!

Just discovered DeepSeek R1 and I'm pretty hyped about it. For those who don't know, it's a new open-source AI model that matches OpenAI o1 and Claude 3.5 Sonnet in math, coding, and reasoning tasks.

You can check out Reddit to see what others are saying about DeepSeek R1 vs OpenAI o1 and Claude 3.5 Sonnet. For me it's really good - good enough to be compared with those top models.

And the best part? You can run it locally on your machine, with total privacy and 100% FREE!!

I've got it running locally and have been playing with it for a while. Here's my setup - super easy to follow:

(Just a note: While I'm using a Mac, this guide works exactly the same for Windows and Linux users! 👌)

1) Install Ollama

Quick intro to Ollama: It's a tool for running AI models locally on your machine. Grab it here: https://ollama.com/download
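If you're on Linux, the download page also offers a one-line install script (macOS and Windows get a regular installer). Either way, you can sanity-check the install afterwards - both commands below are Ollama's standard CLI:

Linux install script (from the official download page):
curl -fsSL https://ollama.com/install.sh | sh

Check it worked (any platform):
ollama --version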

2) Next, you'll need to pull and run the DeepSeek R1 model locally.

Ollama offers different model sizes - basically, bigger models = smarter AI, but they need a beefier GPU. Here's the lineup:

1.5B version (smallest):
ollama run deepseek-r1:1.5b

8B version:
ollama run deepseek-r1:8b

14B version:
ollama run deepseek-r1:14b

32B version:
ollama run deepseek-r1:32b

70B version (biggest/smartest):
ollama run deepseek-r1:70b

Maybe start with a smaller model first to test the waters. Just open your terminal and run:

ollama run deepseek-r1:8b

Once it's pulled, the model will run locally on your machine. Simple as that!

Note: The bigger versions (like 32B and 70B) need some serious GPU power. Start small and work your way up based on your hardware!
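By the way, if you'd rather download a model first and chat later, Ollama's standard CLI covers that too (the 8B tag below is just an example - swap in whatever size you want):

Pull the weights without starting a chat:
ollama pull deepseek-r1:8b

See what's installed locally (and how big each model is):
ollama list

Fire a one-off prompt without entering the interactive chat:
ollama run deepseek-r1:8b "Explain the TCP handshake in one paragraph"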

3) Set up Chatbox - a powerful client for AI models

Quick intro to Chatbox: a free, clean, and powerful desktop interface that works with most models. I've been building it as a side project for two years. It's privacy-focused (all data stays local) and super easy to set up - no Docker or complicated steps. Download here: https://chatboxai.app

In Chatbox, go to settings and switch the model provider to Ollama. Since you're running models locally, you can ignore the built-in cloud AI options - no license key or payment is needed!

Then set up the Ollama API host - the default setting is http://127.0.0.1:11434, which should work right out of the box. That's it! Just pick the model and hit save. Now you're all set and ready to chat with your locally running DeepSeek R1! 🚀
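(If Chatbox can't connect, you can sanity-check that Ollama's API is actually up with a couple of curl calls against that same local endpoint - again, the model tag here is just an example, so use whatever you pulled:)

List the models Ollama is serving:
curl http://127.0.0.1:11434/api/tags

Send a single non-streaming prompt straight to the API:
curl http://127.0.0.1:11434/api/generate -d '{"model": "deepseek-r1:8b", "prompt": "Say hello in five words.", "stream": false}'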

Hope this helps! Let me know if you run into any issues.

---------------------

Here are a few tests I ran on my local DeepSeek R1 setup (loving Chatbox's artifact preview feature btw!) 👇

Explain TCP:

Honestly, this looks pretty good, especially considering it's just an 8B model!

Make a Pac-Man game:

It looks great, but I couldn't actually play it. I feel like there might be a few small bugs that could be fixed with some tweaking. (Just to clarify, this wasn't done on the local model; my Mac doesn't have enough space for the largest DeepSeek R1 70B model, so I used the cloud model instead.)

---------------------

Honestly, I’ve seen a lot of overhyped posts about models here lately, so I was a bit skeptical going into this. But after testing DeepSeek R1 myself, I think it’s actually really solid. It’s not some magic replacement for OpenAI or Claude, but it’s surprisingly capable for something that runs locally. The fact that it’s free and works offline is a huge plus.

What do you guys think? Curious to hear your honest thoughts.

🌐
Ollama
ollama.com › library › deepseek-r1:1.5b
deepseek-r1:1.5b
deepseek-r1:1.5b · 75M Downloads · Updated 5 months ago · tools thinking 1.5b 7b 8b 14b 32b 70b 671b · Updated 7 months ago · e0979632db5a · 1.1GB · model · arch: qwen2 · parameters: 1.78B · quantization: Q4_K_M · license ·
🌐
FreeBSD
forums.freebsd.org › miscellaneous › off-topic
Ollama working with deepseek-r1, deepseek-coder, mistral, Nvidia gpu and emacs with gptel on Freebsd 14.2 | The FreeBSD Forums
January 21, 2025 - Here is a video tutorial on setting up Ollama on FreeBSD. Ollama runs large language models on your computer, including deepseek-r1, deepseek-coder, mistral, and zephyr. In this video I install Ollama on FreeBSD 14.2 (quarterly release) on a 2019 Dell XPS 15 with an NVIDIA GeForce GTX 1650 GPU and 16 GB of RAM, using the 550.127.05 NVIDIA driver.
🌐
Ollama
ollama.com › search
deepseek · Ollama Search
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro · 74.9M Pulls 35 Tags Updated 5 months ago
🌐
Reddit
reddit.com › r/ollama › deepseek-r1 is now in ollama's models library
r/ollama on Reddit: deepseek-r1 is now in Ollama's Models library
December 14, 2024 - For the smaller ones, maybe 4-8 GB VRAM, maybe 12 GB or more for the 14B one (depending on the context length). But for the larger ones you need 16-24 GB VRAM for the 32B one and 48 GB for the 70B one. ... Running DeepSeek-R1 14B Q4_K_M on a 6th-gen i7 (6700), 16 GB RAM and an RTX 3060 with 12 GB VRAM.
🌐
GitHub
github.com › ollama › ollama › issues › 8669
deepseek-r1:32b doesn't support tools? The qwen2.5 base model should support them. · Issue #8669 · ollama/ollama
January 29, 2025 - What is the issue? When I use AutoGen, deepseek-r1:32b raises the error: model does not support tools. OS: WSL2 · GPU: Nvidia · CPU: Intel · Ollama version: 0.5.7
Published   Jan 29, 2025
🌐
Reddit
reddit.com › r/ollama › been messing around with deepseek r1 + ollama, and honestly, it's kinda wild how much you can do locally with free open-source tools. no cloud, no api keys, just your machine and some cool ai magic.
r/ollama on Reddit: Been messing around with DeepSeek R1 + Ollama, and honestly, it's kinda wild how much you can do locally with free open-source tools. No cloud, no API keys, just your machine and some cool AI magic.
December 10, 2024 -
  1. Page-Assist Chrome Extension - https://github.com/n4ze3m/page-assist (any model with any params)

  2. Open Web-UI LLM Wrapper - https://github.com/open-webui/open-webui (any model with any params)

  3. Browser use – https://github.com/browser-use/browser-use (deepseek r1:14b or more params)

  4. Roo-Code (VS Code Extension) – https://github.com/RooVetGit/Roo-Code (deepseek coder)

  5. n8n – https://github.com/n8n-io/n8n (any model with any params)

  6. A simple RAG app: https://github.com/hasan-py/chat-with-pdf-RAG (deepseek r1:8b)

  7. AI assistant Chrome extension: https://github.com/hasan-py/Ai-Assistant-Chrome-Extension (GPT, Gemini, Grok API; Ollama added recently)

Full installation video: https://youtu.be/hjg9kJs8al8?si=rillpsKpjONYMDYW

Anyone exploring something else? Please share- it would be highly appreciated!