🌐
Ollama
ollama.com › library › deepseek-r1
deepseek-r1
DeepSeek-R1 has received a minor version upgrade to DeepSeek-R1-0528 for the 8 billion parameter distilled model and the full 671 billion parameter model. In this update, DeepSeek R1 has significantly improved its reasoning and inference capabilities. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic.
🌐
Ollama
ollama.com › search
deepseek · Ollama Search
November 19, 2025 - DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro · 74.7M Pulls 35 Tags Updated 5 months ago
Discussions

Official DeepSeek R1 Now on Ollama
From the article https://www.science.org/content/article/chinese-firm-s-faste · I understand and relate to having to make changes to manage political realities, at the same time I'm not sure how comfortable I am using an LLM lying to me about something like this.
🌐 news.ycombinator.com
January 30, 2025
Deepseek R1 (Ollama) Hardware benchmark for LocalLLM : LocalLLaMA
Deepseek R1 was released and looks like one of the best models for local LLM. I tested it on some GPUs to see how many tps it can achieve. Tests...
🌐 r/LocalLLaMA
Good UI for DeepSeek R1 : ollama
🌐 r/ollama
Been messing around with DeepSeek R1 + Ollama, and honestly, it's kinda wild how much you can do locally with free open-source tools. No cloud, no API keys, just your machine and some cool AI magic.
Anyone know of a good text-to-speech model fast enough for conversation? I have Kokoro 82M, which is fast but flat. No emotion.
🌐 r/ollama
December 12, 2024
🌐
Reddit
reddit.com › r/ollama › got deepseek r1 running locally - full setup guide and my personal review (free openai o1 alternative that runs locally??)
r/ollama on Reddit: Got DeepSeek R1 running locally - Full setup guide and my personal review (Free OpenAI o1 alternative that runs locally??)
January 21, 2025

Edit: I double-checked the model card on Ollama (https://ollama.com/library/deepseek-r1), and it does mention DeepSeek R1 Distill Qwen 7B in the metadata. So this is actually a distilled model. But honestly, that still impresses me!

Just discovered DeepSeek R1 and I'm pretty hyped about it. For those who don't know, it's a new open-source AI model that matches OpenAI o1 and Claude 3.5 Sonnet in math, coding, and reasoning tasks.

You can check out Reddit to see what others are saying about DeepSeek R1 vs OpenAI o1 and Claude 3.5 Sonnet. For me it's really good - good enough to be compared with those top models.

And the best part? You can run it locally on your machine, with total privacy and 100% FREE!!

I've got it running locally and have been playing with it for a while. Here's my setup - super easy to follow:

(Just a note: while I'm using a Mac, this guide works exactly the same for Windows and Linux users! 👌)

1) Install Ollama

Quick intro to Ollama: It's a tool for running AI models locally on your machine. Grab it here: https://ollama.com/download
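
Once it's installed, a quick sanity check from your terminal (same command on Mac, Windows, and Linux) confirms it's ready to go:

ollama --version

If that prints a version number, you're all set for the next step.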

2) Next, you'll need to pull and run the DeepSeek R1 model locally.

Ollama offers different model sizes - basically, bigger models = smarter AI, but they need a beefier GPU. Here's the lineup:

1.5B version (smallest):
ollama run deepseek-r1:1.5b

8B version:
ollama run deepseek-r1:8b

14B version:
ollama run deepseek-r1:14b

32B version:
ollama run deepseek-r1:32b

70B version (biggest/smartest):
ollama run deepseek-r1:70b

Maybe start with a smaller model first to test the waters. Just open your terminal and run:

ollama run deepseek-r1:8b

Once it's pulled, the model will run locally on your machine. Simple as that!

Note: The bigger versions (like 32B and 70B) need some serious GPU power. Start small and work your way up based on your hardware!
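
By the way, if you lose track of which versions you've pulled, this lists every local model along with its size on disk:

ollama list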

3) Set up Chatbox - a powerful client for AI models

Quick intro to Chatbox: a free, clean, and powerful desktop interface that works with most models. I've been building it as a side project for 2 years. It's privacy-focused (all data stays local) and super easy to set up - no Docker or complicated steps. Download here: https://chatboxai.app

In Chatbox, go to settings and switch the model provider to Ollama. Since you're running models locally, you can ignore the built-in cloud AI options - no license key or payment is needed!

Then set up the Ollama API host - the default setting is http://127.0.0.1:11434, which should work right out of the box. That's it! Just pick the model and hit save. Now you're all set and ready to chat with your locally running DeepSeek R1! 🚀
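
(If Chatbox can't connect, it's worth sanity-checking the Ollama API from the terminal first. A minimal sketch, assuming you pulled the 8B model in step 2:

curl http://127.0.0.1:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Hello! What can you do?",
  "stream": false
}'

If you get a JSON response back, the server is running fine and Chatbox just needs the right host and model name.)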

Hope this helps! Let me know if you run into any issues.

---------------------

Here are a few tests I ran on my local DeepSeek R1 setup (loving Chatbox's artifact preview feature btw!) 👇

Explain TCP:

Honestly, this looks pretty good, especially considering it's just an 8B model!

Make a Pac-Man game:

It looks great, but I couldn't actually play it. I feel like there might be a few small bugs that could be fixed with some tweaking. (Just to clarify, this wasn't done on the local model - my Mac doesn't have enough space for the largest DeepSeek R1 70B model, so I used the cloud model instead.)

---------------------

Honestly, I’ve seen a lot of overhyped posts about models here lately, so I was a bit skeptical going into this. But after testing DeepSeek R1 myself, I think it’s actually really solid. It’s not some magic replacement for OpenAI or Claude, but it’s surprisingly capable for something that runs locally. The fact that it’s free and works offline is a huge plus.

What do you guys think? Curious to hear your honest thoughts.

🌐
DataCamp
datacamp.com › tutorial › deepseek-r1-ollama
How to Set Up and Run DeepSeek-R1 Locally With Ollama | DataCamp
January 30, 2025 - Learn how to install, set up, and run DeepSeek-R1 locally with Ollama and build a simple RAG application.
🌐
DEV Community
dev.to › ajmal_hasan › setting-up-ollama-running-deepseek-r1-locally-for-a-powerful-rag-system-4pd4
🚀 Setting Up Ollama & Running DeepSeek R1 Locally for a Powerful RAG System - DEV Community
January 28, 2025 - DeepSeek R1 is an open-source AI model optimized for reasoning, problem-solving, and factual retrieval. 🔹 Why use it? Strong logical capabilities, great for RAG applications, and can be run locally with Ollama.
🌐
GitHub
github.com › ollama › ollama
GitHub - ollama/ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
November 19, 2025 - Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. - ollama/ollama
Starred by 158K users
Forked by 14K users
Languages   Go 52.7% | C 37.2% | TypeScript 5.8% | C++ 2.0% | Objective-C 0.9% | Shell 0.7%
🌐
Codecademy
codecademy.com › article › how-to-run-deepseek-r-1-locally
How to Run Deepseek R1 Locally | Codecademy
To begin using Deepseek R1, you first need to download the model. Run the following command in the terminal to download Deepseek R1: ... After downloading the model, you’re ready to start using it. Now that you have the model downloaded, you need to start the Ollama server to run Deepseek R1.
🌐
Daehnhardt
daehnhardt.com › blog › 2025 › 01 › 28 › deepseek-with-ollama
DeepSeek R1 With Ollama
January 28, 2025 - Ollama: A command-line tool to run Llama-based models. DeepSeek R1: A new language model from China that’s gaining attention quickly.
🌐
Ollama
ollama.com › MFDoom › deepseek-r1-tool-calling
MFDoom/deepseek-r1-tool-calling
DeepSeek's first-generation of reasoning models with comparable performance to OpenAI-o1, including six dense models distilled from DeepSeek-R1 based on Llama and Qwen. With Tool Calling support.
🌐
Adex
adex.ltd › deploy-deepseek-r1-llm-locally-with-ollama-and-open-webui
How to Deploy DeepSeek-R1 Locally with Ollama and Open WebUI
February 14, 2025 - Learn how to deploy DeepSeek-R1 locally using Ollama and Open WebUI. Follow this step-by-step guide for installation, model setup, and performance tuning.
🌐
Apidog
apidog.com › blog › rag-deepseek-r1-ollama
Build a RAG System with DeepSeek R1 & Ollama
October 16, 2025 - Here, you instantiate a RetrievalQA chain using Deepseek R1 1.5B as the local LLM.
llm = Ollama(model="deepseek-r1:1.5b")  # Our 1.5B parameter model
# Craft the prompt template
prompt = """
1. Use ONLY the context below.
2. If unsure, say "I don’t know".
3. Keep answers under 4 sentences.
🌐
NodeShift Cloud
nodeshift.cloud › blog › a-step-by-step-guide-to-install-deepseek-r1-locally-with-ollama-vllm-or-transformers-2
A Step-by-Step Guide to Install DeepSeek-R1 Locally with Ollama, vLLM or Transformers
January 27, 2025 - DeepSeek-R1 is making waves in the AI community as a powerful open-source reasoning model, offering advanced capabilities that challenge industry leaders like OpenAI’s o1 without the hefty price tag.
🌐
Unsloth
docs.unsloth.ai › basics › deepseek-v3.1
DeepSeek-V3.1 | Unsloth Documentation
August 23, 2025 - We include all our fixes and suggested parameters (temperature etc) in params in our Hugging Face upload! (NEW) To run the full R1-0528 model in Ollama, you can use our TQ1_0 (170GB quant): ... OLLAMA_MODELS=unsloth ollama serve & OLLAMA_MODELS=unsloth ollama run hf.co/unsloth/DeepSeek-V3.1-GGUF:TQ1_0
🌐
Medium
medium.com › google-cloud › deepseek-r1-unleashed-gke-ollama-and-vllm-deep-dive-1b707eeca26f
DeepSeek R1: Ollama vs. vLLM on GKE | Google Cloud - Community
February 27, 2025 - When running this, make sure you’ve got enough available storage space; the DeepSeek R1 Q4_K_M quantization weighs over 400GB. If all goes well (and it should), you’ll see something like this:
kubectl -n ollama logs deploy/ollama -f --tail 10
llama_new_context_with_model: KV self size = 39040.00 MiB, K (f16): 23424.00 MiB, V (f16): 15616.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.08 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 2322.01 MiB
llama_new_context_with_model: CUDA1 compu…
🌐
GitConnected
levelup.gitconnected.com › run-deepseek-r1-locally-for-free-using-ollama-a5761106ae0e
Run DeepSeek-R1 Locally for Free Using Ollama! | by Pavan Belagatti | Level Up Coding
January 31, 2025 - Run DeepSeek-R1 Locally for Free Using Ollama! DeepSeek-R1 has been creating quite a buzz in the AI community. Developed by a Chinese AI company DeepSeek, this model is being compared to OpenAI’s …
🌐
Medium
medium.com › @pratikabnave97 › building-an-end-to-end-gen-ai-app-with-deepseek-r1-langchain-and-ollama-6c6ff2e5c627
Building an End-to-End Gen AI App with DeepSeek-R1, Langchain, and Ollama | by Pratik Abnave | Medium
January 31, 2025 - Capture user queries and process them using DeepSeek R1. Here’s a simplified chat engine implementation:
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser
🌐
Collabnix
collabnix.com › running-deepseek-r1-with-ollama-a-complete-guide
Running DeepSeek-R1 with Ollama: A Complete Guide - Collabnix
DeepSeek-R1 is a powerful open-source language model that can be run locally using Ollama. This guide will walk you through setting up and using DeepSeek-R1, exploring its capabilities, and optimizing its performance. Model Overview DeepSeek-R1 is designed for robust reasoning and coding ...
🌐
Medium
sivachandanc.medium.com › running-deepseek-r1-locally-using-ollama-8f9dedc3bea5
Running Deepseek R1 locally using Ollama | by Siva Chandan C | Medium
February 4, 2025 - Download it from here: https://ollama.com/download · Once installed, go to your terminal and run this command to check the installed version. ... You can pull Deepseek R1 using Ollama; I have chosen the R1 model with 1.5 billion parameters. (The smallest one!!)