🌐
Reddit
reddit.com › r/localllama › personal experience with deepseek r1: it is noticeably better than claude sonnet 3.5
r/LocalLLaMA on Reddit: Personal experience with Deepseek R1: it is noticeably better than claude sonnet 3.5
January 20, 2025 -

My use cases are mainly Python and R for biological data analysis, as well as a little frontend work to build some interfaces for my colleagues. Where DeepSeek V3 was failing and Claude Sonnet needed 4-5 prompts, R1 instantly creates whatever file I need with one prompt. I only had one case where it did not succeed with one prompt, but it then accidentally solved the bug when I asked it to add some logs for debugging lol. It is faster and just as reliable to ask it to write me specific Python code for a one-time operation than to wait for Excel to open my 300 MB CSV.

🌐
YouTube
youtube.com › watch
Deepseek R1 first impressions - Web + Local with Ollama - YouTube
Here are my first impressions of DeepSeek R1. We check out the web interface, as well as running some local models with Ollama. https://www.deepseek.com/https:...
Published   January 21, 2025
🌐
BytePlus
byteplus.com › en › topic › 376398
Deepseek R1 Ollama Review: AI Model Performance 2025
For developers, researchers, and tech enthusiasts, staying ahead of the curve means identifying tools that offer not just power, but also accessibility and efficiency. This Deepseek R1 Ollama review provides a detailed analysis of a combination that is rapidly gaining traction: the advanced reasoning capabilities of the Deepseek R1 model and the streamlined local deployment offered by Ollama.
🌐
DEV Community
dev.to › shahdeep › how-to-deepseek-r1-run-locally-full-setup-guide-and-review-1kn2
How to Run DeepSeek R1 Locally - Full Setup Guide and Review - DEV Community
January 31, 2025 - I recently discovered DeepSeek R1, and I have to say—I’m seriously impressed. It’s an open-source AI model that competes with OpenAI’s o1 and Claude 3.5 Sonnet in math, coding, and reasoning tasks. And the best part?
🌐
Ollama
ollama.com › library › deepseek-r1
deepseek-r1
In this update, DeepSeek R1 has significantly improved its reasoning and inference capabilities. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic.
🌐
Ollama
ollama.com › huihui_ai › deepseek-r1-abliterated
huihui_ai/deepseek-r1-abliterated
“Risk of Sensitive or Controversial Outputs”: This model’s safety filtering has been significantly reduced, so it may generate sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
🌐
YouTube
youtube.com › watch
DeepSeek R1 Local Ai Server LLM Testing on Ollama - YouTube
Testing out DeepSeek R1, which has been lauded as a major new advancement in local AI inference quality! Running DeepSeek R1 671b on a $2000 EPYC local AI server...
Published   January 21, 2025
🌐
Reddit
reddit.com › r/ollama › got deepseek r1 running locally - full setup guide and my personal review (free openai o1 alternative that runs locally??)
r/ollama on Reddit: Got DeepSeek R1 running locally - Full setup guide and my personal review (Free OpenAI o1 alternative that runs locally??)
January 21, 2025 -

Edit: I double-checked the model card on Ollama (https://ollama.com/library/deepseek-r1), and it does mention DeepSeek R1 Distill Qwen 7B in the metadata. So this is actually a distilled model. But honestly, that still impresses me!

Just discovered DeepSeek R1 and I'm pretty hyped about it. For those who don't know, it's a new open-source AI model that matches OpenAI o1 and Claude 3.5 Sonnet in math, coding, and reasoning tasks.

You can check out Reddit to see what others are saying about DeepSeek R1 vs OpenAI o1 and Claude 3.5 Sonnet. For me it's really good - good enough to be compared with those top models.

And the best part? You can run it locally on your machine, with total privacy and 100% FREE!!

I've got it running locally and have been playing with it for a while. Here's my setup - super easy to follow:

(Just a note: While I'm using a Mac, this guide works exactly the same for Windows and Linux users! 👌)

1) Install Ollama

Quick intro to Ollama: It's a tool for running AI models locally on your machine. Grab it here: https://ollama.com/download

2) Next, you'll need to pull and run the DeepSeek R1 model locally.

Ollama offers different model sizes - basically, bigger models = smarter AI, but need better GPU. Here's the lineup:

1.5B version (smallest):
ollama run deepseek-r1:1.5b

8B version:
ollama run deepseek-r1:8b

14B version:
ollama run deepseek-r1:14b

32B version:
ollama run deepseek-r1:32b

70B version (biggest/smartest):
ollama run deepseek-r1:70b

Maybe start with a smaller model first to test the waters. Just open your terminal and run:

ollama run deepseek-r1:8b

Once it's pulled, the model will run locally on your machine. Simple as that!

Note: The bigger versions (like 32B and 70B) need some serious GPU power. Start small and work your way up based on your hardware!
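
If you're not sure which tags you've actually pulled, you can ask the local Ollama server directly instead of guessing. Here's a minimal Python sketch, assuming you have the requests package installed and Ollama running on its default port (GET /api/tags is Ollama's endpoint for listing local models):

import requests

# Ask the local Ollama server which models have been pulled.
resp = requests.get("http://127.0.0.1:11434/api/tags", timeout=10)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"])  # e.g. "deepseek-r1:8b"

If a deepseek-r1 tag shows up in that list, you're good to move on to step 3.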

3) Set up Chatbox - a powerful client for AI models

Quick intro to Chatbox: a free, clean, and powerful desktop interface that works with most models. I've been building it as a side project for 2 years. It’s privacy-focused (all data stays local) and super easy to set up, with no Docker or complicated steps. Download here: https://chatboxai.app

In Chatbox, go to settings and switch the model provider to Ollama. Since you're running models locally, you can ignore the built-in cloud AI options - no license key or payment is needed!

Then set up the Ollama API host - the default setting is http://127.0.0.1:11434, which should work right out of the box. That's it! Just pick the model and hit save. Now you're all set and ready to chat with your locally running Deepseek R1! 🚀
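
If Chatbox ever complains that it can't reach the model, it's worth confirming that the API host is actually answering before digging into settings. Here's a quick Python sketch against the same default host (assuming the requests package is installed and you've pulled deepseek-r1:8b; swap in whatever tag you're running):

import requests

# Talk to the same endpoint and port that Chatbox uses by default.
payload = {
    "model": "deepseek-r1:8b",
    "messages": [{"role": "user", "content": "Say hi in one short sentence."}],
    "stream": False,  # ask for a single JSON response instead of a stream
}
resp = requests.post("http://127.0.0.1:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])

If that prints a reply, the Ollama side is fine and any remaining issue is in the Chatbox settings.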

Hope this helps! Let me know if you run into any issues.

---------------------

Here are a few tests I ran on my local DeepSeek R1 setup (loving Chatbox's artifact preview feature btw!) 👇

Explain TCP:

Honestly, this looks pretty good, especially considering it's just an 8B model!

Make a Pac-Man game:

It looks great, but I couldn’t actually play it. I feel like there might be a few small bugs that could be fixed with some tweaking. (Just to clarify, this wasn’t done on the local model: my Mac doesn’t have enough space for the largest DeepSeek R1 70B model, so I used the cloud model instead.)

---------------------

Honestly, I’ve seen a lot of overhyped posts about models here lately, so I was a bit skeptical going into this. But after testing DeepSeek R1 myself, I think it’s actually really solid. It’s not some magic replacement for OpenAI or Claude, but it’s surprisingly capable for something that runs locally. The fact that it’s free and works offline is a huge plus.

What do you guys think? Curious to hear your honest thoughts.

🌐
Collabnix
collabnix.com › running-deepseek-r1-with-ollama-a-complete-guide
Running DeepSeek-R1 with Ollama: A Complete Guide - Collabnix
DeepSeek-R1 is a powerful open-source language model that can be run locally using Ollama. This guide will walk you through setting up and using DeepSeek-R1, exploring its capabilities, and optimizing its performance. Model Overview DeepSeek-R1 is designed for robust reasoning and coding ...
🌐
Ollama
ollama.com › library › deepseek-r1:1.5b
deepseek-r1:1.5b
In this update, DeepSeek R1 has significantly improved its reasoning and inference capabilities. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic.
🌐
Ollama
ollama.com › library › deepseek-r1 › tags
Tags · deepseek-r1
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as o3 and Gemini 2.5 Pro.
🌐
Reddit
reddit.com › r/ollama › been messing around with deepseek r1 + ollama, and honestly, it's kinda wild how much you can do locally with free open-source tools. no cloud, no api keys, just your machine and some cool ai magic.
r/ollama on Reddit: Been messing around with DeepSeek R1 + Ollama, and honestly, it's kinda wild how much you can do locally with free open-source tools. No cloud, no API keys, just your machine and some cool AI magic.
December 12, 2024 -
  1. Page-Assist Chrome Extension - https://github.com/n4ze3m/page-assist (any model with any params)

  2. Open Web-UI LLM Wrapper - https://github.com/open-webui/open-webui (any model with any params)

  3. Browser use – https://github.com/browser-use/browser-use (deepseek r1:14b or more params)

  4. Roo-Code (VS Code Extension) – https://github.com/RooVetGit/Roo-Code (deepseek coder)

  5. n8n – https://github.com/n8n-io/n8n (any model with any params)

  6. A simple RAG app: https://github.com/hasan-py/chat-with-pdf-RAG (deepseek r1:8b)

  7. AI assistant Chrome extension: https://github.com/hasan-py/Ai-Assistant-Chrome-Extension (GPT, Gemini, Grok API, Ollama added recently)

Full installation video: https://youtu.be/hjg9kJs8al8?si=rillpsKpjONYMDYW

Anyone exploring something else? Please share- it would be highly appreciated!

🌐
Ollama
ollama.com › library › deepseek-r1 › blobs › 96c415656d37
deepseek-r1/model
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as o3 and Gemini 2.5 Pro.
🌐
DataCamp
datacamp.com › tutorial › deepseek-r1-ollama
How to Set Up and Run DeepSeek-R1 Locally With Ollama | DataCamp
January 30, 2025 - The ollama.chat() function takes the model name and a user prompt, processing it as a conversational exchange. The script then extracts and prints the model's response. Let’s build a simple demo app using Gradio to query and analyze documents with DeepSeek-R1.
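
For reference, a minimal sketch of what a call like the one described above might look like, assuming the ollama Python package (pip install ollama) and a locally pulled deepseek-r1 tag; the tag and prompt here are just placeholders:

import ollama

# Send one user prompt to a locally running DeepSeek-R1 model.
response = ollama.chat(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "Explain TCP slow start in two sentences."}],
)

# The reply text is stored under the message's content field.
print(response["message"]["content"])
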
🌐
Reddit
reddit.com › r/ollama › deepseek r1 local setup - ollama
r/ollama on Reddit: DeepSeek R1 Local Setup - Ollama
April 26, 2024 - John_val: The small models are NOT the real R1 but just distillations.
🌐
Ollama
ollama.com › search
deepseek · Ollama Search
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as o3 and Gemini 2.5 Pro · 74.7M Pulls 35 Tags Updated 5 months ago
🌐
DigiAlps LTD
digialps.com › home › blog › run deepseek r1 locally: a full guide & my honest review of this free openai alternative
Run DeepSeek R1 Locally: A Full Guide & My Honest Review of this Free OpenAI Alternative - DigiAlps LTD
January 26, 2025 - Because DeepSeek R1 is making waves by going toe-to-toe with some of the biggest names in AI, like OpenAI’s o1 and Claude 3.5 Sonnet, especially when it comes to math, coding, and logical thinking. People online are already comparing DeepSeek R1 to OpenAI o1 and Claude 3.5 Sonnet, and from my own testing using Ollama and Chatbox, the hype seems real.
🌐
BytePlus
byteplus.com › en › topic › 405441
DeepSeek r1 ollama API review
Core content of this page: DeepSeek R1 Ollama API review
🌐
Reddit
reddit.com › r/ollama › my first ever test drive of the deepseek-r1
r/ollama on Reddit: My first ever test drive of the deepseek-r1
February 28, 2024 -

I downloaded DeepSeek-R1 last night and tested it on my Nvidia GeForce 4060 (8GB) paired with an Intel i7 and 64GB of RAM. My first impressions are really positive—it handles conversation and reasoning surprisingly well, especially for a locally running model. However, unlike other models such as LLaMA 3.3, I can’t seem to make it follow my instructions precisely. If you look at the screenshot, you’ll see that it doesn’t produce the exact output format I requested. I’m wondering if I’m missing a step or if this is just how the model behaves.

🌐
Ollama
ollama.com › MFDoom › deepseek-r1-tool-calling
MFDoom/deepseek-r1-tool-calling
DeepSeek's first generation of reasoning models, with performance comparable to OpenAI o1, including six dense models distilled from DeepSeek-R1 based on Llama and Qwen. With tool calling support.
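
A rough sketch of how tool calling might be wired up through the ollama Python package, assuming one of this listing's tool-enabled tags is pulled locally; the get_weather schema and the exact model tag are illustrative assumptions, so check the model page for the real tag names:

import ollama

# An OpenAI-style function schema describing one hypothetical tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = ollama.chat(
    model="MFDoom/deepseek-r1-tool-calling:8b",  # tag is an assumption; see the model page
    messages=[{"role": "user", "content": "What's the weather in Tokyo right now?"}],
    tools=tools,
)

# If the model chose to call the tool, the call shows up on the returned message;
# otherwise it just answers in plain text.
print(response["message"])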