If what you want to do can be done with llama then it’s not serious enough for you to be spending “lots” of money on Claude. Claude is cheap. Answer from Jdonavan on reddit.com
🌐
Shawnmayzes
shawnmayzes.com › product-engineering › running-claude-code-locally-just-got-easier-with-ollama-code
Running Claude Code Locally Just Got Easier with ollama-code
This repo makes it dead simple to replicate Claude Code’s experience using open models like codellama, deepseek-coder, or codestral, locally via Ollama.
🌐
GreenFlux Blog
blog.greenflux.us › from-prompt-to-production-vibe-coding-local-ai-apps-with-claude-ollama
From Prompt to Production: Vibe Coding Local AI Apps with Claude + Ollama - GreenFlux Blog
In this guide, I’ll show you how you can build and run your own AI-powered apps that work completely offline, and do it without writing a single line of code. I’ll be using my favorite vibe-coding tool, Claude Code, to build an app that connects to local AI models using Ollama.
🌐
Arsturn
arsturn.com › blog › connecting-ollama-and-claude-code-a-step-by-step-guide
Connect Ollama & Claude Code: A Guide for Local LLMs
Learn how to connect local LLMs from Ollama with the powerful Claude Code AI assistant. Our step-by-step guide helps you set up Ollama on Windows.
🌐
Medium
medium.com › @ishu.kumars › from-claude-to-ollama-how-i-hacked-together-an-ai-coding-assistant-in-2-days-with-zero-typescript-712191d6f66e
From Claude to Ollama: How I Hacked Together an AI Coding Assistant in 2 Days (With Zero TypeScript Knowledge) | by Ishu Kumar | Medium
March 8, 2025 - Last week, I stumbled upon an article about Anthropic’s Claude Code CLI. The description immediately caught my attention: an AI coding assistant that lives in your terminal, understands your codebase, and helps tackle routine tasks through natural language. As someone who practically lives in the terminal, this seemed like the perfect fit. > “I needed an AI assistant that could help with code without sending sensitive data to external servers. What if I could combine Ollama’s local inference with the polished interface of Claude Code?”
🌐
CodeMiner42
blog.codeminer42.com › home › posts › setting up a free claude-like assistant with opencode and ollama
Setting Up A Free Claude-Like Assistant With OpenCode And Ollama - The Miners
November 6, 2025 - If you have a powerful machine, you can run the model locally by changing the model name to ollama/qwen3-coder:480b, or the smaller one ollama/qwen3-coder:30b, available for local use.
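Assuming the model tags the article implies exist in the Ollama library (the `ollama/` prefix is OpenCode's provider prefix, not part of the Ollama tag), fetching the smaller variant locally would look something like:

```shell
# Pull the smaller qwen3-coder variant mentioned above (a large download;
# the 480b variant needs far more RAM/VRAM than a typical workstation has)
ollama pull qwen3-coder:30b

# Confirm the model is available locally
ollama list
```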
🌐
DEV Community
dev.to › ishu_kumar › from-claude-to-ollama-building-a-local-ai-coding-assistant-3c46
From Claude to Ollama: Building a Local AI Coding Assistant - DEV Community
March 8, 2025 - I just published an article about adapting a cleanroom implementation of Claude Code to work with Ollama in just 48 hours—with no prior TypeScript experience. When I tried Claude Code, I loved it but burned through a month's token allocation in just three days of development. I needed a similar experience but with local models.
🌐
Reddit
reddit.com › r/localllama › use claudecode with local models
r/LocalLLaMA on Reddit: Use claudecode with local models
July 16, 2025 -

So I have had FOMO on claudecode, but I refuse to give them my prompts or pay $100-$200 a month. So 2 days ago, I saw that Moonshot provides an Anthropic-compatible API to Kimi K2 so folks could use it with Claude Code. Well, many folks are already doing that with local models. So if you don't know, now you know. This is how I did it on Linux; it should be easy to replicate on macOS or on Windows with WSL.

Start your local LLM API

Install claude code

Install a proxy - https://github.com/1rgs/claude-code-proxy

Edit the proxy's server.py and point it at your OpenAI-compatible endpoint - could be llama.cpp, ollama, vllm, whatever you are running.

Add this line above load_dotenv:

litellm.api_base = "http://yokujin:8083/v1"  # use your own hostname/IP/port

Start the proxy according to its docs; it will run on localhost:8082

export ANTHROPIC_BASE_URL=http://localhost:8082

export ANTHROPIC_AUTH_TOKEN="sk-localkey"

run claude code

I just generated my first code with it, then decided to post this. I'm running the latest mistral-small-24b on that host. I'm going to drive it with various models: gemma3-27b, qwen3-32b/235b, deepseek-v3, etc.
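The steps above can be condensed into a shell sketch. The repo URL, ports, and token come from the post; the proxy's install and launch commands are assumptions about a typical Python project, so check the repo's own docs before running this:

```shell
# 1. Start your local OpenAI-compatible server (Ollama shown here;
#    llama.cpp or vLLM work the same way with their own ports)
ollama serve &                      # Ollama listens on localhost:11434 by default

# 2. Fetch the proxy and point it at your local endpoint
git clone https://github.com/1rgs/claude-code-proxy
cd claude-code-proxy
# In server.py, above the load_dotenv() call, add:
#   litellm.api_base = "http://localhost:11434/v1"   # your host/port here
pip install -r requirements.txt     # assumed; see the repo's README for the real install step
python server.py &                  # per the post, the proxy serves on localhost:8082

# 3. Point Claude Code at the proxy instead of Anthropic's API
export ANTHROPIC_BASE_URL=http://localhost:8082
export ANTHROPIC_AUTH_TOKEN="sk-localkey"   # any placeholder token works locally
claude
```

Claude Code reads ANTHROPIC_BASE_URL from the environment, so no client-side patching is needed; all the redirection happens in the proxy.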

Find elsewhere
🌐
GitHub
github.com › mattlqx › claude-code-ollama-proxy
GitHub - mattlqx/claude-code-ollama-proxy: Run Claude Code on Ollama
... PREFERRED_PROVIDER="ollama" OLLAMA_API_BASE="http://localhost:11434" # Change if using remote Ollama BIG_MODEL="llama3" # For Claude Sonnet SMALL_MODEL="llama3:8b" # For Claude Haiku
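The BIG_MODEL/SMALL_MODEL mapping in that snippet means the Ollama side needs those tags pulled before the proxy can route Sonnet- and Haiku-level requests to them. A minimal prep, assuming a default local Ollama install on port 11434:

```shell
# Pull the models the proxy maps Claude Sonnet / Claude Haiku requests onto
ollama pull llama3
ollama pull llama3:8b

# Sanity-check that Ollama's OpenAI-compatible endpoint is reachable
curl -s http://localhost:11434/v1/models
```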
Author   mattlqx
🌐
YouTube
youtube.com › watch
Use Claude-Code with Ollama with Router Locally - YouTube
This video installs Claude-Code-Router with Ollama, which is used to route Claude Code requests to different models and customize any request.
Published   October 3, 2025
🌐
GitHub
github.com › aminhjz › claude-code-ollama-proxy
GitHub - aminhjz/claude-code-ollama-proxy: Run Claude Code on Ollama
... PREFERRED_PROVIDER="ollama" OLLAMA_API_BASE="http://localhost:11434" # Change if using remote Ollama BIG_MODEL="llama3" # For Claude Sonnet SMALL_MODEL="llama3:8b" # For Claude Haiku
Starred by 35 users
Forked by 6 users
Languages   Python
🌐
Shawnmayzes
shawnmayzes.com › product-engineering › running-claude-code-with-local-llm
Running Claude Code with a Local LLM: A Step-by-Step Guide
Learn how to set up Claude Code with a local large language model (LLM) using the code-llmss project. This guide walks you through installation, configuration, and real-world use cases for developers who want AI-powered coding assistance without relying on cloud-based services.
🌐
YouTube
youtube.com › watch
Claude Dev with Ollama - Autonomous Coding Agent - Install Locally - YouTube
This video shows how to install and use Claude Dev with Ollama. Its an autonomous coding agent right in your IDE, capable of creating/editing files, executin...
Published   September 4, 2024
🌐
Reddit
reddit.com › r/ollama › claude code alternative recommendations?
r/ollama on Reddit: Claude Code Alternative Recommendations?
July 27, 2025 -

Hey folks, I'm a self-hosting noob looking for recommendations for a good self-hosted/foss/local/private/etc alternative to Claude Code's CLI tool. I recently started using it at work and am blown away by how good it is. Would love to have something similar for myself. I have a 12GB VRAM RTX 3060 GPU with Ollama running in a docker container.

I haven't done extensive research to be honest, but I did try searching for a bit in general. I found a similar tool called Aider that I tried installing and using. It was okay, not as polished as Claude Code imo (and, imo, it had a lot of poor default settings; e.g. auto-committing to git and not asking for permission before editing files).

Anyway, I'm going to keep searching - I've come across a few articles with recommendations but I thought I'd ask here since you folks are probably more in line with my personal philosophy/requirements than some random articles (probably written by some AI itself) recommending tools. Otherwise, I'm going to have to go through these lists and try out the ones that look interesting and potentially litter my system with useless tools lol.

Thanks in advance for any pointers!

🌐
GitHub
github.com › xichen1997 › opencode
GitHub - xichen1997/opencode: opencode is an open source solution for Claude Code, which provides Ollama integration
OpenCode is a fully-featured, open-source alternative to Claude Code that runs entirely on your local machine using Ollama.
Author   xichen1997
🌐
Ollama
ollama.com › incept5 › llama3.1-claude
incept5/llama3.1-claude
These are Llama3.1 models with Anthropic's Claude Sonnet 3.5 system prompt.
🌐
Arsturn
arsturn.com › blog › understanding-the-differences-between-ollama-and-claude
Understanding the Differences Between Ollama and Claude
Customization: With Ollama, you get the latitude to customize the interaction and behavior of your chosen models, which is a HUGE benefit for brands looking to develop unique identities through their AI. Local Processing: The ability to run models locally means that data privacy can be better maintained since you’re not transferring sensitive data to a remote server. ... Developed by Anthropic, Claude is another heavyweight in the realm of AI language models, built on a foundation of research that prioritizes ETHICS and alignment with human values.
🌐
GitHub
github.com › ollama › ollama
GitHub - ollama/ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
Cline - Formerly known as Claude Dev is a VS Code extension for multi-file/whole-repo coding · Void (Open source AI code editor and Cursor alternative) Cherry Studio (Desktop client with Ollama support) ConfiChat (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption) ... Local Multimodal AI Chat (Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.)
Starred by 158K users
Forked by 14K users
Languages   Go 52.7% | C 37.2% | TypeScript 5.8% | C++ 2.0% | Objective-C 0.9% | Shell 0.7%
🌐
Orchestre
ccproxy.orchestre.dev › providers › ollama
Ollama (Local Models) - Run AI Models Locally with Claude Code | CCProxy - AI Request Proxy for Claude Code
Run AI models locally on your own hardware with complete privacy. Ollama provides an OpenAI-compatible API that works seamlessly with CCProxy, allowing you to use Claude Code without sending data to external servers.
🌐
Orchestre
ccproxy.orchestre.dev › blog › ollama-claude-code-complete-privacy
Local AI for Complete Privacy: Ollama + Claude Code for Professionals | CCProxy - AI Request Proxy for Claude Code
This guide explores how to use Ollama and Claude Code through CCProxy to get AI assistance that never leaves your device—perfect for professionals who can't afford to risk data exposure. ... Local AI means running artificial intelligence models directly on your own device instead of sending ...