If what you want to do can be done with Llama, then it’s not serious enough for you to be spending “lots” of money on Claude. Claude is cheap.
Answer from Jdonavan on reddit.com
🌐
Shawnmayzes
shawnmayzes.com › product-engineering › running-claude-code-locally-just-got-easier-with-ollama-code
Running Claude Code Locally Just Got Easier with ollama-code
This repo makes it dead simple to replicate Claude Code’s experience using open models like codellama, deepseek-coder, or codestral, locally via Ollama.
🌐
GreenFlux Blog
blog.greenflux.us › from-prompt-to-production-vibe-coding-local-ai-apps-with-claude-ollama
From Prompt to Production: Vibe Coding Local AI Apps with Claude + Ollama - GreenFlux Blog
In this guide, I’ll show you how you can build and run your own AI-powered apps that work completely offline, and do it without writing a single line of code. I’ll be using my favorite vibe-coding tool, Claude Code, to build an app that connects to local AI models using Ollama.
🌐
Arsturn
arsturn.com › blog › connecting-ollama-and-claude-code-a-step-by-step-guide
Connect Ollama & Claude Code: A Guide for Local LLMs
Learn how to connect local LLMs from Ollama with the powerful Claude Code AI assistant. Our step-by-step guide helps you set up Ollama on Windows.
🌐
DEV Community
dev.to › ishu_kumar › from-claude-to-ollama-building-a-local-ai-coding-assistant-3c46
From Claude to Ollama: Building a Local AI Coding Assistant - DEV Community
March 8, 2025 - I just published an article about adapting a cleanroom implementation of Claude Code to work with Ollama in just 48 hours—with no prior TypeScript experience. When I tried Claude Code, I loved it but burned through a month's token allocation in just three days of development. I needed a similar experience but with local models.
🌐
Medium
medium.com › @ishu.kumars › from-claude-to-ollama-how-i-hacked-together-an-ai-coding-assistant-in-2-days-with-zero-typescript-712191d6f66e
From Claude to Ollama: How I Hacked Together an AI Coding Assistant in 2 Days (With Zero TypeScript Knowledge) | by Ishu Kumar | Medium
March 8, 2025 - Last week, I stumbled upon an article about Anthropic’s Claude Code CLI. The description immediately caught my attention: an AI coding assistant that lives in your terminal, understands your codebase, and helps tackle routine tasks through natural language. As someone who practically lives in the terminal, this seemed like the perfect fit. > “I needed an AI assistant that could help with code without sending sensitive data to external servers. What if I could combine Ollama’s local inference with the polished interface of Claude Code?”
🌐
CodeMiner42
blog.codeminer42.com › home › posts › setting up a free claude-like assistant with opencode and ollama
Setting Up A Free Claude-Like Assistant With OpenCode And Ollama - The Miners
November 6, 2025 - If you have a powerful machine, you can run the model locally by changing the model name to ollama/qwen3-coder:480b, or the smaller one ollama/qwen3-coder:30b, available for local use.
🌐
Reddit
reddit.com › r/localllama › use claudecode with local models
r/LocalLLaMA on Reddit: Use claudecode with local models
July 16, 2025 -

So I have had FOMO on Claude Code, but I refuse to give them my prompts or pay $100-$200 a month. Two days ago I saw that Moonshot provides an Anthropic-compatible API for Kimi K2, so folks could use it with Claude Code. Many folks are already doing the same with local models. So if you don't know, now you know. This is how I did it on Linux; it should be easy to replicate on macOS or on Windows with WSL.

1. Start your local LLM API.

2. Install Claude Code.

3. Install a proxy: https://github.com/1rgs/claude-code-proxy

4. Edit the proxy's server.py and point it at your OpenAI-compatible endpoint (llama.cpp, Ollama, vLLM, whatever you are running). Add this line above load_dotenv:

   litellm.api_base = "http://yokujin:8083/v1"  # use your own localhost name/IP/port

5. Start the proxy according to the docs, which will run it on localhost:8082.

6. Point Claude Code at the proxy:

   export ANTHROPIC_BASE_URL=http://localhost:8082
   export ANTHROPIC_AUTH_TOKEN="sk-localkey"

7. Run Claude Code.
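Condensed, the steps above amount to the following; the host names, ports, and token value are the poster's examples, so substitute your own:

```shell
# Assumes the 1rgs/claude-code-proxy repo is set up and your local
# OpenAI-compatible server (llama.cpp, Ollama, vLLM, ...) is running.
# In the proxy's server.py, just above load_dotenv, point litellm at it:
#   litellm.api_base = "http://localhost:8083/v1"

# With the proxy started on localhost:8082, aim Claude Code at it:
export ANTHROPIC_BASE_URL=http://localhost:8082
export ANTHROPIC_AUTH_TOKEN="sk-localkey"  # any placeholder value

# then launch as usual:
# claude
```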

I just created my first bit of code with it and decided to post this. I'm running the latest Mistral-Small-24B on that host. I'm going to drive it with various models: Gemma 3 27B, Qwen3-32B/235B, DeepSeek-V3, etc.

🌐
GitHub
github.com › mattlqx › claude-code-ollama-proxy
GitHub - mattlqx/claude-code-ollama-proxy: Run Claude Code on Ollama
... PREFERRED_PROVIDER="ollama" OLLAMA_API_BASE="http://localhost:11434" # Change if using remote Ollama BIG_MODEL="llama3" # For Claude Sonnet SMALL_MODEL="llama3:8b" # For Claude Haiku
Author   mattlqx
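The flattened snippet above corresponds to a .env for the proxy along these lines; the model names are illustrative, so use whatever you have pulled in Ollama:

```shell
PREFERRED_PROVIDER="ollama"
OLLAMA_API_BASE="http://localhost:11434"  # change if using remote Ollama
BIG_MODEL="llama3"                        # served when Claude Code requests Sonnet
SMALL_MODEL="llama3:8b"                   # served when Claude Code requests Haiku
```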
🌐
Shawnmayzes
shawnmayzes.com › product-engineering › running-claude-code-with-local-llm
Running Claude Code with a Local LLM: A Step-by-Step Guide
Learn how to set up Claude Code with a local large language model (LLM) using the code-llmss project. This guide walks you through installation, configuration, and real-world use cases for developers who want AI-powered coding assistance without relying on cloud-based services.
🌐
GitHub
github.com › aminhjz › claude-code-ollama-proxy
GitHub - aminhjz/claude-code-ollama-proxy: Run Claude Code on Ollama
... PREFERRED_PROVIDER="ollama" OLLAMA_API_BASE="http://localhost:11434" # Change if using remote Ollama BIG_MODEL="llama3" # For Claude Sonnet SMALL_MODEL="llama3:8b" # For Claude Haiku
Starred by 35 users
Forked by 6 users
Languages   Python
🌐
YouTube
youtube.com › watch
Claude Dev with Ollama - Autonomous Coding Agent - Install Locally - YouTube
This video shows how to install and use Claude Dev with Ollama. Its an autonomous coding agent right in your IDE, capable of creating/editing files, executin...
Published   September 4, 2024
🌐
YouTube
youtube.com › watch
Use Claude-Code with Ollama with Router Locally - YouTube
This video installs Claude-Code-Router with Ollama, which is used to route Claude Code requests to different models and customize any request.🚀 This video i...
Published   October 3, 2025
🌐
Reddit
reddit.com › r/ollama › ollamacode - local ai assistant that can create, run and understand the task at hand!
r/ollama on Reddit: Ollamacode - Local AI assistant that can create, run and understand the task at hand!
June 28, 2025 -

I've been working on a project called OllamaCode, and I'd love to share it with you. It's an AI coding assistant that runs entirely locally with Ollama. The main idea was to create a tool that actually executes the code it writes, rather than just showing you blocks to copy and paste.

Here are a few things I've focused on:

  • It can create and run files automatically from natural language.

  • I've tried to make it smart about executing tools like git, search, and bash commands.

  • It's designed to work with any Ollama model that supports function calling.

  • A big priority for me was to keep it 100% local to ensure privacy.

It's still in the very early days, and there's a lot I still want to improve. It's been really helpful for my own workflow, and I would be incredibly grateful for any feedback from the community to help make it better.

Repo: https://github.com/tooyipjee/ollamacode
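The "any Ollama model that supports function calling" point maps onto the tools field of Ollama's /api/chat endpoint. A minimal sketch of that pattern follows; the run_bash tool, its schema, and the dispatch logic are illustrative, not OllamaCode's actual API:

```python
import json
import subprocess
import urllib.request

# One tool the model may call: run a shell command (OpenAI-style schema,
# which Ollama's /api/chat accepts in its "tools" field).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_bash",
        "description": "Run a shell command and return its output",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Execute a tool call taken from the model's response message."""
    fn = tool_call["function"]
    args = fn["arguments"]
    if isinstance(args, str):  # some models return JSON-encoded arguments
        args = json.loads(args)
    if fn["name"] == "run_bash":
        out = subprocess.run(args["command"], shell=True,
                             capture_output=True, text=True)
        return out.stdout
    raise ValueError(f"unknown tool {fn['name']}")

def chat(prompt: str, model: str = "llama3.1") -> dict:
    """One round-trip to a local Ollama server with tools enabled."""
    payload = {"model": model, "stream": False, "tools": TOOLS,
               "messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request("http://localhost:11434/api/chat",
                                 data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

An assistant built this way loops: send the prompt, execute any tool_calls in the reply via dispatch, append the outputs as tool messages, and ask again until the model answers in plain text.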

🌐
GitHub
github.com › xichen1997 › opencode
GitHub - xichen1997/opencode: opencode is an open source solution for Claude Code, which provides Ollama integration
OpenCode is a fully-featured, open-source alternative to Claude Code that runs entirely on your local machine using Ollama.
Author   xichen1997
🌐
Reddit
reddit.com › r/ollama › claude code alternative recommendations?
r/ollama on Reddit: Claude Code Alternative Recommendations?
July 27, 2025 -

Hey folks, I'm a self-hosting noob looking for recommendations for a good self-hosted/FOSS/local/private alternative to Claude Code's CLI tool. I recently started using it at work and am blown away by how good it is. Would love to have something similar for myself. I have a 12 GB VRAM RTX 3060 GPU with Ollama running in a Docker container.

I haven't done extensive research, to be honest, but I did search around for a bit. I found a similar tool called Aider, which I tried installing and using. It was okay, but not as polished as Claude Code imo, and it had some poor default settings (e.g., auto-committing to git and not asking permission before editing files).

Anyway, I'm going to keep searching. I've come across a few articles with recommendations, but I thought I'd ask here, since you folks are probably more in line with my personal philosophy/requirements than some random articles (probably written by some AI itself) recommending tools. Otherwise I'll have to go through those lists, try the ones that look interesting, and potentially litter my system with useless tools lol.

Thanks in advance for any pointers!

🌐
Ollama
ollama.com › incept5 › llama3.1-claude
incept5/llama3.1-claude
These are Llama3.1 models with Anthropic's Claude Sonnet 3.5 system prompt.
🌐
Orchestre
ccproxy.orchestre.dev › blog › ollama-claude-code-complete-privacy
Local AI for Complete Privacy: Ollama + Claude Code for Professionals | CCProxy - AI Request Proxy for Claude Code
This guide explores how to use Ollama and Claude Code through CCProxy to get AI assistance that never leaves your device—perfect for professionals who can't afford to risk data exposure. ... Local AI means running artificial intelligence models directly on your own device instead of sending ...
🌐
GitHub
github.com › ollama › ollama
GitHub - ollama/ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
Cline - Formerly known as Claude Dev is a VS Code extension for multi-file/whole-repo coding · Void (Open source AI code editor and Cursor alternative) Cherry Studio (Desktop client with Ollama support) ConfiChat (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption) ... Local Multimodal AI Chat (Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.)
Starred by 158K users
Forked by 14K users
Languages   Go 52.7% | C 37.2% | TypeScript 5.8% | C++ 2.0% | Objective-C 0.9% | Shell 0.7%
🌐
Reddit
reddit.com › r/ollama › how far are we from claudes "computer use" running locally?
r/ollama on Reddit: how far are we from claudes "computer use" running locally?
January 3, 2025 -

Claude has a "computer use" demo that can interact with a desktop PC and click stuff. The code looks like it's just sending screenshots to their API and getting cursor positions back.

I can't imagine that's doable with a visual classification model like LLaVA etc., since those don't actually know exact pixel positions within an image. There's something else going on before or after it's fed into a visual model. Maybe each element is isolated using filters and then classified?

Does anyone know how this stuff works, or maybe even of an existing open-source project that is trying to build this on top of the Ollama visual API?
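The loop the post describes (screenshot in, click position out) can be sketched roughly as below. The free-text reply format and the model helper are assumptions for illustration; Anthropic's real computer-use API returns structured tool calls rather than text to parse:

```python
import re
from typing import Callable, Optional, Tuple

def parse_click(reply: str) -> Optional[Tuple[int, int]]:
    """Extract an (x, y) pixel position from a reply like 'click(412, 97)'."""
    m = re.search(r"click\((\d+)\s*,\s*(\d+)\)", reply)
    return (int(m.group(1)), int(m.group(2))) if m else None

def step(screenshot_png: bytes, goal: str,
         ask_model: Callable[[bytes, str], str]) -> Optional[Tuple[int, int]]:
    """One agent iteration: send the screenshot and goal to a vision model,
    then parse a click target from its answer (None if no coordinates given)."""
    return parse_click(ask_model(screenshot_png, goal))
```

As the post suspects, the hard part is the model itself: a vision model that only classifies image content cannot fill the ask_model role here, because it has no way to ground its answer in pixel coordinates.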