🌐
Shawnmayzes
shawnmayzes.com › product-engineering › running-claude-code-locally-just-got-easier-with-ollama-code
Running Claude Code Locally Just Got Easier with ollama-code
A lightweight wrapper around Ollama that mimics Claude Code's development experience using open models like codellama, deepseek-coder, or codestral locally.
🌐
GitHub
github.com › aminhjz › claude-code-ollama-proxy
GitHub - aminhjz/claude-code-ollama-proxy: Run Claude Code on Ollama
Run Claude Code on Ollama. Contribute to aminhjz/claude-code-ollama-proxy development by creating an account on GitHub.
Starred by 35 users
Forked by 6 users
Languages   Python
🌐
GitHub
github.com › ollama › ollama
GitHub - ollama/ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
ollama-co2 (FastAPI web interface for monitoring and managing local and remote Ollama servers with real-time model monitoring and concurrent downloads); Hillnote (A Markdown-first workspace designed to supercharge your AI workflow. Create documents ready to integrate with Claude, ChatGPT, Gemini, Cursor, and more - all while keeping your work on your device.)
Starred by 158K users
Forked by 14K users
Languages   Go 52.7% | C 37.2% | TypeScript 5.8% | C++ 2.0% | Objective-C 0.9% | Shell 0.7%
🌐
GitHub
github.com › mattlqx › claude-code-ollama-proxy
GitHub - mattlqx/claude-code-ollama-proxy: Run Claude Code on Ollama
Run Claude Code on Ollama. Contribute to mattlqx/claude-code-ollama-proxy development by creating an account on GitHub.
Author   mattlqx
🌐
GitHub
github.com › xichen1997 › opencode
GitHub - xichen1997/opencode: opencode is an open source solution for Claude code, which provide ollama integration
OpenCode is a fully-featured, open-source alternative to Claude Code that runs entirely on your local machine using Ollama.
Author   xichen1997
🌐
GitHub
github.com › Jadael › OllamaClaude
GitHub - Jadael/OllamaClaude: An MCP server which allows Claude Code to use a local Ollama server for tasks.
This MCP (Model Context Protocol) server integrates your local Ollama instance with Claude Code, allowing Claude to delegate coding tasks to your local models (Gemma3, Mistral, etc.) to minimize API token usage. Claude Code acts as an orchestrator, calling tools provided by this MCP server. The tools run on your local Ollama instance, and Claude reviews/refines the results as needed.
Author   Jadael
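The delegation pattern described above boils down to a tool handler that forwards a prompt to the local Ollama HTTP API and returns the text for Claude to review. A minimal sketch of that handler, not the actual OllamaClaude code: the `/api/generate` endpoint, default port 11434, and `stream: false` field are Ollama's documented API, while the function names and the injectable `post` transport are illustrative choices here.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_delegate_request(prompt: str, model: str = "gemma3") -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def delegate_to_ollama(prompt: str, model: str = "gemma3", post=None) -> str:
    """Send a task to the local model and return its text response.

    `post` is injectable so the function can be exercised without a
    running Ollama server; by default it POSTs to the local instance.
    """
    payload = build_delegate_request(prompt, model)
    if post is None:
        def post(body):  # real transport: POST JSON to the local server
            req = urllib.request.Request(
                OLLAMA_URL,
                data=json.dumps(body).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    return post(payload)["response"]
```

An MCP server would expose a function like this as a tool, letting Claude Code stay the orchestrator while the token-heavy generation happens locally.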
🌐
Reddit
reddit.com › r/localllama › use claudecode with local models
r/LocalLLaMA on Reddit: Use claudecode with local models
July 16, 2025 -

So I have had FOMO on claudecode, but I refuse to give them my prompts or pay $100-$200 a month. So 2 days ago, I saw that Moonshot provides an Anthropic-compatible API for Kimi K2 so folks could use it with Claude Code. Well, many folks are already doing the same with local models. So if you don't know, now you know. This is how I did it on Linux; it should be easy to replicate on macOS or on Windows with WSL.

1. Start your local LLM API.

2. Install Claude Code.

3. Install a proxy: https://github.com/1rgs/claude-code-proxy

4. Edit the proxy's server.py to point at your OpenAI-compatible endpoint - llama.cpp, Ollama, vLLM, whatever you are running. Add this line above load_dotenv:

litellm.api_base = "http://yokujin:8083/v1"  # use your own hostname/IP/port

5. Start the proxy according to the docs; it will run on localhost:8082.

6. Point Claude Code at the proxy:

export ANTHROPIC_BASE_URL=http://localhost:8082

export ANTHROPIC_AUTH_TOKEN="sk-localkey"

7. Run claude code.

I just generated my first bit of code with it, then decided to post this. I'm running the latest mistral-small-24b on that host. I'm going to be driving it with various models: gemma3-27b, qwen3-32b/235b, deepseek-v3, etc.
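The proxy in the steps above is essentially a request translator: Claude Code speaks the Anthropic Messages API, while llama.cpp, Ollama, and vLLM expose OpenAI-compatible chat endpoints. A minimal sketch of that translation, based on the two public API shapes (the real claude-code-proxy uses LiteLLM and also handles streaming, tool use, and response mapping):

```python
def anthropic_to_openai(req: dict, local_model: str) -> dict:
    """Translate an Anthropic Messages API request body into an
    OpenAI chat-completions body for a local backend."""
    messages = []
    # Anthropic carries the system prompt as a top-level "system" field;
    # OpenAI-style APIs expect it as the first message.
    if req.get("system"):
        messages.append({"role": "system", "content": req["system"]})
    messages.extend(
        {"role": m["role"], "content": m["content"]} for m in req["messages"]
    )
    return {
        "model": local_model,  # ignore the claude-* name the client sent
        "messages": messages,
        "max_tokens": req.get("max_tokens", 1024),
        "temperature": req.get("temperature", 1.0),
    }
```

Because the local model name is substituted here, Claude Code never needs to know it is talking to mistral-small-24b instead of a Claude model.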

🌐
Medium
medium.com › @ishu.kumars › from-claude-to-ollama-how-i-hacked-together-an-ai-coding-assistant-in-2-days-with-zero-typescript-712191d6f66e
From Claude to Ollama: How I Hacked Together an AI Coding Assistant in 2 Days (With Zero TypeScript Knowledge) | by Ishu Kumar | Medium
March 8, 2025 - After exploring the code, I realized it was primarily designed for the Claude API. But I had been experimenting with [Ollama](https://ollama.ai/) for running local LLMs, and I wondered — could I adapt this cleanroom implementation to work with Ollama instead?
🌐
GreenFlux Blog
blog.greenflux.us › from-prompt-to-production-vibe-coding-local-ai-apps-with-claude-ollama
From Prompt to Production: Vibe Coding Local AI Apps with Claude + Ollama - GreenFlux Blog
In this guide, I’ll show you how you can build and run your own AI-powered apps that work completely offline, and do it without writing a single line of code. I’ll be using my favorite vibe-coding tool, Claude Code, to build an app that connects to local AI models using Ollama.
🌐
GitHub
github.com › andrewbrereton › claude-sidekick
GitHub - andrewbrereton/claude-sidekick: MCP server that connects Claude to local Ollama models, delegating simple tasks to save tokens for complex reasoning
# Pre-load models to keep them ... token efficiency while still getting comprehensive results. This server runs locally and doesn't send data externally...
Author   andrewbrereton
🌐
Shawnmayzes
shawnmayzes.com › product-engineering › running-claude-code-with-local-llm
Running Claude Code with a Local LLM: A Step-by-Step Guide
Running Claude Code with a local ... and start experimenting with its full capabilities. For more details, check out code-llmss on GitHub....
🌐
Reddit
reddit.com › r/ollama › ollamacode - local ai assistant that can create, run and understand the task at hand!
r/ollama on Reddit: Ollamacode - Local AI assistant that can create, run and understand the task at hand!
June 28, 2025 -

I've been working on a project called OllamaCode, and I'd love to share it with you. It's an AI coding assistant that runs entirely locally with Ollama. The main idea was to create a tool that actually executes the code it writes, rather than just showing you blocks to copy and paste.

Here are a few things I've focused on:

  • It can create and run files automatically from natural language.

  • I've tried to make it smart about executing tools like git, search, and bash commands.

  • It's designed to work with any Ollama model that supports function calling.

  • A big priority for me was to keep it 100% local to ensure privacy.

It's still in the very early days, and there's a lot I still want to improve. It's been really helpful for my own workflow, and I would be incredibly grateful for any feedback from the community to help make it better.

Repo: https://github.com/tooyipjee/ollamacode
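The "actually executes the code it writes" idea above comes down to routing the model's tool calls to local handlers. A sketch of that dispatch loop, not OllamaCode's actual implementation: the `{"function": {"name": ..., "arguments": {...}}}` shape matches what Ollama's chat API documents for tool calls, while the handler names and registry are illustrative.

```python
import subprocess

def run_bash(arguments: dict) -> str:
    """Execute a shell command the model requested and capture output."""
    result = subprocess.run(arguments["command"], shell=True,
                            capture_output=True, text=True, timeout=30)
    return result.stdout + result.stderr

def write_file(arguments: dict) -> str:
    """Create a file with the content the model generated."""
    with open(arguments["path"], "w") as f:
        f.write(arguments["content"])
    return f"wrote {arguments['path']}"

# Registry mapping tool names to local handlers.
TOOLS = {"bash": run_bash, "write_file": write_file}

def dispatch(tool_call: dict) -> str:
    """Route one tool call from the model's response to a local handler."""
    fn = tool_call["function"]
    handler = TOOLS.get(fn["name"])
    if handler is None:
        return f"unknown tool: {fn['name']}"
    return handler(fn["arguments"])
```

The handler's return value is fed back to the model as a tool result, which is why this only works with Ollama models that support function calling, as the post notes.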

🌐
Arsturn
arsturn.com › blog › connecting-ollama-and-claude-code-a-step-by-step-guide
Connect Ollama & Claude Code: A Guide for Local LLMs
Learn how to connect local LLMs from Ollama with the powerful Claude Code AI assistant. Our step-by-step guide helps you set up Ollama on Windows.
🌐
CodeMiner42
blog.codeminer42.com › home › posts › setting up a free claude-like assistant with opencode and ollama
Setting Up A Free Claude-Like Assistant With OpenCode And Ollama - The Miners
November 6, 2025 - In this case, just select "Anthropic" as the provider, and put the API key. After doing that, I could see the Anthropic models inside of OpenCode without changing the openconfig.json.
🌐
GitHub
github.com › Foadsf › ollama-code
GitHub - Foadsf/ollama-code: Terminal-based AI coding assistant powered by Ollama - a FLOSS alternative to Claude Code that keeps your code local
A FLOSS terminal-based AI coding assistant powered by Ollama. olc is a lightweight alternative to Claude Code that runs locally using Ollama models.
Starred by 14 users
Forked by 4 users
Languages   JavaScript 99.9% | Batchfile 0.1%
🌐
Reddit
reddit.com › r/ollama › how far are we from claudes "computer use" running locally?
r/ollama on Reddit: how far are we from claudes "computer use" running locally?
January 3, 2025 -

Claude has a "computer use" demo that can interact with a desktop PC and click stuff. The code looks like it's just sending screenshots to their API and getting cursor positions back.

I can't imagine that's doable with a visual classification model like LLaVA etc., since those don't actually know exact pixel positions within an image. There's something else going on before or after it's fed into a visual model. Maybe each element is isolated using filters and then classified?

Does anyone know how this stuff works, or maybe even an existing open source project that is trying to build this on top of the Ollama visual API?

🌐
Reddit
reddit.com › r/localllama › claude-dev now with local llm support! (ollama, openai compatible servers)
r/LocalLLaMA on Reddit: Claude-Dev Now With Local LLM support! (Ollama, OpenAI Compatible Servers)
May 25, 2024 - Aider is best, then Claude Dev it seems. ... Ohhhh how much I hate those "make a snake game" "tests", grrrr!!!! Don't know the project (yet), looks very interesting! But that first gif on their GitHub page, uff.. Wanna showcase some agentic functionality? git clone the vscode repo (or similar) and tell your agent to implement something that's not a CS101 exercise ... And how can I use Ollama...
🌐
GitHub
github.com › anthropics › claude-code › issues › 7178
✨ Feature Request: Support for Self-Hosted LLMs in Claude Code Harness · Issue #7178 · anthropics/claude-code
September 4, 2025 - 🚀 Feature Request Enable Claude Code to integrate with self-hosted LLMs, allowing users to swap out Claude’s proprietary models for locally run or open-source models (e.g., via OpenAI-compatible APIs, vLLM, LM Studio, Ollama, etc.). 🧠 Ra...
Published   Sep 04, 2025
🌐
GitHub
github.com › coleam00 › bolt.new-any-llm › issues › 423
Ollama provider has default model set to 'claude-3-5-sonnet-latest' ...
November 26, 2024 - Describe the bug When selecting Ollama as the provider and entering a prompt, the request fails, looking at the console output Bolt is trying to use the model 'claude-3-5-sonnet-latest' which... is not a model you can run on Ollama 😅 Loo...
Published   Nov 26, 2024