I am currently spending $10 every day on Claude Sonnet. I am going to be getting a new MacBook Pro in the coming weeks anyway, and was wondering if I could get one with 48+ GB of RAM so that I could run local Ollama coding models (in Cline / VS Code) to save money. Would this work?
Use Claude Code with local models
OllamaCode - a local AI assistant that can create, run and understand the task at hand!
How far are we from Claude's "computer use" running locally?
Spending a lot on Claude, is it worth running an Ollama model locally instead?
So I have had FOMO on Claude Code, but I refuse to give them my prompts or pay $100-$200 a month. Two days ago, I saw that Moonshot provides an Anthropic-compatible API for Kimi K2 so folks could use it with Claude Code. Well, many folks are already doing the same thing with local models. So if you don't know, now you know. This is how I did it on Linux; it should be easy to replicate on macOS, or on Windows with WSL.
1. Start your local LLM API server.
2. Install Claude Code.
3. Install the proxy: https://github.com/1rgs/claude-code-proxy
4. Edit the proxy's server.py and point it at your OpenAI-compatible endpoint; that could be llama.cpp, Ollama, vLLM, whatever you are running. Add the following line above load_dotenv, using your own host name/IP/port (see the sketch after these steps):
   litellm.api_base = "http://yokujin:8083/v1"
5. Start the proxy according to the docs; it will run on localhost:8082.
6. Point Claude Code at the proxy:
   export ANTHROPIC_BASE_URL=http://localhost:8082
   export ANTHROPIC_AUTH_TOKEN="sk-localkey"
7. Run Claude Code.
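To make step 4 concrete, here is roughly what the top of the edited server.py ends up looking like. This is just a sketch based on the edit above: it assumes the proxy still imports litellm and calls load_dotenv() near the top of the file, so the exact surrounding lines may differ in the current version of the repo.

    import litellm
    from dotenv import load_dotenv

    # Send all upstream requests to your local OpenAI-compatible server
    # (llama.cpp, Ollama, vLLM, ...). Replace the host name and port with your own.
    litellm.api_base = "http://yokujin:8083/v1"

    # The proxy reads its own settings (keys, model mappings) from .env after this.
    load_dotenv()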
I just generated my first bit of code with it and then decided to post this. I'm running the latest mistral-small-24b on that host. I'm going to be driving it with various models: gemma3-27b, qwen3-32b/235b, deepseek-v3, etc.
I've been working on a project called OllamaCode, and I'd love to share it with you. It's an AI coding assistant that runs entirely locally with Ollama. The main idea was to create a tool that actually executes the code it writes, rather than just showing you blocks to copy and paste.
Here are a few things I've focused on:
- It can create and run files automatically from natural language.
- I've tried to make it smart about executing tools like git, search, and bash commands.
- It's designed to work with any Ollama model that supports function calling (see the sketch after this list for what that looks like).
- A big priority for me was to keep it 100% local to ensure privacy.
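To give a feel for the function-calling part, here is a minimal sketch of how a single tool call round-trips through the Ollama chat API using the official Python client. The run_bash tool, its schema, and the model tag are illustrative stand-ins, not OllamaCode's actual implementation.

    import subprocess
    import ollama

    MODEL = "qwen3:32b"  # any Ollama model that supports tool calling

    def run_bash(command: str) -> str:
        """Hypothetical tool: run a shell command and return its output."""
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr

    tools = [{
        "type": "function",
        "function": {
            "name": "run_bash",
            "description": "Run a bash command and return its combined output",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }]

    messages = [{"role": "user", "content": "List the Python files in this directory."}]
    response = ollama.chat(model=MODEL, messages=messages, tools=tools)

    # If the model asked to call the tool, run it and feed the result back.
    for call in response.message.tool_calls or []:
        if call.function.name == "run_bash":
            output = run_bash(call.function.arguments["command"])
            messages.append(response.message)
            messages.append({"role": "tool", "name": "run_bash", "content": output})

    final = ollama.chat(model=MODEL, messages=messages, tools=tools)
    print(final.message.content)

A real assistant loops until the model stops requesting tools, and anything the model wants to execute should of course go through a sandbox or a confirmation prompt.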
It's still in its very early days, and there's a lot I still want to improve. It's been really helpful for my own workflow, and I would be incredibly grateful for any feedback from the community to help make it better.
Repo: https://github.com/tooyipjee/ollamacode
Claude has a "computer use" demo that can interact with a desktop PC and click stuff.
The code looks like it's just sending screenshots to their API and getting cursor positions back.
I can't imagine that's doable with a visual classification model like LLaVA etc., since those don't actually know exact pixel positions within an image. There's something else going on before or after the image is fed into a visual model. Maybe each element is isolated using filters and then classified?
Does anyone know how this stuff works, or maybe even an existing open-source project that is trying to build this on top of the Ollama vision API?
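For what it's worth, the Ollama side of such an experiment is simple; here is a minimal sketch of sending a screenshot to a local vision model through the official Python client and asking for click coordinates (the llava:13b tag, screenshot.png, and the 'Save' button are placeholders). The hard part is exactly what's described above: a general VLM like LLaVA isn't trained for precise pixel grounding, so whatever comes back usually needs a separate element-detection or grounding step before you can actually click anything.

    import ollama

    # Ask a local vision model where to click on a screenshot.
    # NOTE: models like llava are not trained for exact pixel grounding,
    # so treat the returned coordinates as a rough guess, not ground truth.
    response = ollama.chat(
        model="llava:13b",
        messages=[{
            "role": "user",
            "content": "This is a 1920x1080 screenshot. Return the (x, y) pixel "
                       "coordinates of the 'Save' button as JSON.",
            "images": ["screenshot.png"],
        }],
    )
    print(response.message.content)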