I'm on macOS and installed Claude Code using the install script:
```sh
# Install stable version (default)
curl -fsSL https://claude.ai/install.sh | bash
```
I then add the directory to the PATH: `export PATH="$HOME/.local/bin:$PATH"` (a quoted `~` doesn't expand, so `$HOME` is used instead).
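For that change to survive new terminal sessions, the export needs to live in the shell startup file (zsh is the default shell on modern macOS); a minimal sketch, assuming the default zsh setup:

```sh
# Persist the PATH entry for new zsh sessions
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc

# Reload it in the current session
source ~/.zshrc
```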
I can confirm the file exists at `~/.local/bin/claude`.
Then I start `claude`, use it for about an hour, and when I exit and rerun the `claude` command, the binary at `~/.local/bin/claude` no longer exists.
Running `ls ~/.local/bin/claude` confirms there's no file there.
I then reinstall with the curl command, and the same thing happens the next time I exit Claude Code.
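Before reinstalling, a few standard checks can narrow down whether the binary is actually being deleted or just shadowed by another install; the symlink layout mentioned below is an assumption (some native installs link `~/.local/bin/claude` to a versioned copy elsewhere):

```sh
# Which claude does the shell actually resolve? It may point at a different install.
command -v claude

# Is ~/.local/bin/claude a regular file or a symlink, and when did it last change?
ls -la ~/.local/bin/claude

# If it is a symlink, print its target to check whether the target was removed
readlink ~/.local/bin/claude
```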
Is this happening to anyone else, or can anyone help? Apologies if this has already been solved, but I didn't find an answer by Googling.
Hello, while researching content-creation automation with LLMs, I stumbled upon this video: https://www.youtube.com/watch?v=Qpgz1-Gjl_I
What caught my interest are the capabilities of Claude.ai. Creating HTML documents is no big deal; I did the same with a local LLaMA 7B instruct model. Where things start to go awry with LLaMA is when I ask for an infographic with SVG icons, and even more so for an interactive timeline. There is no way LLaMA produces a working JS script: you have to ask very persistently, and even then the script simply doesn't work.
It was also fun to see LLaMA write the whole document in HTML but add the reference section in Markdown. I pointed it out to the model; it apologized, corrected the mistake, and converted the Markdown to HTML. I wonder why it made such a mistake.
However, it looks like Claude.ai is capable of much more complex reasoning.
At this point I wonder whether it's because Claude is a model with tens of billions of parameters while the LLaMA I'm using is only a 7B one, whether there are fundamental differences in architecture and training, or whether the 200k-token context window plays a role. I'm running LLaMA through Ollama, so I'm using moderate settings.
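If the context window is a suspect, Ollama lets you test it by raising `num_ctx` with a Modelfile. A minimal sketch (the model tag, variant name, and value are illustrative, not a recommendation):

```sh
# Write a Modelfile deriving a larger-context variant of the base model
printf 'FROM llama2:7b\nPARAMETER num_ctx 8192\n' > Modelfile

# Build the variant and run it
ollama create llama2-8k -f Modelfile
ollama run llama2-8k
```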
I have even tried a couple of LLaMA-derived models, with similar results. I also played with CodeQwen, and it shows that it isn't made for writing articles.
So, could anyone knowledgeable, with a bit of experience using the various LLMs, help me find the needle in this haystack?
P.S. I wonder whether all the various open-source LLMs out there are based on LLaMA, or whether there are non-LLaMA ones too!