Hey everyone,
Since Claude Code has been around for a while now and many of us are already familiar with Claude 3.7 Sonnet, I wanted to share a quick step-by-step guide for those who haven't had time to explore it yet.
This guide sums up everything you need to know about Claude Code, including:
How to install and set it up
The benefits and when to use it
A demo of its capabilities in action
Some Claude Code essential commands
I think Claude Code is a better alternative to coding assistants like Cursor and Bolt, especially for developers who want an AI that really understands the entire codebase instead of just suggesting lines.
https://medium.com/p/how-to-install-and-use-claude-code-the-new-agentic-coding-tool-d03fd7f677bc?source=social.tw
Hello! While researching content-creation automation with LLMs, I stumbled upon this video: https://www.youtube.com/watch?v=Qpgz1-Gjl_I
What caught my interest are the incredible capabilities of Claude.ai. I mean, it is capable of creating HTML documents. I did the same with a local LLaMA 7B instruct, so no biggie. Where things start to go awry with LLaMA is when I ask for the infographic with SVG icons, and even more so for the interactive timeline. There is no way LLaMA writes a JS script on its own; you must ask very persistently, and even then the script simply doesn't work.
Also, it was fun to see LLaMA write the whole document in HTML but add a reference section written in Markdown. I pointed it out to the model and it apologized, then corrected the mistake and converted the Markdown to HTML. I wonder why it made such a mistake.
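For anyone curious what I mean, the fix I asked for amounts to something like this. This is just a minimal sketch of the conversion (it only handles simple `[text](url)` links; a real Markdown document needs a proper parser):

```python
import re

def markdown_refs_to_html(md: str) -> str:
    """Convert Markdown-style links like [title](url) into HTML anchors.

    Minimal illustration only: it rewrites [text](url) patterns and
    ignores everything else Markdown can do (emphasis, lists, escapes).
    """
    return re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r'<a href="\2">\1</a>', md)

# Example: a reference the model left in Markdown inside an HTML page
print(markdown_refs_to_html("[Example source](https://example.com)"))
# → <a href="https://example.com">Example source</a>
```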
However, it looks like Claude.ai is capable of much more complex reasoning.
At this point I wonder if it is because Claude is a model with tens of billions of parameters, while the LLaMA I am using is just a 7B one. Or if there are fundamental differences in architecture and training. Or maybe the 200k-token context window plays a role? I am running LLaMA through Ollama, so I am using moderate settings.
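For reference, this is roughly how the model can be called through Ollama's local REST API. The model tag and the option values (temperature, context window) are my own guesses at "moderate settings", not anything official; note how small `num_ctx` is compared with Claude's 200k-token window:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str) -> dict:
    # "Moderate settings": these option values are illustrative, not a recommendation.
    return {
        "model": "llama2:7b-chat",  # whichever LLaMA 7B tag you actually pulled
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
        "options": {
            "temperature": 0.7,
            "num_ctx": 4096,  # context window, in tokens
        },
    }

def generate(prompt: str) -> str:
    # Requires `ollama serve` running locally with the model pulled.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```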
I have even tried a couple of LLaMA-derived models, with similar results. I also played with CodeQwen, and it shows it isn't made to write articles.
So, could anyone knowledgeable, with a bit of experience using the various LLMs, help me find the needle in this haystack?
P.S. I wonder whether all the various open-source LLMs out there are based on LLaMA, or if there are non-LLaMA ones too!