I have been using Claude for the last two days and I'm impressed! Thinking of buying the Pro subscription, but before that I need to confirm whether Claude is the best AI code assistant available right now.
I feel like I read more about how amazing it is and less about what people have made with it. Any interesting working projects that were built with the help of Claude Code? The hype seems real, but I barely see any actual evidence of things people have made.
I've been pair programming with AI coding tools daily for 8 months, writing literally over 100k lines of production code. The biggest time-waster? When Claude Code thinks it knows enough to begin. So I built a requirements-gathering system (completely free and fully open source) that forces Claude to actually understand what you want, using Claude /slash commands.
The Problem Everyone Has:
You: "Add user avatars"
AI: builds entire authentication system from scratch
You: "No, we already have auth, just add avatars to existing users"
AI: rewrites your database schema
You: screams internally and breaks things
What I Built: A /slash command requirements system where Claude Code treats you as the product manager you are. No more essays. No more mind-reading.
How It Actually Works:
You:
/requirements-start {argument, e.g. "add user avatar upload"}
AI analyzes your codebase structure systematically (tech stack, patterns, architecture)
AI asks the top 5 most pressing discovery questions like "Will users interact through a visual interface? (Default: YES)"
AI autonomously searches and reads relevant files based on your answers
AI documents what it found: exact files, patterns, similar features
AI asks the top 5 clarifying questions like "Should avatars appear in search results? (Default: YES - consistent with profile photos)"
You get a requirements doc with specific file paths and implementation patterns
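In case you haven't used custom slash commands: Claude Code picks them up as Markdown prompt files under .claude/commands/, with $ARGUMENTS standing in for whatever you type after the command. A minimal sketch of what a command like this could look like (contents illustrative, not the actual files from the repo):

```markdown
<!-- .claude/commands/requirements-start.md (illustrative sketch) -->
Begin requirements gathering for: $ARGUMENTS

1. Analyze the codebase structure first (tech stack, patterns, architecture).
2. Ask the top 5 discovery questions one at a time, each with a smart default.
3. Search and read the relevant files based on the answers.
4. Write the requirements doc with exact file paths and implementation patterns.
```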
The Special Sauce:
Smart defaults on every question - just say "idk" and it picks the sensible option
AI reads your code before asking - let's be real, claude.md can only do so much
Product managers can answer - you don't need to be deep in the weeds of your code; Claude Code will intelligently use what already exists instead of trying to invent new ways of doing it.
Links directly to implementation - requirements reference exact files, so another AI can pick up where you left off with a simple /req... selection
Controversial take: Coding has become a steering game, not a babysitting one. Create the right systems and let Claude Code do the heavy lifting.
Full repo with commands, examples, and install instructions (no gate, but I'd appreciate a star if it helped you): github.com/rizethereum/claude-code-requirements-builder
Special shout-out: this works best with https://repoprompt.com/ codemaps, search, and batch-read MCP tools, but it can work without them.
Since ChatGPT came out about a year ago, the way I code, but also my productivity and code output, has changed drastically. I write a lot more prompts than lines of code, and the amount of progress I'm able to make by the end of the day is magnitudes higher. I truly believe that anyone not using these tools to code is a lot less efficient and will fall behind.
A little bit of context: I'm a full-stack developer. I code mostly in React, with Flask on the backend.
My AI tools stack:
Claude Opus (Claude chat interface; sometimes through the API when I hit the daily limit)
In my experience, and for the type of coding I do, Claude Opus has always performed better than ChatGPT. The difference is significant (not drastic, but definitely significant if you're coding a lot).
GitHub Copilot
For 98% of my code generation and debugging I'm using Claude, but I still find it worth having Copilot for autocompletions when making small changes inside a file, where writing a Claude prompt just for that would be overkill.
I don't use any of the hyped-up VS Code extensions or special AI code editors that generate code inside the editor's files. The reason is simple: most of the time I prompt an LLM for a code snippet, I won't get the exact output I want on the first try. It often takes more than one prompt to get what I'm looking for. For the follow-up piece of code, having the context of the previous conversation is key. So a complete chat interface with message history is much more useful than being able to generate code inside the file. I've tried many of these AI coding extensions for VS Code, as well as the Cursor code editor, and none of them have been very useful. I always go back to the separate chat interface ChatGPT/Claude have.
Prompt engineering
Vague instructions will produce vague output from the LLM. The simplest and most efficient way to get the piece of code you're looking for is to provide a similar example (for example, a React component that's already in the style/format you want).
There will be prompts that you'll use repeatedly. For example, the one I use the most:
Respond with code only in CODE SNIPPET format, no explanations
Most of the time when generating code on the fly you don't need all those lengthy explanations the LLM provides before/after the code snippets. Without the extra explanation text, the response is generated faster and you save time.
Other ones I use:
Just provide the parts that need to be modified
Provide entire updated component
I've saved the prompts/mini-instructions I use the most in a custom Chrome extension so I can insert them with keyboard shortcuts (/ + a letter). I also added custom keyboard shortcuts to the Claude user interface for creating a new chat, a new chat in a new window, etc.
Some of the changes might sound small, but when you're coding every day they stack up and save you so much time. Would love to hear what everyone else has been implementing to take LLM coding efficiency to another level.
Claude Code just feels different. It's the only setup where the best coding model and the product are tightly integrated. "Taste" is thrown around a lot these days, but the UX here genuinely earns it: minimalist, surfaces just the right information at the right time, never overwhelms you.
Cursor can't match it because its harness bends around wildly different models, so even the same model doesn't perform as well there.
Gemini 3 Pro overthinks everything, and Gemini CLI is just a worse product. I'd bet far fewer Google engineers use it compared to Anthropic employees "antfooding" Claude Code.
Codex (GPT-5.1 Codex Max) is a powerful sledgehammer and amazing value at $20, but too slow for real agentic loops where you need quick tool calls and tight back-and-forth. In my experience, it also gets stuck more often.
Claude Code with Opus 4.5 is the premium developer experience right now. As the makers of CC put it in this interview, you can tell it's built by people who use it every day and are laser focused on winning the "premium" developer market.
I haven't tried Opencode or Factory Droid yet though. Anyone else try them and prefer them to CC?
I have been using Gemini 2.5 Pro Preview 05-06 with the free credits because imma brokie, and I keep running into coding problems that no matter what I do I can't solve and get stuck on. So I ask Gemini to give me a summary of the problem, paste it into Claude Sonnet 4 chat, and BOOM! It solves it in one go! This has already happened 3 times with no fail. It just makes me wish I could afford Claude, but I'll have to make do with what I can afford for now. :)
Hi friends,
I often need Claude to generate extensively long code for my Python projects, sometimes reaching 1,000-1,500 lines. However, Claude frequently shortens the output to around 250 lines, rushes through the conversation, or says "rest of the code stays the same". Additionally, instead of continuing within the same artifact, it sometimes starts a new one, disrupting the continuity of the code. This creates challenges for developers who need seamless, continuous code output of 1,000 lines or more.
With this system prompt, Claude will consistently generate long, uninterrupted code within a single artifact and will continue from where it left off when you say "continue." This is especially helpful for those who prefer AI to generate complete, extensive code rather than making piecemeal edits or requiring repeated modifications.
My assumption about why this works: even though Anthropic has this line in their system prompt:
6. Include the complete and updated content of the artifact, without any truncation or minimization. Don't use "// rest of the code remains the same..."
their "don't" warning is not properly wrapped in XML syntax, and there is a high chance the model misreads this line. What they should do is put it in XML syntax and be crystal clear that they mean don't use the phrase. Otherwise "// rest of the code remains the same..." effectively becomes an independent instruction, especially when their system prompt is so long.
If you find this helpful, please consider giving my small GitHub repo a ⭐, I'd really appreciate it!
https://github.com/jzou19957/SuperClaudeCodePrompt/tree/main
<Universal_System_Prompt_For_Full_Continuous_Code_Output>
<Purpose>Ensure all code requests are delivered in one single artifact, without abbreviation, omission, or placeholders.</Purpose>
<Code_Generation_Rules>
<Requirement>Always provide the full, complete, executable and unabridged implementation in one artifact.</Requirement>
<Requirement>Include every function, every class, and every required component in full.</Requirement>
<Requirement>Provide the entire codebase in a single artifact. Do not split it across multiple responses.</Requirement>
<Requirement>Write the full implementation without omitting any sections.</Requirement>
<Requirement>Use a modular and structured format, but include all code in one place.</Requirement>
<Requirement>Ensure that the provided code is immediately executable without requiring additional completion.</Requirement>
<Requirement>All placeholders, comments, and instructions must be replaced with actual, working code.</Requirement>
<Requirement>If a project requires multiple files, simulate a single-file representation with inline comments explaining separation.</Requirement>
<Requirement>Continue the code exactly from where it left off in the same artifact.</Requirement>
</Code_Generation_Rules>
<Strict_Prohibitions>
<DoNotUse>"...rest of the code remains the same."</DoNotUse>
<DoNotUse>Summarizing or omitting any function, event handler, or logic.</DoNotUse>
<DoNotUse>Generating partial code requiring user expansion.</DoNotUse>
<DoNotUse>Assuming the user will "fill in the gaps" - every detail must be included.</DoNotUse>
<DoNotUse>Splitting the code across responses.</DoNotUse>
</Strict_Prohibitions>
<Execution_Requirement>
<Instruction>The generated code must be complete, standalone, and executable as-is.</Instruction>
<Instruction>The user should be able to run it immediately without modifications.</Instruction>
</Execution_Requirement>
</Universal_System_Prompt_For_Full_Continuous_Code_Output>
Using Cursor & Windsurf with Claude Sonnet, I built a NodeJS & MongoDB project - as a technical person.
1- Start with structure, not code
The most important step is setting up a clear project structure. Don't even think about writing code yet.
2- Chat VS agent tabs
I use the chat tab for brainstorming/research and the agent tab for writing actual code.
3- Customize your AI as you go
Create "Rules for AI" custom instructions to modify your agent's behavior as you progress, or maintain a RulesForAI.md file.
4- Break down complex problems
Don't just say "Extract text from PDF and generate a summary." That's two problems! Extract text first, then generate the summary. Solve one problem at a time.
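To make the decomposition concrete, here is a minimal sketch in Python (assuming the pypdf package; names are illustrative). The point is that step 1 is verifiable on its own before the LLM ever sees the text:

```python
# Step 1: extract text -- solve and verify this alone first.
from pypdf import PdfReader

def extract_text(pdf_path: str) -> str:
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# Step 2: only once extraction works, hand the text to the model.
def build_summary_prompt(text: str) -> str:
    return f"Summarize the following document in 5 bullet points:\n\n{text}"

if __name__ == "__main__":
    text = extract_text("report.pdf")         # check this output by eye
    print(build_summary_prompt(text)[:500])   # then send the prompt to your LLM
```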
5- Brainstorm before coding
Share your thoughts with AI about tackling the problem. Once its solution steps look good, then ask it to write code.
6- File naming and modularity matter
Since tools like Cursor/Windsurf don't include all files in context (to reduce their costs), accurate file naming prevents code duplication. Make sure filenames clearly describe their responsibility.
7- Always write tests
It might feel unnecessary when your project is small, but when it grows, tests will be your hero.
8- Commit often!
If you don't, you will lose 4 months of work like this guy [Reddit post]
9- Keep chats focused
When you want to solve a new problem, start a new chat.
10- Don't just accept working code
It's tempting to just accept code that works and move on. But there will be times when AI can't fix your bugs - that's when your hands need to get dirty (main reason non-tech people still need developers).
11- AI struggles with new tech
When I tried integrating a new payment gateway, it hallucinated. But once I provided docs, it got it right.
12- Getting unstuck
If AI can't find the problem in the code and is stuck in a loop, ask it to insert debugging statements. AI is excellent at debugging, but sometimes needs your help to point it in the right direction.
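The instrumentation to ask for is nothing fancy; a generic sketch (shown in Python here, with made-up names and values) of what "insert debugging statements" typically means:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def apply_discount(price: float, pct: float) -> float:
    # Log inputs and outputs so the AI (or you) can see where values go wrong.
    log.debug("apply_discount(price=%r, pct=%r)", price, pct)
    result = price * (1 - pct / 100)
    log.debug("apply_discount -> %r", result)
    return result
```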
While I don't recommend having AI generate 100% of your codebase, it's good to go through a similar experience on a side project; you will learn practically how to utilize AI efficiently.
* It was a training project, not a useful product.
EDIT 0: when I posted this a week ago on LinkedIn I got ~400 impressions and felt it was meh content. THANK YOU so much for your support, now I have a motive to write more lessons and dig much deeper into each one, please connect with me on LinkedIn
EDIT 1: I created this GitHub repository "AI-Assisted Development Guide" as a reference and guide for newcomers after this post reached 500,000 views in 24 hours. I expanded these lessons a bit more; your contributions are welcome!
Don't forget to give a star ⭐
Our team has been using Claude Code as our primary AI coding assistant for the past 6+ months, along with Cursor/Copilot. Claude Code is genuinely impressive at generating end-to-end features, but we noticed something unexpected: our development velocity hasn't actually improved.
I analyzed where the bottleneck went and wrote up my findings here.
The Core Issue:
Claude Code (and other AI assistants) shifted the bottleneck from writing code to understanding and reviewing it:
What changed:
Claude generates 500 lines of clean, working code in minutes.
But you still need to deeply understand every line (you're responsible for it)
Both you and your peer reviewer are learning the code.
Review time scales exponentially with change size
Understanding code you didn't write takes 2-3x longer than writing it yourself
I'm a software engineer with almost 15 years of experience, and I fell in love with coding exactly because it allows me to make things that do things for me; in other words, I love to automate things.
So Claude Code (and AI agents in general) was a huge leap for my workflow.
But the agents have some limitations: they lose context, and they always try to economize tokens.
This creates a productivity paradox: AI tools that save time writing code but waste time managing the process.
I found myself wasting more time reviewing and prompting again and again than actually coding myself.
After some time, I developed a workflow.
Basically:
Step 0 - Generate clarification questions and initial branch setup
Step 1 - Generate refined PROMPT.md
Step 2 - Decompose task into small sub-tasks
Step 3 - Analyze dependencies and create execution plan (DAG; see the sketch after this list)
Step 4 - Generate detailed TODO.md for each task
Step 5 - Execute task (research โ context โ implementation)
Step 6 - Code review for each task
Step 7 - Global critical bug sweep across ALL changes
Step 8 - Final commit and pull request creation
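The DAG in Step 3 just means each sub-task runs as soon as everything it depends on is done. A minimal sketch of that scheduling idea using Python's stdlib graphlib (task names hypothetical, not Claudiomiro's actual internals):

```python
from graphlib import TopologicalSorter

# Map each sub-task to the sub-tasks it depends on (illustrative names).
deps = {
    "define-schema": set(),
    "implement-endpoint": {"define-schema"},
    "write-tests": {"implement-endpoint"},
}

ts = TopologicalSorter(deps)
ts.prepare()
while ts.is_active():
    ready = ts.get_ready()        # tasks whose dependencies are all satisfied
    for task in ready:
        print("execute:", task)   # e.g. hand this task's TODO.md to the agent
    ts.done(*ready)               # mark them finished, unlocking dependents
```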
So after doing this workflow again and again, I realized: why not automate this workflow too?
So Claudiomiro was born:
https://github.com/samuelfaj/claudiomiro
Claudiomiro solves this by creating a truly autonomous development workflow that handles complex tasks from start to finish. It's not just another code generator - it's complete development automation.
You can put AI to work 100% autonomously for you.
Hope you guys like it!
I've been using the free versions of both Claude and ChatGPT for text-to-code generation (complex tasks) and found that I prefer Claude because its code structure is more organized and effective. I'm now considering purchasing a paid plan for one of them.
For those who have upgraded to Claude Pro or ChatGPT Plus, which one do you think performs better for coding tasks? Any insights or advice would be greatly appreciated!
I paid for Claude Pro because I've been hearing that people have used it to do insane things with coding, basically writing entire projects just with Claude. I'm trying to use it to design a simple game in Python. It's not super complicated; it's something I could write myself, but it would take me quite a while as I'm not fast at coding. Maybe my expectations were too high, but based on what other people were saying, I thought I could get Claude to basically write the whole program for me with the right prompting.
But I don't really understand how people have used Claude to build projects successfully at all. Its capability and understanding of code is quite impressive for an AI; it's certainly much smarter than ChatGPT-4o. But it seems to hit a wall super quickly when I send it my code and try to have it add new features. And whenever it gets stuck, if I explain the problem to it, its answer is always to add a bunch of extra redundant functions that "check" (unsuccessfully) for the issue if it arises, instead of actually trying to fix the bug.
Additionally, its code management seems atrocious; because I started the project using Claude, I'm nearly unable to start editing the code myself. The compartmentalization is terrible and there are tons of weird redundancies, unused functions, unnecessary functions, and code in strange places.
I'm just wondering, when people have made these projects using only Claude, how are you actually getting it to write code that you can put together into a large program? Is there some organizational trick I'm missing?
I've been using Claude Code for the past two weekends and I'm absolutely blown away by what it can do! I've crushed through 230M tokens (about $140 worth of API credit) building some web applications. Personally, having tried Replit, Bolt, Lovable, Cursor, and Windsurf, I enjoy using Claude Code a whole lot more.
Wanted to see how others feel about it? What do you like or don't like?
I am a senior dev of 10 years, and have been using Claude Code since its beta release (started in December, IIRC).
I have seen countless posts on here of people saying that the code they are getting is absolute garbage, having to rewrite everything, 20+ corrections, etc.
I have not had this happen once. And I am curious what the difference is between what I am doing and what they are doing. To give an example, I just recently finished 2 massive projects with Claude Code in days that would have previously taken months to do.
A C# microservice API using Minimal APIs to handle a core document system at my company. CRUD as well as many workflow-oriented APIs with full security and ACL implications; worked like a charm.
Refactoring an existing C# API (controller MVC based) to get rid of the MediatR package and use direct dependency injection, while maintaining interfaces between everything for ease of testing. Again, flawless performance.
These are just 2 examples of the countless other projects I'm working on at the moment, where they are also performing exceptionally.
I genuinely wonder what others are doing that I am not seeing, 'cause I want to be able to help, but I don't know what the problem is.
Thanks in advance for helping me understand!
Edit: Gonna summarize some of the things I'm reading here (on my own! Not with AI):
- Context is king!
- Garbage in, Garbage out
- If you don't know how to communicate, you aren't going to get good results.
- Statistical Bias, people who complain are louder than those who are having a good time.
- Fewer examples online == more often receiving bad code.
I don't see many people talk about it.
I recently got the Max plan (just to test things out). OMFG, this thing feels like a true agent system, and it is totally changing the way I approach coding and doing any digital work.
I gave it a gnarly project: a BI workflow/data analytics task I had been working on. It read through my spec, understood the data schema, ran more things by itself to understand the data further, and output Python code that satisfied my spec. What used to take me a long-ass time (i.e. copy-pasting data into a web UI, asking AI to understand the data and write the SQL I want), it now just does all by itself.
I hooked up the Notion MCP and gave it a DB of projects I want it to work on (I've written some high-level specs), and it automatically went through all of it, punched it out, and updated each project's status.
It's unreal. I feel like this is a true agentic program that can really run on its own and do things well.
How come no one is talking about it!?
I've been using Claude Code extensively since its release, and despite not being a coding expert, the results have been incredible. It's so effective that I've been able to handle bug fixes and development tasks that I previously outsourced to freelancers.
To put this in perspective: I recently posted a job on Upwork to rebuild my app (a straightforward CRUD application). The quotes I received started at $1,000 with a timeline of 1-2 weeks minimum. Instead, I decided to try Claude Code.
I provided it with my old codebase and backend API documentation. Within 2 hours of iterating and refining, I had a fully functional app with an excellent design. There were a few minor bugs, but they were quickly resolved. The final product matched or exceeded what I would have received from a freelancer. And the thing here is, I didn't even see the codebase. Just chatting.
It's not just this case, it's with many other things.
The economics are mind-blowing. For $200/month on the max plan, I have access to this capability. Previously, feature releases and fixes took weeks due to freelancer availability and turnaround times. Now I can implement new features in days, sometimes hours. When I have an idea, I can ship it within days (following proper release practices, of course).
This experience has me wondering about the future of programming and AI. The productivity gains are transformative, and I can't help but think about what the landscape will look like in the coming months as these tools continue to evolve. I imagine others have had similar experiences - if this technology disappeared overnight, the productivity loss would be staggering.
After years of relying on online QR generators, I finally decided to make my own. Asked Claude to help me build a Python script, and honestly, it turned out way better than expected.
What it does:
Generates QR codes (obviously)
Saves them locally (no more sketchy online services)
Dark mode UI (because we're not savages)
Tracks usage with a counter
Shows history of generated QRs
Everything stays on your machine
The cool part? It's just a Flask app with a simple web interface. No need to install heavy software or trust random websites with your data.
Features I got for free:
Keeps track of how many QRs you've made (total and daily)
Shows preview of generated QRs instantly
Saves everything in the same folder
Mobile-friendly interface
Dark theme that doesn't burn your eyes at 3 AM
Tech stack:
Python (Flask)
Basic HTML/CSS
qrcode library
That's it!
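For the curious, the whole thing really can be that small. A minimal sketch of the Flask + qrcode combination (routes and file layout are my guesses, not the poster's actual code; assumes pip install flask qrcode[pil]):

```python
from pathlib import Path
from flask import Flask, request, send_file
import qrcode

app = Flask(__name__)
OUT_DIR = Path("qr_codes")
OUT_DIR.mkdir(exist_ok=True)

@app.route("/qr")
def make_qr():
    data = request.args.get("data", "")
    if not data:
        return "pass ?data=<text>", 400
    path = OUT_DIR / f"qr_{abs(hash(data))}.png"
    qrcode.make(data).save(str(path))   # everything stays on your machine
    return send_file(path, mimetype="image/png")

if __name__ == "__main__":
    app.run(debug=True)  # then visit http://localhost:5000/qr?data=hello
```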
Why it's better than online generators:
Privacy - everything stays on your machine
No ads or "premium" features
Works offline
No file size limits
Can customize it however you want
Seriously, if you're tired of those "free" online QR generators with their premium features and ads, just make your own. It took me 2 minutes with Claude to get something that does exactly what I need.
I have spent at least 1,000 hours with LLMs in the last year, at least 100 of them creating programs with GPT-4: building agents, doing all kinds of little research projects.
Claude is blowing me away, especially for coding tasks. There is no way I can use GPT-4 now. Here's why: Claude has a much bigger context window.
In Claude I can paste in five 200-line Python files (this is enough for millions of useful small projects if you are creative). GPT-4, as soon as you try this, will summarize one into a few lines and ignore the rest. The comparison is night and day.
Then for code generation: it is getting almost impossible, even using tricks, to get GPT-4 to generate a lot of code without spending a lot of time coaxing it. Claude will gladly write code until it can't... every single time I ask it for full code, it will spit out 200 lines.
Now for the understanding, it's just mind-blowing. I can give it a fairly complex adjustment to the code and it will one-shot it almost every time. GPT-4 would take hours of coaxing in the right direction. The amount of complexity Claude is able to one-shot given this much context is otherworldly. Really, I can't overstate it.
The full potential is orders of magnitude above the current public-facing GPT-4. It's probably mostly about context length and the amount of compute a company is willing to give someone for $20/month. I worry that as Anthropic scales it may face similar tradeoffs, but for now early users can gain alpha and really use it.
This is really a dream. I truly feel it encroaching on human-level intelligence. I would personally pay $1,000/month if it gave me even more context and more agentive, proactive behavior. I predict these companies will eventually release advanced plans that rival a human salary but grant you much more compute, because it will be worth it.
I keep seeing posts about how much "value" people are getting out of the Max plan, but these posts rarely mention what they're doing and whether or not the code produced was actually useful for their project.
It feels like people are applying the "lines of code" value mentality, where a manager determines who their best programmer is by lines of code or GitHub activity rather than by results.
So, especially if you're one of the people burning through tokens, what are you building? What has Claude Code actually made for you? Has it solved problems you struggled with or simply run into different problems?
I think the "look at all the tokens I'm using" posts are only exciting to me if something is produced at the end, and that something is complex enough to require that amount of compute.
I had an extremely detailed claude.md and very detailed step-by-step instructions in a README that I gave Claude Code for spinning up an EC2 instance on AWS, installing Mistral, and providing a basic UI for running queries.
Those of you saying you got Claude Code to create X, Y, Z app "in 15 minutes" are either outright lying, or you only asked it to create the HTML interface and zero back-end. Let alone scripting a one-shot cloud deployment.
Edit:
Reading comprehension is hard, I know.
a) This was an experiment
b) I was not asking for help on how to do this, please stop sliding into my DMs trying to sell me dev services
c) I wasn't expecting to do this "in 15 minutes", I was using this to highlight how insane those claims actually are
d) one-shot scripting for cloud infra was literally my job at Google for 2 years, and this exact script that Claude Code failed at completely is actually quite straightforward with Claude in Cursor (or writing manually), funny enough.
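For a sense of scale: even just the "spin up an EC2 instance" step is real scripting before you get anywhere near installing Mistral or a UI. A boto3 sketch with placeholder values (the real script also needs a security group, user-data for the install, and error handling):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="g5.xlarge",         # placeholder GPU type for a local model
    KeyName="my-keypair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# Block until the box is actually running before any install steps.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print("launched", instance_id)
```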