I review pretty much all of it. Everything in git, everything in PRs, reviewed like code from any other programmer. If it's going down a bad path or is over-engineered out the ass, I skim for maybe 30 secs, close and delete the PR, and start a new session or redirect Claude. If it's workable, I refine the approach, maybe refactor or change things myself, or guide Claude in other ways (tests, guardrails, examples, better context, better claude.md and related docs, etc). If you let it run away like crazy and don't rein it in, you are gonna have a bad time real quick. Assuming it's software you want to use over time... if it's just spike code to prototype a UI or an experiment or something, then ignore the code and go off the UI. When you have the UI you want, then you can get into the code. Claude is great but can be like a smart, amnesiac know-it-all junior who types real fast. And is on speed. Source: been programming for 30 yrs, big fan of Skynet.

Answer from rsanheim on reddit.com

r/ClaudeAI on Reddit: Claude Code changed my life
June 21, 2025

I've been using Claude Code extensively since its release, and despite not being a coding expert, the results have been incredible. It's so effective that I've been able to handle bug fixes and development tasks that I previously outsourced to freelancers.

To put this in perspective: I recently posted a job on Upwork to rebuild my app (a straightforward CRUD application). The quotes I received started at $1,000 with a timeline of 1-2 weeks minimum. Instead, I decided to try Claude Code.

I provided it with my old codebase and backend API documentation. Within 2 hours of iterating and refining, I had a fully functional app with an excellent design. There were a few minor bugs, but they were quickly resolved. The final product matched or exceeded what I would have received from a freelancer. And the thing here is, I didn't even see the codebase. Just chatting.

It's not just this one case; it's been the same with many other things.

The economics are mind-blowing. For $200/month on the max plan, I have access to this capability. Previously, feature releases and fixes took weeks due to freelancer availability and turnaround times. Now I can implement new features in days, sometimes hours. When I have an idea, I can ship it within days (following proper release practices, of course).

This experience has me wondering about the future of programming and AI. The productivity gains are transformative, and I can't help but think about what the landscape will look like in the coming months as these tools continue to evolve. I imagine others have had similar experiences - if this technology disappeared overnight, the productivity loss would be staggering.

Top answer (1 of 5, 259 points):
It's hard to explain to someone if they haven't been involved in the industry for a while. Now I get to run through 15 or 20 ancient and terrible private GitHub repos to figure out whether to make something out of them or dump them. Every new idea that I put into a voice note or scratchpad now has legs. Redid my old rickety website with fantastic new technology, and every new idea I had I simply threw into a voice note practically while it was working on it, for a real-time edit and update as I watched on the second monitor via npm run dev. Old semi-impossible thoughts are now reality.

Taking scraps of data into a project and building a Neo4j graph with all the security and bells and whistles, with a Next.js frontend with correlation and schema and analysis... is a day. Maybe a weekend to really snazz it up. Structured workflows with MCP tools like Playwright, Serena, Snyk, sequential thinking, and Context7 allow full cradle-to-grave construction of a highly performant enterprise product practically within an eight-hour working window. Parallel agent handoff is insane. A2A is a thing. Some of the newer Gemini models are quite good.

A long-established line of business now has fresh awesomeness. Cybersecurity log analysis, SIEM, and API construction happen at the speed of thought. My RMM has an API with almost executive-level permissions and the dashboard is a little wacky, so we rebuilt the entire thing to run locally and it runs better with better reporting. The computer forensics business relied on old Windows programs with bad block storage, bad file analysis, a slow and clunky database, and a painful UI. Now? Rebuilt the entire thing as a CLI that processes images much faster and much more cleanly, and I get the analysis and reporting exactly the way I want simply for asking. It also analyzes file structure and determines encryption and timeline and what it is and how to present it, in ways I could never do with the UI, and even thinks of new things to put into it. Magic. I'm struggling to leave the house :) Blew past Pro in a day and trying to stay on 5x.
Answer 2 of 5 (87 points):
What you may not realize is that Claude is most likely making a mess of your code base. Maybe you'll get away with it, maybe it will burn you at some point in the future.

r/ChatGPTCoding on Reddit: CLAUDE IS SO GOOD AT CODING ITS CRAZY!
June 4, 2025

I have been using Gemini 2.5 Pro Preview 05-06 with the free credits because imma brokie, and I kept hitting coding problems that no matter what I do I can't solve and it gets stuck. So I ask Gemini to give me a summary of the problem, paste it into a Claude Sonnet 4 chat, and BOOM! It solves it in 1 go! This has already happened 3 times without fail. It just makes me wish I could afford Claude, but I'll have to make do with what I can afford for now. :)

r/ClaudeAI on Reddit: How are you guys able to carefully review and test all the code that Claude Code generates?
June 14, 2025

A lot of posts on here say they use Claude Code for hours a day. That's thousands of lines of code if not more. How are you able to review it all line by line and test it?

Which leads me to believe no one is reviewing it. And if that's true, how do you have secure, functioning, bug-free code without reviewing?

r/ClaudeAI on Reddit: 3-month Claude Code Max user review - considering alternatives
September 10, 2025

Hi everyone, I'm a developer who has been using Claude Code Max ($200 plan) for 3 months now. With renewal coming up on the 21st, I wanted to share my honest experience.

Initial Experience (First 1-2 months): I was genuinely impressed. Fast prototyping, reasonable code architecture, and great ability to understand requirements even with vague descriptions. It felt like a real productivity booster.

Recent Changes I've Noticed (Past 2-3 weeks):

  1. Performance degradation: Noticeable drop in code quality compared to earlier experience

  2. Unnecessary code generation: Frequently includes unused code that needs cleanup

  3. Excessive logging: Adds way too many log statements, cluttering the codebase

  4. Test quality issues: Generates superficial tests that don't provide meaningful validation

  5. Over-engineering: Tends to create overly complex solutions for simple requests

  6. Problem-solving capability: Struggles to effectively address persistent performance issues

  7. Reduced comprehension: Missing requirements even when described in detail

Current Situation: I'm now spending more time reviewing and fixing generated code than the actual generation saves me. It feels like constantly code-reviewing a junior developer's work rather than having a reliable coding partner.

Given the $200/month investment, I'm questioning the value proposition and currently exploring alternative tools.

Question for the community: Has anyone else experienced similar issues recently? Or are you still having a consistently good experience with Claude Code?

I'm genuinely curious if this is a temporary issue or if others are seeing similar patterns. If performance improves, I'd definitely consider coming back, but right now I'm not seeing the ROI that justified the subscription cost.

r/ClaudeAI on Reddit: My experience with Claude Code
March 25, 2025

I'm a SWE with 15 years of experience.

For the last few days I have been using Claude Code via an AWS enterprise subscription. I've been testing it on one of our internal web apps that has around 4K active employees using it. With a total API runtime of around 3h, I've spent around $350 implementing 3 (smaller) feature requests over a total time of 12h (4 days).

Normally I run the Proxy AI plugin for JetBrains, or a combination of the plugin with the JetBrains MCP server, which is in my opinion the best of both worlds. With this setup I would have spent around $10-30 without being much slower.

Claude Code is a black box that is uncontrollable most of the time. Even if you try to guide it, it's often easily distracted.

Don't get me wrong, this tool is helpful if you don't care about money. But spending $10 while the AI verifies what you already told it, reading all the files over and over again, is way too expensive.

They have to implement either parallel tool calling or alternatives like tools via Python code.

But $100/h is not enterprise-ready if you still need to babysit it the whole time.

Top answer (1 of 5, 30 points):
I've had a very different experience -- although I recognise the issues you point out, especially when it runs away with itself and starts implementing something totally ridiculous. I've spent perhaps about 900-1000 USD and been able to construct a fully functional web application that we are now selling to customers (orders haven't been placed yet but they're incoming). I coded the original version of this back in the early 2000s and decided, as an experiment, to rearchitect everything from zero with Claude Code. I'd say the result has been simply brilliant. The first rough version was accessible for the team to start testing within about 20 minutes.

I made some mistakes though. I got carried away and ended up telling it to do this-and-that. It never says no, of course, so I very quickly ended up with a super-over-engineered set of approaches. I actually had to roll those back! I have kept control of the fundamental architecture and approach myself. Quite a few times I've had to ask it to modify an existing function or class rather than simply add yet another one -- and that's probably one of the more frustrating aspects.

Ask it to do something and it will. Occasionally it will do it the *best* way. Occasionally it will throw out some code and ... the function works. Right there in the browser. You click. You get the result. Buuuuuuut behind this, I then discover lots of extra empty or half used database tables and lots and lots of extra code that isn't necessary. This itself isn't a problem - because the thing *does* work. We're delighted. We're seeing complicated annoying features coming to life in literal minutes.

It's when you want to modify things that it can get complicated. Because now you've got hundreds of functions to search, each doing ONE thing. So when Claude tries to modify that *single* function... sometimes it's fine... but sometimes it breaks another thing... and another... and before you know it, you've got chaos.

So I'd suggest that the 'dream' isn't quite there -- that is, it being able to 'do everything'. But as I got to understand its capabilities, I began to give it point tasks. I took control of the higher level thinking. Now it's incredibly efficient for me -- and it's costing me pennies or cents rather than dozens of dollars for every key update. I've learned to ask the right questions and issue the right commands. Hats off to the Anthropic team - I'm deeply impressed. But as the OP points out, it needs to be used in the most effective way or it can quickly burn through API credit.
Answer 2 of 5 (16 points):
The way you explained it is like Deep Research: it SEEMS impressive, but then you need to go back and verify all the shit is even half right... and you end up wasting the same amount of time.

r/ClaudeAI on Reddit: Software development became boring with Claude Code
1 day ago

I never thought I'd say this but coding got boring.

Here's my workflow now:

  1. Write a prompt

  2. Wait 15-30 minutes

  3. Review the code

  4. Ask for fixes

  5. Repeat

That's it. That's the job now.

No more debugging for hours. No more Stack Overflow rabbit holes. No more "why the fuck isn't this working" at 2 AM.

Just... prompting. Reviewing. Prompting again.

Don't get me wrong - I'm 10x more productive. Shipping things that would've taken weeks in days. But the thrill is gone.

The struggle was the fun part. Figuring it out. That moment when it finally works after 4 hours of pain.

Now it just... works. First or second try.

I became a code reviewer for an AI. A project manager who occasionally writes prompts.

Anyone else feeling this? Or am I just romanticizing the suffering?

r/ChatGPTCoding on Reddit: Claude Code Review
March 3, 2025

I've been using Cursor for a while, but when Claude Code came out, I had to see if it was worth switching. I tested both on my open-source project, which has a React frontend and a Python backend.

Cursor did a better job with backend refactoring. It broke up my main.py into proper modules and handled imports and type checks without issues.

For frontend UI changes, both tools got the job done, but Cursor auto-linted the code, which was a nice touch.

When it came to full-stack changes, Claude Code actually performed better, requiring fewer iterations to get things right.

However, Cursor is $20 a month for unlimited edits, while Claude Code charges per change. I paid $4.69 for three simple edits, which could add up fast.

For now, I'm sticking with Cursor. Curious to hear what others think.

Full Breakdown of my analysis here: https://substack.com/home/post/p-158085301

r/ClaudeAI on Reddit: is everyone sleeping on Claude Code?
May 13, 2025

I don't see many people talking about it.

I recently got the Max plan (just to test things out). Omfg this thing feels like a true agent system, and it's totally changing the way I approach coding and doing any digital work.

I gave it a gnarly project: a BI workflow/data analytics project that I had been working on. It read through my spec, understood the data schema, ran more things by itself to understand more of the data, and output Python code that satisfied my spec. What used to take me a long-ass time (i.e. copy-pasting data into a web UI, asking AI to understand the data and write the SQL I want), it now just does all by itself.

I hooked up the Notion MCP and gave it a DB of projects I want it to work on (I've written some high-level specs), and it automatically went through all of it, punched it out, and updated the project status.

It's unreal. I feel like this is a true agentic program that can really run on its own and do things well.

How come no one is talking about it!??

r/ClaudeAI on Reddit: Claude Code is a Beast – Tips from 6 Months of Hardcore Use
October 31, 2025

Quick pro-tip from a fellow lazy person: You can throw this book of a post into one of the many text-to-speech AI services like ElevenLabs Reader or Natural Reader and have it read the post for you :)

Edit: Many of you are asking for a repo so I will make an effort to get one up in the next couple days. All of this is a part of a work project at the moment, so I have to take some time to copy everything into a fresh project and scrub any identifying info. I will post the link here when it's up. You can also follow me and I will post it on my profile so you get notified. Thank you all for the kind comments. I'm happy to share this info with others since I don't get much chance to do so in my day-to-day.

Edit (final?): I bit the bullet and spent the afternoon getting a github repo up for you guys. Just made a post with some additional info here or you can go straight to the source:

🎯 Repository: https://github.com/diet103/claude-code-infrastructure-showcase

Disclaimer

I made a post about six months ago sharing my experience after a week of hardcore use with Claude Code. It's now been about six months of hardcore use, and I would like to share some more tips, tricks, and word vomit with you all. I may have gone a little overboard here, so strap in, grab a coffee, sit on the toilet or whatever it is you do when doom-scrolling Reddit.

I want to start the post off with a disclaimer: all the content within this post is merely me sharing what setup is working best for me currently and should not be taken as gospel or the only correct way to do things. It's meant to hopefully inspire you to improve your setup and workflows with AI agentic coding. I'm just a guy, and this is just like, my opinion, man.

Also, I'm on the 20x Max plan, so your mileage may vary. And if you're looking for vibe-coding tips, you should look elsewhere. If you want the best out of CC, then you should be working together with it: planning, reviewing, iterating, exploring different approaches, etc.

Quick Overview

After 6 months of pushing Claude Code to its limits (solo rewriting 300k LOC), here's the system I built:

  • Skills that actually auto-activate when needed

  • Dev docs workflow that prevents Claude from losing the plot

  • PM2 + hooks for zero-errors-left-behind

  • Army of specialized agents for reviews, testing, and planning

Let's get into it.

Background

I'm a software engineer who has been working on production web apps for the last seven years or so. And I have fully embraced the wave of AI with open arms. I'm not too worried about AI taking my job anytime soon, as it is a tool that I use to leverage my capabilities. In doing so, I have been building MANY new features and coming up with all sorts of new proposal presentations put together with Claude and GPT-5 Thinking to integrate new AI systems into our production apps. Projects I would have never dreamt of having the time to even consider before integrating AI into my workflow. And with all that, I'm giving myself a good deal of job security and have become the AI guru at my job since everyone else is about a year or so behind on how they're integrating AI into their day-to-day.

With my newfound confidence, I proposed a pretty large redesign/refactor of one of our web apps used as an internal tool at work. This was a pretty rough college student-made project that was forked off another project developed by me as an intern (created about 7 years ago and forked 4 years ago). This may have been a bit overly ambitious of me since, to sell it to the stakeholders, I agreed to finish a top-down redesign of this fairly decent-sized project (~100k LOC) in a matter of a few months...all by myself. I knew going in that I was going to have to put in extra hours to get this done, even with the help of CC. But deep down, I know it's going to be a hit, automating several manual processes and saving a lot of time for a lot of people at the company.

It's now six months later... yeah, I probably should not have agreed to this timeline. I have tested the limits of both Claude as well as my own sanity trying to get this thing done. I completely scrapped the old frontend, as everything was seriously outdated and I wanted to play with the latest and greatest. I'm talkin' React 16 JS → React 19 TypeScript, React Query v2 → TanStack Query v5, React Router v4 w/ hashrouter → TanStack Router w/ file-based routing, Material UI v4 → MUI v7, all with strict adherence to best practices. The project is now at ~300-400k LOC and my life expectancy ~5 years shorter. It's finally ready to put up for testing, and I am incredibly happy with how things have turned out.

This used to be a project with insurmountable tech debt, ZERO test coverage, HORRIBLE developer experience (testing things was an absolute nightmare), and all sorts of jank going on. I addressed all of those issues with decent test coverage, manageable tech debt, and implemented a command-line tool for generating test data as well as a dev mode to test different features on the frontend. During this time, I have gotten to know CC's abilities and what to expect out of it.

A Note on Quality and Consistency

I've noticed a recurring theme in forums and discussions - people experiencing frustration with usage limits and concerns about output quality declining over time. I want to be clear up front: I'm not here to dismiss those experiences or claim it's simply a matter of "doing it wrong." Everyone's use cases and contexts are different, and valid concerns deserve to be heard.

That said, I want to share what's been working for me. In my experience, CC's output has actually improved significantly over the last couple of months, and I believe that's largely due to the workflow I've been constantly refining. My hope is that if you take even a small bit of inspiration from my system and integrate it into your CC workflow, you'll give it a better chance at producing quality output that you're happy with.

Now, let's be real - there are absolutely times when Claude completely misses the mark and produces suboptimal code. This can happen for various reasons. First, AI models are stochastic, meaning you can get widely varying outputs from the same input. Sometimes the randomness just doesn't go your way, and you get an output that's legitimately poor quality through no fault of your own. Other times, it's about how the prompt is structured. There can be significant differences in outputs given slightly different wording because the model takes things quite literally. If you misword or phrase something ambiguously, it can lead to vastly inferior results.

Sometimes You Just Need to Step In

Look, AI is incredible, but it's not magic. There are certain problems where pattern recognition and human intuition just win. If you've spent 30 minutes watching Claude struggle with something that you could fix in 2 minutes, just fix it yourself. No shame in that. Think of it like teaching someone to ride a bike, sometimes you just need to steady the handlebars for a second before letting go again.

I've seen this especially with logic puzzles or problems that require real-world common sense. AI can brute-force a lot of things, but sometimes a human just "gets it" faster. Don't let stubbornness or some misguided sense of "but the AI should do everything" waste your time. Step in, fix the issue, and keep moving.

I've had my fair share of terrible prompting, which usually happens towards the end of the day where I'm getting lazy and I'm not putting that much effort into my prompts. And the results really show. So next time you are having these kinds of issues where you think the output is way worse these days because you think Anthropic shadow-nerfed Claude, I encourage you to take a step back and reflect on how you are prompting.

Re-prompt often. You can hit double-esc to bring up your previous prompts and select one to branch from. You'd be amazed how often you can get way better results armed with the knowledge of what you don't want when giving the same prompt. All that to say, there can be many reasons why the output quality seems to be worse, and it's good to self-reflect and consider what you can do to give it the best possible chance to get the output you want.

As some wise dude somewhere probably said, "Ask not what Claude can do for you, ask what context you can give to Claude" ~ Wise Dude

Alright, I'm going to step down from my soapbox now and get on to the good stuff.

My System

I've implemented a lot of changes to my workflow as it relates to CC over the last 6 months, and the results have been pretty great, IMO.

Skills Auto-Activation System (Game Changer!)

This one deserves its own section because it completely transformed how I work with Claude Code.

The Problem

So Anthropic releases this Skills feature, and I'm thinking "this looks awesome!" The idea of having these portable, reusable guidelines that Claude can reference sounded perfect for maintaining consistency across my massive codebase. I spent a good chunk of time with Claude writing up comprehensive skills for frontend development, backend development, database operations, workflow management, etc. We're talking thousands of lines of best practices, patterns, and examples.

And then... nothing. Claude just wouldn't use them. I'd literally use the exact keywords from the skill descriptions. Nothing. I'd work on files that should trigger the skills. Nothing. It was incredibly frustrating because I could see the potential, but the skills just sat there like expensive decorations.

The "Aha!" Moment

That's when I had the idea of using hooks. If Claude won't automatically use skills, what if I built a system that MAKES it check for relevant skills before doing anything?

So I dove into Claude Code's hook system and built a multi-layered auto-activation architecture with TypeScript hooks. And it actually works!

How It Works

I created two main hooks:

1. UserPromptSubmit Hook (runs BEFORE Claude sees your message):

  • Analyzes your prompt for keywords and intent patterns

  • Checks which skills might be relevant

  • Injects a formatted reminder into Claude's context

  • Now when I ask "how does the layout system work?" Claude sees a big "🎯 SKILL ACTIVATION CHECK - Use project-catalog-developer skill" (project catalog is a large complex data grid based feature on my front end) before even reading my question

2. Stop Event Hook (runs AFTER Claude finishes responding):

  • Analyzes which files were edited

  • Checks for risky patterns (try-catch blocks, database operations, async functions)

  • Displays a gentle self-check reminder

  • "Did you add error handling? Are Prisma operations using the repository pattern?"

  • Non-blocking, just keeps Claude aware without being annoying
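
To make that concrete, here is a rough sketch of what a UserPromptSubmit hook along these lines could look like. This is my illustration rather than the author's actual code: the file name, the .claude/skills/skill-rules.json path, and the exact input shape are assumptions, and it leans on Claude Code's hook convention of passing event data as JSON on stdin and injecting the hook's stdout into context.

// user-prompt-skill-check.ts - hypothetical UserPromptSubmit hook (sketch)
// Reads the submitted prompt from stdin, matches it against skill-rules.json,
// and prints a skill-activation reminder that gets added to Claude's context.
import { readFileSync } from "fs";
import { join } from "path";

interface SkillRule {
  promptTriggers?: { keywords?: string[]; intentPatterns?: string[] };
}

// Hook input arrives as JSON on stdin; "prompt" holds the user's message.
const input = JSON.parse(readFileSync(0, "utf8")) as { prompt?: string };
const prompt = (input.prompt ?? "").toLowerCase();

const rulesPath = join(process.cwd(), ".claude", "skills", "skill-rules.json");
const rules: Record<string, SkillRule> = JSON.parse(readFileSync(rulesPath, "utf8"));

const matched: string[] = [];
for (const [skill, rule] of Object.entries(rules)) {
  const keywords = rule.promptTriggers?.keywords ?? [];
  const patterns = rule.promptTriggers?.intentPatterns ?? [];
  const hit =
    keywords.some((k) => prompt.includes(k.toLowerCase())) ||
    patterns.some((p) => new RegExp(p, "i").test(prompt));
  if (hit) matched.push(skill);
}

// Anything printed here is prepended to Claude's context for this turn.
if (matched.length > 0) {
  console.log("🎯 SKILL ACTIVATION CHECK");
  for (const skill of matched) {
    console.log(`- Consider using the "${skill}" skill before answering.`);
  }
}

In practice a script like this would presumably be registered under the UserPromptSubmit event in your hook settings and run with Node (or tsx for TypeScript).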

skill-rules.json Configuration

I created a central configuration file that defines every skill with:

  • Keywords: Explicit topic matches ("layout", "workflow", "database")

  • Intent patterns: Regex to catch actions ("(create|add).*?(feature|route)")

  • File path triggers: Activates based on what file you're editing

  • Content triggers: Activates if file contains specific patterns (Prisma imports, controllers, etc.)

Example snippet:

{
  "backend-dev-guidelines": {
    "type": "domain",
    "enforcement": "suggest",
    "priority": "high",
    "promptTriggers": {
      "keywords": ["backend", "controller", "service", "API", "endpoint"],
      "intentPatterns": [
        "(create|add).*?(route|endpoint|controller)",
        "(how to|best practice).*?(backend|API)"
      ]
    },
    "fileTriggers": {
      "pathPatterns": ["backend/src/**/*.ts"],
      "contentPatterns": ["router\\.", "export.*Controller"]
    }
  }
}

The Results

Now when I work on backend code, Claude automatically:

  1. Sees the skill suggestion before reading my prompt

  2. Loads the relevant guidelines

  3. Actually follows the patterns consistently

  4. Self-checks at the end via gentle reminders

The difference is night and day. No more inconsistent code. No more "wait, Claude used the old pattern again." No more manually telling it to check the guidelines every single time.

Following Anthropic's Best Practices (The Hard Way)

After getting the auto-activation working, I dove deeper and found Anthropic's official best practices docs. Turns out I was doing it wrong because they recommend keeping the main SKILL.md file under 500 lines and using progressive disclosure with resource files.

Whoops. My frontend-dev-guidelines skill was 1,500+ lines. And I had a couple other skills over 1,000 lines. These monolithic files were defeating the whole purpose of skills (loading only what you need).

So I restructured everything:

  • frontend-dev-guidelines: 398-line main file + 10 resource files

  • backend-dev-guidelines: 304-line main file + 11 resource files

Now Claude loads the lightweight main file initially, and only pulls in detailed resource files when actually needed. Token efficiency improved 40-60% for most queries.

Skills I've Created

Here's my current skill lineup:

Guidelines & Best Practices:

  • backend-dev-guidelines - Routes → Controllers → Services → Repositories

  • frontend-dev-guidelines - React 19, MUI v7, TanStack Query/Router patterns

  • skill-developer - Meta-skill for creating more skills

Domain-Specific:

  • workflow-developer - Complex workflow engine patterns

  • notification-developer - Email/notification system

  • database-verification - Prevent column name errors (this one is a guardrail that actually blocks edits!)

  • project-catalog-developer - DataGrid layout system

All of these automatically activate based on what I'm working on. It's like having a senior dev who actually remembers all the patterns looking over Claude's shoulder.

Why This Matters

Before skills + hooks:

  • Claude would use old patterns even though I documented new ones

  • Had to manually tell Claude to check BEST_PRACTICES.md every time

  • Inconsistent code across the 300k+ LOC codebase

  • Spent too much time fixing Claude's "creative interpretations"

After skills + hooks:

  • Consistent patterns automatically enforced

  • Claude self-corrects before I even see the code

  • Can trust that guidelines are being followed

  • Way less time spent on reviews and fixes

If you're working on a large codebase with established patterns, I cannot recommend this system enough. The initial setup took a couple of days to get right, but it's paid for itself ten times over.

CLAUDE.md and Documentation Evolution

In a post I wrote 6 months ago, I had a section about rules being your best friend, which I still stand by. But my CLAUDE.md file was quickly getting out of hand and was trying to do too much. I also had this massive BEST_PRACTICES.md file (1,400+ lines) that Claude would sometimes read and sometimes completely ignore.

So I took an afternoon with Claude to consolidate and reorganize everything into a new system. Here's what changed:

What Moved to Skills

Previously, BEST_PRACTICES.md contained:

  • TypeScript standards

  • React patterns (hooks, components, suspense)

  • Backend API patterns (routes, controllers, services)

  • Error handling (Sentry integration)

  • Database patterns (Prisma usage)

  • Testing guidelines

  • Performance optimization

All of that is now in skills with the auto-activation hook ensuring Claude actually uses them. No more hoping Claude remembers to check BEST_PRACTICES.md.

What Stayed in CLAUDE.md

Now CLAUDE.md is laser-focused on project-specific info (only ~200 lines):

  • Quick commands (pnpm pm2:start, pnpm build, etc.)

  • Service-specific configuration

  • Task management workflow (dev docs system)

  • Testing authenticated routes

  • Workflow dry-run mode

  • Browser tools configuration

The New Structure

Root CLAUDE.md (100 lines)
├── Critical universal rules
├── Points to repo-specific claude.md files
└── References skills for detailed guidelines

Each Repo's claude.md (50-100 lines)
├── Quick Start section pointing to:
│   ├── PROJECT_KNOWLEDGE.md - Architecture & integration
│   ├── TROUBLESHOOTING.md - Common issues
│   └── Auto-generated API docs
└── Repo-specific quirks and commands

The magic: Skills handle all the "how to write code" guidelines, and CLAUDE.md handles "how this specific project works." Separation of concerns for the win.

Dev Docs System

This system, out of everything (besides skills), I think has made the most impact on the results I'm getting out of CC. Claude is like an extremely confident junior dev with extreme amnesia, losing track of what they're doing easily. This system is aimed at solving those shortcomings.

The dev docs section from my CLAUDE.md:

### Starting Large Tasks

When exiting plan mode with an accepted plan:

1. **Create Task Directory**:
   mkdir -p ~/git/project/dev/active/[task-name]/

2. **Create Documents**:

- `[task-name]-plan.md` - The accepted plan
- `[task-name]-context.md` - Key files, decisions
- `[task-name]-tasks.md` - Checklist of work

3. **Update Regularly**: Mark tasks complete immediately

### Continuing Tasks

- Check `/dev/active/` for existing tasks
- Read all three files before proceeding
- Update "Last Updated" timestamps

These are documents that always get created for every feature or large task. Before using this system, there were many times when I suddenly realized that Claude had lost the plot and we were no longer implementing what we had planned out 30 minutes earlier, because we had gone off on some tangent for whatever reason.

My Planning Process

My process starts with planning. Planning is king. If you aren't at a minimum using planning mode before asking Claude to implement something, you're gonna have a bad time, mmm'kay. You wouldn't have a builder come to your house and start slapping on an addition without having him draw things up first.

When I start planning a feature, I put it into planning mode, even though I will eventually have Claude write the plan down in a markdown file. I'm not sure putting it into planning mode is necessary, but to me, it feels like planning mode gets better results doing the research on your codebase and gathering all the correct context to be able to put together a plan.

I created a strategic-plan-architect subagent that's basically a planning beast. It:

  • Gathers context efficiently

  • Analyzes project structure

  • Creates comprehensive structured plans with executive summary, phases, tasks, risks, success metrics, timelines

  • Generates three files automatically: plan, context, and tasks checklist

But I find it really annoying that you can't see the agent's output, and even more annoying is that if you say no to the plan, it just kills the agent instead of continuing to plan. So I also created a custom slash command (/dev-docs) with the same prompt to use on the main CC instance.

Once Claude spits out that beautiful plan, I take time to review it thoroughly. This step is really important. Take time to understand it, and you'd be surprised at how often you catch silly mistakes or Claude misunderstanding a very vital part of the request or task.

More often than not, I'll be at 15% context left or less after exiting plan mode. But that's okay because we're going to put everything we need to start fresh into our dev docs. Claude usually likes to just jump in guns blazing, so I immediately slap the ESC key to interrupt and run my /dev-docs slash command. The command takes the approved plan and creates all three files, sometimes doing a bit more research to fill in gaps if there's enough context left.

And once I'm done with that, I'm pretty much set to have Claude fully implement the feature without getting lost or losing track of what it was doing, even through an auto-compaction. I just make sure to remind Claude every once in a while to update the tasks as well as the context file with any relevant context. And once I'm running low on context in the current session, I just run my slash command /update-dev-docs. Claude will note any relevant context (with next steps) as well as mark any completed tasks or add new tasks before I compact the conversation. And all I need to say is "continue" in the new session.

During implementation, depending on the size of the feature or task, I will specifically tell Claude to only implement one or two sections at a time. That way, I'm getting the chance to go in and review the code in between each set of tasks. And periodically, I have a subagent also reviewing the changes so I can catch big mistakes early on. If you aren't having Claude review its own code, then I highly recommend it because it saved me a lot of headaches catching critical errors, missing implementations, inconsistent code, and security flaws.

PM2 Process Management (Backend Debugging Game Changer)

This one's a relatively recent addition, but it's made debugging backend issues so much easier.

The Problem

My project has seven backend microservices running simultaneously. The issue was that Claude didn't have access to view the logs while services were running. I couldn't just ask "what's going wrong with the email service?" - Claude couldn't see the logs without me manually copying and pasting them into chat.

The Intermediate Solution

For a while, I had each service write its output to a timestamped log file using a devLog script. This worked... okay. Claude could read the log files, but it was clunky. Logs weren't real-time, services wouldn't auto-restart on crashes, and managing everything was a pain.

The Real Solution: PM2

Then I discovered PM2, and it was a game changer. I configured all my backend services to run via PM2 with a single command: pnpm pm2:start

What this gives me:

  • Each service runs as a managed process with its own log file

  • Claude can easily read individual service logs in real-time

  • Automatic restarts on crashes

  • Real-time monitoring with pm2 logs

  • Memory/CPU monitoring with pm2 monit

  • Easy service management (pm2 restart email, pm2 stop all, etc.)

PM2 Configuration:

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'form-service',
      script: 'npm',
      args: 'start',
      cwd: './form',
      error_file: './form/logs/error.log',
      out_file: './form/logs/out.log',
    },
    // ... 6 more services
  ]
};

Before PM2:

Me: "The email service is throwing errors"
Me: [Manually finds and copies logs]
Me: [Pastes into chat]
Claude: "Let me analyze this..."

The debugging workflow now:

Me: "The email service is throwing errors"
Claude: [Runs] pm2 logs email --lines 200
Claude: [Reads the logs] "I see the issue - database connection timeout..."
Claude: [Runs] pm2 restart email
Claude: "Restarted the service, monitoring for errors..."

Night and day difference. Claude can autonomously debug issues now without me being a human log-fetching service.

One caveat: Hot reload doesn't work with PM2, so I still run the frontend separately with pnpm dev. But for backend services that don't need hot reload as often, PM2 is incredible.

Hooks System (#NoMessLeftBehind)

The project I'm working on is multi-root and has about eight different repos in the root project directory. One for the frontend and seven microservices and utilities for the backend. I'm constantly bouncing around making changes in a couple of repos at a time depending on the feature.

And one thing that would annoy me to no end is when Claude forgets to run the build command in whatever repo it's editing to catch errors. And it will just leave a dozen or so TypeScript errors without me catching it. Then a couple of hours later I see Claude running a build script like a good boy and I see the output: "There are several TypeScript errors, but they are unrelated, so we're all good here!"

No, we are not good, Claude.

Hook #1: File Edit Tracker

First, I created a post-tool-use hook that runs after every Edit/Write/MultiEdit operation. It logs:

  • Which files were edited

  • What repo they belong to

  • Timestamps

Initially, I made it run builds immediately after each edit, but that was stupidly inefficient. Claude makes edits that break things all the time before quickly fixing them.
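
Purely as an illustration of the idea (not the author's implementation), a PostToolUse edit-tracker hook could look roughly like this; the log location and input field names are assumptions on my part:

// edit-tracker.ts - hypothetical PostToolUse hook (sketch)
// Appends the path of every file touched by Edit/Write/MultiEdit to a log,
// so a later Stop hook can figure out which repos need a build check.
import { appendFileSync, mkdirSync, readFileSync } from "fs";
import { join } from "path";

interface HookInput {
  tool_name?: string;
  tool_input?: { file_path?: string };
}

const input = JSON.parse(readFileSync(0, "utf8")) as HookInput;
const editTools = ["Edit", "Write", "MultiEdit"];

if (editTools.includes(input.tool_name ?? "") && input.tool_input?.file_path) {
  const logDir = join(process.cwd(), ".claude", "state");
  mkdirSync(logDir, { recursive: true });
  const entry = JSON.stringify({
    file: input.tool_input.file_path,
    time: new Date().toISOString(),
  });
  appendFileSync(join(logDir, "edited-files.log"), entry + "\n");
}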

Hook #2: Build Checker

Then I added a Stop hook that runs when Claude finishes responding. It:

  1. Reads the edit logs to find which repos were modified

  2. Runs build scripts on each affected repo

  3. Checks for TypeScript errors

  4. If < 5 errors: Shows them to Claude

  5. If ≥ 5 errors: Recommends launching auto-error-resolver agent

  6. Logs everything for debugging

Since implementing this system, I've not had a single instance where Claude has left errors in the code for me to find later. The hook catches them immediately, and Claude fixes them before moving on.
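
And a matching Stop hook, again just a sketch under my own assumptions (per-repo folders under the project root, pnpm build as each repo's build command, the log file from the tracker sketch above), might read that log and run the builds like this:

// build-checker.ts - hypothetical Stop hook (sketch)
// Reads the edit log, maps edited files to repos, runs each repo's build,
// and prints any errors so Claude sees them before moving on.
import { execSync } from "child_process";
import { existsSync, readFileSync, rmSync } from "fs";
import { join } from "path";

const logFile = join(process.cwd(), ".claude", "state", "edited-files.log");
if (!existsSync(logFile)) process.exit(0);

// Each line is a JSON entry written by the edit-tracker hook.
const entries = readFileSync(logFile, "utf8").trim().split("\n").map((l) => JSON.parse(l));

// Assume the first path segment under the project root identifies the repo.
const repos = new Set<string>(
  entries.map((e: { file: string }) => e.file.replace(process.cwd() + "/", "").split("/")[0])
);

let failed = false;
for (const repo of repos) {
  try {
    execSync("pnpm build", { cwd: join(process.cwd(), repo), stdio: "pipe" });
  } catch (err: any) {
    failed = true;
    const output = `${err.stdout ?? ""}${err.stderr ?? ""}`;
    console.error(`Build errors in ${repo}:\n${output}`);
  }
}

rmSync(logFile); // start fresh for the next response
// Exit non-zero so the failure output gets surfaced back to Claude.
process.exit(failed ? 2 : 0);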

Hook #3: Prettier Formatter

This one's simple but effective. After Claude finishes responding, automatically format all edited files with Prettier using the appropriate .prettierrc config for that repo.

No more going in to manually edit a file just to have Prettier run and produce 20 changes because Claude decided to leave off trailing commas last week when we created that file.

⚠️ Update: I No Longer Recommend This Hook

After publishing, a reader shared detailed data showing that file modifications trigger <system-reminder> notifications that can consume significant context tokens. In their case, Prettier formatting led to 160k tokens consumed in just 3 rounds due to system-reminders showing file diffs.

While the impact varies by project (large files and strict formatting rules are worst-case scenarios), I'm removing this hook from my setup. It's not a big deal to let formatting happen when you manually edit files anyway, and the potential token cost isn't worth the convenience.

If you want automatic formatting, consider running Prettier manually between sessions instead of during Claude conversations.

Hook #4: Error Handling Reminder

This is the gentle philosophy hook I mentioned earlier:

  • Analyzes edited files after Claude finishes

  • Detects risky patterns (try-catch, async operations, database calls, controllers)

  • Shows a gentle reminder if risky code was written

  • Claude self-assesses whether error handling is needed

  • No blocking, no friction, just awareness

Example output:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 ERROR HANDLING SELF-CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️  Backend Changes Detected
   2 file(s) edited

   ❓ Did you add Sentry.captureException() in catch blocks?
   ❓ Are Prisma operations wrapped in error handling?

   💡 Backend Best Practice:
      - All errors should be captured to Sentry
      - Controllers should extend BaseController
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The Complete Hook Pipeline

Here's what happens on every Claude response now:

Claude finishes responding
  ↓
Hook 1: Prettier formatter runs β†’ All edited files auto-formatted
  ↓
Hook 2: Build checker runs β†’ TypeScript errors caught immediately
  ↓
Hook 3: Error reminder runs β†’ Gentle self-check for error handling
  ↓
If errors found β†’ Claude sees them and fixes
  ↓
If too many errors β†’ Auto-error-resolver agent recommended
  ↓
Result: Clean, formatted, error-free code

And the UserPromptSubmit hook ensures Claude loads relevant skills BEFORE even starting work.

No mess left behind. It's beautiful.

Scripts Attached to Skills

One really cool pattern I picked up from Anthropic's official skill examples on GitHub: attach utility scripts to skills.

For example, my backend-dev-guidelines skill has a section about testing authenticated routes. Instead of just explaining how authentication works, the skill references an actual script:

### Testing Authenticated Routes

Use the provided test-auth-route.js script:


node scripts/test-auth-route.js http://localhost:3002/api/endpoint

The script handles all the complex authentication steps for you:

  1. Gets a refresh token from Keycloak

  2. Signs the token with JWT secret

  3. Creates cookie header

  4. Makes authenticated request

When Claude needs to test a route, it knows exactly what script to use and how to use it. No more "let me create a test script" and reinventing the wheel every time.
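
The script itself isn't included in the post, but a stripped-down sketch of those four steps might look something like the following. The Keycloak realm, environment variable names, cookie name, and token payload are placeholders I've made up for illustration:

// scripts/test-auth-route.ts - hypothetical version of the auth test script (sketch)
// Usage: node scripts/test-auth-route.js http://localhost:3002/api/endpoint
import jwt from "jsonwebtoken";

const routeUrl = process.argv[2];
if (!routeUrl) {
  console.error("Usage: test-auth-route <url>");
  process.exit(1);
}

async function main() {
  // 1. Get a refresh token from Keycloak (realm, client, and credentials are placeholders).
  const tokenRes = await fetch(
    `${process.env.KEYCLOAK_URL}/realms/my-realm/protocol/openid-connect/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "password",
        client_id: process.env.KEYCLOAK_CLIENT_ID ?? "",
        username: process.env.TEST_USER ?? "",
        password: process.env.TEST_PASSWORD ?? "",
      }),
    }
  );
  const { refresh_token } = (await tokenRes.json()) as { refresh_token: string };

  // 2. Sign a session token with the app's JWT secret (payload shape is made up).
  const sessionToken = jwt.sign({ refreshToken: refresh_token }, process.env.JWT_SECRET!);

  // 3. Build the cookie header. 4. Make the authenticated request.
  const res = await fetch(routeUrl, { headers: { Cookie: `session=${sessionToken}` } });
  console.log(res.status, await res.text());
}

main().catch((err) => { console.error(err); process.exit(1); });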

I'm planning to expand this pattern - attach more utility scripts to relevant skills so Claude has ready-to-use tools instead of generating them from scratch.

Tools and Other Things

SuperWhisper on Mac

Voice-to-text for prompting when my hands are tired from typing. It works surprisingly well, and Claude has no trouble following my rambling voice-to-text.

Memory MCP

I use this less over time now that skills handle most of the "remembering patterns" work. But it's still useful for tracking project-specific decisions and architectural choices that don't belong in skills.

BetterTouchTool

  • Relative URL copy from Cursor (for sharing code references)

    • I have VSCode open to more easily find the files I'm looking for, and I can double-tap CAPS-LOCK; then BTT inputs the shortcut to copy the relative URL, transforms the clipboard contents by prepending an '@' symbol, focuses the terminal, and then pastes the file path. All in one.

  • Double-tap hotkeys to quickly focus apps (CMD+CMD = Claude Code, OPT+OPT = Browser)

  • Custom gestures for common actions

Honestly, the time savings on just not fumbling between apps is worth the BTT purchase alone.

Scripts for Everything

If there's any annoying tedious task, chances are there's a script for that:

  • Command-line tool to generate mock test data. Before using Claude Code, it was extremely annoying to generate mock data because I would have to make a submission to a form with about 120 questions just to generate one single test submission.

  • Authentication testing scripts (get tokens, test routes)

  • Database resetting and seeding

  • Schema diff checker before migrations

  • Automated backup and restore for dev database

Pro tip: When Claude helps you write a useful script, immediately document it in CLAUDE.md or attach it to a relevant skill. Future you will thank past you.

Documentation (Still Important, But Evolved)

I think next to planning, documentation is almost just as important. I document everything as I go in addition to the dev docs that are created for each task or feature. From system architecture to data flow diagrams to actual developer docs and APIs, just to name a few.

But here's what changed: Documentation now works WITH skills, not instead of them.

Skills contain: Reusable patterns, best practices, how-to guides
Documentation contains: System architecture, data flows, API references, integration points

For example:

  • "How to create a controller" β†’ backend-dev-guidelines skill

  • "How our workflow engine works" β†’ Architecture documentation

  • "How to write React components" β†’ frontend-dev-guidelines skill

  • "How notifications flow through the system" β†’ Data flow diagram + notification skill

I still have a LOT of docs (850+ markdown files), but now they're laser-focused on project-specific architecture rather than repeating general best practices that are better served by skills.

You don't necessarily have to go that crazy, but I highly recommend setting up multiple levels of documentation: one for a broad architectural overview of specific services, wherein you include paths to other documentation that goes into the specifics of different parts of the architecture. It will make a major difference in Claude's ability to easily navigate your codebase.

Prompt Tips

When you're writing out your prompt, you should try to be as specific as possible about what you want as a result. Once again, you wouldn't ask a builder to come out and build you a new bathroom without at least discussing plans, right?

"You're absolutely right! Shag carpet probably is not the best idea to have in a bathroom."

Sometimes you might not know the specifics, and that's okay. If you don't, ask questions, or tell Claude to research and come back with several potential solutions. You could even use a specialized subagent or any other AI chat interface to do your research. The world is your oyster. I promise you this will pay dividends, because you will be able to look at the plan that Claude has produced and have a better idea of whether it's good, bad, or needs adjustments. Otherwise, you're just flying blind, pure vibe-coding. Then you're gonna end up in a situation where you don't even know what context to include, because you don't know what files are related to the thing you're trying to fix.

Try not to lead in your prompts if you want honest, unbiased feedback. If you're unsure about something Claude did, ask about it in a neutral way instead of saying, "Is this good or bad?" Claude tends to tell you what it thinks you want to hear, so leading questions can skew the response. It's better to just describe the situation and ask for thoughts or alternatives. That way, you'll get a more balanced answer.

Agents, Hooks, and Slash Commands (The Holy Trinity)

Agents

I've built a small army of specialized agents:

Quality Control:

  • code-architecture-reviewer - Reviews code for best practices adherence

  • build-error-resolver - Systematically fixes TypeScript errors

  • refactor-planner - Creates comprehensive refactoring plans

Testing & Debugging:

  • auth-route-tester - Tests backend routes with authentication

  • auth-route-debugger - Debugs 401/403 errors and route issues

  • frontend-error-fixer - Diagnoses and fixes frontend errors

Planning & Strategy:

  • strategic-plan-architect - Creates detailed implementation plans

  • plan-reviewer - Reviews plans before implementation

  • documentation-architect - Creates/updates documentation

Specialized:

  • frontend-ux-designer - Fixes styling and UX issues

  • web-research-specialist - Researches issues along with many other things on the web

  • reactour-walkthrough-designer - Creates UI tours

The key with agents is to give them very specific roles and clear instructions on what to return. I learned this the hard way after creating agents that would go off and do who-knows-what and come back with "I fixed it!" without telling me what they fixed.

Hooks (Covered Above)

The hook system is honestly what ties everything together. Without hooks:

  • Skills sit unused

  • Errors slip through

  • Code is inconsistently formatted

  • No automatic quality checks

With hooks:

  • Skills auto-activate

  • Zero errors left behind

  • Automatic formatting

  • Quality awareness built-in

Slash Commands

I have quite a few custom slash commands, but these are the ones I use most:

Planning & Docs:

  • /dev-docs - Create comprehensive strategic plan

  • /dev-docs-update - Update dev docs before compaction

  • /create-dev-docs - Convert approved plan to dev doc files

Quality & Review:

  • /code-review - Architectural code review

  • /build-and-fix - Run builds and fix all errors

Testing:

  • /route-research-for-testing - Find affected routes and launch tests

  • /test-route - Test specific authenticated routes

The beauty of slash commands is they expand into full prompts, so you can pack a ton of context and instructions into a simple command. Way better than typing out the same instructions every time.

Conclusion

After six months of hardcore use, here's what I've learned:

The Essentials:

  1. Plan everything - Use planning mode or strategic-plan-architect

  2. Skills + Hooks - Auto-activation is the only way skills actually work reliably

  3. Dev docs system - Prevents Claude from losing the plot

  4. Code reviews - Have Claude review its own work

  5. PM2 for backend - Makes debugging actually bearable

The Nice-to-Haves:

  • Specialized agents for common tasks

  • Slash commands for repeated workflows

  • Comprehensive documentation

  • Utility scripts attached to skills

  • Memory MCP for decisions

And that's about all I can think of for now. Like I said, I'm just some guy, and I would love to hear tips and tricks from everybody else, as well as any criticisms. Because I'm always up for improving upon my workflow. I honestly just wanted to share what's working for me with other people since I don't really have anybody else to share this with IRL (my team is very small, and they are all very slow getting on the AI train).

If you made it this far, thanks for taking the time to read. If you have questions about any of this stuff or want more details on implementation, happy to share. The hooks and skills system especially took some trial and error to get right, but now that it's working, I can't imagine going back.

TL;DR: Built an auto-activation system for Claude Code skills using TypeScript hooks, created a dev docs workflow to prevent context loss, and implemented PM2 + automated error checking. Result: Solo rewrote 300k LOC in 6 months with consistent quality.

r/ClaudeAI on Reddit: Are people actually getting bad code from claude?
July 22, 2025

I am a senior dev of 10 years, and have been using Claude Code since its beta release (started in December IIRC).

I have seen countless posts on here of people saying that the code they are getting is absolute garbage, having to rewrite everything, 20+ corrections, etc.

I have not had this happen once. And I am curious what the difference is between what I am doing and what they are doing. To give an example, I just recently finished 2 massive projects with Claude Code in days that would have previously taken months to do.

  1. A C# microservice API using minimal APIs to handle a core document system at my company. CRUD as well as many workflow-oriented APIs with full security and ACL implications; worked like a charm.

  2. Refactoring an existing C# API (controller MVC based) to get rid of the MediatR package from within it and use direct dependency injection while maintaining interfaces between everything for ease of testing. Again, flawless performance.

These are just 2 examples of the countless other projects I'm working on at the moment where it is also performing exceptionally.

I genuinely wonder what others are doing that I am not seeing, because I want to be able to help, but I don't know what the problem is.

Thanks in advance for helping me understand!

Edit: Gonna summarize some of the things I'm reading here (on my own! Not with AI):

- Context is king!

- Garbage in, Garbage out

- If you don't know how to communicate, you aren't going to get good results.

- Statistical Bias, people who complain are louder than those who are having a good time.

- Fewer examples online == more often receiving bad code.

r/ClaudeAI on Reddit: Code quality of Claude, a sad realization
1 week ago

So about two weeks ago I read a prompt tip here somewhere. It's to be run on completion of a task/feature or such:

You wrote the code that currently is in git changes. Do a git diff and now pretend you're a senior dev doing a code review and you HATE this implementation. What would you criticize? What are the edge cases I'm not seeing?

I freaking hate this prompt. But I also sorta love it. The problem is basically that since I started using it, it has become glaringly obvious that any first iteration of code written (using Claude Opus 4.5 only) is riddled with absolutely critical flaws and huge bugs.

The prompt is obviously worded in such a way that it will always find something. You could likely run it 100 times, and it will keep finding stuff that isn't an actual problem. But I'm a software developer and have a decent understanding of what's a non-issue and what's actually somewhat major/critical. Most of the time, running it twice is enough, as long as you check that the fix(es) are not over-engineered and don't themselves cause major issues.

But it's frustrating as heck. Take me back to the good old days when I was happily merging everything on the first try. Or well, actually, don't.

Not much of a point with this post. More so, try it out and have your eyes opened. Claude is absolutely fantastic. But the flaws... are often huge.

r/ClaudeAI on Reddit: Claude Code is the best coding agent in the market and it's not close
1 month ago

Claude Code just feels different. It's the only setup where the best coding model and the product are tightly integrated. "Taste" is thrown around a lot these days, but the UX here genuinely earns it: minimalist, surfaces just the right information at the right time, never overwhelms you.

Cursor can't match it because its harness bends around wildly different models, so even the same model doesn't perform as well there.

Gemini 3 Pro overthinks everything, and Gemini CLI is just a worse product. I'd bet far fewer Google engineers use it compared to Anthropic employees "antfooding" Claude Code.

Codex (GPT-5.1 Codex Max) is a powerful sledgehammer and amazing value at $20, but too slow for real agentic loops where you need quick tool calls and tight back-and-forth. In my experience, it also gets stuck more often.

Claude Code with Opus 4.5 is the premium developer experience right now. As the makers of CC put it in this interview, you can tell it's built by people who use it every day and are laser focused on winning the "premium" developer market.

I haven't tried Opencode or Factory Droid yet though. Anyone else try them and prefer them to CC?

🌐
Reddit
reddit.com β€Ί r/claudeai β€Ί using claude code for prs and code reviews
r/ClaudeAI on Reddit: Using Claude code for PRs and code reviews
August 24, 2025 -

Has anyone used Claude code for PRs?

What was your experience like?

I want to use it not just to look for good programming practices by passing it my style guide and coding standards, but also to review at a system level: check the architecture, make sure new code follows it, and make sure the code matches the documentation and meets requirements.

It may not be possible, but I don't want to give it direct repo access

🌐
Reddit
reddit.com β€Ί r/claudecode β€Ί claude code is a beast – tips from 6 months of hardcore use
r/ClaudeCode on Reddit: Claude Code is a Beast – Tips from 6 Months of Hardcore Use
October 31, 2025 -

Edit: Many of you are asking for a repo so I will make an effort to get one up in the next couple days. All of this is a part of a work project at the moment, so I have to take some time to copy everything into a fresh project and scrub any identifying info. I will post the link here when it's up. You can also follow me and I will post it on my profile so you get notified. Thank you all for the kind comments. I'm happy to share this info with others since I don't get much chance to do so in my day-to-day.

Edit (final?): I bit the bullet and spent the afternoon getting a github repo up for you guys. Just made a post with some additional info here or you can go straight to the source:

🎯 Repository: https://github.com/diet103/claude-code-infrastructure-showcase

Quick tip from a fellow lazy person: You can throw this book of a post into one of the many text-to-speech AI services like ElevenLabs Reader or Natural Reader and have it read the post for you :)

Disclaimer

I made a post about six months ago sharing my experience after a week of hardcore use with Claude Code. It's now been about six months of hardcore use, and I would like to share some more tips, tricks, and word vomit with you all. I may have gone a little overboard here, so strap in, grab a coffee, sit on the toilet, or whatever it is you do when doom-scrolling Reddit.

I want to start the post off with a disclaimer: all the content within this post is merely me sharing what setup is working best for me currently and should not be taken as gospel or the only correct way to do things. It's meant to hopefully inspire you to improve your setup and workflows with AI agentic coding. I'm just a guy, and this is just like, my opinion, man.

Also, I'm on the 20x Max plan, so your mileage may vary. And if you're looking for vibe-coding tips, you should look elsewhere. If you want the best out of CC, then you should be working together with it: planning, reviewing, iterating, exploring different approaches, etc.

Quick Overview

After 6 months of pushing Claude Code to its limits (solo rewriting 300k LOC), here's the system I built:

  • Skills that actually auto-activate when needed

  • Dev docs workflow that prevents Claude from losing the plot

  • PM2 + hooks for zero-errors-left-behind

  • Army of specialized agents for reviews, testing, and planning Let's get into it.

Background

I'm a software engineer who has been working on production web apps for the last seven years or so. And I have fully embraced the wave of AI with open arms. I'm not too worried about AI taking my job anytime soon, as it is a tool that I use to leverage my capabilities. In doing so, I have been building MANY new features and coming up with all sorts of new proposal presentations put together with Claude and GPT-5 Thinking to integrate new AI systems into our production apps. Projects I would have never dreamt of having the time to even consider before integrating AI into my workflow. And with all that, I'm giving myself a good deal of job security and have become the AI guru at my job since everyone else is about a year or so behind on how they're integrating AI into their day-to-day.

With my newfound confidence, I proposed a pretty large redesign/refactor of one of our web apps used as an internal tool at work. This was a pretty rough college student-made project that was forked off another project developed by me as an intern (created about 7 years ago and forked 4 years ago). This may have been a bit overly ambitious of me since, to sell it to the stakeholders, I agreed to finish a top-down redesign of this fairly decent-sized project (~100k LOC) in a matter of two to three months...all by myself. I knew going in that I was going to have to put in extra hours to get this done, even with the help of CC. But deep down, I know it's going to be a hit, automating several manual processes and saving a lot of time for a lot of people at the company.

It's now six months later... yeah, I probably should not have agreed to this timeline. I have tested the limits of both Claude as well as my own sanity trying to get this thing done. I completely scrapped the old frontend, as everything was seriously outdated and I wanted to play with the latest and greatest. I'm talkin' React 16 JS β†’ React 19 TypeScript, React Query v2 β†’ TanStack Query v5, React Router v4 w/ hashrouter β†’ TanStack Router w/ file-based routing, Material UI v4 β†’ MUI v7, all with strict adherence to best practices. The project is now at ~300-400k LOC and my life expectancy ~5 years shorter. It's finally ready to put up for testing, and I am incredibly happy with how things have turned out.

This used to be a project with insurmountable tech debt, ZERO test coverage, HORRIBLE developer experience (testing things was an absolute nightmare), and all sorts of jank going on. I addressed all of those issues with decent test coverage, manageable tech debt, and implemented a command-line tool for generating test data as well as a dev mode to test different features on the frontend. During this time, I have gotten to know CC's abilities and what to expect out of it.

A Note on Quality and Consistency

I've noticed a recurring theme in forums and discussions - people experiencing frustration with usage limits and concerns about output quality declining over time. I want to be clear up front: I'm not here to dismiss those experiences or claim it's simply a matter of "doing it wrong." Everyone's use cases and contexts are different, and valid concerns deserve to be heard.

That said, I want to share what's been working for me. In my experience, CC's output has actually improved significantly over the last couple of months, and I believe that's largely due to the workflow I've been constantly refining. My hope is that if you take even a small bit of inspiration from my system and integrate it into your CC workflow, you'll give it a better chance at producing quality output that you're happy with.

Now, let's be real - there are absolutely times when Claude completely misses the mark and produces suboptimal code. This can happen for various reasons. First, AI models are stochastic, meaning you can get widely varying outputs from the same input. Sometimes the randomness just doesn't go your way, and you get an output that's legitimately poor quality through no fault of your own. Other times, it's about how the prompt is structured. There can be significant differences in outputs given slightly different wording because the model takes things quite literally. If you misword or phrase something ambiguously, it can lead to vastly inferior results.

Sometimes You Just Need to Step In

Look, AI is incredible, but it's not magic. There are certain problems where pattern recognition and human intuition just win. If you've spent 30 minutes watching Claude struggle with something that you could fix in 2 minutes, just fix it yourself. No shame in that. Think of it like teaching someone to ride a bike - sometimes you just need to steady the handlebars for a second before letting go again.

I've seen this especially with logic puzzles or problems that require real-world common sense. AI can brute-force a lot of things, but sometimes a human just "gets it" faster. Don't let stubbornness or some misguided sense of "but the AI should do everything" waste your time. Step in, fix the issue, and keep moving.

I've had my fair share of terrible prompting, which usually happens towards the end of the day when I'm getting lazy and not putting much effort into my prompts. And the results really show. So next time you're convinced the output is way worse these days because Anthropic shadow-nerfed Claude, I encourage you to take a step back and reflect on how you are prompting.

Re-prompt often. You can hit double-esc to bring up your previous prompts and select one to branch from. You'd be amazed how often you can get way better results armed with the knowledge of what you don't want when giving the same prompt. All that to say, there can be many reasons why the output quality seems to be worse, and it's good to self-reflect and consider what you can do to give it the best possible chance to get the output you want.

As some wise dude somewhere probably said, "Ask not what Claude can do for you, ask what context you can give to Claude" ~ Wise Dude

Alright, I'm going to step down from my soapbox now and get on to the good stuff.

My System

I've implemented a lot of changes to my workflow as it relates to CC over the last 6 months, and the results have been pretty great, IMO.

Skills Auto-Activation System (Game Changer!)

This one deserves its own section because it completely transformed how I work with Claude Code.

The Problem

So Anthropic releases this Skills feature, and I'm thinking "this looks awesome!" The idea of having these portable, reusable guidelines that Claude can reference sounded perfect for maintaining consistency across my massive codebase. I spent a good chunk of time with Claude writing up comprehensive skills for frontend development, backend development, database operations, workflow management, etc. We're talking thousands of lines of best practices, patterns, and examples.

And then... nothing. Claude just wouldn't use them. I'd literally use the exact keywords from the skill descriptions. Nothing. I'd work on files that should trigger the skills. Nothing. It was incredibly frustrating because I could see the potential, but the skills just sat there like expensive decorations.

The "Aha!" Moment

That's when I had the idea of using hooks. If Claude won't automatically use skills, what if I built a system that MAKES it check for relevant skills before doing anything?

So I dove into Claude Code's hook system and built a multi-layered auto-activation architecture with TypeScript hooks. And it actually works!

How It Works

I created two main hooks:

1. UserPromptSubmit Hook (runs BEFORE Claude sees your message):

  • Analyzes your prompt for keywords and intent patterns

  • Checks which skills might be relevant

  • Injects a formatted reminder into Claude's context

  • Now when I ask "how does the layout system work?" Claude sees a big "🎯 SKILL ACTIVATION CHECK - Use project-catalog-developer skill" (project catalog is a large complex data grid based feature on my front end) before even reading my question

2. Stop Event Hook (runs AFTER Claude finishes responding):

  • Analyzes which files were edited

  • Checks for risky patterns (try-catch blocks, database operations, async functions)

  • Displays a gentle self-check reminder

  • "Did you add error handling? Are Prisma operations using the repository pattern?"

  • Non-blocking, just keeps Claude aware without being annoying
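
To make the mechanics concrete, here's a minimal sketch of what a UserPromptSubmit hook along these lines could look like in TypeScript. This is an illustration, not the poster's actual hook: it assumes the hook receives a JSON payload with a prompt field on stdin, that anything printed to stdout gets injected into context, and the skill-rules.json path is made up. Check the Claude Code hooks docs for the exact contract before copying it.

// skill-activation.ts - illustrative UserPromptSubmit hook (not the poster's actual code)
// Assumes: payload arrives as JSON on stdin with a `prompt` field, and stdout
// is injected into Claude's context. The config path below is hypothetical.
import { readFileSync } from "fs";
interface SkillRule { promptTriggers?: { keywords?: string[]; intentPatterns?: string[] } }
const payload = JSON.parse(readFileSync(0, "utf8"));
const prompt: string = String(payload.prompt ?? "").toLowerCase();
const rules: Record<string, SkillRule> = JSON.parse(readFileSync(".claude/skill-rules.json", "utf8"));
const matches = Object.entries(rules).filter(([, rule]) => {
  const kws = rule.promptTriggers?.keywords ?? [];
  const patterns = rule.promptTriggers?.intentPatterns ?? [];
  return kws.some((k) => prompt.includes(k.toLowerCase())) ||
         patterns.some((p) => new RegExp(p, "i").test(prompt));
});
if (matches.length > 0) {
  // Whatever is printed here lands in context before Claude reads the prompt.
  console.log("🎯 SKILL ACTIVATION CHECK - relevant skills:");
  for (const [name] of matches) console.log(`  - ${name}`);
}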

skill-rules.json Configuration

I created a central configuration file that defines every skill with:

  • Keywords: Explicit topic matches ("layout", "workflow", "database")

  • Intent patterns: Regex to catch actions ("(create|add).*?(feature|route)")

  • File path triggers: Activates based on what file you're editing

  • Content triggers: Activates if file contains specific patterns (Prisma imports, controllers, etc.)

Example snippet:

{
  "backend-dev-guidelines": {
    "type": "domain",
    "enforcement": "suggest",
    "priority": "high",
    "promptTriggers": {
      "keywords": ["backend", "controller", "service", "API", "endpoint"],
      "intentPatterns": [
        "(create|add).*?(route|endpoint|controller)",
        "(how to|best practice).*?(backend|API)"
      ]
    },
    "fileTriggers": {
      "pathPatterns": ["backend/src/**/*.ts"],
      "contentPatterns": ["router\\.", "export.*Controller"]
    }
  }
}

The Results

Now when I work on backend code, Claude automatically:

  1. Sees the skill suggestion before reading my prompt

  2. Loads the relevant guidelines

  3. Actually follows the patterns consistently

  4. Self-checks at the end via gentle reminders

The difference is night and day. No more inconsistent code. No more "wait, Claude used the old pattern again." No more manually telling it to check the guidelines every single time.

Following Anthropic's Best Practices (The Hard Way)

After getting the auto-activation working, I dove deeper and found Anthropic's official best practices docs. Turns out I was doing it wrong because they recommend keeping the main SKILL.md file under 500 lines and using progressive disclosure with resource files.

Whoops. My frontend-dev-guidelines skill was 1,500+ lines. And I had a couple other skills over 1,000 lines. These monolithic files were defeating the whole purpose of skills (loading only what you need).

So I restructured everything:

  • frontend-dev-guidelines: 398-line main file + 10 resource files

  • backend-dev-guidelines: 304-line main file + 11 resource files

Now Claude loads the lightweight main file initially, and only pulls in detailed resource files when actually needed. Token efficiency improved 40-60% for most queries.
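
For illustration, a restructured skill under this approach might look something like the layout below. The resource file names are invented, not the poster's actual files, but the shape follows the "small SKILL.md plus on-demand resources" idea:

frontend-dev-guidelines/
├── SKILL.md              ← ~400 lines: overview, key rules, pointers to resources
└── resources/
    ├── component-patterns.md
    ├── tanstack-query.md
    ├── tanstack-router.md
    ├── mui-styling.md
    └── testing.md

SKILL.md stays lightweight and just points at the resource files, so Claude only reads, say, tanstack-query.md when the task actually touches query code.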

Skills I've Created

Here's my current skill lineup:

Guidelines & Best Practices:

  • backend-dev-guidelines - Routes β†’ Controllers β†’ Services β†’ Repositories

  • frontend-dev-guidelines - React 19, MUI v7, TanStack Query/Router patterns

  • skill-developer - Meta-skill for creating more skills

Domain-Specific:

  • workflow-developer - Complex workflow engine patterns

  • notification-developer - Email/notification system

  • database-verification - Prevent column name errors (this one is a guardrail that actually blocks edits!)

  • project-catalog-developer - DataGrid layout system

All of these automatically activate based on what I'm working on. It's like having a senior dev who actually remembers all the patterns looking over Claude's shoulder.

Why This Matters

Before skills + hooks:

  • Claude would use old patterns even though I documented new ones

  • Had to manually tell Claude to check BEST_PRACTICES.md every time

  • Inconsistent code across the 300k+ LOC codebase

  • Spent too much time fixing Claude's "creative interpretations"

After skills + hooks:

  • Consistent patterns automatically enforced

  • Claude self-corrects before I even see the code

  • Can trust that guidelines are being followed

  • Way less time spent on reviews and fixes

If you're working on a large codebase with established patterns, I cannot recommend this system enough. The initial setup took a couple of days to get right, but it's paid for itself ten times over.

CLAUDE.md and Documentation Evolution

In a post I wrote 6 months ago, I had a section about rules being your best friend, which I still stand by. But my CLAUDE.md file was quickly getting out of hand and was trying to do too much. I also had this massive BEST_PRACTICES.md file (1,400+ lines) that Claude would sometimes read and sometimes completely ignore.

So I took an afternoon with Claude to consolidate and reorganize everything into a new system. Here's what changed:

What Moved to Skills

Previously, BEST_PRACTICES.md contained:

  • TypeScript standards

  • React patterns (hooks, components, suspense)

  • Backend API patterns (routes, controllers, services)

  • Error handling (Sentry integration)

  • Database patterns (Prisma usage)

  • Testing guidelines

  • Performance optimization

All of that is now in skills with the auto-activation hook ensuring Claude actually uses them. No more hoping Claude remembers to check BEST_PRACTICES.md.

What Stayed in CLAUDE.md

Now CLAUDE.md is laser-focused on project-specific info (only ~200 lines):

  • Quick commands (pnpm pm2:start, pnpm build, etc.)

  • Service-specific configuration

  • Task management workflow (dev docs system)

  • Testing authenticated routes

  • Workflow dry-run mode

  • Browser tools configuration

The New Structure

Root CLAUDE.md (100 lines)
β”œβ”€β”€ Critical universal rules
β”œβ”€β”€ Points to repo-specific claude.md files
└── References skills for detailed guidelines

Each Repo's claude.md (50-100 lines)
β”œβ”€β”€ Quick Start section pointing to:
β”‚   β”œβ”€β”€ PROJECT_KNOWLEDGE.md - Architecture & integration
β”‚   β”œβ”€β”€ TROUBLESHOOTING.md - Common issues
β”‚   └── Auto-generated API docs
└── Repo-specific quirks and commands

The magic: Skills handle all the "how to write code" guidelines, and CLAUDE.md handles "how this specific project works." Separation of concerns for the win.

Dev Docs System

Out of everything (besides skills), I think this system has made the biggest impact on the results I'm getting out of CC. Claude is like an extremely confident junior dev with extreme amnesia, losing track of what they're doing easily. This system is aimed at solving those shortcomings.

The dev docs section from my CLAUDE.md:

### Starting Large Tasks

When exiting plan mode with an accepted plan:

1. **Create Task Directory**:
mkdir -p ~/git/project/dev/active/[task-name]/

2. **Create Documents**:

- `[task-name]-plan.md` - The accepted plan
- `[task-name]-context.md` - Key files, decisions
- `[task-name]-tasks.md` - Checklist of work

3. **Update Regularly**: Mark tasks complete immediately

### Continuing Tasks

- Check `/dev/active/` for existing tasks
- Read all three files before proceeding
- Update "Last Updated" timestamps

These are documents that always get created for every feature or large task. Before using this system, there were many times when I suddenly realized that Claude had lost the plot and we were no longer implementing what we had planned out 30 minutes earlier because we went off on some tangent for whatever reason.

My Planning Process

My process starts with planning. Planning is king. If you aren't at a minimum using planning mode before asking Claude to implement something, you're gonna have a bad time, mmm'kay. You wouldn't have a builder come to your house and start slapping on an addition without having him draw things up first.

When I start planning a feature, I put Claude into planning mode, even though I will eventually have it write the plan down in a markdown file. I'm not sure planning mode is strictly necessary, but to me, it feels like it gets better results researching your codebase and gathering the correct context to put together a plan.

I created a strategic-plan-architect subagent that's basically a planning beast. It:

  • Gathers context efficiently

  • Analyzes project structure

  • Creates comprehensive structured plans with executive summary, phases, tasks, risks, success metrics, timelines

  • Generates three files automatically: plan, context, and tasks checklist

But I find it really annoying that you can't see the agent's output, and even more annoying that if you say no to the plan, it just kills the agent instead of continuing to plan. So I also created a custom slash command (/dev-docs) with the same prompt to use on the main CC instance.

Once Claude spits out that beautiful plan, I take time to review it thoroughly. This step is really important. Take time to understand it, and you'd be surprised at how often you catch silly mistakes or Claude misunderstanding a very vital part of the request or task.

More often than not, I'll be at 15% context left or less after exiting plan mode. But that's okay because we're going to put everything we need to start fresh into our dev docs. Claude usually likes to just jump in guns blazing, so I immediately slap the ESC key to interrupt and run my /dev-docs slash command. The command takes the approved plan and creates all three files, sometimes doing a bit more research to fill in gaps if there's enough context left.

And once I'm done with that, I'm pretty much set to have Claude fully implement the feature without getting lost or losing track of what it was doing, even through an auto-compaction. I just make sure to remind Claude every once in a while to update the tasks as well as the context file with any relevant context. And once I'm running low on context in the current session, I just run my slash command /update-dev-docs. Claude will note any relevant context (with next steps) as well as mark any completed tasks or add new tasks before I compact the conversation. And all I need to say is "continue" in the new session.

During implementation, depending on the size of the feature or task, I will specifically tell Claude to only implement one or two sections at a time. That way, I'm getting the chance to go in and review the code in between each set of tasks. And periodically, I have a subagent also reviewing the changes so I can catch big mistakes early on. If you aren't having Claude review its own code, then I highly recommend it because it saved me a lot of headaches catching critical errors, missing implementations, inconsistent code, and security flaws.

PM2 Process Management (Backend Debugging Game Changer)

This one's a relatively recent addition, but it's made debugging backend issues so much easier.

The Problem

My project has seven backend microservices running simultaneously. The issue was that Claude didn't have access to view the logs while services were running. I couldn't just ask "what's going wrong with the email service?" - Claude couldn't see the logs without me manually copying and pasting them into chat.

The Intermediate Solution

For a while, I had each service write its output to a timestamped log file using a devLog script. This worked... okay. Claude could read the log files, but it was clunky. Logs weren't real-time, services wouldn't auto-restart on crashes, and managing everything was a pain.

The Real Solution: PM2

Then I discovered PM2, and it was a game changer. I configured all my backend services to run via PM2 with a single command: pnpm pm2:start

What this gives me:

  • Each service runs as a managed process with its own log file

  • Claude can easily read individual service logs in real-time

  • Automatic restarts on crashes

  • Real-time monitoring with pm2 logs

  • Memory/CPU monitoring with pm2 monit

  • Easy service management (pm2 restart email, pm2 stop all, etc.)

PM2 Configuration:

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'form-service',
      script: 'npm',
      args: 'start',
      cwd: './form',
      error_file: './form/logs/error.log',
      out_file: './form/logs/out.log',
    },
    // ... 6 more services
  ],
};

Before PM2:

Me: "The email service is throwing errors"
Me: [Manually finds and copies logs]
Me: [Pastes into chat]
Claude: "Let me analyze this..."

The debugging workflow now:

Me: "The email service is throwing errors"
Claude: [Runs] pm2 logs email --lines 200
Claude: [Reads the logs] "I see the issue - database connection timeout..."
Claude: [Runs] pm2 restart email
Claude: "Restarted the service, monitoring for errors..."

Night and day difference. Claude can autonomously debug issues now without me being a human log-fetching service.

One caveat: Hot reload doesn't work with PM2, so I still run the frontend separately with pnpm dev. But for backend services that don't need hot reload as often, PM2 is incredible.

Hooks System (#NoMessLeftBehind)

The project I'm working on is multi-root and has about eight different repos in the root project directory. One for the frontend and seven microservices and utilities for the backend. I'm constantly bouncing around making changes in a couple of repos at a time depending on the feature.

And one thing that would annoy me to no end is when Claude forgets to run the build command in whatever repo it's editing to catch errors. And it will just leave a dozen or so TypeScript errors without me catching it. Then a couple of hours later I see Claude running a build script like a good boy and I see the output: "There are several TypeScript errors, but they are unrelated, so we're all good here!"

No, we are not good, Claude.

Hook #1: File Edit Tracker

First, I created a post-tool-use hook that runs after every Edit/Write/MultiEdit operation. It logs:

  • Which files were edited

  • What repo they belong to

  • Timestamps

Initially, I made it run builds immediately after each edit, but that was stupidly inefficient. Claude makes edits that break things all the time before quickly fixing them.
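
Here's a rough sketch of what such a post-tool-use tracker could look like. This is an illustration, not the poster's hook: the payload field names (tool_name, tool_input.file_path), the repo detection, and the log location are assumptions to verify against the hooks docs.

// track-edits.ts - illustrative PostToolUse hook that logs which files/repos were touched
import { appendFileSync, readFileSync } from "fs";
const payload = JSON.parse(readFileSync(0, "utf8"));         // hook payload from stdin (assumed shape)
if (["Edit", "Write", "MultiEdit"].includes(payload.tool_name)) {
  const file: string = payload.tool_input?.file_path ?? "";
  // naive repo detection for a multi-root project: first path segment under the workspace root
  const repo = file.replace(`${process.cwd()}/`, "").split("/")[0] || "unknown";
  appendFileSync(
    "/tmp/claude-edit-log.jsonl",                             // hypothetical log location
    JSON.stringify({ file, repo, at: new Date().toISOString() }) + "\n"
  );
}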

Hook #2: Build Checker

Then I added a Stop hook that runs when Claude finishes responding. It:

  1. Reads the edit logs to find which repos were modified

  2. Runs build scripts on each affected repo

  3. Checks for TypeScript errors

  4. If < 5 errors: Shows them to Claude

  5. If β‰₯ 5 errors: Recommends launching auto-error-resolver agent

  6. Logs everything for debugging

Since implementing this system, I've not had a single instance where Claude has left errors in the code for me to find later. The hook catches them immediately, and Claude fixes them before moving on.
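
A stripped-down Stop hook along these lines might look like the sketch below. It's illustrative only: it reuses the hypothetical edit log from the tracker sketch above, assumes each repo builds with pnpm build, and keeps the same five-error threshold described in the list.

// check-builds.ts - illustrative Stop hook: build every repo that was edited this turn
import { execSync } from "child_process";
import { existsSync, readFileSync } from "fs";
const LOG = "/tmp/claude-edit-log.jsonl";                     // same hypothetical path as the tracker
if (existsSync(LOG)) {
  const repos = new Set(
    readFileSync(LOG, "utf8").trim().split("\n").filter(Boolean).map((l) => JSON.parse(l).repo)
  );
  for (const repo of repos) {
    try {
      execSync("pnpm build", { cwd: repo, stdio: "pipe" });   // throws on a non-zero exit code
    } catch (err: any) {
      const out = `${err.stdout ?? ""}${err.stderr ?? ""}`;
      const tsErrors = out.split("\n").filter((l) => l.includes("error TS"));
      if (tsErrors.length >= 5) {
        console.log(`❌ ${repo}: ${tsErrors.length} TypeScript errors - launch the auto-error-resolver agent`);
      } else if (tsErrors.length > 0) {
        console.log(`❌ ${repo} has TypeScript errors:\n${tsErrors.join("\n")}`);
      }
    }
  }
}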

Hook #3: Prettier Formatter

This one's simple but effective. After Claude finishes responding, automatically format all edited files with Prettier using the appropriate .prettierrc config for that repo.

No more going in to manually edit a file just to have Prettier run and produce 20 changes because Claude decided to leave off trailing commas last week when we created that file.

⚠️ Update: I No Longer Recommend This Hook

After publishing, a reader shared detailed data showing that file modifications trigger <system-reminder> notifications that can consume significant context tokens. In their case, Prettier formatting led to 160k tokens consumed in just 3 rounds due to system-reminders showing file diffs.

While the impact varies by project (large files and strict formatting rules are worst-case scenarios), I'm removing this hook from my setup. It's not a big deal to let formatting happen when you manually edit files anyway, and the potential token cost isn't worth the convenience.

If you want automatic formatting, consider running Prettier manually between sessions instead of during Claude conversations.

Hook #4: Error Handling Reminder

This is the gentle philosophy hook I mentioned earlier:

  • Analyzes edited files after Claude finishes

  • Detects risky patterns (try-catch, async operations, database calls, controllers)

  • Shows a gentle reminder if risky code was written

  • Claude self-assesses whether error handling is needed

  • No blocking, no friction, just awareness

Example output:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
πŸ“‹ ERROR HANDLING SELF-CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️  Backend Changes Detected
   2 file(s) edited

   ❓ Did you add Sentry.captureException() in catch blocks?
   ❓ Are Prisma operations wrapped in error handling?

   πŸ’‘ Backend Best Practice:
      - All errors should be captured to Sentry
      - Controllers should extend BaseController
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The Complete Hook Pipeline

Here's what happens on every Claude response now:

Claude finishes responding
  ↓
Hook 1: Prettier formatter runs β†’ All edited files auto-formatted
  ↓
Hook 2: Build checker runs β†’ TypeScript errors caught immediately
  ↓
Hook 3: Error reminder runs β†’ Gentle self-check for error handling
  ↓
If errors found β†’ Claude sees them and fixes
  ↓
If too many errors β†’ Auto-error-resolver agent recommended
  ↓
Result: Clean, formatted, error-free code

And the UserPromptSubmit hook ensures Claude loads relevant skills BEFORE even starting work.

No mess left behind. It's beautiful.

Scripts Attached to Skills

One really cool pattern I picked up from Anthropic's official skill examples on GitHub: attach utility scripts to skills.

For example, my backend-dev-guidelines skill has a section about testing authenticated routes. Instead of just explaining how authentication works, the skill references an actual script:

### Testing Authenticated Routes

Use the provided test-auth-route.js script:


`node scripts/test-auth-route.js http://localhost:3002/api/endpoint`

The script handles all the complex authentication steps for you:

  1. Gets a refresh token from Keycloak

  2. Signs the token with JWT secret

  3. Creates cookie header

  4. Makes authenticated request

When Claude needs to test a route, it knows exactly what script to use and how to use it. No more "let me create a test script" and reinventing the wheel every time.
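
As a rough idea of the shape of such a script, here is a heavily simplified sketch. It is not the poster's test-auth-route.js: the environment variables, cookie name, and the payload being signed are placeholders; only the Keycloak password-grant token endpoint is the standard one.

// auth-route-check.ts - illustrative sketch of an auth test script (placeholders throughout)
import jwt from "jsonwebtoken";
async function main() {
  const [routeUrl] = process.argv.slice(2);
  // 1. Standard Keycloak password-grant request to get tokens
  const tokenRes = await fetch(
    `${process.env.KEYCLOAK_URL}/realms/${process.env.KEYCLOAK_REALM}/protocol/openid-connect/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "password",
        client_id: process.env.KEYCLOAK_CLIENT_ID ?? "",
        username: process.env.TEST_USER ?? "",
        password: process.env.TEST_PASSWORD ?? "",
      }),
    }
  );
  const tokens: any = await tokenRes.json();
  // 2. App-specific step: sign a session payload with the app's JWT secret (placeholder payload)
  const session = jwt.sign({ refreshToken: tokens.refresh_token }, process.env.JWT_SECRET ?? "dev-secret");
  // 3 + 4. Send it as a cookie (cookie name is a placeholder) to the route under test
  const res = await fetch(routeUrl, { headers: { Cookie: `session=${session}` } });
  console.log(res.status, await res.text());
}
main();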

I'm planning to expand this pattern - attach more utility scripts to relevant skills so Claude has ready-to-use tools instead of generating them from scratch.

Tools and Other Things

SuperWhisper on Mac

Voice-to-text for prompting when my hands are tired from typing. It works surprisingly well, and Claude has no trouble following my rambling dictation.

Memory MCP

I use this less over time now that skills handle most of the "remembering patterns" work. But it's still useful for tracking project-specific decisions and architectural choices that don't belong in skills.

BetterTouchTool

  • Relative URL copy from Cursor (for sharing code references)

    • I have VSCode open to more easily find the files I’m looking for and I can double tap CAPS-LOCK, then BTT inputs the shortcut to copy relative URL, transforms the clipboard contents by prepending an β€˜@’ symbol, focuses the terminal, and then pastes the file path. All in one.

  • Double-tap hotkeys to quickly focus apps (CMD+CMD = Claude Code, OPT+OPT = Browser)

  • Custom gestures for common actions

Honestly, the time savings on just not fumbling between apps is worth the BTT purchase alone.

Scripts for Everything

If there's any annoying tedious task, chances are there's a script for that:

  • Command-line tool to generate mock test data. Before using Claude Code, generating mock data was extremely annoying because I had to fill out a form with about 120 questions just to create one single test submission.

  • Authentication testing scripts (get tokens, test routes)

  • Database resetting and seeding

  • Schema diff checker before migrations

  • Automated backup and restore for dev database

Pro tip: When Claude helps you write a useful script, immediately document it in CLAUDE.md or attach it to a relevant skill. Future you will thank past you.

Documentation (Still Important, But Evolved)

I think next to planning, documentation is almost just as important. I document everything as I go in addition to the dev docs that are created for each task or feature. From system architecture to data flow diagrams to actual developer docs and APIs, just to name a few.

But here's what changed: Documentation now works WITH skills, not instead of them.

Skills contain: Reusable patterns, best practices, how-to guides

Documentation contains: System architecture, data flows, API references, integration points

For example:

  • "How to create a controller" β†’ backend-dev-guidelines skill

  • "How our workflow engine works" β†’ Architecture documentation

  • "How to write React components" β†’ frontend-dev-guidelines skill

  • "How notifications flow through the system" β†’ Data flow diagram + notification skill

I still have a LOT of docs (850+ markdown files), but now they're laser-focused on project-specific architecture rather than repeating general best practices that are better served by skills.

You don't necessarily have to go that crazy, but I highly recommend setting up multiple levels of documentation: broad architectural overviews of specific services that include paths to other docs covering specific parts of the architecture in more detail. It will make a major difference in Claude's ability to navigate your codebase.

Prompt Tips

When you're writing out your prompt, you should try to be as specific as possible about what you are wanting as a result. Once again, you wouldn't ask a builder to come out and build you a new bathroom without at least discussing plans, right?

"You're absolutely right! Shag carpet probably is not the best idea to have in a bathroom."

Sometimes you might not know the specifics, and that's okay. If you don't, ask questions, or tell Claude to research and come back with several potential solutions. You could even use a specialized subagent or any other AI chat interface to do your research. The world is your oyster. I promise you this will pay dividends, because you will be able to look at the plan Claude has produced and have a better idea whether it's good, bad, or needs adjustments. Otherwise, you're just flying blind, pure vibe-coding. Then you end up in a situation where you don't even know what context to include because you don't know which files are related to the thing you're trying to fix.

Try not to lead in your prompts if you want honest, unbiased feedback. If you're unsure about something Claude did, ask about it in a neutral way instead of saying, "Is this good or bad?" Claude tends to tell you what it thinks you want to hear, so leading questions can skew the response. It's better to just describe the situation and ask for thoughts or alternatives. That way, you'll get a more balanced answer.

Agents, Hooks, and Slash Commands (The Holy Trinity)

Agents

I've built a small army of specialized agents:

Quality Control:

  • code-architecture-reviewer - Reviews code for best practices adherence

  • build-error-resolver - Systematically fixes TypeScript errors

  • refactor-planner - Creates comprehensive refactoring plans

Testing & Debugging:

  • auth-route-tester - Tests backend routes with authentication

  • auth-route-debugger - Debugs 401/403 errors and route issues

  • frontend-error-fixer - Diagnoses and fixes frontend errors

Planning & Strategy:

  • strategic-plan-architect - Creates detailed implementation plans

  • plan-reviewer - Reviews plans before implementation

  • documentation-architect - Creates/updates documentation

Specialized:

  • frontend-ux-designer - Fixes styling and UX issues

  • web-research-specialist - Researches issues along with many other things on the web

  • reactour-walkthrough-designer - Creates UI tours

The key with agents is to give them very specific roles and clear instructions on what to return. I learned this the hard way after creating agents that would go off and do who-knows-what and come back with "I fixed it!" without telling me what they fixed.
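
For reference, a subagent is just a markdown file with a bit of frontmatter under .claude/agents/. The condensed example below is made up, not the poster's actual file (check the subagents docs for the exact frontmatter fields), but it shows the idea of a narrow role plus an explicit "what to return" contract:

---
name: code-architecture-reviewer
description: Reviews the current git diff for adherence to project architecture (routes β†’ controllers β†’ services β†’ repositories) and best-practice skills. Use after completing a feature section.
---

You are a senior reviewer. Review only the changes in the current diff.
Return exactly three sections:
1. Critical issues (bugs, security, missing error handling) with file references
2. Architecture/consistency violations against existing patterns
3. Verdict: approve, or a short list of required changes

No "I fixed it!" summaries - report findings, do not modify files.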

Hooks (Covered Above)

The hook system is honestly what ties everything together. Without hooks:

  • Skills sit unused

  • Errors slip through

  • Code is inconsistently formatted

  • No automatic quality checks

With hooks:

  • Skills auto-activate

  • Zero errors left behind

  • Automatic formatting

  • Quality awareness built-in

Slash Commands

I have quite a few custom slash commands, but these are the ones I use most:

Planning & Docs:

  • /dev-docs - Create comprehensive strategic plan

  • /dev-docs-update - Update dev docs before compaction

  • /create-dev-docs - Convert approved plan to dev doc files

Quality & Review:

  • /code-review - Architectural code review

  • /build-and-fix - Run builds and fix all errors

Testing:

  • /route-research-for-testing - Find affected routes and launch tests

  • /test-route - Test specific authenticated routes

The beauty of slash commands is they expand into full prompts, so you can pack a ton of context and instructions into a simple command. Way better than typing out the same instructions every time.
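
Since a custom slash command is just a markdown prompt file under .claude/commands/, a trimmed-down, invented example in the spirit of /dev-docs might look like this (not the poster's actual command, and the frontmatter fields should be checked against the slash command docs):

---
description: Turn the approved plan into dev docs (plan, context, tasks)
---

Take the plan we just agreed on and create dev/active/[task-name]/ with three files:
1. [task-name]-plan.md - the accepted plan, verbatim
2. [task-name]-context.md - key files, decisions made, open questions
3. [task-name]-tasks.md - a checklist of work split into small sections

Do any light research needed to fill gaps, update the "Last Updated" timestamps, then stop and wait for review before implementing anything.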

Conclusion

After six months of hardcore use, here's what I've learned:

The Essentials:

  1. Plan everything - Use planning mode or strategic-plan-architect

  2. Skills + Hooks - Auto-activation is the only way skills actually work reliably

  3. Dev docs system - Prevents Claude from losing the plot

  4. Code reviews - Have Claude review its own work

  5. PM2 for backend - Makes debugging actually bearable

The Nice-to-Haves:

  • Specialized agents for common tasks

  • Slash commands for repeated workflows

  • Comprehensive documentation

  • Utility scripts attached to skills

  • Memory MCP for decisions

And that's about all I can think of for now. Like I said, I'm just some guy, and I would love to hear tips and tricks from everybody else, as well as any criticisms. Because I'm always up for improving upon my workflow. I honestly just wanted to share what's working for me with other people since I don't really have anybody else to share this with IRL (my team is very small, and they are all very slow getting on the AI train).

If you made it this far, thanks for taking the time to read. If you have questions about any of this stuff or want more details on implementation, happy to share. The hooks and skills system especially took some trial and error to get right, but now that it's working, I can't imagine going back.

TL;DR: Built an auto-activation system for Claude Code skills using TypeScript hooks, created a dev docs workflow to prevent context loss, and implemented PM2 + automated error checking. Result: Solo rewrote 300k LOC in 6 months with consistent quality.

🌐
Reddit
reddit.com β€Ί r/claudeai β€Ί tips/advice for using claude code with pr reviews?
r/ClaudeAI on Reddit: Tips/advice for using Claude Code with PR reviews?
July 6, 2025 -

One of the areas I'm trying to improve is using LLMs to improve my code reviews. So far, the best I've gotten to is having Claude Code run on the PR branch and analyze the changes compared to `main`. This has been helpful so far, but I know it's a minimal first step.

I'm wondering what others are doing in this regard and if they have found anything that makes CC give more valuable feedback.

🌐
Reddit
reddit.com β€Ί r/claudecode β€Ί claude-codesentinel v2.0: advanced code review framework for claude code
r/ClaudeCode on Reddit: Claude-CodeSentinel v2.0: Advanced Code Review Framework for Claude Code
October 20, 2025 - I've been developing Claude-CodeSentinel v2.0, an open-source framework built specifically for Claude Code that performs comprehensive code reviews using Chain-of-Thought analysis.