r/ClaudeAI on Reddit: Claude Code PLAN mode. (February 18, 2025)

Maybe you missed it:

Plan mode is a special operating mode in Claude Code that allows you to research, analyze, and create implementation plans without making any actual changes to your system or codebase.

What Plan Mode Does:

Research & Analysis Only:

  • Read files and examine code

  • Search through codebases

  • Analyze project structure

  • Gather information from web sources

  • Review documentation

No System Changes:

  • Cannot edit files

  • Cannot run bash commands that modify anything

  • Cannot create/delete files

  • Cannot make git commits

  • Cannot install packages or change configurations

When Plan Mode Activates:

Plan mode is typically activated when:

  • You ask for planning or analysis before implementation

  • You want to understand a codebase before making changes

  • You request a detailed implementation strategy

  • The system detects you want to plan before executing

How It Works:

  1. Research Phase: I gather all necessary information using read-only tools

  2. Plan Creation: I develop a comprehensive implementation plan

  3. Plan Presentation: I use the exit_plan_mode tool to present the plan

  4. User Approval: You review and approve the plan

  5. Execution Phase: After approval, I can proceed with actual implementation

Benefits:

  • Safety: Prevents accidental changes during exploration

  • Thorough Planning: Ensures comprehensive analysis before implementation

  • User Control: You approve exactly what will be done before it happens

  • Better Outcomes: Well-planned implementations tend to be more successful

r/ClaudeAI on Reddit: The planning mode is really good (Claude Code) (July 1, 2025)

I've been using the planning mode for a while now. It's actually very very good. I now use it almost exclusively when I start working on a new feature.

Here's my workflow:

  • Shift + Tab twice to enter the planning mode

  • Brainstorm the implementation with Claude, provide feedback, and iterate until I'm happy with the solution.

  • I use @ references to give Claude additional context so it doesn't spend a lot of time exploring

  • For convenience, I also connect CC to VS Code using the `/ide` slash command. I open a file in VS Code, select some lines, and ask CC about them.

  • I iterate with Claude until I am happy with the solution. After that, Shift + Tab twice to enter auto edit mode. CC will complete the implementation with very little intervention.

I find that with this approach, I don't even need to create PLAN.md anymore. I try to keep the feature iterations small, and commit the changes as soon as the code is working.

Do you have similar experience?


Addendum:

To use the /ide command, see https://docs.anthropic.com/en/docs/claude-code/ide-integrations

https://cuong.io/blog/2025/06/23-claude-code-ide-vs-code


The key for this to be effective is to keep the scope small. Plan what you will do in the next 30 minutes or less.

The workflow should be:

plan > code > debug > commit

plan > code > debug > commit

plan > code > debug > commit

...

This works really well with small and incremental changes.

Pro tip: while waiting for Claude, you can open another terminal and start another Claude. You can have multiple planning sessions at the same time.


For long discussions, you can use normal mode and just ask Claude not to make any changes.

Better yet, use the repomix CLI to create a dump of your project.

https://github.com/yamadashy/repomix

You can then upload it to ChatGPT or the Claude web UI for long discussions. ChatGPT's Projects + Canvas feature is super neat for this kind of long planning.

r/ClaudeCode on Reddit: Claude code /agents mode - could someone explain how it works and what are your experiences? (August 19, 2025)

Hi,
could someone explain how the /agents mode in Claude Code actually works? I'm wondering if it's more of a step-by-step coding sandbox, or closer to autonomous agents handling tasks. What are your experiences with using this mode?

Top answer:
The primary purpose of agents is to save token space in your main context by offloading work and only retaining the relevant output in the context rather than the entire conversation history. They also allow for parallelization and choosing specific models for specific tasks, but the biggest thing is the context encapsulation.

The key things to know about agents:

- Agents only receive the prompt that the outer context chooses to give them. They do NOT receive the full conversation prior to the agent being called. This means their responsibility needs to be well-defined, and the main context should have a clear understanding of what to give them. This is especially relevant for agents that you want to write code for you, as code often requires a lot of context to write correctly. I personally only have agents write very simple code, or I make sure they can get all the context they need by passing in pre-prepared plans in markdown files.
- The main context only sees the last message of the agent in its context stream. This is usually the final summary report from its task, but sometimes it can do things like update a TodoWrite as a final step, and this messes up what the outer context sees.

My recommendations, based on my experience working with them:

- Agents are _great_ for data gathering and consolidation (i.e. read-only tasks). I have a standard agent I use any time I want to gather context for a complicated task, and this has helped a lot with removing all the codebase exploration from the working context of the main Claude.
- Agents are also great for wrapping tool calls that generate a lot of output, like builds and unit tests. I have a standard build-test-engineer whose only job is to run build/test, then consolidate the output to just what's relevant to the main Claude. I've found this has substantially improved performance during extended debugging of its own changes, as it keeps the actual work closer in context so that it doesn't get stuck trying to hack around bugs without a good memory of why it's trying to do that in the first place.
- To get the best results, use slash commands to automate requesting it to explicitly use specific agents. It's not always great at deciding to use agents on its own.
- I also include some explicit instructions for agent use cases in the `CLAUDE.md`. So far I've gotten Claude to reliably use `build-test-engineer`, and I also have it using `batch-editor` reasonably often (this is my agent for applying simple edits across a bunch of files, like for refactoring/cleanup tasks).
Second answer:
Use /agents, then use Claude's recommended method to create an agent. Then @-mention the agent and give it a task. It basically spins up a new instance in the background and works through the task with fresh context, orchestrated by Claude.
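
For reference, a subagent like the build-test-engineer mentioned in the top answer is defined as a markdown file with YAML frontmatter under `.claude/agents/`. A minimal sketch - the description, tool list, and prompt here are illustrative, not the commenter's actual agent:

---
name: build-test-engineer
description: Runs builds and tests, then reports only the output relevant to the current task. Use after making code changes.
tools: Bash, Read, Grep
---

You are a build/test runner. Run the project's build and test commands,
then reply with a short report: pass/fail status plus only the error
output relevant to the task at hand. Do not attempt fixes yourself.

The frontmatter description matters: it's what the main Claude uses when deciding whether to delegate to the agent.
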
r/ClaudeAI on Reddit: How do you actually use Claude Code in your day-to-day workflow? I'll start: (2 weeks ago)

Share your methods and tips!

After a few months using Claude Code, I ended up developing a somewhat different workflow that’s been working really well. The basic idea is to use two separate Claudes - one to think and another to execute.

Here’s how it works:

Claude Desktop App: acts as supervisor. It reads all project documentation, analyzes logs when there’s a bug, investigates the code and creates very specific prompts describing what needs to be done. But it never modifies anything directly.

Claude Code CLI in VS Code: receives these prompts and does the implementations. Has full access to the project and executes the code changes.

My role: basically copying prompts from one Claude to the other, running tests, and reporting what happened.

The flow in practice goes something like this: I start the session having Claude Desktop read the CLAUDE.md (complete documentation) and the database schema. When I have a bug or new feature, I describe it to Claude Desktop. It investigates, reads the relevant files and creates a surgical prompt. I copy that prompt to Claude Code which implements it. Then Claude Desktop validates by reading each modified file - it checks security, performance, whether it followed project standards, etc. If there’s an error in tests, Claude Desktop analyzes the logs and generates a new correction prompt.

What makes this viable: I had to create some automations because Claude Code doesn’t have native access to certain things:

  1. CLAUDE.md - Maintains complete project documentation. I have a script that automatically updates this file whenever I modify code. This way Claude Desktop always has the current context.

  2. EstruturaBanco.txt - Since Claude Code doesn’t access the database directly, this file has the entire structure: tables, columns, relationships. Also has an update script I run when I change the schema.

  3. Log System - Claude Code CLI and Claude Desktop don't see terminal logs, so I created two .log files (one for frontend, another for backend) that automatically record only the last execution. This avoids accumulating gigabytes of logs, and Claude Desktop can read them when it needs to investigate errors.

Important: I always point Claude Desktop at the project's LOCAL folder, never at the GitHub repository. Learned this the hard way - GitHub's cache/snapshot doesn't pick up the latest Claude CLI updates, so it becomes impossible to verify what was recently created or fixed.

About the prompts: I use XML tags to better structure the instructions like: <role>, <project_context>, <workflow_architecture>, <tools_policy>, <investigation_protocol>, <quality_expectations>. Really helps maintain consistency and Claude understands better what it can or can’t do.

Results so far: The project has 496 passing unit tests, queries running at an average of 2.80ms, and I’ve managed to keep everything well organized. The separation of responsibilities helps a lot - the Claude that plans isn’t the same one that executes, so there’s no context loss.

And you, how do you use Claude Code day-to-day? Do you go straight to implementation or do you also have a structured workflow? Does anyone else use automation systems to keep context updated? Curious to know how you solve these challenges.

r/ClaudeAI on Reddit: Introducing two new ways to learn in Claude Code and the Claude app (July 11, 2025)

Today we're launching two new ways to learn in Claude Code and the Claude app.

First, Claude Code now lets you customize communication styles with /output-style.

We're introducing two output style presets to help developers and students build skills:

  • Explanatory: Claude explains the reasoning behind its architectural decisions, covering tradeoffs, and teaches you best practices as it codes.

  • Learning: You take turns coding with Claude. It's like pair programming with a mentor that helps you learn while getting real work done.

And for all app users: We're bringing our Learning style (previously exclusive to Claude for Education) to everyone. Select the 'Learning' style in any chat to have Claude guide you through hard concepts rather than providing immediate answers.

Learn more: https://docs.anthropic.com/en/docs/claude-code/output-styles

r/ClaudeAI on Reddit: Plan Mode - Claude Code Stealth Update (March 28, 2025)

Claude Code has just stealthily added a plan mode, reached by hitting shift+tab once more (after enabling auto-updates). No files editable, purely read & think. No documentation or release notes anywhere yet, as far as I can see.

Likely based on this GitHub issue (and other demand) https://github.com/anthropics/claude-code/issues/982

r/ClaudeAI on Reddit: Claude Code is a Beast – Tips from 6 Months of Hardcore Use (October 31, 2025)

Quick pro-tip from a fellow lazy person: You can throw this book of a post into one of the many text-to-speech AI services like ElevenLabs Reader or Natural Reader and have it read the post for you :)

Edit: Many of you are asking for a repo so I will make an effort to get one up in the next couple days. All of this is a part of a work project at the moment, so I have to take some time to copy everything into a fresh project and scrub any identifying info. I will post the link here when it's up. You can also follow me and I will post it on my profile so you get notified. Thank you all for the kind comments. I'm happy to share this info with others since I don't get much chance to do so in my day-to-day.

Edit (final?): I bit the bullet and spent the afternoon getting a github repo up for you guys. Just made a post with some additional info here or you can go straight to the source:

🎯 Repository: https://github.com/diet103/claude-code-infrastructure-showcase

Disclaimer

I made a post about six months ago sharing my experience after a week of hardcore use with Claude Code. It's now been about six months of hardcore use, and I would like to share some more tips, tricks, and word vomit with you all. I may have gone a little overboard here, so strap in, grab a coffee, sit on the toilet or whatever it is you do when doom-scrolling reddit.

I want to start the post off with a disclaimer: all the content within this post is merely me sharing what setup is working best for me currently and should not be taken as gospel or the only correct way to do things. It's meant to hopefully inspire you to improve your setup and workflows with AI agentic coding. I'm just a guy, and this is just like, my opinion, man.

Also, I'm on the 20x Max plan, so your mileage may vary. And if you're looking for vibe-coding tips, you should look elsewhere. If you want the best out of CC, then you should be working together with it: planning, reviewing, iterating, exploring different approaches, etc.

Quick Overview

After 6 months of pushing Claude Code to its limits (solo rewriting 300k LOC), here's the system I built:

  • Skills that actually auto-activate when needed

  • Dev docs workflow that prevents Claude from losing the plot

  • PM2 + hooks for zero-errors-left-behind

  • Army of specialized agents for reviews, testing, and planning

Let's get into it.

Background

I'm a software engineer who has been working on production web apps for the last seven years or so. And I have fully embraced the wave of AI with open arms. I'm not too worried about AI taking my job anytime soon, as it is a tool that I use to leverage my capabilities. In doing so, I have been building MANY new features and coming up with all sorts of new proposal presentations put together with Claude and GPT-5 Thinking to integrate new AI systems into our production apps. Projects I would have never dreamt of having the time to even consider before integrating AI into my workflow. And with all that, I'm giving myself a good deal of job security and have become the AI guru at my job since everyone else is about a year or so behind on how they're integrating AI into their day-to-day.

With my newfound confidence, I proposed a pretty large redesign/refactor of one of our web apps used as an internal tool at work. This was a pretty rough college student-made project that was forked off another project developed by me as an intern (created about 7 years ago and forked 4 years ago). This may have been a bit overly ambitious of me since, to sell it to the stakeholders, I agreed to finish a top-down redesign of this fairly decent-sized project (~100k LOC) in a matter of a few months...all by myself. I knew going in that I was going to have to put in extra hours to get this done, even with the help of CC. But deep down, I know it's going to be a hit, automating several manual processes and saving a lot of time for a lot of people at the company.

It's now six months later... yeah, I probably should not have agreed to this timeline. I have tested the limits of both Claude as well as my own sanity trying to get this thing done. I completely scrapped the old frontend, as everything was seriously outdated and I wanted to play with the latest and greatest. I'm talkin' React 16 JS → React 19 TypeScript, React Query v2 → TanStack Query v5, React Router v4 w/ hashrouter → TanStack Router w/ file-based routing, Material UI v4 → MUI v7, all with strict adherence to best practices. The project is now at ~300-400k LOC and my life expectancy ~5 years shorter. It's finally ready to put up for testing, and I am incredibly happy with how things have turned out.

This used to be a project with insurmountable tech debt, ZERO test coverage, HORRIBLE developer experience (testing things was an absolute nightmare), and all sorts of jank going on. I addressed all of those issues with decent test coverage, manageable tech debt, and implemented a command-line tool for generating test data as well as a dev mode to test different features on the frontend. During this time, I have gotten to know CC's abilities and what to expect out of it.

A Note on Quality and Consistency

I've noticed a recurring theme in forums and discussions - people experiencing frustration with usage limits and concerns about output quality declining over time. I want to be clear up front: I'm not here to dismiss those experiences or claim it's simply a matter of "doing it wrong." Everyone's use cases and contexts are different, and valid concerns deserve to be heard.

That said, I want to share what's been working for me. In my experience, CC's output has actually improved significantly over the last couple of months, and I believe that's largely due to the workflow I've been constantly refining. My hope is that if you take even a small bit of inspiration from my system and integrate it into your CC workflow, you'll give it a better chance at producing quality output that you're happy with.

Now, let's be real - there are absolutely times when Claude completely misses the mark and produces suboptimal code. This can happen for various reasons. First, AI models are stochastic, meaning you can get widely varying outputs from the same input. Sometimes the randomness just doesn't go your way, and you get an output that's legitimately poor quality through no fault of your own. Other times, it's about how the prompt is structured. There can be significant differences in outputs given slightly different wording because the model takes things quite literally. If you misword or phrase something ambiguously, it can lead to vastly inferior results.

Sometimes You Just Need to Step In

Look, AI is incredible, but it's not magic. There are certain problems where pattern recognition and human intuition just win. If you've spent 30 minutes watching Claude struggle with something that you could fix in 2 minutes, just fix it yourself. No shame in that. Think of it like teaching someone to ride a bike, sometimes you just need to steady the handlebars for a second before letting go again.

I've seen this especially with logic puzzles or problems that require real-world common sense. AI can brute-force a lot of things, but sometimes a human just "gets it" faster. Don't let stubbornness or some misguided sense of "but the AI should do everything" waste your time. Step in, fix the issue, and keep moving.

I've had my fair share of terrible prompting, which usually happens towards the end of the day where I'm getting lazy and I'm not putting that much effort into my prompts. And the results really show. So next time you are having these kinds of issues where you think the output is way worse these days because you think Anthropic shadow-nerfed Claude, I encourage you to take a step back and reflect on how you are prompting.

Re-prompt often. You can hit double-esc to bring up your previous prompts and select one to branch from. You'd be amazed how often you can get way better results armed with the knowledge of what you don't want when giving the same prompt. All that to say, there can be many reasons why the output quality seems to be worse, and it's good to self-reflect and consider what you can do to give it the best possible chance to get the output you want.

As some wise dude somewhere probably said, "Ask not what Claude can do for you, ask what context you can give to Claude" ~ Wise Dude

Alright, I'm going to step down from my soapbox now and get on to the good stuff.

My System

I've implemented a lot of changes to my workflow as it relates to CC over the last 6 months, and the results have been pretty great, IMO.

Skills Auto-Activation System (Game Changer!)

This one deserves its own section because it completely transformed how I work with Claude Code.

The Problem

So Anthropic releases this Skills feature, and I'm thinking "this looks awesome!" The idea of having these portable, reusable guidelines that Claude can reference sounded perfect for maintaining consistency across my massive codebase. I spent a good chunk of time with Claude writing up comprehensive skills for frontend development, backend development, database operations, workflow management, etc. We're talking thousands of lines of best practices, patterns, and examples.

And then... nothing. Claude just wouldn't use them. I'd literally use the exact keywords from the skill descriptions. Nothing. I'd work on files that should trigger the skills. Nothing. It was incredibly frustrating because I could see the potential, but the skills just sat there like expensive decorations.

The "Aha!" Moment

That's when I had the idea of using hooks. If Claude won't automatically use skills, what if I built a system that MAKES it check for relevant skills before doing anything?

So I dove into Claude Code's hook system and built a multi-layered auto-activation architecture with TypeScript hooks. And it actually works!

How It Works

I created two main hooks:

1. UserPromptSubmit Hook (runs BEFORE Claude sees your message):

  • Analyzes your prompt for keywords and intent patterns

  • Checks which skills might be relevant

  • Injects a formatted reminder into Claude's context

  • Now when I ask "how does the layout system work?" Claude sees a big "🎯 SKILL ACTIVATION CHECK - Use project-catalog-developer skill" (project catalog is a large complex data grid based feature on my front end) before even reading my question

2. Stop Event Hook (runs AFTER Claude finishes responding):

  • Analyzes which files were edited

  • Checks for risky patterns (try-catch blocks, database operations, async functions)

  • Displays a gentle self-check reminder

  • "Did you add error handling? Are Prisma operations using the repository pattern?"

  • Non-blocking, just keeps Claude aware without being annoying
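
To give you an idea, here's roughly what the keyword-matching core of such a hook looks like. This is a stripped-down sketch, not my full implementation - the real hook also handles intent patterns and file triggers, and the paths here are illustrative. Claude Code passes hook input as JSON on stdin, and for UserPromptSubmit hooks, anything printed to stdout gets injected into Claude's context:

// skill-check.ts - simplified UserPromptSubmit hook sketch
import { readFileSync } from "node:fs";

interface SkillRule {
  promptTriggers?: { keywords?: string[] };
}

// Hook input arrives as JSON on stdin.
const input = JSON.parse(readFileSync(0, "utf8")) as { prompt?: string };
const prompt = (input.prompt ?? "").toLowerCase();

// skill-rules.json maps skill names to their trigger definitions (see below).
const rules: Record<string, SkillRule> = JSON.parse(
  readFileSync(".claude/skills/skill-rules.json", "utf8"),
);

const matches = Object.entries(rules)
  .filter(([, rule]) =>
    rule.promptTriggers?.keywords?.some((kw) => prompt.includes(kw.toLowerCase())),
  )
  .map(([name]) => name);

if (matches.length > 0) {
  // Stdout from a UserPromptSubmit hook is added to Claude's context.
  console.log(`🎯 SKILL ACTIVATION CHECK - consider using: ${matches.join(", ")}`);
}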

skill-rules.json Configuration

I created a central configuration file that defines every skill with:

  • Keywords: Explicit topic matches ("layout", "workflow", "database")

  • Intent patterns: Regex to catch actions ("(create|add).*?(feature|route)")

  • File path triggers: Activates based on what file you're editing

  • Content triggers: Activates if file contains specific patterns (Prisma imports, controllers, etc.)

Example snippet:

{
  "backend-dev-guidelines": {
    "type": "domain",
    "enforcement": "suggest",
    "priority": "high",
    "promptTriggers": {
      "keywords": ["backend", "controller", "service", "API", "endpoint"],
      "intentPatterns": [
        "(create|add).*?(route|endpoint|controller)",
        "(how to|best practice).*?(backend|API)"
      ]
    },
    "fileTriggers": {
      "pathPatterns": ["backend/src/**/*.ts"],
      "contentPatterns": ["router\\.", "export.*Controller"]
    }
  }
}

The Results

Now when I work on backend code, Claude automatically:

  1. Sees the skill suggestion before reading my prompt

  2. Loads the relevant guidelines

  3. Actually follows the patterns consistently

  4. Self-checks at the end via gentle reminders

The difference is night and day. No more inconsistent code. No more "wait, Claude used the old pattern again." No more manually telling it to check the guidelines every single time.

Following Anthropic's Best Practices (The Hard Way)

After getting the auto-activation working, I dove deeper and found Anthropic's official best practices docs. Turns out I was doing it wrong because they recommend keeping the main SKILL.md file under 500 lines and using progressive disclosure with resource files.

Whoops. My frontend-dev-guidelines skill was 1,500+ lines. And I had a couple other skills over 1,000 lines. These monolithic files were defeating the whole purpose of skills (loading only what you need).

So I restructured everything:

  • frontend-dev-guidelines: 398-line main file + 10 resource files

  • backend-dev-guidelines: 304-line main file + 11 resource files

Now Claude loads the lightweight main file initially, and only pulls in detailed resource files when actually needed. Token efficiency improved 40-60% for most queries.
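
The restructured layout looks something like this (resource file names illustrative):

frontend-dev-guidelines/
├── SKILL.md                  # 398-line main file: overview + pointers
└── resources/
    ├── component-patterns.md # loaded only when a query needs it
    ├── data-fetching.md
    └── ...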

Skills I've Created

Here's my current skill lineup:

Guidelines & Best Practices:

  • backend-dev-guidelines - Routes → Controllers → Services → Repositories

  • frontend-dev-guidelines - React 19, MUI v7, TanStack Query/Router patterns

  • skill-developer - Meta-skill for creating more skills

Domain-Specific:

  • workflow-developer - Complex workflow engine patterns

  • notification-developer - Email/notification system

  • database-verification - Prevent column name errors (this one is a guardrail that actually blocks edits!)

  • project-catalog-developer - DataGrid layout system

All of these automatically activate based on what I'm working on. It's like having a senior dev who actually remembers all the patterns looking over Claude's shoulder.

Why This Matters

Before skills + hooks:

  • Claude would use old patterns even though I documented new ones

  • Had to manually tell Claude to check BEST_PRACTICES.md every time

  • Inconsistent code across the 300k+ LOC codebase

  • Spent too much time fixing Claude's "creative interpretations"

After skills + hooks:

  • Consistent patterns automatically enforced

  • Claude self-corrects before I even see the code

  • Can trust that guidelines are being followed

  • Way less time spent on reviews and fixes

If you're working on a large codebase with established patterns, I cannot recommend this system enough. The initial setup took a couple of days to get right, but it's paid for itself ten times over.

CLAUDE.md and Documentation Evolution

In a post I wrote 6 months ago, I had a section about rules being your best friend, which I still stand by. But my CLAUDE.md file was quickly getting out of hand and was trying to do too much. I also had this massive BEST_PRACTICES.md file (1,400+ lines) that Claude would sometimes read and sometimes completely ignore.

So I took an afternoon with Claude to consolidate and reorganize everything into a new system. Here's what changed:

What Moved to Skills

Previously, BEST_PRACTICES.md contained:

  • TypeScript standards

  • React patterns (hooks, components, suspense)

  • Backend API patterns (routes, controllers, services)

  • Error handling (Sentry integration)

  • Database patterns (Prisma usage)

  • Testing guidelines

  • Performance optimization

All of that is now in skills with the auto-activation hook ensuring Claude actually uses them. No more hoping Claude remembers to check BEST_PRACTICES.md.

What Stayed in CLAUDE.md

Now CLAUDE.md is laser-focused on project-specific info (only ~200 lines):

  • Quick commands (pnpm pm2:start, pnpm build, etc.)

  • Service-specific configuration

  • Task management workflow (dev docs system)

  • Testing authenticated routes

  • Workflow dry-run mode

  • Browser tools configuration

The New Structure

Root CLAUDE.md (100 lines)
├── Critical universal rules
├── Points to repo-specific claude.md files
└── References skills for detailed guidelines

Each Repo's claude.md (50-100 lines)
├── Quick Start section pointing to:
│   ├── PROJECT_KNOWLEDGE.md - Architecture & integration
│   ├── TROUBLESHOOTING.md - Common issues
│   └── Auto-generated API docs
└── Repo-specific quirks and commands

The magic: Skills handle all the "how to write code" guidelines, and CLAUDE.md handles "how this specific project works." Separation of concerns for the win.

Dev Docs System

This system, out of everything (besides skills), I think has made the most impact on the results I'm getting out of CC. Claude is like an extremely confident junior dev with extreme amnesia who easily loses track of what they're doing. This system is aimed at solving those shortcomings.

The dev docs section from my CLAUDE.md:

### Starting Large Tasks

When exiting plan mode with an accepted plan:

1. **Create Task Directory**:
   `mkdir -p ~/git/project/dev/active/[task-name]/`

2. **Create Documents**:
   - `[task-name]-plan.md` - The accepted plan
   - `[task-name]-context.md` - Key files, decisions
   - `[task-name]-tasks.md` - Checklist of work

3. **Update Regularly**: Mark tasks complete immediately

### Continuing Tasks

- Check `/dev/active/` for existing tasks
- Read all three files before proceeding
- Update "Last Updated" timestamps

These are documents that always get created for every feature or large task. Before using this system, I had many times when I all of a sudden realized that Claude had lost the plot and we were no longer implementing what we had planned out 30 minutes earlier because we went off on some tangent for whatever reason.

My Planning Process

My process starts with planning. Planning is king. If you aren't at a minimum using planning mode before asking Claude to implement something, you're gonna have a bad time, mmm'kay. You wouldn't have a builder come to your house and start slapping on an addition without having him draw things up first.

When I start planning a feature, I put it into planning mode, even though I will eventually have Claude write the plan down in a markdown file. I'm not sure putting it into planning mode is necessary, but to me, it feels like planning mode gets better results when researching your codebase and gathering the correct context to put together a plan.

I created a strategic-plan-architect subagent that's basically a planning beast. It:

  • Gathers context efficiently

  • Analyzes project structure

  • Creates comprehensive structured plans with executive summary, phases, tasks, risks, success metrics, timelines

  • Generates three files automatically: plan, context, and tasks checklist

But I find it really annoying that you can't see the agent's output, and even more annoying, if you say no to the plan, it just kills the agent instead of continuing to plan. So I also created a custom slash command (/dev-docs) with the same prompt to use on the main CC instance.

Once Claude spits out that beautiful plan, I take time to review it thoroughly. This step is really important. Take time to understand it, and you'd be surprised at how often you catch silly mistakes or Claude misunderstanding a very vital part of the request or task.

More often than not, I'll be at 15% context left or less after exiting plan mode. But that's okay because we're going to put everything we need to start fresh into our dev docs. Claude usually likes to just jump in guns blazing, so I immediately slap the ESC key to interrupt and run my /dev-docs slash command. The command takes the approved plan and creates all three files, sometimes doing a bit more research to fill in gaps if there's enough context left.

And once I'm done with that, I'm pretty much set to have Claude fully implement the feature without getting lost or losing track of what it was doing, even through an auto-compaction. I just make sure to remind Claude every once in a while to update the tasks as well as the context file with any relevant context. And once I'm running low on context in the current session, I just run my slash command /update-dev-docs. Claude will note any relevant context (with next steps) as well as mark any completed tasks or add new tasks before I compact the conversation. And all I need to say is "continue" in the new session.

During implementation, depending on the size of the feature or task, I will specifically tell Claude to only implement one or two sections at a time. That way, I'm getting the chance to go in and review the code in between each set of tasks. And periodically, I have a subagent also reviewing the changes so I can catch big mistakes early on. If you aren't having Claude review its own code, then I highly recommend it because it saved me a lot of headaches catching critical errors, missing implementations, inconsistent code, and security flaws.

PM2 Process Management (Backend Debugging Game Changer)

This one's a relatively recent addition, but it's made debugging backend issues so much easier.

The Problem

My project has seven backend microservices running simultaneously. The issue was that Claude didn't have access to view the logs while services were running. I couldn't just ask "what's going wrong with the email service?" - Claude couldn't see the logs without me manually copying and pasting them into chat.

The Intermediate Solution

For a while, I had each service write its output to a timestamped log file using a devLog script. This worked... okay. Claude could read the log files, but it was clunky. Logs weren't real-time, services wouldn't auto-restart on crashes, and managing everything was a pain.

The Real Solution: PM2

Then I discovered PM2, and it was a game changer. I configured all my backend services to run via PM2 with a single command: pnpm pm2:start

What this gives me:

  • Each service runs as a managed process with its own log file

  • Claude can easily read individual service logs in real-time

  • Automatic restarts on crashes

  • Real-time monitoring with pm2 logs

  • Memory/CPU monitoring with pm2 monit

  • Easy service management (pm2 restart email, pm2 stop all, etc.)

PM2 Configuration:

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'form-service',
      script: 'npm',
      args: 'start',
      cwd: './form',
      error_file: './form/logs/error.log',
      out_file: './form/logs/out.log',
    },
    // ... 6 more services
  ]
};

Before PM2:

Me: "The email service is throwing errors"
Me: [Manually finds and copies logs]
Me: [Pastes into chat]
Claude: "Let me analyze this..."

The debugging workflow now:

Me: "The email service is throwing errors"
Claude: [Runs] pm2 logs email --lines 200
Claude: [Reads the logs] "I see the issue - database connection timeout..."
Claude: [Runs] pm2 restart email
Claude: "Restarted the service, monitoring for errors..."

Night and day difference. Claude can autonomously debug issues now without me being a human log-fetching service.

One caveat: Hot reload doesn't work with PM2, so I still run the frontend separately with pnpm dev. But for backend services that don't need hot reload as often, PM2 is incredible.

Hooks System (#NoMessLeftBehind)

The project I'm working on is multi-root and has about eight different repos in the root project directory. One for the frontend and seven microservices and utilities for the backend. I'm constantly bouncing around making changes in a couple of repos at a time depending on the feature.

And one thing that would annoy me to no end is when Claude forgets to run the build command in whatever repo it's editing to catch errors. And it will just leave a dozen or so TypeScript errors without me catching it. Then a couple of hours later I see Claude running a build script like a good boy and I see the output: "There are several TypeScript errors, but they are unrelated, so we're all good here!"

No, we are not good, Claude.

Hook #1: File Edit Tracker

First, I created a post-tool-use hook that runs after every Edit/Write/MultiEdit operation. It logs:

  • Which files were edited

  • What repo they belong to

  • Timestamps

Initially, I made it run builds immediately after each edit, but that was stupidly inefficient. Claude makes edits that break things all the time before quickly fixing them.
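
Conceptually, the tracker is tiny - something like this sketch (log path illustrative). PostToolUse hooks receive the tool name and its input as JSON on stdin:

// edit-tracker.ts - simplified PostToolUse hook sketch
import { readFileSync, appendFileSync } from "node:fs";

const input = JSON.parse(readFileSync(0, "utf8")) as {
  tool_name?: string;
  tool_input?: { file_path?: string };
};

// Only log file-modifying tools.
if (["Edit", "Write", "MultiEdit"].includes(input.tool_name ?? "")) {
  const file = input.tool_input?.file_path;
  if (file) {
    appendFileSync("/tmp/claude-edited-files.log", `${new Date().toISOString()} ${file}\n`);
  }
}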

Hook #2: Build Checker

Then I added a Stop hook that runs when Claude finishes responding. It:

  1. Reads the edit logs to find which repos were modified

  2. Runs build scripts on each affected repo

  3. Checks for TypeScript errors

  4. If < 5 errors: Shows them to Claude

  5. If ≥ 5 errors: Recommends launching auto-error-resolver agent

  6. Logs everything for debugging
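
Here's a simplified sketch of the build checker. It reads the log written by the edit tracker above; the repo-detection logic and error threshold are illustrative. For Stop hooks, exiting with code 2 feeds stderr back to Claude so it actually sees the errors:

// build-checker.ts - simplified Stop hook sketch
import { execSync } from "node:child_process";
import { existsSync, readFileSync } from "node:fs";

const LOG = "/tmp/claude-edited-files.log";
if (!existsSync(LOG)) process.exit(0);

// Assumes log lines look like "<timestamp> <repo>/<path>" so the first
// path segment names the repo.
const repos = new Set(
  readFileSync(LOG, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((line) => line.split(" ")[1]?.split("/")[0])
    .filter((r): r is string => Boolean(r)),
);

const errors: string[] = [];
for (const repo of repos) {
  try {
    execSync("pnpm build", { cwd: repo, stdio: "pipe" });
  } catch (err) {
    const out = String((err as { stdout?: Buffer }).stdout ?? "");
    errors.push(...out.split("\n").filter((l) => l.includes("error TS")));
  }
}

if (errors.length === 0) process.exit(0);
const message =
  errors.length < 5
    ? `Build errors found:\n${errors.join("\n")}`
    : `${errors.length} TypeScript errors - launch the auto-error-resolver agent.`;
console.error(message);
process.exit(2); // exit code 2: Claude sees stderr and keeps working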

Since implementing this system, I've not had a single instance where Claude has left errors in the code for me to find later. The hook catches them immediately, and Claude fixes them before moving on.

Hook #3: Prettier Formatter

This one's simple but effective. After Claude finishes responding, automatically format all edited files with Prettier using the appropriate .prettierrc config for that repo.

No more going in to manually edit a file, only to have Prettier run and produce 20 changes because Claude decided to leave off trailing commas last week when we created that file.

⚠️ Update: I No Longer Recommend This Hook

After publishing, a reader shared detailed data showing that file modifications trigger <system-reminder> notifications that can consume significant context tokens. In their case, Prettier formatting led to 160k tokens consumed in just 3 rounds due to system-reminders showing file diffs.

While the impact varies by project (large files and strict formatting rules are worst-case scenarios), I'm removing this hook from my setup. It's not a big deal to let formatting happen when you manually edit files anyway, and the potential token cost isn't worth the convenience.

If you want automatic formatting, consider running Prettier manually between sessions instead of during Claude conversations.

Hook #4: Error Handling Reminder

This is the gentle philosophy hook I mentioned earlier:

  • Analyzes edited files after Claude finishes

  • Detects risky patterns (try-catch, async operations, database calls, controllers)

  • Shows a gentle reminder if risky code was written

  • Claude self-assesses whether error handling is needed

  • No blocking, no friction, just awareness

Example output:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 ERROR HANDLING SELF-CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️  Backend Changes Detected
   2 file(s) edited

   ❓ Did you add Sentry.captureException() in catch blocks?
   ❓ Are Prisma operations wrapped in error handling?

   💡 Backend Best Practice:
      - All errors should be captured to Sentry
      - Controllers should extend BaseController
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The Complete Hook Pipeline

Here's what happens on every Claude response now:

Claude finishes responding
  ↓
Hook 1: Prettier formatter runs → All edited files auto-formatted
  ↓
Hook 2: Build checker runs → TypeScript errors caught immediately
  ↓
Hook 3: Error reminder runs → Gentle self-check for error handling
  ↓
If errors found → Claude sees them and fixes
  ↓
If too many errors → Auto-error-resolver agent recommended
  ↓
Result: Clean, formatted, error-free code

And the UserPromptSubmit hook ensures Claude loads relevant skills BEFORE even starting work.

No mess left behind. It's beautiful.
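
For anyone wanting to replicate this: hooks are registered in `.claude/settings.json`. A minimal sketch of the wiring (script names are placeholders for your own hooks):

{
  "hooks": {
    "UserPromptSubmit": [
      { "hooks": [{ "type": "command", "command": "npx tsx .claude/hooks/skill-check.ts" }] }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write|MultiEdit",
        "hooks": [{ "type": "command", "command": "npx tsx .claude/hooks/edit-tracker.ts" }]
      }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "npx tsx .claude/hooks/build-checker.ts" }] }
    ]
  }
}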

Scripts Attached to Skills

One really cool pattern I picked up from Anthropic's official skill examples on GitHub: attach utility scripts to skills.

For example, my backend-dev-guidelines skill has a section about testing authenticated routes. Instead of just explaining how authentication works, the skill references an actual script:

### Testing Authenticated Routes

Use the provided test-auth-route.js script:


node scripts/test-auth-route.js http://localhost:3002/api/endpoint

The script handles all the complex authentication steps for you:

  1. Gets a refresh token from Keycloak

  2. Signs the token with JWT secret

  3. Creates cookie header

  4. Makes authenticated request

When Claude needs to test a route, it knows exactly what script to use and how to use it. No more "let me create a test script" and reinventing the wheel every time.
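
If you want to build something similar, the script boils down to a few steps like this sketch (the Keycloak endpoint, env vars, and cookie name here are illustrative assumptions, not my actual setup):

// test-auth-route.ts - illustrative sketch of an auth-route test script
import jwt from "jsonwebtoken";

async function main() {
  const routeUrl = process.argv[2]; // e.g. http://localhost:3002/api/endpoint

  // 1. Get a refresh token from Keycloak (password grant for a test user).
  const tokenRes = await fetch(
    `${process.env.KEYCLOAK_URL}/realms/${process.env.KEYCLOAK_REALM}/protocol/openid-connect/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "password",
        client_id: process.env.KEYCLOAK_CLIENT_ID!,
        username: process.env.TEST_USER!,
        password: process.env.TEST_PASSWORD!,
      }),
    },
  );
  const { refresh_token } = await tokenRes.json();

  // 2. Sign the token with the app's JWT secret.
  const session = jwt.sign({ refresh_token }, process.env.JWT_SECRET!);

  // 3-4. Create the cookie header and make the authenticated request.
  const res = await fetch(routeUrl, { headers: { Cookie: `session=${session}` } });
  console.log(res.status, await res.text());
}

main();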

I'm planning to expand this pattern - attach more utility scripts to relevant skills so Claude has ready-to-use tools instead of generating them from scratch.

Tools and Other Things

SuperWhisper on Mac

Voice-to-text for prompting when my hands are tired from typing. Works surprisingly well, and Claude has no trouble understanding my rambling voice-to-text.

Memory MCP

I use this less over time now that skills handle most of the "remembering patterns" work. But it's still useful for tracking project-specific decisions and architectural choices that don't belong in skills.

BetterTouchTool

  • Relative URL copy from Cursor (for sharing code references)

    • I have VS Code open to more easily find the files I'm looking for. I can double-tap CAPS LOCK, then BTT inputs the shortcut to copy the relative URL, transforms the clipboard contents by prepending an '@' symbol, focuses the terminal, and pastes the file path. All in one.

  • Double-tap hotkeys to quickly focus apps (CMD+CMD = Claude Code, OPT+OPT = Browser)

  • Custom gestures for common actions

Honestly, the time savings on just not fumbling between apps is worth the BTT purchase alone.

Scripts for Everything

If there's any annoying tedious task, chances are there's a script for that:

  • Command-line tool to generate mock test data. Before using Claude Code, it was extremely annoying to generate mock data because I would have to make a submission to a form that had about 120 questions just to generate a single test submission.

  • Authentication testing scripts (get tokens, test routes)

  • Database resetting and seeding

  • Schema diff checker before migrations

  • Automated backup and restore for dev database

Pro tip: When Claude helps you write a useful script, immediately document it in CLAUDE.md or attach it to a relevant skill. Future you will thank past you.

Documentation (Still Important, But Evolved)

I think next to planning, documentation is almost just as important. I document everything as I go in addition to the dev docs that are created for each task or feature. From system architecture to data flow diagrams to actual developer docs and APIs, just to name a few.

But here's what changed: Documentation now works WITH skills, not instead of them.

Skills contain: reusable patterns, best practices, how-to guides.
Documentation contains: system architecture, data flows, API references, integration points.

For example:

  • "How to create a controller" → backend-dev-guidelines skill

  • "How our workflow engine works" → Architecture documentation

  • "How to write React components" → frontend-dev-guidelines skill

  • "How notifications flow through the system" → Data flow diagram + notification skill

I still have a LOT of docs (850+ markdown files), but now they're laser-focused on project-specific architecture rather than repeating general best practices that are better served by skills.

You don't necessarily have to go that crazy, but I highly recommend setting up multiple levels of documentation: docs giving a broad architectural overview of specific services, which include paths to other documentation covering the specifics of different parts of the architecture. It will make a major difference in Claude's ability to navigate your codebase.

Prompt Tips

When you're writing out your prompt, be as specific as possible about what you want as a result. Once again, you wouldn't ask a builder to come out and build you a new bathroom without at least discussing plans, right?

"You're absolutely right! Shag carpet probably is not the best idea to have in a bathroom."

Sometimes you might not know the specifics, and that's okay. If you don't, ask questions: tell Claude to research and come back with several potential solutions. You could even use a specialized subagent or any other AI chat interface to do your research. The world is your oyster. I promise you this will pay dividends, because you will be able to look at the plan that Claude has produced and have a better idea if it's good, bad, or needs adjustments. Otherwise, you're just flying blind, pure vibe-coding. Then you're gonna end up in a situation where you don't even know what context to include because you don't know what files are related to the thing you're trying to fix.

Try not to lead in your prompts if you want honest, unbiased feedback. If you're unsure about something Claude did, ask about it in a neutral way instead of saying, "Is this good or bad?" Claude tends to tell you what it thinks you want to hear, so leading questions can skew the response. It's better to just describe the situation and ask for thoughts or alternatives. That way, you'll get a more balanced answer.

Agents, Hooks, and Slash Commands (The Holy Trinity)

Agents

I've built a small army of specialized agents:

Quality Control:

  • code-architecture-reviewer - Reviews code for best practices adherence

  • build-error-resolver - Systematically fixes TypeScript errors

  • refactor-planner - Creates comprehensive refactoring plans

Testing & Debugging:

  • auth-route-tester - Tests backend routes with authentication

  • auth-route-debugger - Debugs 401/403 errors and route issues

  • frontend-error-fixer - Diagnoses and fixes frontend errors

Planning & Strategy:

  • strategic-plan-architect - Creates detailed implementation plans

  • plan-reviewer - Reviews plans before implementation

  • documentation-architect - Creates/updates documentation

Specialized:

  • frontend-ux-designer - Fixes styling and UX issues

  • web-research-specialist - Researches issues along with many other things on the web

  • reactour-walkthrough-designer - Creates UI tours

The key with agents is to give them very specific roles and clear instructions on what to return. I learned this the hard way after creating agents that would go off and do who-knows-what and come back with "I fixed it!" without telling me what they fixed.

Hooks (Covered Above)

The hook system is honestly what ties everything together. Without hooks:

  • Skills sit unused

  • Errors slip through

  • Code is inconsistently formatted

  • No automatic quality checks

With hooks:

  • Skills auto-activate

  • Zero errors left behind

  • Automatic formatting

  • Quality awareness built-in

Slash Commands

I have quite a few custom slash commands, but these are the ones I use most:

Planning & Docs:

  • /dev-docs - Create comprehensive strategic plan

  • /dev-docs-update - Update dev docs before compaction

  • /create-dev-docs - Convert approved plan to dev doc files

Quality & Review:

  • /code-review - Architectural code review

  • /build-and-fix - Run builds and fix all errors

Testing:

  • /route-research-for-testing - Find affected routes and launch tests

  • /test-route - Test specific authenticated routes

The beauty of slash commands is they expand into full prompts, so you can pack a ton of context and instructions into a simple command. Way better than typing out the same instructions every time.
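
Slash commands are just markdown files under `.claude/commands/` whose contents become the prompt. A stripped-down sketch of something along the lines of /build-and-fix (illustrative, not my exact command):

<!-- .claude/commands/build-and-fix.md -->
Run the build for every repo modified in this session. For each
TypeScript error, fix it and re-run the build until it's clean.
Finish with a short summary of what was fixed.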

Conclusion

After six months of hardcore use, here's what I've learned:

The Essentials:

  1. Plan everything - Use planning mode or strategic-plan-architect

  2. Skills + Hooks - Auto-activation is the only way skills actually work reliably

  3. Dev docs system - Prevents Claude from losing the plot

  4. Code reviews - Have Claude review its own work

  5. PM2 for backend - Makes debugging actually bearable

The Nice-to-Haves:

  • Specialized agents for common tasks

  • Slash commands for repeated workflows

  • Comprehensive documentation

  • Utility scripts attached to skills

  • Memory MCP for decisions

And that's about all I can think of for now. Like I said, I'm just some guy, and I would love to hear tips and tricks from everybody else, as well as any criticisms. Because I'm always up for improving upon my workflow. I honestly just wanted to share what's working for me with other people since I don't really have anybody else to share this with IRL (my team is very small, and they are all very slow getting on the AI train).

If you made it this far, thanks for taking the time to read. If you have questions about any of this stuff or want more details on implementation, happy to share. The hooks and skills system especially took some trial and error to get right, but now that it's working, I can't imagine going back.

TL;DR: Built an auto-activation system for Claude Code skills using TypeScript hooks, created a dev docs workflow to prevent context loss, and implemented PM2 + automated error checking. Result: Solo rewrote 300k LOC in 6 months with consistent quality.

r/ClaudeAI on Reddit: Claude learning mode is amazing. (August 20, 2025)

I recently set up a home lab server using old hardware, wanting to learn more about system management, networking, etc. Claude is hands down an amazing teacher in learning mode. It's amazing how personalized the responses are and how it remembers the analogies it uses.

For anyone wanting a one-on-one trainer for a home lab server, I highly suggest Claude learning mode.

Find elsewhere
🌐
Reddit
reddit.com › r/claudecode › claude code is a beast – tips from 6 months of hardcore use
r/ClaudeCode on Reddit: Claude Code is a Beast – Tips from 6 Months of Hardcore Use
October 31, 2025 -

Edit: Many of you are asking for a repo so I will make an effort to get one up in the next couple days. All of this is a part of a work project at the moment, so I have to take some time to copy everything into a fresh project and scrub any identifying info. I will post the link here when it's up. You can also follow me and I will post it on my profile so you get notified. Thank you all for the kind comments. I'm happy to share this info with others since I don't get much chance to do so in my day-to-day.

Edit (final?): I bit the bullet and spent the afternoon getting a github repo up for you guys. Just made a post with some additional info here or you can go straight to the source:

🎯 Repository: https://github.com/diet103/claude-code-infrastructure-showcase

Quick tip from a fellow lazy person: You can throw this book of a post into one of the many text-to-speech AI services like ElevenLabs Reader or Natural Reader and have it read the post for you :)

Disclaimer

I made a post about six months ago sharing my experience after a week of hardcore use with Claude Code. It's now been about six months of hardcore use, and I would like to share some more tips, tricks, and word vomit with you all. I may have went a little overboard here so strap in, grab a coffee, sit on the toilet or whatever it is you do when doom-scrolling reddit.

I want to start the post off with a disclaimer: all the content within this post is merely me sharing what setup is working best for me currently and should not be taken as gospel or the only correct way to do things. It's meant to hopefully inspire you to improve your setup and workflows with AI agentic coding. I'm just a guy, and this is just like, my opinion, man.

Also, I'm on the 20x Max plan, so your mileage may vary. And if you're looking for vibe-coding tips, you should look elsewhere. If you want the best out of CC, then you should be working together with it: planning, reviewing, iterating, exploring different approaches, etc.

Quick Overview

After 6 months of pushing Claude Code to its limits (solo rewriting 300k LOC), here's the system I built:

  • Skills that actually auto-activate when needed

  • Dev docs workflow that prevents Claude from losing the plot

  • PM2 + hooks for zero-errors-left-behind

  • Army of specialized agents for reviews, testing, and planning Let's get into it.

Background

I'm a software engineer who has been working on production web apps for the last seven years or so. And I have fully embraced the wave of AI with open arms. I'm not too worried about AI taking my job anytime soon, as it is a tool that I use to leverage my capabilities. In doing so, I have been building MANY new features and coming up with all sorts of new proposal presentations put together with Claude and GPT-5 Thinking to integrate new AI systems into our production apps. Projects I would have never dreamt of having the time to even consider before integrating AI into my workflow. And with all that, I'm giving myself a good deal of job security and have become the AI guru at my job since everyone else is about a year or so behind on how they're integrating AI into their day-to-day.

With my newfound confidence, I proposed a pretty large redesign/refactor of one of our web apps used as an internal tool at work. This was a pretty rough college student-made project that was forked off another project developed by me as an intern (created about 7 years ago and forked 4 years ago). This may have been a bit overly ambitious of me since, to sell it to the stakeholders, I agreed to finish a top-down redesign of this fairly decent-sized project (~100k LOC) in a matter of two to three months...all by myself. I knew going in that I was going to have to put in extra hours to get this done, even with the help of CC. But deep down, I know it's going to be a hit, automating several manual processes and saving a lot of time for a lot of people at the company.

It's now six months later... yeah, I probably should not have agreed to this timeline. I have tested the limits of both Claude as well as my own sanity trying to get this thing done. I completely scrapped the old frontend, as everything was seriously outdated and I wanted to play with the latest and greatest. I'm talkin' React 16 JS → React 19 TypeScript, React Query v2 → TanStack Query v5, React Router v4 w/ hashrouter → TanStack Router w/ file-based routing, Material UI v4 → MUI v7, all with strict adherence to best practices. The project is now at ~300-400k LOC and my life expectancy ~5 years shorter. It's finally ready to put up for testing, and I am incredibly happy with how things have turned out.

This used to be a project with insurmountable tech debt, ZERO test coverage, HORRIBLE developer experience (testing things was an absolute nightmare), and all sorts of jank going on. I addressed all of those issues: the project now has decent test coverage and manageable tech debt, plus a command-line tool for generating test data and a dev mode for testing different features on the frontend. During this time, I got to know CC's abilities and what to expect out of it.

A Note on Quality and Consistency

I've noticed a recurring theme in forums and discussions - people experiencing frustration with usage limits and concerns about output quality declining over time. I want to be clear up front: I'm not here to dismiss those experiences or claim it's simply a matter of "doing it wrong." Everyone's use cases and contexts are different, and valid concerns deserve to be heard.

That said, I want to share what's been working for me. In my experience, CC's output has actually improved significantly over the last couple of months, and I believe that's largely due to the workflow I've been constantly refining. My hope is that if you take even a small bit of inspiration from my system and integrate it into your CC workflow, you'll give it a better chance at producing quality output that you're happy with.

Now, let's be real - there are absolutely times when Claude completely misses the mark and produces suboptimal code. This can happen for various reasons. First, AI models are stochastic, meaning you can get widely varying outputs from the same input. Sometimes the randomness just doesn't go your way, and you get an output that's legitimately poor quality through no fault of your own. Other times, it's about how the prompt is structured. There can be significant differences in outputs given slightly different wording because the model takes things quite literally. If you misword or phrase something ambiguously, it can lead to vastly inferior results.

Sometimes You Just Need to Step In

Look, AI is incredible, but it's not magic. There are certain problems where pattern recognition and human intuition just win. If you've spent 30 minutes watching Claude struggle with something that you could fix in 2 minutes, just fix it yourself. No shame in that. Think of it like teaching someone to ride a bike - sometimes you just need to steady the handlebars for a second before letting go again.

I've seen this especially with logic puzzles or problems that require real-world common sense. AI can brute-force a lot of things, but sometimes a human just "gets it" faster. Don't let stubbornness or some misguided sense of "but the AI should do everything" waste your time. Step in, fix the issue, and keep moving.

I've had my fair share of terrible prompting, which usually happens towards the end of the day, when I'm getting lazy and not putting much effort into my prompts. And the results really show. So next time the output seems way worse and you suspect Anthropic shadow-nerfed Claude, I encourage you to take a step back and reflect on how you are prompting.

Re-prompt often. You can hit double-esc to bring up your previous prompts and select one to branch from. You'd be amazed how often you can get way better results armed with the knowledge of what you don't want when giving the same prompt. All that to say, there can be many reasons why the output quality seems to be worse, and it's good to self-reflect and consider what you can do to give it the best possible chance to get the output you want.

As some wise dude somewhere probably said, "Ask not what Claude can do for you, ask what context you can give to Claude" ~ Wise Dude

Alright, I'm going to step down from my soapbox now and get on to the good stuff.

My System

I've implemented a lot of changes to my workflow as it relates to CC over the last 6 months, and the results have been pretty great, IMO.

Skills Auto-Activation System (Game Changer!)

This one deserves its own section because it completely transformed how I work with Claude Code.

The Problem

So Anthropic releases this Skills feature, and I'm thinking "this looks awesome!" The idea of having these portable, reusable guidelines that Claude can reference sounded perfect for maintaining consistency across my massive codebase. I spent a good chunk of time with Claude writing up comprehensive skills for frontend development, backend development, database operations, workflow management, etc. We're talking thousands of lines of best practices, patterns, and examples.

And then... nothing. Claude just wouldn't use them. I'd literally use the exact keywords from the skill descriptions. Nothing. I'd work on files that should trigger the skills. Nothing. It was incredibly frustrating because I could see the potential, but the skills just sat there like expensive decorations.

The "Aha!" Moment

That's when I had the idea of using hooks. If Claude won't automatically use skills, what if I built a system that MAKES it check for relevant skills before doing anything?

So I dove into Claude Code's hook system and built a multi-layered auto-activation architecture with TypeScript hooks. And it actually works!

How It Works

I created two main hooks (a simplified sketch of the first follows below):

1. UserPromptSubmit Hook (runs BEFORE Claude sees your message):

  • Analyzes your prompt for keywords and intent patterns

  • Checks which skills might be relevant

  • Injects a formatted reminder into Claude's context

  • Now when I ask "how does the layout system work?" Claude sees a big "🎯 SKILL ACTIVATION CHECK - Use project-catalog-developer skill" (project catalog is a large, complex data-grid-based feature on my frontend) before even reading my question

2. Stop Event Hook (runs AFTER Claude finishes responding):

  • Analyzes which files were edited

  • Checks for risky patterns (try-catch blocks, database operations, async functions)

  • Displays a gentle self-check reminder

  • "Did you add error handling? Are Prisma operations using the repository pattern?"

  • Non-blocking, just keeps Claude aware without being annoying
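
To make this concrete, here's a minimal sketch of what a UserPromptSubmit hook like this can look like. This is illustrative, not my exact hook; it assumes the hook receives JSON on stdin with a prompt field, and that anything printed to stdout gets injected into Claude's context:

// prompt-skill-check.ts (hypothetical file name) - run via npx tsx from a
// UserPromptSubmit hook entry in .claude/settings.json
import { readFileSync } from "fs";

interface SkillRule {
  promptTriggers?: { keywords?: string[]; intentPatterns?: string[] };
}

// Hook input arrives as JSON on stdin (fd 0)
const input = JSON.parse(readFileSync(0, "utf8"));
const prompt: string = (input.prompt ?? "").toLowerCase();

// Central config mapping skills to their triggers (see next section)
const rules: Record<string, SkillRule> = JSON.parse(
  readFileSync(".claude/skill-rules.json", "utf8")
);

const matches = Object.entries(rules)
  .filter(([, rule]) => {
    const keywords = rule.promptTriggers?.keywords ?? [];
    const patterns = rule.promptTriggers?.intentPatterns ?? [];
    return (
      keywords.some((k) => prompt.includes(k.toLowerCase())) ||
      patterns.some((p) => new RegExp(p, "i").test(prompt))
    );
  })
  .map(([name]) => name);

if (matches.length > 0) {
  // stdout from a UserPromptSubmit hook is added to Claude's context
  console.log(`🎯 SKILL ACTIVATION CHECK - Consider: ${matches.join(", ")}`);
}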

skill-rules.json Configuration

I created a central configuration file that defines every skill with:

  • Keywords: Explicit topic matches ("layout", "workflow", "database")

  • Intent patterns: Regex to catch actions ("(create|add).*?(feature|route)")

  • File path triggers: Activates based on what file you're editing

  • Content triggers: Activates if file contains specific patterns (Prisma imports, controllers, etc.)

Example snippet:

{
  "backend-dev-guidelines": {
    "type": "domain",
    "enforcement": "suggest",
    "priority": "high",
    "promptTriggers": {
      "keywords": ["backend", "controller", "service", "API", "endpoint"],
      "intentPatterns": [
        "(create|add).*?(route|endpoint|controller)",
        "(how to|best practice).*?(backend|API)"
      ]
    },
    "fileTriggers": {
      "pathPatterns": ["backend/src/**/*.ts"],
      "contentPatterns": ["router\\.", "export.*Controller"]
    }
  }
}

The Results

Now when I work on backend code, Claude automatically:

  1. Sees the skill suggestion before reading my prompt

  2. Loads the relevant guidelines

  3. Actually follows the patterns consistently

  4. Self-checks at the end via gentle reminders

The difference is night and day. No more inconsistent code. No more "wait, Claude used the old pattern again." No more manually telling it to check the guidelines every single time.

Following Anthropic's Best Practices (The Hard Way)

After getting the auto-activation working, I dove deeper and found Anthropic's official best practices docs. Turns out I was doing it wrong because they recommend keeping the main SKILL.md file under 500 lines and using progressive disclosure with resource files.

Whoops. My frontend-dev-guidelines skill was 1,500+ lines. And I had a couple other skills over 1,000 lines. These monolithic files were defeating the whole purpose of skills (loading only what you need).

So I restructured everything:

  • frontend-dev-guidelines: 398-line main file + 10 resource files

  • backend-dev-guidelines: 304-line main file + 11 resource files

Now Claude loads the lightweight main file initially, and only pulls in detailed resource files when actually needed. Token efficiency improved 40-60% for most queries.
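
For illustration, a restructured skill ends up shaped roughly like this (the resource file names here are made up):

frontend-dev-guidelines/
├── SKILL.md                      ← 398-line overview, loaded up front
└── resources/
    ├── component-patterns.md
    ├── suspense-and-queries.md
    ├── routing-conventions.md
    └── ... (7 more, pulled in only when needed)

The main file keeps short summaries of each topic plus pointers to the resource files, so Claude knows what exists without paying the token cost up front.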

Skills I've Created

Here's my current skill lineup:

Guidelines & Best Practices:

  • backend-dev-guidelines - Routes → Controllers → Services → Repositories

  • frontend-dev-guidelines - React 19, MUI v7, TanStack Query/Router patterns

  • skill-developer - Meta-skill for creating more skills

Domain-Specific:

  • workflow-developer - Complex workflow engine patterns

  • notification-developer - Email/notification system

  • database-verification - Prevent column name errors (this one is a guardrail that actually blocks edits!)

  • project-catalog-developer - DataGrid layout system

All of these automatically activate based on what I'm working on. It's like having a senior dev who actually remembers all the patterns looking over Claude's shoulder.

Why This Matters

Before skills + hooks:

  • Claude would use old patterns even though I documented new ones

  • Had to manually tell Claude to check BEST_PRACTICES.md every time

  • Inconsistent code across the 300k+ LOC codebase

  • Spent too much time fixing Claude's "creative interpretations"

After skills + hooks:

  • Consistent patterns automatically enforced

  • Claude self-corrects before I even see the code

  • Can trust that guidelines are being followed

  • Way less time spent on reviews and fixes

If you're working on a large codebase with established patterns, I cannot recommend this system enough. The initial setup took a couple of days to get right, but it's paid for itself ten times over.

CLAUDE.md and Documentation Evolution

In a post I wrote 6 months ago, I had a section about rules being your best friend, which I still stand by. But my CLAUDE.md file was quickly getting out of hand and was trying to do too much. I also had this massive BEST_PRACTICES.md file (1,400+ lines) that Claude would sometimes read and sometimes completely ignore.

So I took an afternoon with Claude to consolidate and reorganize everything into a new system. Here's what changed:

What Moved to Skills

Previously, BEST_PRACTICES.md contained:

  • TypeScript standards

  • React patterns (hooks, components, suspense)

  • Backend API patterns (routes, controllers, services)

  • Error handling (Sentry integration)

  • Database patterns (Prisma usage)

  • Testing guidelines

  • Performance optimization

All of that is now in skills with the auto-activation hook ensuring Claude actually uses them. No more hoping Claude remembers to check BEST_PRACTICES.md.

What Stayed in CLAUDE.md

Now CLAUDE.md is laser-focused on project-specific info (only ~200 lines):

  • Quick commands (pnpm pm2:start, pnpm build, etc.)

  • Service-specific configuration

  • Task management workflow (dev docs system)

  • Testing authenticated routes

  • Workflow dry-run mode

  • Browser tools configuration

The New Structure

Root CLAUDE.md (100 lines)
├── Critical universal rules
├── Points to repo-specific claude.md files
└── References skills for detailed guidelines

Each Repo's claude.md (50-100 lines)
├── Quick Start section pointing to:
│   ├── PROJECT_KNOWLEDGE.md - Architecture & integration
│   ├── TROUBLESHOOTING.md - Common issues
│   └── Auto-generated API docs
└── Repo-specific quirks and commands

The magic: Skills handle all the "how to write code" guidelines, and CLAUDE.md handles "how this specific project works." Separation of concerns for the win.

Dev Docs System

Out of everything (besides skills), I think this system has made the most impact on the results I'm getting out of CC. Claude is like an extremely confident junior dev with extreme amnesia, easily losing track of what it's doing. This system is aimed at solving those shortcomings.

The dev docs section from my CLAUDE.md:

### Starting Large Tasks

When exiting plan mode with an accepted plan:

1. **Create Task Directory**:
   mkdir -p ~/git/project/dev/active/[task-name]/

2. **Create Documents**:
   - `[task-name]-plan.md` - The accepted plan
   - `[task-name]-context.md` - Key files, decisions
   - `[task-name]-tasks.md` - Checklist of work

3. **Update Regularly**: Mark tasks complete immediately

### Continuing Tasks

- Check `/dev/active/` for existing tasks
- Read all three files before proceeding
- Update "Last Updated" timestamps

These are documents that always get created for every feature or large task. Before using this system, there were many times when I suddenly realized that Claude had lost the plot and we were no longer implementing what we had planned out 30 minutes earlier, because we had gone off on some tangent for whatever reason.
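
For a concrete picture, here's the shape of the three files for a hypothetical task (the task name and checklist items are invented):

dev/active/csv-import/
├── csv-import-plan.md      # the accepted plan, phase by phase
├── csv-import-context.md   # key files, decisions, gotchas discovered
└── csv-import-tasks.md     # the living checklist

And the tasks file is just plain markdown checkboxes, e.g.:

## Phase 1: Backend
- [x] Add upload route + controller
- [ ] Validate CSV schema
- [ ] Write service tests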

My Planning Process

My process starts with planning. Planning is king. If you aren't at a minimum using planning mode before asking Claude to implement something, you're gonna have a bad time, mmm'kay. You wouldn't have a builder come to your house and start slapping on an addition without having him draw things up first.

When I start planning a feature, I put Claude into planning mode, even though I will eventually have it write the plan down in a markdown file. I'm not sure planning mode is strictly necessary, but it feels like it gets better results researching your codebase and gathering all the correct context to put together a plan.

I created a strategic-plan-architect subagent that's basically a planning beast. It:

  • Gathers context efficiently

  • Analyzes project structure

  • Creates comprehensive structured plans with executive summary, phases, tasks, risks, success metrics, timelines

  • Generates three files automatically: plan, context, and tasks checklist

But I find it really annoying that you can't see the agent's output, and even more annoying, if you say no to the plan, it just kills the agent instead of continuing to plan. So I also created a custom slash command (/dev-docs) with the same prompt to use on the main CC instance.

Once Claude spits out that beautiful plan, I take time to review it thoroughly. This step is really important. Take time to understand it, and you'd be surprised at how often you catch silly mistakes or Claude misunderstanding a very vital part of the request or task.

More often than not, I'll be at 15% context left or less after exiting plan mode. But that's okay because we're going to put everything we need to start fresh into our dev docs. Claude usually likes to just jump in guns blazing, so I immediately slap the ESC key to interrupt and run my /dev-docs slash command. The command takes the approved plan and creates all three files, sometimes doing a bit more research to fill in gaps if there's enough context left.

And once I'm done with that, I'm pretty much set to have Claude fully implement the feature without getting lost or losing track of what it was doing, even through an auto-compaction. I just make sure to remind Claude every once in a while to update the tasks as well as the context file with any relevant context. And once I'm running low on context in the current session, I just run my slash command /update-dev-docs. Claude will note any relevant context (with next steps) as well as mark any completed tasks or add new tasks before I compact the conversation. And all I need to say is "continue" in the new session.

During implementation, depending on the size of the feature or task, I will specifically tell Claude to only implement one or two sections at a time. That way, I'm getting the chance to go in and review the code in between each set of tasks. And periodically, I have a subagent also reviewing the changes so I can catch big mistakes early on. If you aren't having Claude review its own code, then I highly recommend it because it saved me a lot of headaches catching critical errors, missing implementations, inconsistent code, and security flaws.

PM2 Process Management (Backend Debugging Game Changer)

This one's a relatively recent addition, but it's made debugging backend issues so much easier.

The Problem

My project has seven backend microservices running simultaneously. The issue was that Claude didn't have access to view the logs while services were running. I couldn't just ask "what's going wrong with the email service?" - Claude couldn't see the logs without me manually copying and pasting them into chat.

The Intermediate Solution

For a while, I had each service write its output to a timestamped log file using a devLog script. This worked... okay. Claude could read the log files, but it was clunky. Logs weren't real-time, services wouldn't auto-restart on crashes, and managing everything was a pain.

The Real Solution: PM2

Then I discovered PM2, and it was a game changer. I configured all my backend services to run via PM2 with a single command: pnpm pm2:start

What this gives me:

  • Each service runs as a managed process with its own log file

  • Claude can easily read individual service logs in real-time

  • Automatic restarts on crashes

  • Real-time monitoring with pm2 logs

  • Memory/CPU monitoring with pm2 monit

  • Easy service management (pm2 restart email, pm2 stop all, etc.)

PM2 Configuration:

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'form-service',
      script: 'npm',
      args: 'start',
      cwd: './form',
      error_file: './form/logs/error.log',
      out_file: './form/logs/out.log',
    },
    // ... 6 more services
  ]
};

Before PM2:

Me: "The email service is throwing errors"
Me: [Manually finds and copies logs]
Me: [Pastes into chat]
Claude: "Let me analyze this..."

The debugging workflow now:

Me: "The email service is throwing errors"
Claude: [Runs] pm2 logs email --lines 200
Claude: [Reads the logs] "I see the issue - database connection timeout..."
Claude: [Runs] pm2 restart email
Claude: "Restarted the service, monitoring for errors..."

Night and day difference. Claude can autonomously debug issues now without me being a human log-fetching service.

One caveat: Hot reload doesn't work with PM2, so I still run the frontend separately with pnpm dev. But for backend services that don't need hot reload as often, PM2 is incredible.

Hooks System (#NoMessLeftBehind)

The project I'm working on is multi-root and has about eight different repos in the root project directory. One for the frontend and seven microservices and utilities for the backend. I'm constantly bouncing around making changes in a couple of repos at a time depending on the feature.

And one thing that would annoy me to no end was when Claude forgot to run the build command in whatever repo it was editing to catch errors. It would just leave a dozen or so TypeScript errors without me catching it. Then a couple of hours later, I'd see Claude running a build script like a good boy and see the output: "There are several TypeScript errors, but they are unrelated, so we're all good here!"

No, we are not good, Claude.

Hook #1: File Edit Tracker

First, I created a post-tool-use hook that runs after every Edit/Write/MultiEdit operation. It logs:

  • Which files were edited

  • What repo they belong to

  • Timestamps

Initially, I made it run builds immediately after each edit, but that was stupidly inefficient. Claude makes edits that break things all the time before quickly fixing them.
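
A minimal sketch of that kind of edit tracker (illustrative; it assumes PostToolUse hooks receive JSON on stdin with tool_name and tool_input.file_path, and the log path is arbitrary):

// edit-tracker.ts (hypothetical) - PostToolUse hook sketch
import { appendFileSync, readFileSync } from "fs";

const input = JSON.parse(readFileSync(0, "utf8"));
const file: string | undefined = input.tool_input?.file_path;

if (file && ["Edit", "Write", "MultiEdit"].includes(input.tool_name)) {
  // Treat the first path segment under the project root as the repo name
  const root = process.env.CLAUDE_PROJECT_DIR ?? process.cwd();
  const rel = file.startsWith(root) ? file.slice(root.length + 1) : file;
  const repo = rel.split("/")[0];
  appendFileSync(
    "/tmp/claude-edited-files.jsonl",
    JSON.stringify({ repo, file, ts: new Date().toISOString() }) + "\n"
  );
}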

Hook #2: Build Checker

Then I added a Stop hook that runs when Claude finishes responding. It:

  1. Reads the edit logs to find which repos were modified

  2. Runs build scripts on each affected repo

  3. Checks for TypeScript errors

  4. If < 5 errors: Shows them to Claude

  5. If ≥ 5 errors: Recommends launching auto-error-resolver agent

  6. Logs everything for debugging

Since implementing this system, I've not had a single instance where Claude has left errors in the code for me to find later. The hook catches them immediately, and Claude fixes them before moving on.
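
Here's roughly what that Stop hook could look like, boiled way down. Again, this is a sketch rather than my actual hook; it assumes the edit log from Hook #1, a pnpm build script in each repo, and the Stop-hook behavior where a blocking exit code surfaces stderr back to Claude:

// build-checker.ts (hypothetical) - Stop hook sketch
import { execSync } from "child_process";
import { existsSync, readFileSync, unlinkSync } from "fs";

const LOG = "/tmp/claude-edited-files.jsonl";
if (existsSync(LOG)) {
  const repos = new Set<string>(
    readFileSync(LOG, "utf8").trim().split("\n")
      .map((line) => JSON.parse(line).repo)
  );
  unlinkSync(LOG); // reset the log for the next turn

  for (const repo of repos) {
    try {
      execSync("pnpm build", { cwd: repo, stdio: "pipe" });
    } catch (err: any) {
      const out = String(err.stdout ?? "") + String(err.stderr ?? "");
      const tsErrors = out.split("\n").filter((l) => l.includes("error TS"));
      console.error(
        tsErrors.length >= 5
          ? `${repo}: ${tsErrors.length} TS errors - launch auto-error-resolver`
          : `${repo} build failed:\n${tsErrors.join("\n")}`
      );
      process.exitCode = 2; // exit code 2 feeds stderr back to Claude
    }
  }
}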

Hook #3: Prettier Formatter

This one's simple but effective. After Claude finishes responding, automatically format all edited files with Prettier using the appropriate .prettierrc config for that repo.

No more going in to manually edit a file just to have Prettier run and produce 20 changes because Claude decided to leave off trailing commas last week when we created that file.

⚠️ Update: I No Longer Recommend This Hook

After publishing, a reader shared detailed data showing that file modifications trigger <system-reminder> notifications that can consume significant context tokens. In their case, Prettier formatting led to 160k tokens consumed in just 3 rounds due to system-reminders showing file diffs.

While the impact varies by project (large files and strict formatting rules are worst-case scenarios), I'm removing this hook from my setup. It's not a big deal to let formatting happen when you manually edit files anyway, and the potential token cost isn't worth the convenience.

If you want automatic formatting, consider running Prettier manually between sessions instead of during Claude conversations.

Hook #4: Error Handling Reminder

This is the gentle philosophy hook I mentioned earlier:

  • Analyzes edited files after Claude finishes

  • Detects risky patterns (try-catch, async operations, database calls, controllers)

  • Shows a gentle reminder if risky code was written

  • Claude self-assesses whether error handling is needed

  • No blocking, no friction, just awareness

Example output:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 ERROR HANDLING SELF-CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️  Backend Changes Detected
   2 file(s) edited

   ❓ Did you add Sentry.captureException() in catch blocks?
   ❓ Are Prisma operations wrapped in error handling?

   💡 Backend Best Practice:
      - All errors should be captured to Sentry
      - Controllers should extend BaseController
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The Complete Hook Pipeline

Here's what happens on every Claude response now:

Claude finishes responding
  ↓
Hook 1: Prettier formatter runs → All edited files auto-formatted
  ↓
Hook 2: Build checker runs → TypeScript errors caught immediately
  ↓
Hook 3: Error reminder runs → Gentle self-check for error handling
  ↓
If errors found → Claude sees them and fixes
  ↓
If too many errors → Auto-error-resolver agent recommended
  ↓
Result: Clean, formatted, error-free code

And the UserPromptSubmit hook ensures Claude loads relevant skills BEFORE even starting work.

No mess left behind. It's beautiful.

Scripts Attached to Skills

One really cool pattern I picked up from Anthropic's official skill examples on GitHub: attach utility scripts to skills.

For example, my backend-dev-guidelines skill has a section about testing authenticated routes. Instead of just explaining how authentication works, the skill references an actual script:

### Testing Authenticated Routes

Use the provided test-auth-route.js script:

`node scripts/test-auth-route.js http://localhost:3002/api/endpoint`

The script handles all the complex authentication steps for you:

  1. Gets a refresh token from Keycloak

  2. Signs the token with JWT secret

  3. Creates cookie header

  4. Makes authenticated request

When Claude needs to test a route, it knows exactly what script to use and how to use it. No more "let me create a test script" and reinventing the wheel every time.
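
For the curious, a script like that can be surprisingly small. This sketch is purely illustrative - the Keycloak endpoint, env var names, cookie name, and token shape are all assumptions that will differ per project:

// test-auth-route.ts - illustrative only; real details vary per project
import jwt from "jsonwebtoken";

async function main() {
  const routeUrl = process.argv[2]; // e.g. http://localhost:3002/api/endpoint

  // 1. Get a refresh token from Keycloak (password grant)
  const tokenRes = await fetch(
    `${process.env.KEYCLOAK_URL}/protocol/openid-connect/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "password",
        client_id: process.env.KEYCLOAK_CLIENT_ID ?? "",
        username: process.env.TEST_USER ?? "",
        password: process.env.TEST_PASS ?? "",
      }),
    }
  );
  const { refresh_token } = (await tokenRes.json()) as any;

  // 2. Sign a session token with the app's JWT secret
  const session = jwt.sign({ refreshToken: refresh_token }, process.env.JWT_SECRET!);

  // 3 + 4. Attach it as a cookie and hit the route
  const res = await fetch(routeUrl, { headers: { Cookie: `session=${session}` } });
  console.log(res.status, await res.text());
}

main();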

I'm planning to expand this pattern - attach more utility scripts to relevant skills so Claude has ready-to-use tools instead of generating them from scratch.

Tools and Other Things

SuperWhisper on Mac

Voice-to-text for prompting when my hands are tired from typing. It works surprisingly well, and Claude parses my rambling voice-to-text just fine.

Memory MCP

I use this less over time now that skills handle most of the "remembering patterns" work. But it's still useful for tracking project-specific decisions and architectural choices that don't belong in skills.

BetterTouchTool

  • Relative URL copy from Cursor (for sharing code references)

    • I have VSCode open to more easily find the files I’m looking for and I can double tap CAPS-LOCK, then BTT inputs the shortcut to copy relative URL, transforms the clipboard contents by prepending an ‘@’ symbol, focuses the terminal, and then pastes the file path. All in one.

  • Double-tap hotkeys to quickly focus apps (CMD+CMD = Claude Code, OPT+OPT = Browser)

  • Custom gestures for common actions

Honestly, the time savings on just not fumbling between apps is worth the BTT purchase alone.

Scripts for Everything

If there's any annoying tedious task, chances are there's a script for that:

  • Command-line tool to generate mock test data. Before using Claude Code, generating mock data was extremely annoying because I would have to fill out a form with about 120 questions just to generate one single test submission.

  • Authentication testing scripts (get tokens, test routes)

  • Database resetting and seeding

  • Schema diff checker before migrations

  • Automated backup and restore for dev database

Pro tip: When Claude helps you write a useful script, immediately document it in CLAUDE.md or attach it to a relevant skill. Future you will thank past you.

Documentation (Still Important, But Evolved)

I think next to planning, documentation is almost just as important. I document everything as I go in addition to the dev docs that are created for each task or feature. From system architecture to data flow diagrams to actual developer docs and APIs, just to name a few.

But here's what changed: Documentation now works WITH skills, not instead of them.

Skills contain: Reusable patterns, best practices, how-to guides

Documentation contains: System architecture, data flows, API references, integration points

For example:

  • "How to create a controller" → backend-dev-guidelines skill

  • "How our workflow engine works" → Architecture documentation

  • "How to write React components" → frontend-dev-guidelines skill

  • "How notifications flow through the system" → Data flow diagram + notification skill

I still have a LOT of docs (850+ markdown files), but now they're laser-focused on project-specific architecture rather than repeating general best practices that are better served by skills.

You don't necessarily have to go that crazy, but I highly recommend setting up multiple levels of documentation: broad architectural overviews of specific services, each including paths to other docs that go into the specifics of different parts of the architecture. It will make a major difference in Claude's ability to navigate your codebase.

Prompt Tips

When you're writing out your prompt, try to be as specific as possible about the result you want. Once again, you wouldn't ask a builder to come out and build you a new bathroom without at least discussing plans, right?

"You're absolutely right! Shag carpet probably is not the best idea to have in a bathroom."

Sometimes you might not know the specifics, and that's okay. In that case, tell Claude to research and come back with several potential solutions. You could even use a specialized subagent or any other AI chat interface to do your research. The world is your oyster. I promise you this will pay dividends, because you'll be able to look at the plan Claude produces and have a better idea of whether it's good, bad, or needs adjustments. Otherwise, you're just flying blind, pure vibe-coding. Then you end up in a situation where you don't even know what context to include because you don't know which files are related to the thing you're trying to fix.

Try not to lead in your prompts if you want honest, unbiased feedback. If you're unsure about something Claude did, ask about it in a neutral way instead of saying, "Is this good or bad?" Claude tends to tell you what it thinks you want to hear, so leading questions can skew the response. It's better to just describe the situation and ask for thoughts or alternatives. That way, you'll get a more balanced answer.

Agents, Hooks, and Slash Commands (The Holy Trinity)

Agents

I've built a small army of specialized agents:

Quality Control:

  • code-architecture-reviewer - Reviews code for best practices adherence

  • build-error-resolver - Systematically fixes TypeScript errors

  • refactor-planner - Creates comprehensive refactoring plans

Testing & Debugging:

  • auth-route-tester - Tests backend routes with authentication

  • auth-route-debugger - Debugs 401/403 errors and route issues

  • frontend-error-fixer - Diagnoses and fixes frontend errors

Planning & Strategy:

  • strategic-plan-architect - Creates detailed implementation plans

  • plan-reviewer - Reviews plans before implementation

  • documentation-architect - Creates/updates documentation

Specialized:

  • frontend-ux-designer - Fixes styling and UX issues

  • web-research-specialist - Researches issues (and plenty of other things) on the web

  • reactour-walkthrough-designer - Creates UI tours

The key with agents is to give them very specific roles and clear instructions on what to return. I learned this the hard way after creating agents that would go off and do who-knows-what and come back with "I fixed it!" without telling me what they fixed.

Hooks (Covered Above)

The hook system is honestly what ties everything together. Without hooks:

  • Skills sit unused

  • Errors slip through

  • Code is inconsistently formatted

  • No automatic quality checks

With hooks:

  • Skills auto-activate

  • Zero errors left behind

  • Automatic formatting

  • Quality awareness built-in

Slash Commands

I have quite a few custom slash commands, but these are the ones I use most:

Planning & Docs:

  • /dev-docs - Create comprehensive strategic plan

  • /dev-docs-update - Update dev docs before compaction

  • /create-dev-docs - Convert approved plan to dev doc files

Quality & Review:

  • /code-review - Architectural code review

  • /build-and-fix - Run builds and fix all errors

Testing:

  • /route-research-for-testing - Find affected routes and launch tests

  • /test-route - Test specific authenticated routes

The beauty of slash commands is they expand into full prompts, so you can pack a ton of context and instructions into a simple command. Way better than typing out the same instructions every time.

Conclusion

After six months of hardcore use, here's what I've learned:

The Essentials:

  1. Plan everything - Use planning mode or strategic-plan-architect

  2. Skills + Hooks - Auto-activation is the only way skills actually work reliably

  3. Dev docs system - Prevents Claude from losing the plot

  4. Code reviews - Have Claude review its own work

  5. PM2 for backend - Makes debugging actually bearable

The Nice-to-Haves:

  • Specialized agents for common tasks

  • Slash commands for repeated workflows

  • Comprehensive documentation

  • Utility scripts attached to skills

  • Memory MCP for decisions

And that's about all I can think of for now. Like I said, I'm just some guy, and I would love to hear tips and tricks from everybody else, as well as any criticisms. Because I'm always up for improving upon my workflow. I honestly just wanted to share what's working for me with other people since I don't really have anybody else to share this with IRL (my team is very small, and they are all very slow getting on the AI train).

If you made it this far, thanks for taking the time to read. If you have questions about any of this stuff or want more details on implementation, happy to share. The hooks and skills system especially took some trial and error to get right, but now that it's working, I can't imagine going back.

TL;DR: Built an auto-activation system for Claude Code skills using TypeScript hooks, created a dev docs workflow to prevent context loss, and implemented PM2 + automated error checking. Result: Solo rewrote 300k LOC in 6 months with consistent quality.

🌐
Reddit
reddit.com › r/claudeai › claude code tips i wish i knew as a beginner
r/ClaudeAI on Reddit: Claude Code Tips I Wish I Knew as a Beginner
May 17, 2025 -

I’ve been using Claude Code for the past month, and honestly, I almost gave up on it early on. Felt like it was all hype and no real edge over the other AI coding tools.

But after pushing through and exploring more of what it offers, it actually became super useful, especially once I started using the features properly.

Here are 10 things that significantly improved my experience (and productivity):

  1. Use it inside VSCode or Cursor – Sounds basic, but it makes everything feel more visual, and also gives you a much clearer view of your files and folders while working.

  2. Plan Mode – It auto-creates a plan from your vague prompt. Great for breaking down tasks before coding. Just press Shift + Tab twice.

  3. Context 7 MCP Server – Claude sometimes suggests outdated APIs. Context7 feeds it updated docs so you get more accurate responses.

  4. Custom slash commands – If you find yourself typing the same prompt again and again, just save it as a custom command and reuse it.

  5. Clear & compact – Claude starts losing the context when the chat gets too long. Use /clear for resetting the context window and /compact to keep the important stuff in an ongoing conversation.

  6. Switching models – I use Opus when I need serious thinking, and Sonnet for smaller tasks. Use /model to toggle between them.

  7. YOLO mode – This runs your command without previewing. Risky but super fast once you trust it. Use the bash command: claude --dangerously-skip-permissions

  8. Working with images – You can give Claude Code screenshots, mocks, charts and it handles them surprisingly well. Just drag, paste, or provide the path.

  9. Edit previous prompts – No need to start over if you made a typo or missed context. Double-tap Escape to bring up past prompts and tweak them. Just know it wipes everything that came after.

  10. Voice input – Some days, I just don’t feel like typing. I use Wispr Flow - press fn, speak your prompt and done.

I’ve started relying on Claude more for pair programming and planning stuff out, and when I combine these features right, it's genuinely helpful.

Would love to hear if anyone else has any other workflows or hacks I might’ve missed!

🌐
Anthropic
anthropic.com › engineering › claude-code-best-practices
Claude Code: Best practices for agentic coding
While auto-accept mode (shift+tab to toggle) lets Claude work autonomously, you'll typically get better results by being an active collaborator and guiding Claude's approach. You can get the best results by thoroughly explaining the task to Claude at the beginning, but you can also course correct ...
🌐
Reddit
reddit.com › r/claudeai › how i use claude code
r/ClaudeAI on Reddit: How I use Claude Code
June 25, 2025 -

Hey r/ClaudeAI! This is a cross-post from my blog. I'm sharing what I've learned about Claude Code here & hopefully you find it useful :)

I've been a huge fan of Claude Code ever since it was released.

The first time I tried it, I was amazed by how good it was. But the token costs quickly turned me away. I couldn't justify those exorbitant costs at the time.

Since Anthropic enabled using Claude.ai subscriptions to power your Claude Code usage, it has been a no-brainer for me. I quickly bought the Max tier to power my usage.

Since then, I've used Claude Code extensively. I'm constantly running multiple CC instances doing some form of coding or task that is useful to me. This would have cost me many thousands of dollars if I had to pay for the usage. My productivity has noticeably improved since starting this, and it has been increasing steadily as I become better at using these agentic coding tools.

From throwaway projects...

Agentic coding gives the obvious benefit of taking on throwaway projects that you'd like to explore for fun. Just yesterday, I downloaded all my medical records from the Danish health systems and formatted them so an LLM would easily understand them. Then I gave it to OpenAI's o3 model to help me better understand my (somewhat atypical) medical history. This required barely 15 minutes of my time to set up and guide, and the result was fantastic. I finally got answers to questions I'd been wondering about for years.

There are countless instances where CC has helped me do things that are useful, but not critical enough to be prioritized in the day-to-day.

To serious development

What I'm most interested in is how I can use tools like Claude Code to increase my leverage and create better, more useful solutions. While side projects are fun, they are not the most important thing to optimize. Serious projects (usually) have existing codebases and quality standards to uphold.

I've had great experience using Claude Code, AmpCode, and other AI-coding tools for these kinds of projects, but the patterns of coding are different:

  • Context curation is critical: You have to include established experience and directional cues beyond task specifications.

  • You guide the architecture: The onus is on you to provide and guide the model to create designs that fit well in the context of your system. This means more hand-holding and creating explicit plans for the agentic tools to execute.

  • Less vibe-coding, more partnership: It's more like an intellectual sparring partner that eagerly does trivial tasks for you, is somehow insanely capable in some areas, can read and understand hundreds of documentation pages in minutes, but doesn't quite understand your system or project without guidance.

Patterns and tips for agentic coding

Much of this advice can be boiled down to:

  • Get good at using the tool you're using

  • Build and maintain tools and frameworks that help you use these agentic coding tools better. Use the agentic tools to write these

Your skills and productivity gains from agentic coding tools will compound over time.

Here's my attempt at boiling down some of the most useful patterns and tips I've learned using Claude Code extensively.

1. Establish and maintain a CLAUDE.md file

This can feel like a chore but it's insanely useful and can save you a ton of time.

Use # as the prefix to your CC prompt and it'll remember your instructions by adding them to CLAUDE.md.

Put CLAUDE.md files in subdirectories to give specific instructions for tests, frontend code, backend services, etc. Curate your context!

Your investment in curating files like CLAUDE.md, or procedures as in (7) and scripts (11), is the same as investing in your developer tooling. Would you code without a linter or formatter? Without a language server to correct you and give feedback? Or a type checker? You could, but most would agree that it's not as easy, nor productive.
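
As a concrete (invented) example, a subdirectory CLAUDE.md doesn't need to be fancy:

# backend/CLAUDE.md
- Build/launch: `pnpm build`, `pnpm dev` (logs land in logs/out.log)
- Always run `pnpm test` before declaring a task done
- Never edit generated files under src/generated/
- New endpoints follow the existing routes → controllers → services layout

Short, specific, and scoped to the code Claude is actually touching.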

2. Use the commands

A few useful ones:

  • Plan mode (shift+tab). I find that this increases the reliability of CC. It becomes more capable of seeing a task to completion.

  • Verbose mode (CTRL+R) to see the full context Claude is seeing

  • Bash mode (! prefix) to run a command and add output as context for the next turn

  • Escape to interrupt and double escape to jump back in the conversation history

3. Run multiple instances in parallel

Frontend + backend at the same time is a great approach. Have one instance build the frontend with placeholder/mocked API & iterate on design while another agent codes the backend.

You can use Git worktrees to work on the same codebase with multiple agents. It's honestly more of a pain than gain when you have to spin up multiple Docker Compose environments, so just use a single Claude instance in that kind of project. Or just don't have multiple instances of the project running at the same time.

4. Use subagents

Just ask Claude Code to do so.

A common and useful pattern is to use multiple subagents to approach a problem from multiple angles simultaneously, then have the main agent compare notes and find the best solution with you.

5. Use visuals

Use screenshots (just drag them in). Claude Code is excellent at understanding visual information and can help debug UI issues or replicate designs.

6. Choose Claude 4 Opus

Especially if you're on a higher tier. Why not use the best model available?

Anecdotally, it's a noticeable step up from Claude 4 Sonnet – which is already a good model in itself.

7. Create project-specific slash commands

Put them in .claude/commands.

Examples:

  • Common tasks or instructions

  • Creating migrations

  • Project setup

  • Loading context/instructions

  • Tasks that need repetition with different focus each time

@tokenbender wrote a great guide to their agent-guides setup that shows this practice.

8. Use Extended Thinking

Write think, think harder, or ultrathink for cases requiring more consideration, like debugging, planning, design.

These increase the thinking budget, which gives better results (but takes longer). ultrathink supposedly allocates 31,999 tokens.

9. Document everything

Have Claude Code write its thoughts, current task specifications, designs, requirement specifications, etc. to an intermediate markdown document. This both serves as context later and a scratchpad for now. And it'll be easier for you to verify and help guide the coding process.

Using these documents in later sessions is invaluable. As your sessions grow in length, context is lost. Regain important context by just reading the document again.

10. For the Vibe-Coders

USE GIT. USE IT OFTEN. You can just make Claude write your commit messages. But seriously, version control becomes even more critical when you're moving fast with AI assistance.

11. Optimize your workflow

  • Continue previous sessions to preserve context (use --resume)

  • Use MCP servers (context7, deepwiki, puppeteer, or build your own)

  • Write scripts for common deterministic tasks and have CC maintain them

  • Use the GitHub CLI instead of fetch tools for GitHub context (or use an MCP server, but the CLI is better).

  • Track your usage with ccusage

    • It's more of a fun gimmick if you're on Pro/Max tier – you'll just see what you 'could have' spent if you were using the API.

    • But the live dashboard (bunx ccusage blocks --live) is useful to see if your multiple agents are coming close to hitting your rate limits.

  • Stay up to date via the docs – they're super good

12. Aim for fast feedback loops

Provide a verification mechanism for the model to achieve a fast feedback loop. This usually leads to less reward-hacking, especially when paired with specific instructions and constraints.

Reward hacking: when the AI takes shortcuts to make it look like it succeeded without actually solving the problem. For example, it might hardcode fake outputs or write tests that always pass instead of doing the real work.

13. Use Claude Code in your IDE

The experience becomes more akin to pair-programming, and it gives CC the ability to interact with IDE tools, which is very useful. E.g. access to lint errors, your active file, etc.

14. Queue messages

You can keep sending messages while Claude Code is working, which queues them for the next turn. Useful when you already know what's next.

There's currently a bug where CC doesn't always see this message, but it usually works. Just be aware of it.

15. Compacting and session context length

Be very mindful of compacting. It reduces the noise in your conversation, but also leads to compacting away important context. Do it preemptively at natural stopping points, as compression leads to information loss.

16. Get a better PR template

This is more of a personal gripe with the template itself.

Use a different PR template than the default. It seems like Claude 4/CC was instructed to use a specific template, but that template sucks. "Summary → Changes → Test plan" is OK, but it's better to have a PR body tailored to your exact PR or project.

Beyond Coding

Claude Code can be used for more than just code.

  • Researching docs → writeup (e.g. to use as context for another session)

  • Debugging (it's really good at this!)

  • Writing docs after completing features

  • Refactoring

  • Writing tests

  • Finding where X is done (e.g. in new codebases, or huge codebases you're unfamiliar with).

  • Using Claude Code in my Obsidian vault for extensive research into my notes (journals, thoughts, ideas, notes, ...)

Things to watch out for

Security when using tools

Be VERY careful about the external context you inject into the model, e.g. by fetching via MCPs or other means. Prompt injection is a real security concern. People can write malicious prompts in e.g. GitHub issues and have your agent leak information or take unintended actions.

Vibing

I've still yet to see a case where full-on, automated vibe-coding for hours on end makes sense. Yes, it works, and you can do it, but I'd avoid it in production systems where people actively have to maintain code. Or, at least review the code yourself.

Model variability

Sometimes it feels like Anthropic is using quantized models depending on model demand. It's as if the model quality can vary over time. This could be a skill issue, but I've seen other users report similar experiences. While understandable, it doesn't feel great as a paying user.

Running Claude Code

I can't help but tinker and explore the tools I use, and I've found some interesting configurations to use with Claude Code.

Some of the environment variables I'm using aren't publicly documented yet, so this is your warning that they may be unstable.

Here's a bash function I use to launch Claude Code with optimized settings:

function ccv() {
  local env_vars=(
    "ENABLE_BACKGROUND_TASKS=true"
    "FORCE_AUTO_BACKGROUND_TASKS=true"
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=true"
    "CLAUDE_CODE_ENABLE_UNIFIED_READ_TOOL=true"
  )
  
  local claude_args=()
  
  if [[ "$1" == "-y" ]]; then
    claude_args+=("--dangerously-skip-permissions")
  elif [[ "$1" == "-r" ]]; then
    claude_args+=("--resume")
  elif [[ "$1" == "-ry" ]] || [[ "$1" == "-yr" ]]; then
    claude_args+=("--resume" "--dangerously-skip-permissions")
  fi
  
  env "${env_vars[@]}" claude "${claude_args[@]}"
}

  • CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=true: Disables telemetry, error reporting, and auto-updates

  • ENABLE_BACKGROUND_TASKS=true: Enables background task functionality for long-running commands

  • FORCE_AUTO_BACKGROUND_TASKS=true: Automatically sends long tasks to background without needing to confirm

  • CLAUDE_CODE_ENABLE_UNIFIED_READ_TOOL=true: Unifies file reading capabilities, including Jupyter notebooks.

This gives you:

  • Automatic background handling for long tasks (e.g. your dev server)

  • No telemetry or unnecessary network traffic

  • Unified file reading

  • Easy switches for common scenarios (-y for auto-approve, -r for resume)

Top answer:
Thank you! This is the type of thinking and practices we need with these tools. I hate that the term Vibe Coding is such a blanket statement. Posts like this preach best practices while encouraging you to try new things.

It always feels like we have two teams arguing. The old industry veteran dev who refuses change: "VIBE CODER!! You don't know anything, your code sucks, and you need to write all your own code yourself." Or the "I'm a founder now" guy: "I launched my startup company and my new revolutionary app in 3 days. All vibe coded, works perfect!"

There exists a place in the middle that is a good spot to be. Learn the language, learn the best practices, learn SDLC and DevOps. Version control is still an absolute must. Linting, Prettier, SAST and DAST tools within your pipeline. Testing everything, reviewing diffs, not sending secrets around like they're shared candy. The list goes on. But with these agents and tools, all of these things are so much simpler to implement, practice, and understand!

Like you mentioned, pure vibe coding definitely has its place for fun or prototyping, but once something becomes a real project that you intend to share or have others use, you really should start implementing these best practices. Learning Claude Code and Gemini CLI is similar to learning the tools of GitHub or your chosen IDE. They are still tools that can be used really well or just blasted with simple prompts without a care in the world! Software development is changing whether people like it or not, and we're still not sure what this looks like in 5-10 years. Now is a perfect time to grasp the fundamentals and learn these tools inside and out.
Another reply:

Really good post! The last part intrigued me and I'll look into it further. What has been your experience with those experimental flags?
🌐
Reddit
reddit.com › r/claudeai › ultimate claude code setup
r/ClaudeAI on Reddit: Ultimate Claude Code Setup
May 27, 2025 -

Claude Code has been running flawlessly for me by literally telling it to come up with a plan to make a change.

For example: "Think of a way to create a custom contact page for this website. Think of any potential roadblocks and or errors that may occur".

Then I just take that output and paste it into Gemini and tell it, "Here is my plan to create a custom contact page for my website: [plan would go here]" (if you want to make it even better, give it access to your code). Tell it to critique and make changes to this plan. Then you just feed the critiques back into Claude Code, and they go back and forth for a while until they both settle on a plan that sounds good.

Now you just tell Claude Code, "Implement the plan, and make sure to check for errors as you go." I have done this about 13 times, and it has built and deployed every time with no extra debugging.

🌐
Reddit
reddit.com › r/claudeai › my claude code tips for newer users
r/ClaudeAI on Reddit: My Claude Code tips for newer users
August 13, 2025 -

I recently wrote up some notes for a friend that was just getting started on Claude Code, so I decided to clean them up and post them here in case anyone takes away a few useful tidbits for themselves. There are a lot of these "Claude tips & tricks" style posts, but I enjoy reading them every time I see another one.

Fyi on me: professional software engineer for 29 years, plus another dozen or so years before that growing up coding in front of the beautiful glow of a computer screen. My AI coding has evolved starting last fall from ChatGPT copy/paste -> Github Copilot in Jetbrains -> Cursor -> Github Copilot in VS Code -> Claude Code CLI. I'm loving that most recent stop on the AI train.

Planning & Kicking off work

  • Make use of the planning mode (shift-tab twice). This is a great way to get Claude to really think things through and design out the work before it dives in. You can go back and forth with it on options (e.g. ask it for a bunch of options with pros and cons on each if you’ll be using some new tech you haven’t used before), or to make changes to design choices you don’t like, and then you work with Claude to sculpt the plan into exactly what you want before it starts. I’ve seen a much higher chance of success in the build results compared to just giving Claude the initial prompt in regular mode, where it starts coding right away. Logically, this makes sense to me. It’s the same as if you were doing that with an engineer on your team. You don’t want that engineer to run off and start coding before they’ve fully formed and vetted their plan.

  • Detailed spec to kick things off when doing bigger or delicate tasks: I find I am writing longer and longer specs the more I work with Claude, often a page or two long, with my longest being one I spent an hour writing out before feeding it to Claude. It’s rare that I will start a new task with only a few sentences.

  • Markdown format for your specs / prompts: when doing a bigger spec or something that I want to get just right, I write it in the Markdown format. It’s really nice and simple. You don’t have to overthink the format at all. #, ##, ### for organizing your thoughts into logical sections and subsections, and “-“ for lists of items. It’s pretty natural to write in Markdown, and I’ve adopted it for my own personal todo list files. I feel more confidence in prompts where I’ve organized it with Markdown because there is less ambiguity with my instructions.

  • I like to ask Claude to write its final plan (from planning mode) along with a detailed todo list into a Markdown plan document when I’m at the “I approve this plan, go ahead and implement” stage before it leaves planning mode. It’s also an opportunity to clearly state which phases/tasks from the plan you want Claude to implement until it stops. Otherwise sometimes it just does Phase 1 of the work and then waits for you to tell it to continue with Phase 2, 3, and beyond. The Markdown doc is also useful because sometimes if Claude is doing a very large build for you, it might eventually forget some of the items from the original plan that it was supposed to do, especially if you need to iterate with Claude halfway through its work in order to get one of the earlier Phases right. You can point Claude back at the Markdown doc later to remind it of the missing steps. I also typically ask Claude to update the Markdown doc as it works to update the status of which steps it completed, but I find it’s a mixed bag getting Claude to do a thorough job of updating it as it goes. But you can prompt Claude later to update the doc to reflect the state of the implemented work.

  • Mockup: a very cool thing a friend did that I'll start using was to ask Claude Code to generate a mockup of the planned UI or interface (even an interactive mockup if you want). It generates the mockup much quicker than a full build, and it gives you a chance to provide Claude feedback, make changes, and then use the mockup as part of its spec for the real build.

Configurations

  • CLAUDE.md is handy for instructions you want Claude to (nearly) always keep in mind (I say nearly, because it seems like the CLAUDE.md info slides out of context at times, unfortunately). I give it context about the overall project. I include some of the key parts of my initial spec and/or what the overall objective is with the project. I tell it the command line build and launching instructions (I’m typically having Claude write a nice clean / build / launch / deploy script for me, which I then have Claude document in the CLAUDE.md file). I tell it where the log files can be found. Over time, my CLAUDE.md gets extended with “Always do this…” or “Never do this…” comments from me as well.

  • I've found it useful to add instructions to my CLAUDE.md to help Claude keep things tidy, such as: "When Claude generates .md markdown files, it should place them in docs/ai/". Otherwise you can end up with docs sprinkled all over your project. The same goes for test scripts and other debug one-offs Claude will create.

  • .claude/commands/ is very helpful. You can write prompt-like instructions into markdown files in this directory in your project, and then it automatically creates each as a new “/“ command in your project’s Claude sessions. Example commands I use: /commit (.claude/commands/commit.md) to git add my unstaged files, draft a commit message, and git commit everything. Or /bl (.claude/commands/bl.md) with instructions to do a clean build, clear the existing output log files, launch my app, and then check the results of the log file output. (A sketch of a command file like this appears after this list.)

  • .claude/settings.json is handy for allow-listing various commands (e.g. grep) that you want Claude to always do without asking for your approval.

  • Project vs Global commands: you can create those .claude/commands/ in your project's .claude directory, or you can create commands that will work across all of your Claude projects by putting them in your home directory's ~/.claude/commands directory.

  • /statusline: this recently added feature is quite nice. Type /statusline and Claude will help you configure what appears under your Claude Code CLI prompt box. I have mine set to show me my project name (based on the project's root working directory name), my git status along with how many files are currently modified, and the current model Claude is using, all with some nice colors and dividers. The /statusline can show you quite a bit more than that if you like. The code for the statusline ends up in ~/.claude/statusline.sh.
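
Here's the kind of thing that goes in a command file, continuing the /commit example from above (the wording is invented, but the idea is just a saved prompt):

# .claude/commands/commit.md
Look at the current unstaged changes with git status and git diff.
Stage everything with git add, then write a concise commit message
summarizing the changes and commit. Do not push.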

Prompting / Models

  • “Think” prompt keyword: Claude has some special behavior for spending more time (and tokens if you worry about that) thinking and researching and coming up with a plan or solution. "think" < "think hard" < "think harder" < "ultrathink." These are useful, I like using them. I ramp up the thinking level depending on the complexity of the task we’re planning together.

  • Opus vs Sonnet: by default Claude will start your day using Opus for everything. You can change the /model but I leave it be. If you’ve been cranking away hard on lots of parallel sessions then you’ll see Claude switch to Sonnet partway through your day for usage limit reasons. I do feel like Opus does a better job with big coding jobs, so it’s a bit of a bummer when it switches to Sonnet. But I carry on and keep working. Nothing to be done about it since I’m already on the 20x plan. Anthropic recently added this /model option: “Opus Plan Mode: Use Opus 4.1 in plan mode, Sonnet 4 otherwise” which I haven’t tried out but seems like it will be a nice balance and preserve Opus for your planning throughout the day.

  • Auto-compacting: I see a lot of fear of the auto-compact. This is when the context fills up and Claude needs to summarize the conversation and pending tasks so it can flush its context and have room to continue. It generally does a decent job, but I have seen Claude lose some sense of what it needed to do, or how things worked, after the auto-compact. Coding typically goes smoother if I don’t run into auto-compaction partway through a task. My approach: after I’ve finished a major task and it’s all debugged and committed, I /clear to completely wipe the context, which reduces the odds that my next large task will hit auto-compaction halfway through. If I’m doing lots of small tasks with Claude, I don’t worry about /clear and am happy to keep a long-running session going, but I’ll trigger /compact myself if I notice I’m down to a low single-digit % remaining until auto-compaction. That said, it’s really not that often that I’m tripped up when Claude does need to auto-compact mid-task, so I don’t stress over this too much.

Claude control

  • Esc: you can pause Claude at any time by hitting Escape. Useful if you don’t like what it’s doing or need to change anything.

  • Esc Esc: you won’t need this often, but it’s good to know about. I’ve used it when Claude has errored out or complained that I prompted it to do something it’s not allowed to do (I guess it thought I was asking for something nefarious…video game coding can have dangerous-sounding trigger words). Pressing Escape twice brings up a stack of your previous prompts, and you can step back to an earlier one and issue a fresh prompt from that point forward.

  • Feel free to type in something for Claude while it’s in the middle of working. If you see Claude make a wrong assumption, or you want to add a task to the current work, or want to give it more debug info, you are welcome to enter it in and Claude will pick up your additional prompt shortly after and incorporate it. Very handy.

  • @: Claude is really good at finding your code files when you prompt in simple English, but I like using the @ to reference specific files in the project tree. When I want to mention a class name, I’m more likely to @ and type the class file, for two reasons: 1) it has nice tab auto-completion, 2) it makes the request very explicitly clear for Claude.

  • Ctrl-R: I don’t use this much, but it’s interesting. Hit Ctrl-R while Claude is working and you’ll see a much more verbose output of what it is doing.

  • claude -r: super useful command line option when launching Claude. This lets you rejoin an earlier Claude session. This is very handy if you rebooted or Claude crashed, or if you want to ask Claude to do a task in some earlier session where it built up very useful context on a particular topic.
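
    The variants I use (flag behavior as of recent versions; run claude --help to confirm on yours):

      claude -r        # resume: pick an earlier session to rejoin
      claude -c        # shortcut to continue the most recent session in this directory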

Workflow

  • Git: I regularly use Claude to do my git add/commits when I’m ready to submit to my repo. It writes a nicer git commit message than I would. I don’t auto-approve Claude for git commands, though; I manually trigger this when I’m ready, after finishing my testing and reviewing the diffs in VS Code. Claude likes to start doing git adds and commits after every change, which is annoying because it wants to commit things I haven’t even tested yet. I’ve added instructions to my CLAUDE.md files telling Claude not to do that, but sometimes it gets excited and slips back into that mode.

  • Screenshots & logs. Drag in an image or Ctrl-V to paste an image from your clipboard. It’s always good to give Claude more actionable data to help it fix a problem. Screenshots aren’t foolproof though. There are times Claude says “the screenshot shows it has been fixed” when clearly the screenshot showed nothing of the sort. And for logs: yeah you can copy & paste output into Claude, but it’s even nicer when you let Claude know where the log files are and you tell it to check the logs itself. Let Claude add your log directories external to your project directory when it asks; this way it can check the log files itself without asking you each time.

  • Feedback loops: this is a dream setup, but it can be hard to achieve, and it’s worth working with Claude on scripts and tools to get there. When you’re trying to get Claude to fix something tricky, or it’s having a hard time with a fix, set Claude up with everything it needs: the ability to build and launch your app with the right config and command line settings, plus access to the output logs (or an MCP tool that can take screenshots for it, like Peekaboo MCP on the Mac). Then tell Claude to do all of those things in a loop until it solves the problem. Claude will loop through those steps, adding debug info, trying again, inspecting the output, and repeating the cycle until it gets it. Or blows up trying. Another approach is to have Claude write a debug log class/helper, direct Claude to send its debug output there, and build the feedback loop on that, which keeps Claude’s debugging a bit more partitioned from your console, logs, or other outputs. A sketch of the kind of harness I mean is below.
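
    Here's a hypothetical wrapper script for that loop; every path and script name in it is invented and project-specific, so treat it as a shape, not a recipe:

      #!/bin/bash
      # Hypothetical fix-loop harness: build, relaunch, and surface fresh log
      # output so Claude can inspect the evidence on every iteration.
      set -e
      ./scripts/build.sh                 # made-up build script
      rm -f logs/app.log                 # start each run from a clean log
      ./scripts/launch.sh --headless &   # made-up launch script
      APP_PID=$!
      sleep 10                           # give the app time to produce output
      kill "$APP_PID" 2>/dev/null || true
      tail -n 100 logs/app.log           # what Claude reads to decide its next fix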

  • Cleanups: Claude can create a lot of one-off scripts and debug logging during its debugging phases, which is all perfectly fine. I've found Claude can clean all of these up quite nicely. This is another chance for a handy .claude/commands entry where you tell Claude how you want the project tidied up prior to committing to your repo.

  • git restore: I find it important to commit my working and tested changes to git before embarking on another Claude prompt. This is just normal good coding practice, but it's especially valuable when working with AI: if I'm struggling to get good results from a new Claude prompt, and follow-up prompts are not steering Claude in the right direction, it's often best to git restore back to the last commit and try again. Mentally, it can be tough to pull the trigger and give up on a prompt’s results, because with Claude you always feel like you are one prompt away from getting the result you want. But things can get messy if you are stuck prompting Claude over and over and it’s still not delivering, and I've gotten great results from git-restoring and refining my initial prompt based on how I saw things go sideways during the last attempt. This is also a good time to /clear, wiping the bad paths from Claude’s context before you try again fresh with an improved prompt.
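
    Roughly the reset I do between attempts (note that git clean -fd deletes untracked files, so make sure nothing you want to keep is untracked):

      git restore .    # throw away the failed attempt's unstaged edits
      git clean -fd    # remove new untracked files Claude created (destructive!)
      # then /clear in Claude and retry with the refined prompt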

Handy links

  • Anthropic's Common workflows doc is filled with great tips: https://docs.anthropic.com/en/docs/claude-code/common-workflows

  • An easy way to check your Claude Code usage: npm install -g ccusage ( https://github.com/ryoppippi/ccusage )

🌐
Sid Bharath
siddharthbharath.com › home › blog › cooking with claude code: the complete guide
Cooking with Claude Code: The Complete Guide - Sid Bharath
July 8, 2025 - You’re right now on what’s called the Default Mode. You tell Claude to do something, it suggests a change, waits for your permission, and then executes. With the new output styles feature, you can even ask Claude to explain why it’s doing ...
🌐
Reddit
reddit.com › r/claudeai › claude code update v1.0.54 - windows mode switching fixed and more!
r/ClaudeAI on Reddit: Claude Code update v1.0.54 - Windows mode switching fixed and more!
July 1, 2025 -

Version 1.0.52:

  • Added support for MCP server instructions

Version 1.0.53:

  • Updated @-mention file truncation from 100 lines to 2000 lines

  • Add helper script settings for AWS token refresh: awsAuthRefresh (for foreground operations like aws sso login) and awsCredentialExport (for background operation with STS-like response).

Version 1.0.54:

  • Hooks: Added UserPromptSubmit hook and the current working directory to hook inputs

  • Custom slash commands: Added argument-hint to frontmatter

  • Windows: OAuth uses port 45454 and properly constructs browser URL

  • Windows: mode switching now uses alt + m, and plan mode renders properly

  • Shell: Switch to in-memory shell snapshot to avoid file-related errors

I am particularly excited about the @-mention buff and the new hook!

https://claudelog.com/claude-code-changelog/

🌐
Reddit
reddit.com › r/claudeai › made a tool to run claude code with other models (including free ones)
r/ClaudeAI on Reddit: Made a tool to run Claude Code with other models (including free ones)
1 month ago -

Got tired of being locked to Anthropic models in Claude Code. Built a proxy that lets you use 580+ models via OpenRouter while keeping the full Claude Code experience.

What it does:

  • Use Gemini, GPT, Grok, DeepSeek, Llama — whatever — inside Claude Code

  • Works with your existing Claude subscription (native passthrough, no markup)

  • Or run completely free using OpenRouter's free tier (actual good models, not garbage)

  • Multi-agent setup: map different models to opus/sonnet/haiku/subagent roles

Install:

npm install -g claudish
claudish --free

That's it. No config.

How it works:

Sits between Claude Code and the API. Translates Anthropic's tool format to OpenAI/Gemini JSON and back. Zero patches to the Claude Code binary, so it doesn't break when Anthropic pushes updates.

Everything still works — thinking modes, MCP servers, /commands, the lot.

Links:

  • Site: https://claudish.com

  • GitHub: https://github.com/MadAppGang/claude-code/tree/main/mcp/claudish

Open source, MIT license. Built by MadAppGang.

What models are people wanting to try with Claude Code's architecture? Curious what combos work well.

🌐
Builder.io
builder.io › blog › claude-code
How I use Claude Code (+ my best tips)
September 29, 2025 - There's a solution though. Every time I open Claude Code, I hit Command+C and run claude --dangerously-skip-permissions. It's not as dangerous as it sounds — think of it as Cursor's old yolo mode. Could a rogue agent theoretically run a destructive command?
🌐
Reddit
reddit.com › r/claudeai › 25 top tips for claude code
r/ClaudeAI on Reddit: 25 top tips for Claude Code
September 15, 2025 -

I've been putting together a list of tips for how to use Claude Code. What would you add or remove? (I guess I'll edit this post with suggestions as they come in).

Small context

  • Keep conversations small+focused. After 60k tokens, start a new conversation.

CLAUDE.md files

  • Use CLAUDE.md to tell Claude how you want it to interact with you

  • Use CLAUDE.md to tell Claude what kind of code you want it to produce

  • Use per-directory CLAUDE.md files to describe sub-components.

  • Keep per-directory CLAUDE.md files under 100 lines

  • Reminder to review your CLAUDE.md and keep it up to date

  • As you write CLAUDE.md, stay positive! Tell it what to do, not what not to do.

  • As you write CLAUDE.md, give it a decision-tree of what to do and when

Sub-agents

  • Use sub-agents to delegate work

  • Keep your context small by using sub-agents

  • Use sub-agents for code-review

  • Use sub-agents just by asking! "Please use sub-agents to ..."

Planning

  • Use Shift+Tab for planning mode before Claude starts editing code

  • Keep notes and plans in a .md file, and tell Claude about it

  • When you start a new conversation, tell Claude about the .md file where you're keeping plans+notes

  • Ask Claude to write its plans in a .md file

  • Use markdown files as a memory of a conversation (don't rely on auto-compacting)

  • When Claude does research, have it write down in a .md file

  • Keep a TODO list in a .md file, and have Claude check items off as it does them

Prompting

  • Challenge yourself to not touch your editor, to have Claude do all editing!

  • Ask Claude to review your prompts for effectiveness

  • A prompting tip: have Claude ask you 2 important clarifying questions before it starts

  • Use sub-agents or /new when you want a fresh take, not biased by the conversation so far

MCP

  • Don't have more than 20k tokens of MCP tool descriptions

  • Don't add too many tools: <20 is a sweet spot

🌐
Reddit
reddit.com › r/claudecode › looking for advice before subscribing. which claude plan are you using, and do you use it personally or at work?
r/ClaudeCode on Reddit: Looking for advice before subscribing. Which Claude plan are you using, and do you use it personally or at work?
3 weeks ago -

I'm trying to get a sense of what plans people here are using for Claude and how they fit different use cases.

If you're willing to share:

Which Claude plan are you on (20 / 100 / 200)?

Are you using it for personal projects, within your own company, or as an employee in a larger organization? It's hard for me to tell whether someone is a hobbyist or actually using it in large, enterprise codebases.

How do the limits feel in practice, especially on the Pro plan?

This week I used the Claude API before and during a workshop and already hit around $24 in usage, so I'm wondering if a subscription might be more cost-effective. At the same time, the reported message limits that many of you share make me a bit unsure... if they hit so hard on the 100 and 200 plans, should I even bother with the 20?

I intend to use it to get hands-on experience with vibe engineering, multi-agent flows, and maybe some backlog items and script generation (bash, Azure, etc.).

If you’ve tested both API usage and subscriptions, how do they compare for you in terms of value?

Any insights would help a lot thanks!

EDIT: Thanks to everyone that took the time to share their perspective and opinion. It was really helpful!

I learned that Codex is available to GPT Pro users (I currently have that subscription), so I started testing it yesterday. It feels a bit behind in terms of flows, but it's too early for me to judge. At some point I'll go with a Claude Pro plan and compare the results.