Hey,
I wonder how you have made subagents work most effectively for you in Claude Code so far. I feel like (as always) there have quickly been tons of repos with 50+ subagents, much like when RooCode introduced their Custom Modes a few months back.
After some initial tests, people seem to realize that it's not really effective to just have tons of them with basic instructions and hope they do wonders.
So my question is: What works best for you? What Sub-agents have brought you real improvements so far?
The best things I can currently think of are very project-specific. But I'm building a little task/project management system for Claude Code (Simone on GitHub), and I wonder which more generic agents would work.
Keen to hear what works for you!
Cheers,
Helmi
P.S.: There's also an Issue on Github if you want to chime in there: Link
You can now create CUSTOM AI AGENTS inside Claude Code that handle specific tasks with their OWN CONTEXT WINDOWS. This is HUGE for anyone building complex projects.
Here's a sub agent I just made that's ALREADY saving me hours - a code refactoring agent that automatically refactors code:
---
name: code-refactoring-specialist
description: MUST BE USED for refactoring large files, extracting components, and modularizing codebases. Identifies logical boundaries and splits code intelligently. Use PROACTIVELY when files exceed 500 lines.
tools: Read, Edit, Bash, Grep
---

You are a refactoring specialist who breaks monoliths into clean modules.

When slaying monoliths:

1. Analyze the beast:
- Map all functions and their dependencies
- Identify logical groupings and boundaries
- Find duplicate/similar code patterns
- Spot mixed responsibilities

2. Plan the attack:
- Design new module structure
- Identify shared utilities
- Plan interface boundaries
- Consider backward compatibility

3. Execute the split:
- Extract related functions into modules
- Create clean interfaces between modules
- Move tests alongside their code
- Update all imports

4. Clean up the carnage:
- Remove dead code
- Consolidate duplicate logic
- Add module documentation
- Ensure each file has single responsibility

Always maintain functionality while improving structure. No behavior changes!
What sub agents are y'all building??? Drop yours below
We've prepared a comprehensive collection of production-ready Claude Code subagents: https://github.com/VoltAgent/awesome-claude-code-subagents
It contains 100+ specialized agents covering the most requested development tasks - frontend, backend, DevOps, AI/ML, code review, debugging, and more. All subagents follow best practices and are maintained by the open-source framework community.
Just copy to .claude/agents/ in your project to start using them.
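For example, something like this (adjust the source path to wherever the agent files actually live in the repo; the path below is just a placeholder):

# clone the collection
git clone https://github.com/VoltAgent/awesome-claude-code-subagents.git

# copy the agent definition files into your project (source path is a placeholder)
mkdir -p .claude/agents
cp awesome-claude-code-subagents/path/to/agents/*.md .claude/agents/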
Claude Code custom slash command /typescript-checks, utilising Claude Code's new subagents (https://docs.anthropic.com/en/docs/claude-code/sub-agents), ran for nearly 2.5 hrs fixing, verifying the fixes, and pushing; ccusage reported 887K tokens/min!
I ended up creating 49 subagents with the help of Claude Code, converting my existing custom slash command's parallel agents into subagents. I created the first two manually via the /agents process and then told Claude Code to automate the creation of the remaining 47 subagents, following the template of the first two.
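To give an idea of the shape (a simplified sketch, not my actual /typescript-checks command; the typescript-fixer agent name is made up), a command file in .claude/commands/ just tells Claude how to fan the work out:

<!-- hypothetical sketch of a .claude/commands/typescript-checks.md -->
Run tsc --noEmit and collect every type error.
Group the errors by file, then use the Task tool to launch a typescript-fixer subagent (hypothetical name) for each group in parallel.
When the subagents report back, run tsc --noEmit again to verify the fixes, and repeat until the build is clean, then commit and push.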
[Screenshot: Claude Code subagents in action]
[Screenshot: completed Claude Code slash command using subagents]
I was just speaking with a friend two days ago about how awesome agents would be. MCP tools only get followed/used sparingly, so I always struggled with them, but agents are baked into the core of Claude Code, and it utilizes them exceptionally well!
I had an idea: first analyze everything in this project that continuously annoyed me about Claude, and build an agent around that!
1. Find what annoys the shit out of you
Check all my chat history for this project (/Users/<username>/.claude.json) and look for what appear to be commonalities in frustration with Claude Code; identify a top 10 list of things that keep coming up.
It didn't work unless I told it where my .claude.json file was (which holds the chat history)!
2. Enter command "/agents" to create an agent
I actually created one called "quality control" and then pasted the above into it, asking it to create an agent that assesses stop-points in the code for these key frustrations.
I also made a "CLAUDE.md checker", which reads CLAUDE.md and checks that recent changes adhere to it.
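For reference, the CLAUDE.md checker boils down to something like this (a paraphrased sketch rather than the exact file I generated; the tool list is a guess):

---
name: claude-md-checker
description: Use PROACTIVELY as the last step of every to-do list to verify that recent changes adhere to CLAUDE.md.
tools: Read, Grep, Bash   # tool list is a guess
---

You are a compliance checker. Read CLAUDE.md, then review the most recent changes (for example via git diff) against each rule in it. Report any violations with file and line references, and restate the relevant rules so they get fed back into the conversation.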
3. Add instructions to CLAUDE.md
I used this prompt:
Evaluate the agents available to you and add instructions on usage to CLAUDE.md. Make sure that the end of EVERY to do list is to use the CLAUDE.md checker, and ensure each stop point or new feature utilizes the quality control agent.
...and voila!
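The kind of section that prompt produces in CLAUDE.md looks roughly like this (an illustrative sketch, not the actual text it wrote; agent names slugged):

<!-- illustrative sketch, not the generated CLAUDE.md content -->
## Agents
- quality-control: run at every stop point and after every new feature to check the change against the known frustration list.
- claude-md-checker: run as the final item of EVERY to-do list to confirm recent changes still follow CLAUDE.md.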
I'm just happy I have something Claude Code actually follows now, rather than skipping it the way it did with MCP tools. I think having the CLAUDE.md checker at the end also ensures it is always feeding the rules back into the code.
4. BONUS: I added this later on
Sometimes the agents do not run, can you strengthen the rules and also in each agent make sure the final output back instructs to IMPORTANT: USE THIS AGENT AGAIN NEXT TIME.
What have you discovered?
I've seen lots of posts examining running Claude instances in multi-agent frameworks to emulate a full dev team and such.
I've read the experiences of people whose Claude instances have gone haywire and hallucinated, "lied", or outright fabricated claims that they had done task X or Y or written the code for X and Z.
I believe we are overlooking a salient and underutilised feature: Claude subagents. Claude's official documentation highlights when we should be invoking subagents (for complex tasks, verifying details, investigating specific problems, and reviewing multiple files and documents), plus for testing.
I've observed that my context percentage lasts vastly longer and the results I'm getting are much better than in previous use.
You have to be pretty explicit in the subagent invocation ("use subagents for these tasks", "use subagents for this project") and invoke it multiple times in your prompt.
I also haven't seen the crazy amount of virtual memory being used anymore.
I believe the invocation allows Claude to either use data differently locally, by more explicitly mapping the links between pieces of information, or handle the information differently at the back end, beyond just spawning multiple subagents.
( https://www.anthropic.com/engineering/claude-code-best-practices )
I had seen these "tasks" launched before, and I had heard of people talking about sub-agents, but never really put the two together for whatever reason.
I only really learned how to leverage them a short while ago, for a refactoring project on a test GraphRAG implementation I'm doing in Neo4j, and my god, it's amazing!
I probably spun up maybe 40 sub-agents total in this one context window, all with roughly the level of token use you see in this picture.
The productivity is absolutely wild.
My mantra is always "plan plan plan, and when you're done planning--do more planning about each part of your plan."
Which is exactly how you get the most out of these sub agents it seems like! PLAN and utilize sub-agents people!
Any practical guides?
I took the course on deeplearning.ai to work with Claude more efficiently, since I use it almost every day now...
When and where do you use hooks and subagents? I would like to know some practical use cases.
Ready to transform Claude Code from a smart generalist into a powerhouse team of AI specialists? 🚀
I'm thrilled to share Claude Code Subagents, a collection of 35 specialized AI agents designed to supercharge your development workflows.
Instead of a single AI, imagine an orchestrated team of experts automatically delegated to tasks based on context. This collection extends Claude's capabilities across the entire software development lifecycle.
Key Features:
🤖 Intelligent Auto-Delegation: Claude automatically selects the right agent for the job.
🔧 Deep Domain Expertise: 35 agents specializing in everything from backend-architecture and security-auditing to react-pro and devops-incident-responder.
🔄 Seamless Orchestration: Agents collaborate on complex tasks, like building a feature from architecture design to security review and testing.
📊 Built-in Quality Gates: Leverage agents like code-reviewer and qa-expert to ensure quality and robustness.
Whether you're designing a RESTful API, optimizing a database, debugging a production incident, or refactoring legacy code, there’s a specialist agent ready to help.
Check out the full collection of 35 agents on GitHub! I'd appreciate a star ⭐ if you find it useful, and contributions are always welcome.
GitHub Repo: https://github.com/lst97/claude-code-sub-agents
I've put a bunch of time into trying to convert my old workflows into subagents. I have a decent structure set up:
Each agent is [name]-operator.md
Each agent has a script file it uses to do stuff: [name]-manager.md
One agent I have built is one that can operate Jira, which keeps issue management outside the main context thread and handles all the particulars of the org's config.
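To give a sense of the shape (a simplified sketch, not my real files; the script path is a placeholder):

---
name: jira-operator
description: Use for all Jira issue management so ticket details stay out of the main context thread.
tools: Bash, Read   # tool list is illustrative
---

You operate Jira for this org. For every request, call the companion helper script (placeholder path: scripts/jira-manager.sh) rather than improvising API calls, then report back a short summary. Never paste full ticket bodies into the main conversation.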
When I had this as a workflow, using the same set of functions available in a shell file, Claude Code did pretty well. It was fairly efficient and worked.
But having ported this to a sub agent, I'm seeing these super long Cooking... times that don't make sense, given that the model is running fine in other sessions (no lag).
They're also consuming what seems like vast amounts of tokens to handle these very simple tasks.
On top of this, we have the sub-agent-invoking-a-sub-agent bug, which is causing agents to lock indefinitely and even break the terminal session. See this post.
Any feedback here? I'm frustrated by how CC obscures the stdout and stderr of agents. I have no idea if it is trying the same stuff or what.
At this time, I'd say people should not yet try to adopt sub agents: they aren't ready for primetime.
I've messed with most of the big LLMs out there, and I keep coming back to Claude Code. Subjectively, it has always felt to me so much smarter than all the rest. It took me until this morning to work out a theory about why. It has nothing to do with context window size or training data or how Claude "thinks," and everything to do with an emergent property arising from the interaction of how Claude uses agents and how I think/work. I’m not sure this is ultimately correct at the level of Claude’s specific code, but it’s definitely an interesting heuristic that I’m going to experiment with a lot.
To understand where I'm coming from, I should note that I vastly prefer thinking about systems design and computer science to writing line-level code (I'm also far better at the former than the latter). In practice, this means that I understand my codebase quite well, have a lot of practice with task-decomposition, and many of my AI calls boil down to "fix this problem in this component in this way."
We know that the "intelligence" of an LLM is more or less inversely proportional to how full its context window is. Claude Code doesn't have the largest window; however, by architecture, each sub-agent has its own context, which is held separate from Claude Prime. This means that for short-swing, well-defined one-and-done tasks, if you assign them to a sub-agent, you can get it done without cluttering up the main context window. For example, I’ve gotten great results with tasks such as “fix this CSS error,” “re-validate that JSON,” “check the copy in this doc,” “re-architect this pipeline to use a different file format,” “figure out why X test is failing,“ etc. This lets the main Claude instance stay more focused, compact less, and—effectively—be smarter.
The challenge, of course, is that you have to understand your codebase and the specific task well enough to carve it out into something small enough for the sub-agents to handle. Claude is designed to do this for itself and invoke its own subagents, but I've found the results occasionally inconsistent, especially if you make the mistake of asking the agents to work in parallel on the same files or on closely interdependent tasks. Don't ask me how many times I've made this error; the answer is embarrassing. I've gotten far better results when I act like the project manager and portion out the work intentionally.
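A concrete way to portion out one of those short-swing tasks is a tiny single-purpose agent, something like this (my own sketch of the pattern; the name and tool list are arbitrary):

---
name: json-validator
description: Use for one-off "re-validate this JSON/config" tasks so they never touch the main context window.
tools: Read, Bash   # arbitrary choice for the sketch
---

Validate the file you are given (for example with python -m json.tool or jq), report pass/fail plus the exact line of any error, and nothing else. Keep the reply to a few lines.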
I'm really curious if any of you have had similar experiences. The lesson might be that fixating on the absolute power of an LLM is a bit of a red herring, and a better analysis might start from figuring out which LLM best interacts with your personal habits of mind and work style.
If you're reading this and haven't given sub-agents a shot, I really can't recommend it highly enough: https://docs.claude.com/en/docs/claude-code/sub-agents
The recent Claude Code v2.0.60 introduced resumable subagents. They didn't advertise this (they only advertised background agents), but here's what you can now do. Type the following prompt into Claude:
I'd like to learn more about subagents. Please could you help me experiment with them?
(1) Use the Task tool to run a background subagent "Why is blood red?", then use AgentOutputTool to wait until it's done.
(2) Use the Task tool to resume that agent and ask it "What other colors might it be?", and tell me its verbatim output. Thank you!
These resumable agents aren't much use yet. But I think that once the bugs are fixed, they'll become hugely important. We've already seen lots of people hack together "agent round-table" solutions, where agents interact with each other on an ongoing basis. Once Anthropic fixes the bugs, these things will be supported first class within Claude Code.
Bugs?
Crucially, subagent transcripts only include assistant responses, not prompts. So when you resume a subagent it'll give odd output like "Oh that's strange! I told you about why blood is red but it appears you didn't even ask me that!" I don't think the feature can be used well until the prompts are included.
The AgentOutputTool tool takes a single agentId as parameter, but its output can show the status of multiple subagents. That'll be useful for multi-agent round-tables. I hope they'll let it take a list or wildcard for all subagents.
There's not yet a <system-reminder/> for subagents being ready, like there is with BashOutputTool. I'm sure they'll add this.
It's a bit irritating that you can't obtain an agentId without using run-in-background. But I guess we can live with that. I suspect the PostToolUseHook shows agentId though (since it appears in the transcript as part of the rich json-structured output of the Task tool).
The "resumable" parameter was actually released in v2.0.28 on October27th, but there was no way to provide it an agentId value until now by kicking off a background task. (Well, there was, but you had to do it yourself by reading the ~/.claude/projects/DIR/agents-*.jsonl files)
Hey claude coders, I keep seeing videos and posts of people adding 10+ subagents to their projects. With all honesty, I am not seeing a great value add. Are they just flexing?
Has anyone actually used subagents for more than 2 days and can confirm it speeds up your dev process? Real talk needed.
If you've been coding since before the Vibe-coding era, you probably already give Claude very specific, architecturally thought-out tasks with links to relevant files and expected types. Plus opening 3-5 terminal windows for different tasks already works great.
Frontend subagent? Claude Code already knows my styling when building on existing projects.
Subagent for backend functions? CC sees how I coded other endpoints and follows the structure.
Somebody please convince me to use subagents. What productivity gains am I actually missing here?
Just a heads-up for anyone using Claude Code: it doesn’t automatically spin up subagents. You have to explicitly tell it in your prompt if you want it to use them.
I learned this the hard way. I was expecting multi-agent behavior out of the box, but it turns out it’s fully prompt-driven.
I've been using Claude Code for 2 months now and really exploring different workflows and setups. While I love the tool overall, I keep reverting to vanilla configurations with basic slash commands.
My main issue:
Sub-agents lose context when running in the background, which breaks my workflow.
What I've tried:
Various workflow configurations
Different sub-agent setups
Multiple approaches to maintaining context
Despite my efforts, I can't seem to get sub-agents to maintain proper context throughout longer tasks.
Questions:
Is anyone successfully using sub-agents without context loss?
What's your setup if you've solved this?
Should I just stick with the stock configuration?
Would love to hear from others who've faced (and hopefully solved) this issue!
Hi,
Anyone know how to prevent Claude Code from creating subagents? I understand this may be a useful feature, but in its current form it's just a waste of time and tokens.
I gave it a PHP error to fix which was very trivial: basically it had swapped the string and the database handler in a function call. Everything was in the error message I gave it:
PHP Fatal error: Uncaught TypeError: sql_escape_string(): Argument #1 ($db) must be of type mysqli, string given, called in *****.php on line 129 and defined in *****.php:60
Yet it spawned an agent that ate 37k tokens and 3 minutes to find the issue:
● Plan(Investigate PHP TypeError) ⎿ Done (19 tool uses · 37.7k tokens · 2m 49s)
That just doesn't make any sense and I would prefer disabling this subagent behavior instead of burning so much time and money just to spare some context by reading file portions in a subagent.
Anyone know how to disable it? I didn't find anything in the options.
massive user of claude code, almost all day as a senior developer.
anyone using the sub agents and really benefitting? would love to know how you are utilising them and how they are benefitting you.
TIA
I've been working with Claude's sub-agent capabilities and created a collection of commands that orchestrate multiple specialized AI agents to handle complex software engineering tasks.
These are only a handful of my command scripts for using sub agents within CC. Think of it as having a team of AI specialists working in parallel on your codebase.
I posted about this a couple of days ago but wasn't able to paste in a code example, so I decided to put in 5 of my sub agent commands and what they do.
These are very advanced command setups for sub agents, so please use them with care and watch your tokens... :)
https://github.com/derek-opdee/subagent-example-script
What it does:
Tech Debt Finder & Fixer (@tech-debt-finder-fixer)
5 agents analyze your code simultaneously for duplicates, complexity, dead code, etc.
Creates a prioritized fix plan with time estimates
Safely refactors with automated testing
Architecture Reviewer (@arch-review)
Maps dependencies and detects circular references
Generates Mermaid/PlantUML diagrams
Finds layer violations and anti-patterns
Test Generator (@test-gen)
Analyses code paths to find untested areas
Generates unit, integration, and E2E tests
Follows your existing test patterns
Performance Optimizer (@perf-optimize)
Frontend bundle analysis and React optimisation
Database query optimisation with index suggestions
Full-stack performance improvements
Migration Assistant (@migrate)
Helps migrate between framework versions (React 17→18, Laravel 8→10, etc.)
Runs codemods and handles manual updates
Incremental migration with rollback support
How it works:
# Example: Find and fix tech debt
@tech-debt-finder-fixer src/ --dry-run

# Example: Optimize React performance
@perf-optimize --target frontend --bundle-analysis
Multiple specialised agents work in parallel:
Analysis agents examine different aspects simultaneously
Synthesis agent combines findings into actionable insights
Implementation agents execute approved changes
Verification agent ensures everything works
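As a rough illustration of that pattern, the instructions in one of these command files boil down to something like the following (a simplified sketch, not a verbatim copy from the repo; $ARGUMENTS is the slash command's argument placeholder):

<!-- simplified sketch of a multi-phase sub agent command -->
Phase 1 - Analysis: use the Task tool to launch parallel subagents, one each for duplicate code, complexity hotspots, and dead code in $ARGUMENTS.
Phase 2 - Synthesis: launch a subagent to merge the findings into a prioritized fix plan with time estimates, then wait for approval.
Phase 3 - Implementation: launch a subagent per approved fix, keeping changes small and isolated.
Phase 4 - Verification: launch a final subagent to run the test suite and confirm nothing regressed.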
!! IMPORTANT TOKEN WARNING !!
These commands use significant tokens because they:
Spawn multiple parallel agents
Analyse entire codebases
Generate detailed reports
Implement complex changes
Use these when you REALLY need them - like before a major refactor, performance audit, or framework migration. Not for everyday coding!
Repo: https://github.com/derek-opdee/subagent-example-script
The commands demonstrate how to coordinate multiple AI agents for complex tasks. Each agent has specialized expertise, and they share findings to make better decisions.
Would love to hear if anyone tries these out or has ideas for other multi-agent commands!