Hey,
I wonder how you have made Subagents work for your most effectively yet in Claude Code. I feel like (as always) there have quickly been tons of repos with 50+ Subagents which was kind of similar when RooCode introduced their Custom modes a few months back.
After some first tests people seem to realize that it's not really effective to have just tons of them with some basic instructions and hope they do wonders.
So my question is: What works best for you? What Sub-agents have brought you real improvements so far?
The best things I can currently think of are very project specific. But I'm creating a little Task/Project management system for Claude Code (Simone on Github) and I wonder which more generic agents would work.
Keen to hear what works for you!
Cheers,
Helmi
P.S.: There's also an Issue on Github if you want to chime in there: Link
We've prepared a comprehensive collection of production-ready Claude Code subagents: https://github.com/VoltAgent/awesome-claude-code-subagents
It contains 100+ specialized agents covering the most requested development tasks - frontend, backend, DevOps, AI/ML, code review, debugging, and more. All subagents follow best practices and are maintained by the open-source framework community.
Just copy to .claude/agents/ in your project to start using them.
Videos
You can now create CUSTOM AI AGENTS inside Claude Code that handle specific tasks with their OWN CONTEXT WINDOWS. This is HUGE for anyone building complex projects.
Here's a sub agent I just made that's ALREADY saving me hours - a code refactoring agent that automatically refactor code:
--- name: code-refactoring-specialist description: MUST BE USED for refactoring large files, extracting components, and modularizing codebases. Identifies logical boundaries and splits code intelligently. Use PROACTIVELY when files exceed 500 lines. tools: Read, Edit, Bash, Grep --- You are a refactoring specialist who breaks monoliths into clean modules. When slaying monoliths: 1. Analyze the beast: - Map all functions and their dependencies - Identify logical groupings and boundaries - Find duplicate/similar code patterns - Spot mixed responsibilities 2. Plan the attack: - Design new module structure - Identify shared utilities - Plan interface boundaries - Consider backward compatibility 3. Execute the split: - Extract related functions into modules - Create clean interfaces between modules - Move tests alongside their code - Update all imports 4. Clean up the carnage: - Remove dead code - Consolidate duplicate logic - Add module documentation - Ensure each file has single responsibility Always maintain functionality while improving structure. No behavior changes!
What sub agents are y'all building??? Drop yours below
I was just speaking with a friend two days ago about how awesome agents would be. The MCP tools get sparingly followed/used so I always struggled with them, but agents are baked into the core of Claude Code, and it utilizes them exceptionally well!
I had an idea, to first analyze everything on a project that really annoyed me with Claude continuously, and build an agent around that!
1. Find what annoys the shit out of you
Check all my chat history for this project (/Users/<username>/.claude.json and look for what appears to be commonalities in frustration with claude code, identify a top 10 list of things that keep coming up.
It didn't work unless I told it where my claude json file was (which has chat history)!
2. Enter command "/agents" to create an agent
I actually created one called "quality control" and then pasted the above into it, asking it to create an agent that assess stop-points of code for these key frustrations.
I also made one for "CLAUDE.md checker" which ensures adherence to CLAUDE.md by reading the file and ensuring adherence by recent changes.
3. Add instructions to CLAUDE.md
I used this prompt:
Evaluate the agents available to you and add instructions on usage to CLAUDE.md. Make sure that the end of EVERY to do list is to use the CLAUDE.md checker, and ensure each stop point or new feature utilizes the quality control agent.
...and voila!
I'm just happy I have something Claude Code actually follows now, rather than skips like MCP tools were. I think having the CLAUDE.md checker at the end ensures it is always feeding the rules back into the code, also.
4. BONUS: I added this later on
Sometimes the agents do not run, can you strengthen the rules and also in each agent make sure the final output back instructs to IMPORTANT: USE THIS AGENT AGAIN NEXT TIME.
What have you discovered?
I've put a bunch of time trying to convert my old workflows into subagents. I have a decent structure setup:
Each agent is
[name]-operator.mdEach agent has a script file it uses to do stuff
[name]-manager.md
One agent I have built is one that can operate jira, which keeps the issue management outside the main context thread and handles all the particulars around the org's particular config.
When I had this as a workflow, using the same set of functions available in a shell file, Claude Code did pretty well. It was fairly efficient and worked.
But having ported this to a sub agent, I'm seeing these super long Cooking... times that don't make sense in terms of how the model is running fine on other sessions (no lag).
They're also consuming what seems like vast amounts of tokens to handle these very simple tasks.
On top of this, we have the sub-agent invoking a sub-agent bug which is causing agents to lock indefinitely and even break the terminal session. See this post.
Any feedback here? I'm frustrated by how CC is obscuring the stdout and stderr of agents. I have no idea if it is trying the same stuff or what.
At this time, I'd say people should not yet try to adopt sub agents--they aren't ready for primetime.
Just a heads-up for anyone using Claude Code: it doesnโt automatically spin up subagents. You have to explicitly tell it in your prompt if you want it to use them.
I learned this the hard way. I was expecting multi-agent behavior out of the box, but it turns out itโs fully prompt-driven.
Any practical guides?
I took the course on deeplearning.ai to work with claude more efficiently, since i use it almost everyday now...
When and where do you use hooks and subagents? I would like to knwo some practical use-cases.
Hey claude coders, I keep seeing videos and posts of people adding 10+ subagents to their projects. With all honesty, I am not seeing a great value add. Are they just flexing?
Has anyone actually used subagents for more than 2 days and can confirm it speeds up your dev process? Real talk needed.
If you've been coding since before the Vibe-coding era, you probably already give Claude very specific, architecturally thought-out tasks with links to relevant files and expected types. Plus opening 3-5 terminal windows for different tasks already works great.
Frontend subagent? Claude Code already knows my styling when building on existing projects.
Subagent for backend functions? CC sees how I coded other endpoints and follows the structure
Somebody please convince me to use subagents. What productivity gains am I actually missing here?
Claude Code custom slash command /typescript-checks utilising Claude Code's new subagents https://docs.anthropic.com/en/docs/claude-code/sub-agents ran for nearly 2.5hrs fixing and verifying fixes and pushing ccusage reported 887K tokens/min!
I ended up creating 49 subagents with help of Claude Code converting my existing custom slash command's parallel agents into subagents. I created first two manually via /agents process and then told Claude code to automate the remaining 47 subagents' creation following the template of the first two.
Claude Code subagents in action completed Claude Code slash command using subagentsClaude Code now supports subagents, so I tried something fun.
I set them up using the OODA loop.
(Link to my .md files https://github.com/al3rez/ooda-subagents)
Instead of one agent trying to do everything, I split the work:
-
one to observe
-
one to orient
-
one to decide
-
one to act
Each one has a clear role, and the context stays clean. Feels like a real team.
The OODA loop was made for fighter pilots, but it works surprisingly well for AI workflows too.
Only one issue is that it's slower but more accurate.
Feel free to try it!
I had seen these "tasks" launched before, and I had heard of people talking about sub-agents, but never really put the two together for whatever reason.
I just really learned how to leverage them just a short while ago for a refactoring project for a test Graphrag implementation I am doing in Neo4J, and my god----its amazing!
I probably spun up maybe 40 sub-agents total in this one context window, All with roughly this level of token use that you seen in this picture.
The productivity is absolutely wild.
My mantra is always "plan plan plan, and when you're done planning--do more planning about each part of your plan."
Which is exactly how you get the most out of these sub agents it seems like! PLAN and utilize sub-agents people!
I've seen lots of posts examining running Claude instances in multiagent frameworks to emulate an full dev team and such.
I've read the experiences of people who've found their Claude instances have gone haywire and outright hallucinated or "lied" or outright fabricated that it has done task X or Y or has done code for X and Z.
I believe that we are overlooking an salient and important feature that is being underutilised which is the Claude subagents. Claude's official documentation highlights when we should be invoking subagents (for complex tasks, verifying details or investigating specific problems and reviewing multiple files and documents) + for testing also.
I've observed my context percentage has lasted vastly longer and the results I'm getting much much more better than previous use.
You have to be pretty explicit in the subagent invocation " use subagents for these tasks " ," use subagents for this project" invoke it multiple times in your prompt.
I have also not seen the crazy amount of virtual memory being used anymore either.
I believe the invocation allows Claude to either use data differently locally by more explicitly mapping the links between information or it's either handling the information differently at the back end. Beyond just spawning multiple subagents.
( https://www.anthropic.com/engineering/claude-code-best-practices )
Now you can create your own custom AI agent team.
For example, an agent for planning, one for coding, one for testing/reviewing etc.
Just type /agents to start.
Did anyone try it yet?
Microcompact clears old tool calls to extend your session length, triggering automatically when context grows long. This helps you work longer without needing to run a full /compact command and losing important project context.
You can now @-mention subagents to ensure they get called, and select which model each subagent uses. Choose Opus 4 for complex planning or Haiku 3.5 for lighter tasks.
Claude Code can also now read PDFs directly from your file system.
All features available now. Restart Claude Code to update.
Iโve been experimenting with Claude Code sub-agents and found them really useful โ but thereโs no proper orchestration between them. They work in isolation, which makes it hard to build complex features cleanly.
So I built this:
๐ง awesome-claude-agents โ a full AI development team that works like a real dev shop.
Each agent has a specialty โ backend, frontend, API, ORM, state management, etc. When you say something like:
You donโt just get generic boilerplate. You get:
Tech Lead coordinating the job
Analyst detecting your stack (say Django + React)
Backend/Frontend specialists implementing best practices
API architect mapping endpoints
Docs & Performance agents cleaning things up
๐ฏ Goal: More production-ready results, better code quality, and faster delivery โ all inside Claude.
โ Quick Start:
git clone https://github.com/vijaythecoder/awesome-claude-agents.git cp -r awesome-claude-agents/agents ~/.claude/
Then run the following in your project:
claude "Use team-configurator to set up my AI development team"
Now Claude uses 26 agents in parallel to build your features.
๐ GitHub: https://github.com/vijaythecoder/awesome-claude-agents
Happy to answer questions or take feedback. Looking for early adopters, contributors, and ideas on how to grow this further.
Let me know what you think.
Iโve been experimenting with Claude Code sub-agents and found them really useful โ but thereโs no proper orchestration between them. They work in isolation, which makes it hard to build complex features cleanly.
So I built this:
๐ง awesome-claude-agents โ a full AI development team that works like a real dev shop.
Each agent has a specialty โ backend, frontend, API, ORM, state management, etc. When you say something like:
You donโt just get generic boilerplate. You get:
Tech Lead coordinating the job
Analyst detecting your stack (say Django + React)
Backend/Frontend specialists implementing best practices
API architect mapping endpoints
Docs & Performance agents cleaning things up
๐ฏ Goal: More production-ready results, better code quality, and faster delivery โ all inside Claude.
โ Quick Start:
git clone https://github.com/vijaythecoder/awesome-claude-agents.git cp -r awesome-claude-agents/agents ~/.claude/
Then run the following in your project:
claude "Use team-configurator to set up my AI development team"
Now Claude uses 26 agents in parallel to build your features.
๐ GitHub: https://github.com/vijaythecoder/awesome-claude-agents
Happy to answer questions or take feedback. Looking for early adopters, contributors, and ideas on how to grow this further.
Let me know what you think.
massive user of claude code, almost all day as a senior developer.
anyone using the sub agents and really benefitting would love to know how you are utilising them and how they are benefitting you.
TIA
I created a set of agents for Claude that automatically delegate
tasks between different AI models based on what you're trying to do.
The interesting part: you can access GPT-5 for free through Cursor's integration. When you use these agents, Claude
automatically routes requests to Cursor Agent (which has GPT-5) or Gemini based on the task scope.
How it works:
- Large codebase analysis โ Routes to Gemini (2M token context)
- Focused debugging/development โ Routes to GPT-5 via Cursor
- Everything gets reviewed by Claude before implementation
I made two versions:
- Soft mode: External AI only analyzes, Claude implements all code changes (safe for production)
- Hard mode: External AI can directly modify your codebase (for experiments/prototypes)
Example usage:
u/gemini-gpt-hybrid analyze my authentication system and fix the security issues
This will use Gemini to analyze your entire auth flow, GPT-5 to generate fixes for specific files, and Claude to implement the
changes safely.
Github: https://github.com/NEWBIE0413/gemini-gpt-hybrid