We've prepared a comprehensive collection of production-ready Claude Code subagents: https://github.com/VoltAgent/awesome-claude-code-subagents
It contains 100+ specialized agents covering the most requested development tasks - frontend, backend, DevOps, AI/ML, code review, debugging, and more. All subagents follow best practices and are maintained by the open-source framework community.
Just copy to .claude/agents/ in your project to start using them.
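For anyone who hasn't done this before, a minimal sketch of that copy step (the source path inside the repo is a placeholder - check the repo's README for its real layout):

```bash
git clone https://github.com/VoltAgent/awesome-claude-code-subagents
mkdir -p .claude/agents
# <category>/<agent>.md is a placeholder for whichever agent files you want
cp awesome-claude-code-subagents/<category>/<agent>.md .claude/agents/
```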
I've seen people ask about agents. You can ask Claude what agents would be useful; that's how I built out mine. I would have it go through a workflow, then when it was done I'd ask what dedicated agents would help. It ended up suggesting the agents below, and they do get triggered in most cases now that I've set this up. I have had the same process going for a while now and it's been doing things the way I like for the most part, and the context in the main instance barely climbs. I do have concerns when it wants to run them all in parallel sometimes, but so far I haven't hit a limit on the 5x plan yet. I like the idea of organically growing and adjusting the agents, with Claude itself reviewing them, as I think that increases the chance of it actually using them.
Example Workflow:
User: "Add user authentication to FocusFlow"
Orchestration:
api-design-specialist: Design auth endpoints with proper versioning
database-specialist: Design user schema with security best practices
security-guardian: Implement JWT token handling and validation
frontend-specialist: Build login/register UI with accessibility
devops-automation-specialist: Add auth to CI/CD pipeline testing
test-strategist: Create comprehensive auth test suite
code-review-qa: Review complete authentication implementation
This gives you precise control over each aspect while maintaining the orchestration model that's working so well in your current setup.
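For reference, each of those specialists is just a Markdown file with YAML frontmatter under .claude/agents/. A minimal sketch of what security-guardian might contain (illustrative, not the exact definition from my setup):

```bash
mkdir -p .claude/agents
cat > .claude/agents/security-guardian.md <<'EOF'
---
name: security-guardian
description: Use for JWT token handling, validation, and other auth-related security work.
tools: Read, Edit, Grep
---

You are a security specialist. Implement and review authentication code with
short-lived tokens, server-side validation, and no secrets in client code.
EOF
```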
I was just speaking with a friend two days ago about how awesome agents would be. The MCP tools get sparingly followed/used so I always struggled with them, but agents are baked into the core of Claude Code, and it utilizes them exceptionally well!
I had an idea, to first analyze everything on a project that really annoyed me with Claude continuously, and build an agent around that!
1. Find what annoys the shit out of you
Check all my chat history for this project (/Users/<username>/.claude.json) and look for what appear to be commonalities in frustration with Claude Code; identify a top-10 list of things that keep coming up.
It didn't work until I told it where my claude.json file was (which holds the chat history)!
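Before handing that prompt to Claude, you can sanity-check the file yourself with a crude keyword tally (the pattern below is illustrative - tune it to your own pet peeves):

```bash
# count how often frustration-adjacent words show up in the history file
grep -ioE '(frustrat|annoy|wrong|again|stop)[a-z]*' ~/.claude.json | sort | uniq -c | sort -rn | head
```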
2. Enter command "/agents" to create an agent
I actually created one called "quality control" and then pasted the above into it, asking it to create an agent that assesses stop-points in the code for these key frustrations.
I also made one called "CLAUDE.md checker," which reads CLAUDE.md and verifies that recent changes adhere to it.
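A minimal sketch of what that checker can look like as an agent file (wording illustrative, not my exact file):

```bash
cat > .claude/agents/claude-md-checker.md <<'EOF'
---
name: claude-md-checker
description: Use PROACTIVELY at the end of every todo list to verify recent changes still follow CLAUDE.md.
tools: Read, Grep, Bash
---

Read CLAUDE.md, inspect the most recent changes (e.g. via git diff), and report
any rule the changes violate, with file and line references.
EOF
```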
3. Add instructions to CLAUDE.md
I used this prompt:
Evaluate the agents available to you and add instructions on usage to CLAUDE.md. Make sure that the end of EVERY to do list is to use the CLAUDE.md checker, and ensure each stop point or new feature utilizes the quality control agent.
...and voila!
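For reference, the section Claude adds to CLAUDE.md ends up looking roughly like this (wording illustrative):

```bash
cat >> CLAUDE.md <<'EOF'

## Agent usage
- End EVERY todo list by running the claude-md-checker agent.
- At each stop point or new feature, run the quality-control agent.
EOF
```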
I'm just happy I have something Claude Code actually follows now, rather than skips like MCP tools were. I think having the CLAUDE.md checker at the end ensures it is always feeding the rules back into the code, also.
4. BONUS: I added this later on
Sometimes the agents do not run, can you strengthen the rules and also in each agent make sure the final output back instructs to IMPORTANT: USE THIS AGENT AGAIN NEXT TIME.
What have you discovered?
Ready to transform Claude Code from a smart generalist into a powerhouse team of AI specialists?
I'm thrilled to share Claude Code Subagents, a collection of 35 specialized AI agents designed to supercharge your development workflows.
Instead of a single AI, imagine an orchestrated team of experts automatically delegated to tasks based on context. This collection extends Claude's capabilities across the entire software development lifecycle.
Key Features:
Intelligent Auto-Delegation: Claude automatically selects the right agent for the job.
Deep Domain Expertise: 35 agents specializing in everything from backend-architecture and security-auditing to react-pro and devops-incident-responder.
Seamless Orchestration: Agents collaborate on complex tasks, like building a feature from architecture design to security review and testing.
Built-in Quality Gates: Leverage agents like code-reviewer and qa-expert to ensure quality and robustness.
Whether you're designing a RESTful API, optimizing a database, debugging a production incident, or refactoring legacy code, there's a specialist agent ready to help.
Check out the full collection of 35 agents on GitHub! I'd appreciate a star if you find it useful, and contributions are always welcome.
GitHub Repo: https://github.com/lst97/claude-code-sub-agents
I've tried all these coding agents. I've been using Cursor since day one, and at this point, I've just locked into Claude Code $200 Max plan. I tried the Roo Code/Cline hype but was spending like $100 a day, so it wasn't sustainable. Although, I know you can get free Gemini credits now. I also have an Augment Code subscription, but I don't use it much. I'm keeping it because it's the grandfathered $30 a month plan. Besides that, I still run Cursor as my IDE because I still think Cursor Tab is good and it's basically free, so I use it. But yeah, I feel like most of these tools will die, and Claude Code will be the de facto tool for professionals.
I've been experimenting with Claude Code sub-agents and found them really useful, but there's no proper orchestration between them. They work in isolation, which makes it hard to build complex features cleanly.
So I built this:
awesome-claude-agents: a full AI development team that works like a real dev shop.
Each agent has a specialty: backend, frontend, API, ORM, state management, etc. When you give it a feature request, you don't just get generic boilerplate. You get:
Tech Lead coordinating the job
Analyst detecting your stack (say Django + React)
Backend/Frontend specialists implementing best practices
API architect mapping endpoints
Docs & Performance agents cleaning things up
Goal: more production-ready results, better code quality, and faster delivery, all inside Claude.
Quick Start:
```bash
git clone https://github.com/vijaythecoder/awesome-claude-agents.git
cp -r awesome-claude-agents/agents ~/.claude/
```
Then run the following in your project:
claude "Use team-configurator to set up my AI development team"
Now Claude uses 26 agents in parallel to build your features.
GitHub: https://github.com/vijaythecoder/awesome-claude-agents
Happy to answer questions or take feedback. Looking for early adopters, contributors, and ideas on how to grow this further.
Let me know what you think.
Hi,
could someone explain how the /agents mode in Claude Code actually works? I'm wondering if it's more of a step-by-step coding sandbox, or closer to autonomous agents handling tasks. What are your experiences with using this mode?
Hey
So I've been going down the Claude Code rabbit hole (yeah, I've been seeing the ones shouting out to Gemini, but with proper workflow and prompts, Claude Code works for me, at least so far), and apparently, everyone and their mom has built a "framework" for it. Found these four that keep popping up:
SuperClaude
BMAD
Claude Flow
Awesome Claude
Some are just persona configs, others throw in the whole kitchen sink with MCP templates and memory structures. Cool.
The real kicker is Anthropic just dropped sub-agents, which basically makes the whole /command thing obsolete. Sub-agents get their own context window, so your main agent doesn't get clogged with random crap. It obviously has downsides, but whatever.
Current state of sub-agent PRs:
SuperClaude: crickets
BMAD: PR #359
Claude Flow: Issue #461
Awesome Claude: PR #72
So... which one do you actually use? Not "I starred it on GitHub and forgot about it" but like, actually use for real work?
At first, I thought Sonnet and Opus 4 would only be like 3.8 since their benchmark scores are meh. But since I bought a Claude Max subscription, I got to try their code agent Claude Code. I'm genuinely shocked by how good it is after some days of use. It really gives me the vibe of the first GPT-4: it's like an actual coworker instead of an advanced autocomplete machine.
The Opus 4 in Claude Code knows how to handle medium-sized jobs really well. For example, if I ask Cursor to add a neural network pipeline from a git repo, it will first search, then clone the repo, write code and run.
And boom - missing dependencies, failed GPU config, wrong paths, reinventing wheels, mock data, and my code is a mess.
But Opus 4 in Claude Code nails it just like an engineer would. It first reviews its memory about my codebase, then fetches the repo to a temporary dir, reads the readme, checks if dependencies exist and GPU versions match, and maintains a todo list. It then looks into the repo's main script to properly set up a script that invokes the function correctly.
Even when I interrupted it midway to tell it to use uv instead of conda, it removed the previous setup and switched to uv while keeping everything working. Wow.
I really think Anthropic nailed it and Opus 4 is a huge jump that's totally underrated by this sub.
Hey everyone, I've been following all the sub-agent discussions here lately and wanted to share something I built to solve my own frustration.
Like many of you, I kept hitting the same wall: my agent would solve a bug perfectly on Tuesday, then act like it had never seen it before on Thursday. The irony? Claude saves every conversation in ~/.claude/projects - 10,165 sessions in my case - but never uses them. Claude.md and reminders were of no help.
So I built a sub-agent that actually reads them.
How it works:
A dedicated memory sub-agent (Reflection agent) searches your past Claude conversations
Uses semantic search with 90-day half-life decay (fresh bugs stay relevant, old patterns fade)
Surfaces previous solutions and feeds them to your main agent
Currently hitting 66.1% search accuracy across my 24 projects
The "aha" moment: I was comparing mem0, zep, and GraphRAG for weeks, building elaborate memory architectures. Meanwhile, the solution was literally sitting in my filesystem. The sub-agent found it while I was still designing the question.
Why I think this matters for the sub-agent discussion: Instead of one agent trying to hold everything in context (and getting dumber as it fills), you get specialized agents: one codes, one remembers. They each do one thing well.
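For intuition, the 90-day half-life mentioned above is just exponential down-weighting by age (my reading of the description, not necessarily the project's exact code): weight = similarity * 0.5^(age_days / 90). A match from 90 days ago counts half as much as a fresh one; one from 180 days ago, a quarter.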
Looking for feedback on:
Is 66.1% accuracy good enough to be useful for others?
What's your tolerance for the 100ms search overhead?
Any edge cases I should handle better?
It's a Python MCP server with a 5-minute setup: npm install claude-self-reflect
GitHub: https://github.com/ramakay/claude-self-reflect
Not trying to oversell this - it's basically a sub-agent that searches JSONL files. But it turned my goldfish into something that actually learns from its mistakes. Would love to know if it helps anyone else and most importantly, should we keep working on memory decay - struggling with Qdrant's functions
Update: Thanks to GabrielGrin and u/Responsible-Tip4981! You caught exactly the pain points I needed to fix.
What's Fixed in v2.3.0:
- Docker detection - setup now checks if Docker is running before proceeding
- Auto-creates logs directory and handles all Python dependencies
- Clear import instructions with real-time progress monitoring
- One-command setup: npx claude-self-reflect handles everything
- Fixed critical bug where imported conversations weren't searchable
Key Improvements:
- Setup wizard now shows live import progress with conversation counts
- Automatically installs and manages the file watcher
- Lowered similarity threshold from 0.7 to 0.3 (was filtering too aggressively)
- Standardized on voyage-3-large embeddings (handles 281MB+ files)
Privacy First: Unlike cloud alternatives, this runs 100% offline. Your conversations never leave your machine - just Docker + local Qdrant.
The "5-minute setup" claim is now actually true. Just tested on a fresh machine:
1. Get a Voyage AI key (you can switch providers later or fall back to local embeddings; they currently offer 200M free tokens - no connection with them, an article just pointed me to them).
2. Run:
```bash
npm install -g claude-self-reflect
claude-self-reflect setup
```
The 66.1% accuracy I mentioned is the embedding model's benchmark, not real-world performance. In practice, I'm seeing much better results with the threshold adjustments.
Thanks again for the thorough testing - this is exactly the feedback that makes open source work!
Update 2: please update to v2.3.7 - Local Embeddings & Enhanced Privacy.
I am humbled by the activity and feedback around a project that started as a way to improve my personal CC workflow!
Based on community feedback about privacy, I've released v2.3.7 with a major enhancement:
New: Local Embeddings by Default
Now uses FastEmbed (all-MiniLM-L6-v2) for 100% offline operation
Zero API calls, zero external dependencies
Your conversations never leave your machine
Same reflection specialist sub-agent, same search accuracy
Cloud Option Still Available:
If you prefer Voyage AI's superior embeddings (what I personally use), just set VOYAGE_KEY. Cloud mode gives better semantic matching for complex queries.
Both modes work identically with the reflection sub-agent.
Cleaner Codebase:
Removed old TypeScript prototype and test files from the repo
Added CI/CD security scanning for ongoing code quality
Streamlined to just the essential Python MCP server
For existing users: Just run git pull && npm install. Your existing setup continues working exactly as before.
The local-first approach means you can try it without any API keys. If you find the search quality needs improvement for your use case, switching to cloud embeddings is just one environment variable away.
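Per the above, switching really is one variable (the value below is a placeholder):

```bash
export VOYAGE_KEY=your-voyage-api-key   # unset it to stay fully local
```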
Still solving that same problem - Claude forgetting Tuesday's bug fix by Thursday - but now with complete privacy by default.
Hey everyone, hoping someone can clear this up for me.
I keep seeing "agents" mentioned everywhere, but I don't really get the practical advantage over just using Claude Code directly.
I know there's documentation, but I'm not looking for the polished marketing examples. I want to hear some real-world use cases. What's a messy, real problem you solved with an agent that you couldn't have easily done with just a good prompt in a single Claude Code instance?
What's the "aha!" moment that made agents click for you?
Hey,
I wonder how you have made subagents work most effectively for you in Claude Code so far. I feel like (as always) there have quickly been tons of repos with 50+ subagents, which is similar to when RooCode introduced their Custom Modes a few months back.
After some first tests, people seem to realize that it's not really effective to just have tons of them with basic instructions and hope they do wonders.
So my question is: What works best for you? What Sub-agents have brought you real improvements so far?
The best things I can currently think of are very project specific. But I'm creating a little Task/Project management system for Claude Code (Simone on Github) and I wonder which more generic agents would work.
Keen to hear what works for you!
Cheers,
Helmi
P.S.: There's also an Issue on Github if you want to chime in there: Link
Created a collection of 12 specialized agents for Claude Code CLI that I wanted to share with the community. These are curated from industry-leading AI code generation tools and optimized specifically for Claude Code's new /agent support. Context was taken from https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools for system prompts used by other platforms for agentic development with LLMs.
**Agents included:**
- Backend Specialist - API development, database design, server architecture
- Frontend Specialist - UI/UX implementation, React optimization, responsive design
- DevOps Engineer - CI/CD pipelines, infrastructure automation, cloud platforms
- Security Engineer - Security architecture, vulnerability assessment, compliance
- Enterprise CTO - Strategic technology leadership, enterprise architecture
- Engineering Manager - Team leadership, performance optimization
- Software Architect - System design, technical standards, design patterns
- QA Engineer - Test strategy, automation, quality assurance processes
- Product Owner - Requirements gathering, feature prioritization, stakeholder communication
- Project Manager - Project planning, resource coordination, timeline management
- Senior Fullstack Developer - Complex feature implementation, cross-stack integration
- Technical Writer - Documentation, API specs, knowledge management
**Installation:**
```bash
git clone https://github.com/irenicj/claude-user-memory
cd claude-user-memory
mkdir -p ~/.claude/agents   # ensure the target directory exists
cp agents/* ~/.claude/agents/
```
Anyone else building specialized agent collections? Would love to see what roles the community finds most valuable!
You can now create CUSTOM AI AGENTS inside Claude Code that handle specific tasks with their OWN CONTEXT WINDOWS. This is HUGE for anyone building complex projects.
Here's a sub agent I just made that's ALREADY saving me hours - a code refactoring agent that automatically refactors code:
```markdown
---
name: code-refactoring-specialist
description: MUST BE USED for refactoring large files, extracting components, and modularizing codebases. Identifies logical boundaries and splits code intelligently. Use PROACTIVELY when files exceed 500 lines.
tools: Read, Edit, Bash, Grep
---

You are a refactoring specialist who breaks monoliths into clean modules.

When slaying monoliths:

1. Analyze the beast:
   - Map all functions and their dependencies
   - Identify logical groupings and boundaries
   - Find duplicate/similar code patterns
   - Spot mixed responsibilities
2. Plan the attack:
   - Design new module structure
   - Identify shared utilities
   - Plan interface boundaries
   - Consider backward compatibility
3. Execute the split:
   - Extract related functions into modules
   - Create clean interfaces between modules
   - Move tests alongside their code
   - Update all imports
4. Clean up the carnage:
   - Remove dead code
   - Consolidate duplicate logic
   - Add module documentation
   - Ensure each file has single responsibility

Always maintain functionality while improving structure. No behavior changes!
```
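If you're new to this: agent definitions like the one above are plain Markdown files that Claude Code picks up from an agents directory. A project-level sketch:

```bash
mkdir -p .claude/agents
"$EDITOR" .claude/agents/code-refactoring-specialist.md   # paste the definition above
```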
What sub agents are y'all building??? Drop yours below
Claude Code just feels different. It's the only setup where the best coding model and the product are tightly integrated. "Taste" is thrown around a lot these days, but the UX here genuinely earns it: minimalist, surfaces just the right information at the right time, never overwhelms you.
Cursor can't match it because its harness bends around wildly different models, so even the same model doesn't perform as well there.
Gemini 3 Pro overthinks everything, and Gemini CLI is just a worse product. I'd bet far fewer Google engineers use it compared to Anthropic employees "antfooding" Claude Code.
Codex (GPT-5.1 Codex Max) is a powerful sledgehammer and amazing value at $20, but too slow for real agentic loops where you need quick tool calls and tight back-and-forth. In my experience, it also gets stuck more often.
Claude Code with Opus 4.5 is the premium developer experience right now. As the makers of CC put it in this interview, you can tell it's built by people who use it every day and are laser focused on winning the "premium" developer market.
I haven't tried Opencode or Factory Droid yet though. Anyone else try them and prefer them to CC?
Now you can create your own custom AI agent team.
For example, an agent for planning, one for coding, one for testing/reviewing etc.
Just type /agents to start.
Did anyone try it yet?
Claude Code on the Max plan is honestly one of the coolest things I have used. I'm a fan of both it and Codex. Together my bill is $400, but in the last 3 weeks I made 1,000 commits and built some complex things.
I attached one of the things I'm building using Claude: a Rust-based, AI-native IDE.
Anyway, here is my guide to getting value out of these agents:
1. Plan, plan, plan, and if you think you've planned enough, plan more. Create a concrete PRD for what you want to accomplish. Any thinking model can help here.
2. Once the plan is done, split it into mini surgical tasks: fixed scope, known outcome. Whenever I break this rule, things go bad.
3. Do everything in an isolated fashion: git worktrees, custom Docker containers, whatever fits your medium (see the worktree sketch after this list).
4. Ensure you vibe a robust CI/CD setup; ideally your plan requires tests to be written and plans them out.
5. Create PRs and review them using tools like CodeRabbit and the many other tools out there.
6. Have a Claude agent handle merging and resolving conflicts for all your surgical PRs; they should usually be easy to handle.
7. Troubleshoot any potentially missed errors.
8. Repeat step 1.
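For step 3, a minimal git worktree sketch (branch and path names are illustrative):

```bash
# one isolated checkout per surgical task, so parallel Claude sessions never collide
git worktree add ../task-auth-endpoints -b task/auth-endpoints
git worktree add ../task-auth-ui -b task/auth-ui
# clean up once the PR is merged
git worktree remove ../task-auth-endpoints
```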
What's still missing from my workflow is tightly coupled E2E tests that run for each and every PR. Using this method I hit 1,000 commits and felt the most accomplished I have in months. Really concrete results and successful projects.
I'm just starting to experiment with subagents. I've been through some of Anthropic's docs on how to best use subagents and have created agents for my projects. Despite creating subagents that I think can take advantage of having their own context, and that ask Claude to operate differently from its default behavior (mostly by checking over its high-level and low-level work), I'm not getting much additional benefit out of subagents.
Experienced devs, what subagent structure has worked for you and when do you turn to subagents vs. default Claude Code?
Skills introduced by Anthropic have been getting a lot of traction from Claude users. Within a week of its release, the official repo has over 13k stars and a whole lot of community-built Skills popping up every day. And I really think it has great potential for building efficient agents.
The skills are not particularly an engineering breakthrough; they are Markdown files with custom instructions, bundled with additional scripts. But it's very smart and intuitive for both agents and humans using it. It's reusable and portable.
A standard skill structure contains:
- YAML frontmatter: the name and description of the skill, under 100 tokens, pre-loaded into the LLM context window.
- SKILL.md: the main instructions for the skill, around 5k tokens.
- Resources/bundled files: optional; code scripts, tool-execution descriptions, or subtask files for when SKILL.md grows too big. Effectively unlimited tokens.
How does it work?
Only the YAML frontmatter is loaded into the context window, which is barely a few hundred tokens - pretty token-efficient.
The agent, given the task context, invokes the skill and then reads the bundled files, where you can reference specific code scripts or MCP tools to execute. Ideally, you can make this more efficient by listing only the MCP tools your tasks actually need.
A personal assistant agent could have skills like:
- Event management: fetching emails and calendar events, and scheduling meetings (a minimal sketch follows below).
- Meeting prep: collecting past MoMs from Notion, Drive, or Fireflies, researching attendees, and making slides or docs based on them.
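A minimal sketch of the first one as a skill file, assuming the standard SKILL.md layout (names and steps are illustrative):

```bash
mkdir -p skills/event-management
cat > skills/event-management/SKILL.md <<'EOF'
---
name: event-management
description: Fetch emails and calendar events, and schedule meetings.
---

# Event management

1. Use the calendar tool to list events for the requested range.
2. Cross-check against unread emails for new invitations.
3. Draft the meeting invite and confirm with the user before sending.
EOF
```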
You can use the same skills in Claude, Codex CLI, or with your own custom agent. I am pretty bullish on Skills abstraction; it's simple, cross-platform compatible, and composable. It loads skills when needed, so it doesn't hog context space. Certainly a better way to think about agent workflows.
I would love to know what you think about LLM Skills and whether you have used any that have been particularly helpful to you.
Claude Code now supports subagents, so I tried something fun.
I set them up using the OODA loop.
(Link to my .md files https://github.com/al3rez/ooda-subagents)
Instead of one agent trying to do everything, I split the work:
- one to observe
- one to orient
- one to decide
- one to act
Each one has a clear role, and the context stays clean. Feels like a real team.
The OODA loop was made for fighter pilots, but it works surprisingly well for AI workflows too.
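For the curious, a minimal sketch of what one of these role files might look like (illustrative and simplified, not the repo's actual definitions):

```bash
cat > .claude/agents/observe.md <<'EOF'
---
name: observe
description: First stage of the OODA loop. Use to gather raw facts only - files, logs, failing output. Never propose changes.
tools: Read, Grep, Bash
---

Collect observations relevant to the task: files involved, error messages, test
results. Report findings without interpretation; the orient agent handles that.
EOF
```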
The only issue is that it's slower - but more accurate.
Feel free to try it!