Agents have their own context, independent of the main thread. They can go off and ingest/process large amounts of tokens to research topics, then return only the distilled answer back to the main thread. This reduces the need for compaction, which diminishes the efficiency of the main thread and leads to more errors. There is also research showing that the fuller the context gets, the worse the model performs overall. Answer from Nice_Visit4454 on reddit.com
r/ClaudeAI on Reddit: We prepared a collection of Claude code subagents for production-ready workflows.
August 5, 2025 -

We've prepared a comprehensive collection of production-ready Claude Code subagents: https://github.com/VoltAgent/awesome-claude-code-subagents

It contains 100+ specialized agents covering the most requested development tasks - frontend, backend, DevOps, AI/ML, code review, debugging, and more. All subagents follow best practices and are maintained by the open-source framework community.

Just copy to .claude/agents/ in your project to start using them.

r/ClaudeAI on Reddit: ELI5: What's the actual point of using Agents with Claude?
July 26, 2025 -

Hey everyone, hoping someone can clear this up for me.

I keep seeing "agents" mentioned everywhere, but I don't really get the practical advantage over just using Claude Code directly.

I know there's documentation, but I'm not looking for the polished marketing examples. I want to hear some real-world use cases. What's a messy, real problem you solved with an agent that you couldn't have easily done with just a good prompt in a single Claude Code instance?

What's the "aha!" moment that made agents click for you?

r/ClaudeCode on Reddit: Claude code /agents mode - could someone explain how it works and what are your experiences?
August 19, 2025 -

Hi,
could someone explain how the /agents mode in Claude Code actually works? I’m wondering if it’s more of a coding sandbox step-by-step, or closer to autonomous agents handling tasks. What are your experiences with using this mode?

Top answer
1 of 2
6
The primary purpose of agents is to save token space in your main context by offloading work and only retaining the relevant output in the context, rather than the entire conversation history. They also allow for parallelization and choosing specific models for specific tasks, but the biggest thing is the context encapsulation.

The key things to know about agents:

- Agents only receive the prompt that the outer context chooses to give them. They do NOT receive the full conversation prior to the agent being called. This means their responsibility needs to be well-defined and the main context should have a clear understanding of what to give them. This is especially relevant for agents that you want to write code for you, as code often requires a lot of context to write correctly. I personally only have agents write very simple code, or I make sure they can get all the context they need by passing in pre-prepared plans in markdown files.
- The main context only sees the last message of the agent in its context stream. This is usually the final summary report from its task, but sometimes it can do things like update a TodoWrite as a final step, and this messes up what the outer context sees.

My recommendations, based on my experience working with them:

- Agents are _great_ for data gathering and consolidation (i.e. read-only tasks). I have a standard agent I use any time I want to gather context for a complicated task, and this has helped a lot with removing all the codebase exploration from the working context of the main Claude.
- Agents are also great for wrapping tool calls that generate a lot of output, like builds and unit tests. I have a standard build-test-engineer whose only job is to run build/test, then consolidate the output to just what's relevant to the main Claude. I've found this has substantially improved performance during extended debugging of its own changes, as it keeps the actual work closer in context so that it doesn't get stuck trying to hack around bugs without a good memory of why it's trying to do that in the first place.
- To get the best results, use slash commands to automate requesting it to explicitly use specific agents. It's not always great at deciding to use agents on its own.
- I also include some explicit instructions for agent use cases in the `CLAUDE.md`. So far I've gotten Claude to reliably use `build-test-engineer`, and I also have it using `batch-editor` reasonably often (this is my agent for applying simple edits across a bunch of files, like for refactoring/cleanup tasks).
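The context-encapsulation behavior described above can be modeled with a toy orchestrator. This is purely an illustrative sketch, not Claude Code's actual implementation; `run_subagent` and the message lists are made up for the example:

```python
# Toy model of subagent context isolation (illustrative only):
# the subagent sees just the prompt it is handed, and the main
# thread keeps just the subagent's final message.

def run_subagent(prompt: str) -> str:
    """Simulate a subagent: it builds up a large private transcript
    but returns only its final summary message."""
    private_transcript = [prompt]
    # ... imagine many exploration / tool-call turns here ...
    private_transcript.extend(f"tool output {i}" for i in range(100))
    summary = f"Summary of findings for: {prompt}"
    private_transcript.append(summary)
    return private_transcript[-1]  # main thread sees only this

main_context = ["user: add auth to the app"]
# The main thread delegates a well-scoped prompt, not its whole history:
result = run_subagent("Survey the codebase's existing auth utilities")
main_context.append(result)

# The 100+ intermediate messages never entered the main context.
assert len(main_context) == 2
print(main_context[-1])
```

The point of the sketch is the return statement: everything the subagent accumulated stays in its private transcript, and only the final message crosses back.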
2 of 2
1
Use /agents, then use Claude's recommended method to create an agent. Then @agent and give it a task. It basically spins up a new instance in the background and works through the task with fresh context, orchestrated by Claude.
r/ClaudeCode on Reddit: ask claude what agents you need
July 27, 2025 -

I've seen people ask about agents. You can ask Claude what agents would be useful; it's how I built out mine. I would have it go through a workflow, then when it was done I'd ask what dedicated agents would help. It ended up suggesting the agents below, which do get triggered in most cases since I set this up. I have had the same process going for a while now and it's been doing things the way I like for the most part, and the context in the main instance barely climbs. I do have concerns when it wants to run them all in parallel sometimes, but so far I haven't hit a limit on the 5x plan yet. I like the idea of organically growing and adjusting them, with Claude itself reviewing them, as I think that increases the chance of it using them.

Example Workflow:

User: "Add user authentication to FocusFlow"

Orchestration:

  1. api-design-specialist: Design auth endpoints with proper versioning

  2. database-specialist: Design user schema with security best practices

  3. security-guardian: Implement JWT token handling and validation

  4. frontend-specialist: Build login/register UI with accessibility

  5. devops-automation-specialist: Add auth to CI/CD pipeline testing

  6. test-strategist: Create comprehensive auth test suite

  7. code-review-qa: Review complete authentication implementation

This gives you precise control over each aspect while maintaining the orchestration model that's working so well in your current setup.
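The orchestration above is essentially a fixed pipeline. A minimal sketch of that shape (the agent names come from the post; the dispatch function is hypothetical, standing in for however you invoke each subagent):

```python
# Hypothetical sequential dispatcher for the pipeline above.
pipeline = [
    ("api-design-specialist", "Design auth endpoints with proper versioning"),
    ("database-specialist", "Design user schema with security best practices"),
    ("security-guardian", "Implement JWT token handling and validation"),
    ("frontend-specialist", "Build login/register UI with accessibility"),
    ("devops-automation-specialist", "Add auth to CI/CD pipeline testing"),
    ("test-strategist", "Create comprehensive auth test suite"),
    ("code-review-qa", "Review complete authentication implementation"),
]

def dispatch(agent: str, task: str) -> str:
    # Stand-in for invoking a Claude Code subagent with @agent:
    # each agent receives only its own task string, not the others' work.
    return f"[{agent}] done: {task}"

reports = [dispatch(agent, task) for agent, task in pipeline]
print(len(reports), "steps completed")
```

Note that each step only hands the agent its own task; in practice you would also pass along the artifacts (schema, plan files) produced by earlier steps.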

r/ClaudeCode on Reddit: how to use claude code /agents? Here‘s my suggestion
August 15, 2025 -

After months of daily use, and plenty of trial and error, here are my updated take-aways on Claude Code's subagent mode. (I've seen the recent debates: yes, subagents can balloon usage, but only when used the wrong way.)

Core Principle

Subagents are context-engineering tools, not cost-cutting tools.
  • Total token cost stays the same; it just gets redistributed.
  • The real win is protecting the main agent's long-term project memory.

Best Usage Pattern

✅ Correct: Subagent = Researcher + Planner
  • Ingest huge docs → return a concise summary
  • Analyze a codebase → deliver an implementation plan
  • High compression ratio (many tokens in, few tokens out)

❌ Wrong: Subagent = Executor
  • Writes code directly
  • Main agent loses granular execution details
  • Debugging forces the main agent to re-read everything anyway

Practical Playbook

  1. Pixel-level steps: /agents → Create New Agent → Project scope → Generate with Claude

  2. Role-design rules:
  • Make it a domain expert (React researcher, API designer, etc.)
  • Explicitly forbid actual implementation
  • Use the file system for context hand-offs

  3. Workflow: Main agent writes context file → delegates research to subagent → subagent returns plan → main agent implements

Token Economics

Classic: 15,000 tokens all in the main agent → compression kicks in → project memory lost

Subagent split:
  • Research: 10,000 tokens (isolated context)
  • Hand-off: 500 tokens (main agent)
  • Implementation: 5,000 tokens (main agent)

Result: the main agent uses only 5,500 tokens and keeps full project memory

Key Insights

  4. Don’t expect total cost savings; optimize cost allocation instead.

  5. Compression ratio is king—research belongs to the subagent, implementation to the main agent.

  6. Context > efficiency—in long projects, preserving memory beats one-shot speed.

  7. Markdown docs can log decisions & architecture, but they can’t replace code-level debugging context.

Final Recommendations

Delegate to Subagent: document research, tech investigation, planning, architecture analysis
Keep on Main Agent: code implementation, debugging, business logic, user interaction

Core Philosophy

Let the subagent do the heavy lifting of “digesting information”; let the main agent do the precision work of “creating value.” Link them with a carefully designed hand-off mechanism. This is a nuanced engineering trade-off, not a simple division of labor.
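The token arithmetic in the post can be checked directly. These numbers are the post's own rough estimates, not measurements:

```python
# Token-budget comparison from the post (illustrative estimates).
classic_main = 15_000          # everything lives in the main agent

research  = 10_000             # isolated in the subagent's context
hand_off  = 500                # plan returned to the main agent
implement = 5_000              # main agent writes the code

split_main = hand_off + implement
assert split_main == 5_500                     # main-agent load after the split
assert research + split_main >= classic_main   # total cost is not lower

print(f"main-agent tokens: classic={classic_main}, split={split_main}")
```

This is the core trade: total spend stays roughly the same (here it is even slightly higher), but the main agent's own context shrinks from 15,000 to 5,500 tokens.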

Top answer
1 of 6
6
This is the conclusion I came to. The nature of subagents, not persisting context between calls, makes them very bad candidates for code changes, since it's not uncommon to need to iterate a few times to get to the finished job. Using files for planning and so on is the best pattern for now; it's useful at every level. Right now I'm polishing my workflow, which standardizes the implementation plan: the plan is incrementally completed by subagents and the main agent. The standardized plan includes steps and describes the sequence (supporting parallel tasking, same as CI pipelines). Each step is assigned to a subagent that works to complete its subtask detail and produce the detailed implementation plan. When everything is set up, the work can begin, based on those files. Having commands that verify the integrity and completeness of the structure is also very helpful (think schema validation).
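A minimal sketch of the kind of plan-integrity check this answer describes. The plan structure, field names, and validator here are assumptions for illustration, not the author's actual format:

```python
# Hypothetical plan format: ordered steps, each assigned to a subagent,
# with explicit dependencies (so parallelizable steps are visible).

def validate_plan(plan: dict) -> list[str]:
    """Return a list of problems; an empty list means the plan passes."""
    problems = []
    steps = plan.get("steps", [])
    ids = {s.get("id") for s in steps}
    for s in steps:
        for field in ("id", "agent", "task"):
            if field not in s:
                problems.append(f"step missing '{field}': {s}")
        for dep in s.get("depends_on", []):
            if dep not in ids:
                problems.append(f"step {s.get('id')!r} depends on unknown step {dep!r}")
    return problems

plan = {
    "steps": [
        {"id": "schema", "agent": "db-planner", "task": "draft user schema"},
        {"id": "api", "agent": "api-planner", "task": "draft endpoints",
         "depends_on": ["schema"]},
    ]
}
assert validate_plan(plan) == []
```

Running this kind of check as a slash command before kicking off the work is one way to catch a malformed or incomplete plan file early.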
2 of 6
5
A fellow context-engineering practitioner! I think you've given the community really well-constructed tips here. Most people just keep bashing Anthropic for the LLMs apparently becoming worse, but they never stop to consider a pure Structured Input : Structured Output approach when interacting. Natural language processing literally just means you don't have to write in a programming language; it's still required to maintain good structure and leverage data-engineering specifications. The ability to prompt for a feature, compared to having to TDD using your programming language, is the game changer; but even LLMs require good input if you're expecting good output.
r/ClaudeAI on Reddit: Claude Code Subagents Collection: 35 Specialized AI Agents.
July 29, 2025 -

Ready to transform Claude Code from a smart generalist into a powerhouse team of AI specialists? 🚀

I'm thrilled to share - Claude Code Subagents, a collection of 35 specialized AI agents designed to supercharge your development workflows.

Instead of a single AI, imagine an orchestrated team of experts automatically delegated to tasks based on context. This collection extends Claude's capabilities across the entire software development lifecycle.

Key Features:
🤖 Intelligent Auto-Delegation: Claude automatically selects the right agent for the job.
🔧 Deep Domain Expertise: 35 agents specializing in everything from backend-architecture and security-auditing to react-pro and devops-incident-responder.
🔄 Seamless Orchestration: Agents collaborate on complex tasks, like building a feature from architecture design to security review and testing.
📊 Built-in Quality Gates: Leverage agents like code-reviewer and qa-expert to ensure quality and robustness.

Whether you're designing a RESTful API, optimizing a database, debugging a production incident, or refactoring legacy code, there’s a specialist agent ready to help.

Check out the full collection of 35 agents on GitHub! I'd appreciate a star ⭐ if you find it useful, and contributions are always welcome.

GitHub Repo: https://github.com/lst97/claude-code-sub-agents

r/ClaudeCode on Reddit: Sub Agents are a GAME CHANGER! Here is how I made some that work exceptionally well for me!
July 26, 2025 -

I was just speaking with a friend two days ago about how awesome agents would be. The MCP tools get sparingly followed/used so I always struggled with them, but agents are baked into the core of Claude Code, and it utilizes them exceptionally well!

I had an idea: first analyze everything on a project that really annoyed me with Claude continuously, and build an agent around that!

1. Find what annoys the shit out of you

Check all my chat history for this project (/Users/<username>/.claude.json) and look for what appear to be commonalities in frustration with Claude Code; identify a top-10 list of things that keep coming up.

It didn't work unless I told it where my claude json file was (which has chat history)!
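If you'd rather pre-digest the history yourself, here is a rough sketch of the same idea. The keyword list is invented, and the structure of `.claude.json` is a guess, so the function takes plain exported text rather than parsing the real file:

```python
# Hypothetical: tally recurring frustration phrases in exported chat text.
from collections import Counter

FRUSTRATION_PHRASES = [
    "doesn't work", "still broken", "again", "you ignored", "wrong file",
]

def top_frustrations(history_text: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most frequent frustration phrases found in the text."""
    text = history_text.lower()
    counts = Counter({p: text.count(p) for p in FRUSTRATION_PHRASES})
    return [(p, c) for p, c in counts.most_common(n) if c > 0]

sample = "It still broken. This still broken again. You ignored CLAUDE.md again again."
print(top_frustrations(sample))
```

In practice, asking Claude to do this semantically (as the post does) will catch frustrations that no fixed keyword list would.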

2. Enter command "/agents" to create an agent

I actually created one called "quality control" and then pasted the above into it, asking it to create an agent that assesses stop-points of code for these key frustrations.

I also made one for "CLAUDE.md checker", which ensures adherence to CLAUDE.md by reading the file and checking recent changes against it.

3. Add instructions to CLAUDE.md

I used this prompt:

Evaluate the agents available to you and add instructions on usage to CLAUDE.md. Make sure that the end of EVERY to do list is to use the CLAUDE.md checker, and ensure each stop point or new feature utilizes the quality control agent.

...and voila!

I'm just happy I have something Claude Code actually follows now, rather than skips like MCP tools were. I think having the CLAUDE.md checker at the end ensures it is always feeding the rules back into the code, also.

4. BONUS: I added this later on

Sometimes the agents do not run, can you strengthen the rules and also in each agent make sure the final output back instructs to IMPORTANT: USE THIS AGENT AGAIN NEXT TIME.

What have you discovered?

r/ClaudeAI on Reddit: [Resource] 12 Specialized Professional Agents for Claude Code CLI
July 26, 2025 -

Created a collection of 12 specialized agents for Claude Code CLI that I wanted to share with the community. These are curated from industry-leading AI code generation tools and optimized specifically for Claude Code's new /agent support. Context was taken from https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools for system prompts used by other platforms for agentic development with LLMs.

**Agents included:**

- Backend Specialist - API development, database design, server architecture

- Frontend Specialist - UI/UX implementation, React optimization, responsive design

- DevOps Engineer - CI/CD pipelines, infrastructure automation, cloud platforms

- Security Engineer - Security architecture, vulnerability assessment, compliance

- Enterprise CTO - Strategic technology leadership, enterprise architecture

- Engineering Manager - Team leadership, performance optimization

- Software Architect - System design, technical standards, design patterns

- QA Engineer - Test strategy, automation, quality assurance processes

- Product Owner - Requirements gathering, feature prioritization, stakeholder communication

- Project Manager - Project planning, resource coordination, timeline management

- Senior Fullstack Developer - Complex feature implementation, cross-stack integration

- Technical Writer - Documentation, API specs, knowledge management

**Installation:**

```bash
git clone https://github.com/irenicj/claude-user-memory
cd claude-user-memory
cp agents/* ~/.claude/agents/
```

Anyone else building specialized agent collections? Would love to see what roles the community finds most valuable!

r/ClaudeAI on Reddit: Claude Code sub agents
July 25, 2025 -

You can now create CUSTOM AI AGENTS inside Claude Code that handle specific tasks with their OWN CONTEXT WINDOWS. This is HUGE for anyone building complex projects.

Here's a sub agent I just made that's ALREADY saving me hours - a code refactoring agent that automatically refactors code:

---
name: code-refactoring-specialist
description: MUST BE USED for refactoring large files, extracting components, and modularizing codebases. Identifies logical boundaries and splits code intelligently. Use PROACTIVELY when files exceed 500 lines.
tools: Read, Edit, Bash, Grep
---

You are a refactoring specialist who breaks monoliths into clean modules. When slaying monoliths:

1. Analyze the beast:
   - Map all functions and their dependencies
   - Identify logical groupings and boundaries
   - Find duplicate/similar code patterns
   - Spot mixed responsibilities

2. Plan the attack:
   - Design new module structure
   - Identify shared utilities
   - Plan interface boundaries
   - Consider backward compatibility

3. Execute the split:
   - Extract related functions into modules
   - Create clean interfaces between modules
   - Move tests alongside their code
   - Update all imports

4. Clean up the carnage:
   - Remove dead code
   - Consolidate duplicate logic
   - Add module documentation
   - Ensure each file has single responsibility

Always maintain functionality while improving structure. No behavior changes!

What sub agents are y'all building??? Drop yours below

r/ClaudeAI on Reddit: How I use Claude code or cli agents
April 23, 2025 -

Claude Code on the Max plan is honestly one of the coolest things I have used; I'm a fan of both it and Codex. Together my bill is $400, but in the last 3 weeks I made 1000 commits and built some complex things.

I attached one of the things I'm building using Claude: a Rust-based AI-native IDE.

Anyway, here is my guide to get value out of these agents!

  1. Plan, plan, plan, and if you think you've planned enough, plan more. Create a concrete PRD for what you want to accomplish. Any thinking model can help here.

  2. Once the plan is done, split it into mini surgical tasks: fixed scope, known outcome. Whenever I break this rule, things go bad.

  3. Do everything in an isolated fashion: git worktrees, custom Docker containers, all depending on your medium.

  4. Ensure you vibe a robust CI/CD; ideally your plan requires tests to be written and plans them out.

  5. Create PRs and review them using tools like CodeRabbit, among the many other tools.

  6. Have a Claude agent handle merging and resolving conflicts for all your surgical PRs; these should usually be easy to handle.

  7. Troubleshoot any potentially missed errors.

Step 8: repeat step 1

What’s still missing from my workflow is tightly coupled E2E tests that run for each and every PR. Using this method I hit 1000 commits and felt the most accomplished I have in months. Really concrete results and successful projects.

r/Anthropic on Reddit: Full manual for writing your first Claude Code Agents
July 26, 2025 -

Manual for Writing Your First Claude Code Agents

The short manual:

Step 1: Just ask "I want to build 10 different agents for my code. Study the code and come up with ideas"

Step 2: Claude analyzes your project and suggests agents

Step 3: Ask for 5 more ideas to get even MORE options

Step 4: Pick the best ones and implement

The longer manual:

Instead of trying to think of agents yourself, just let Claude study your entire codebase and come up with ideas. It's like having a senior dev with ADHD hyperfocus on your project for 30 minutes straight.

The Magic Prompt That Started It All

I want to build 10 different agents for my code. Study the code and come up with ideas

That's it. That's the whole thing. No need to overcomplicate it with 47 paragraphs explaining your use case. Claude will:

  • Actually read through your code (unlike your coworkers lol)

  • Understand your architecture

  • Suggest agents that make sense for YOUR specific project

  • Give you practical implementation advice

  • Come up with some terrible ideas. Avoid these.

Step-by-Step Walkthrough

1. Upload Your Code to Claude's Project Knowledge (web)

First, you gotta feed Claude your codebase. Upload your files to a Claude project so it can actually analyze what you're working with.

Pro tip: Don't just upload random files. Upload the core stuff:

  • Main application files

  • Key modules/classes

  • Config files

  • Any existing agent/service patterns

I prefer to do this in Terminal after starting Claude.

2. Drop The Magic Prompt

Just straight up ask:

Claude will go full detective mode on your codebase and come back with thoughtful suggestions.

3. Ask for MORE Ideas (This Is Key!)

After Claude gives you the first 10, immediately ask:

Why? Because the first batch is usually the "obvious" ones. The second batch often has the creative, outside-the-box ideas that end up being game-changers.

4. Name Your Agents Like a Boss

Each agent needs a memorable name. Here's how to do it right:

Bad: DataProcessingAgent Good: DataWranglerAgent or NumberCruncherAgent

Bad: MonitoringAgent Good: WatchdogAgent or SentinelAgent

The name should instantly tell you what it does AND be memorable enough that you don't forget about it in 2 weeks.

Real Example: AI Detection System Agents

Here's what happened when I used this method on an AI detection system. Claude analyzed the code and suggested these absolute bangers:

The Original 10 Agents Claude Suggested:

1. SentinelAgent (Performance Monitoring)

  • What it does: Watches your system like a hawk

  • Why it's fire: Catches bottlenecks before they ruin your day

  • Implementation: Hooks into existing logging, creates dashboards

2. FeedbackWizardAgent (Feedback Analysis)

  • What it does: Makes sense of user feedback patterns

  • Why it's fire: Turns angry user comments into actionable improvements

  • Implementation: Enhances existing training analyzer

3. ImageWranglerAgent (Preprocessing)

  • What it does: Gets images ready for analysis

  • Why it's fire: Clean input = better output, always

  • Implementation: Insert before analyzer pipeline

4. DriftDetectorAgent (Model Drift Detection)

  • What it does: Spots when AI generation techniques evolve

  • Why it's fire: Keeps you ahead of the curve

  • Implementation: Works with code adapter for auto-updates

5. BatchMasterAgent (Batch Processing)

  • What it does: Handles multiple images like a champ

  • Why it's fire: Scales your system without breaking it

  • Implementation: Background job processing

6. ExplainerAgent (Explainability)

  • What it does: Tells users WHY something was detected as AI

  • Why it's fire: Trust = more users = more money

  • Implementation: Enhances LLM analyzer

7. GuardianAgent (Security & Validation)

  • What it does: Keeps malicious content out

  • Why it's fire: Security breaches are expensive

  • Implementation: Security layer before upload processing

8. LearnerAgent (Adaptive Learning)

  • What it does: Learns new patterns automatically

  • Why it's fire: Self-improving system = less manual work

  • Implementation: Unsupervised learning on training system

9. ConnectorAgent (API Integration)

  • What it does: Talks to external services

  • Why it's fire: More data sources = better accuracy

  • Implementation: External data in analysis pipeline

10. ReporterAgent (Analytics & Reporting)

  • What it does: Makes pretty charts and insights

  • Why it's fire: Management loves dashboards

  • Implementation: Business intelligence on training database

Bonus Round: 5 More Ideas When I Asked

11. CacheManagerAgent

  • What it does: Smart caching for repeated analyses

  • Why it's sick: Speed boost + cost savings

12. A/B TestingAgent

  • What it does: Tests different detection strategies

  • Why it's sick: Data-driven improvements

13. NotificationAgent

  • What it does: Alerts when important stuff happens

  • Why it's sick: Stay informed without constant checking

14. BackupAgent

  • What it does: Handles data backup and recovery

  • Why it's sick: Sleep better at night

15. LoadBalancerAgent

  • What it does: Distributes work across resources

  • Why it's sick: Handle traffic spikes like a pro

Pro Tips That Will Save Your Sanity

Naming Convention Tips

  • Use action words: Wrangler, Guardian, Sentinel, Master

  • Make it memorable: If you can't remember the name, pick a better one

  • Keep it short: 2-3 words max

  • Avoid generic terms: "Handler" and "Manager" are boring

Implementation Priority Framework

Ask Claude to classify the 15 or so agent ideas. I use this formula:

Make 3 tiers based on the 15 ideas like:

Tier 1 (Do First): Agents that solve immediate pain points
Tier 2 (Do Soon): Agents that add significant value
Tier 3 (Do Later): Nice-to-have features

Also, I asked Claude Code to retrieve these by just typing #tier1 #tier2 #tier3.

Architecture Best Practices

  • Follow your existing patterns (don't reinvent the wheel)

  • Make agents modular (easy to add/remove)

  • Use dependency injection (easier testing)

  • Add monitoring from day 1

Common Pitfalls to Avoid

  • Don't build everything at once - Start with 1-2 agents; a massive number of agents works better on almost-finished code (well, you thought it was finished)

  • Don't ignore existing code patterns - Claude suggests based on what you have

  • Don't skip the naming step - Good names = better adoption

  • Don't forget error handling - Agents fail, plan for it
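On that last point, a simple retry wrapper is the sort of thing to plan for. A sketch only; `call_agent` is a placeholder for however you actually invoke an agent, and the exception type is whatever your invocation raises:

```python
# Sketch: retry an agent call with backoff, since agents fail.
import time

def call_with_retry(call_agent, task: str, retries: int = 3, delay: float = 1.0):
    """Run call_agent(task), retrying on failure with linear backoff."""
    last_error = None
    for attempt in range(retries):
        try:
            return call_agent(task)
        except RuntimeError as exc:  # whatever failure your agent raises
            last_error = exc
            time.sleep(delay * (attempt + 1))
    raise RuntimeError(f"agent failed after {retries} attempts") from last_error

# Example with a flaky stand-in agent that fails twice then succeeds:
calls = {"n": 0}
def flaky_agent(task):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return f"done: {task}"

print(call_with_retry(flaky_agent, "summarize logs", delay=0.01))
```

The same wrapper shape works whether "failure" means a raised exception, an empty report, or a report that fails validation.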

Claude Reads Your ACTUAL Code

Unlike generic "build an agent" tutorials, Claude looks at:

  • Your specific architecture patterns

  • Existing services and modules

  • Configuration and setup

  • Pain points in your current system

Suggestions Are Contextual

The agents Claude suggests actually make sense for YOUR project, not some theoretical perfect codebase.

Implementation Guidance Included

Claude doesn't just say "build a monitoring agent" - it tells you exactly how to integrate it with your existing systems.

FAQ Section

Q: What if my codebase is trash? A: Claude will still give you agents that work with what you have. It's surprisingly good at working within constraints.

Q: How many agents should I actually build? A: Start with 2-3 that solve real problems. Don't go crazy on day 1.

Q: Can I use this for any programming language? A: Yeah, Claude reads most languages. Python, JavaScript, Go, whatever.

Q: What if I don't like Claude's suggestions? A: Ask for different ones! "Give me more creative ideas" (where you define what you find creative; it often helps to tell it what you find boring in the code) or "Focus on performance agents" works great.

Q: How do I know which agents to build first? A: Pick the ones that solve problems you're having RIGHT NOW. Future problems can wait. Use the tier 1 2 3 method.

Look, building agents is fun but don't get carried away. Start small, prove value, then expand.

Also, Claude's suggestions can be really good but they're not gospel. If something doesn't make sense for your use case, skip it. You know your code better than anyone.

Top answer
1 of 2
1
Building agents is powerful. It shifts how work gets done. This approach applies to GTM too. Automating GTM with agentic AI changes everything. See how our GTM agents work: https://www.fn7.io?utm_source=fn7scout-reddit&utm_term=6621476251_1ma4epq
2 of 2
1
(f.e. the one that makes monolith code meaner and leaner)

{
  "name": "code-refactoring-specialist",
  "description": "MUST BE USED for refactoring large files, extracting components, and modularizing codebases. Identifies logical boundaries and splits code intelligently. Use PROACTIVELY when files exceed 500 lines.",
  "when_to_use": "When files exceed 500 lines, when extracting components, when breaking up monolithic code, when improving code organization",
  "tools": ["Read", "Edit", "Bash", "Grep"],
  "system_prompt": "Role: refactoring specialist who breaks monoliths into clean modules. When slaying monoliths:\n\n1. Analyze:\n - Map all functions and their dependencies\n - Identify logical groupings and boundaries\n - Find duplicate/similar code patterns\n - Spot mixed responsibilities\n\n2. Plan the attack:\n - Design new module structure\n - Identify shared utilities\n - Plan interface boundaries\n - Consider backward compatibility\n\n3. Execute the split:\n - Extract related functions into modules\n - Create clean interfaces between modules\n - Move tests alongside their code\n - Update all imports\n\n4. Clean up the carnage:\n - Remove dead code\n - Consolidate duplicate logic\n - Add module documentation\n - Ensure each file has single responsibility\n\nAlways maintain functionality while improving structure. No behavior changes!"
}
r/ClaudeAI on Reddit: What's your best way to use Sub-agents in Claude Code so far?
July 31, 2025 -

Hey,

I wonder how you have made subagents work most effectively for you so far in Claude Code. I feel like (as always) there have quickly been tons of repos with 50+ subagents, which is kind of similar to when RooCode introduced their Custom Modes a few months back.

After some first tests, people seem to realize that it's not really effective to just have tons of them with some basic instructions and hope they do wonders.

So my question is: What works best for you? What Sub-agents have brought you real improvements so far?

The best things I can currently think of are very project-specific. But I'm creating a little task/project management system for Claude Code (Simone on GitHub) and I wonder which more generic agents would work.

Keen to hear what works for you!

Cheers,
Helmi

P.S.: There's also an Issue on Github if you want to chime in there: Link

r/ClaudeAI on Reddit: How I Built a Multi-Agent Orchestration System with Claude Code Complete Guide (from a nontechnical person don't mind me)
May 31, 2025 -

edit: Anthropic created this /agents feature now. https://docs.anthropic.com/en/docs/claude-code/sub-agents#using-sub-agents-effectively

No more need to DM me please! Thank you :D

Hey everyone! I've been getting a lot of questions about my multi-agent workflow with Claude Code, so I figured I'd share my complete setup. This has been a game-changer for complex projects, especially coming from a non-technical background where coordinated teamwork is everything and helps fill in the gaps for me.

TL;DR

I use 4 Claude Code agents running in separate VSCode terminals, each with specific roles (Architect, Builder, Validator, Scribe). They communicate through a shared planning document and work together like a well-oiled machine. Setup takes 5 minutes, saves hours.

Why Multi-Agent Orchestration?

Working on complex projects with a single AI assistant is like having one engineer handle an entire project: possible, but not optimal. By splitting responsibilities across specialized agents, you get:

  • Parallel development (4x faster progress)

  • Built-in quality checks (different perspectives)

  • Clear separation of concerns

  • Better organization and documentation

The Setup (5 minutes)

Step 1: Prepare Your Memory Files

First, save this template to /memory/multi-agent-template.md and /usermemory/multi-agent-template.md:

```markdown
# Multi-Agent Workflow Template with Claude Code

## Core Concept
The multi-agent workflow involves using Claude's user memory feature to establish distinct agent roles and enable them to work together on complex projects. Each agent operates in its own terminal instance with specific responsibilities and clear communication protocols.

## Four Agent System Overview

### INITIALIZE: Standard Agent Roles

**Agent 1 (Architect): Research & Planning**
- **Role Acknowledgment**: "I am Agent 1 - The Architect responsible for Research & Planning"
- **Primary Tasks**: System exploration, requirements analysis, architecture planning, design documents
- **Tools**: Basic file operations (MCP Filesystem), system commands (Desktop Commander)
- **Focus**: Understanding the big picture and creating the roadmap

**Agent 2 (Builder): Core Implementation**
- **Role Acknowledgment**: "I am Agent 2 - The Builder responsible for Core Implementation"
- **Primary Tasks**: Feature development, main implementation work, core functionality
- **Tools**: File manipulation, code generation, system operations
- **Focus**: Building the actual solution based on the Architect's plans

**Agent 3 (Validator): Testing & Validation**
- **Role Acknowledgment**: "I am Agent 3 - The Validator responsible for Testing & Validation"
- **Primary Tasks**: Writing tests, validation scripts, debugging, quality assurance
- **Tools**: Testing frameworks (like Puppeteer), validation tools
- **Focus**: Ensuring code quality and catching issues early

**Agent 4 (Scribe): Documentation & Refinement**
- **Role Acknowledgment**: "I am Agent 4 - The Scribe responsible for Documentation & Refinement"
- **Primary Tasks**: Documentation creation, code refinement, usage guides, examples
- **Tools**: Documentation generators, file operations
- **Focus**: Making the work understandable and maintainable
```

Step 2: Launch Your Agents

  1. Open VSCode with 4 terminal tabs

  2. In Terminal 1: run `cd /your-project && claude`, then prompt: "You are Agent 1 - The Architect. Create MULTI_AGENT_PLAN.md and initialize the project structure."

  3. In Terminals 2-4: run `cd /your-project && claude`, then prompt: "You are Agent [2/3/4]. Read MULTI_AGENT_PLAN.md to get up to speed."

That's it! Your agents are now ready to collaborate.
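If you'd rather seed the shared plan file before launching the terminals, a small script can write the initial scaffold so all four agents start from the same document. A minimal sketch (the `init_plan` helper and its default fields are illustrative, not part of the original workflow):

```python
from datetime import datetime
from pathlib import Path

def init_plan(path="MULTI_AGENT_PLAN.md", tasks=("Project setup",)):
    """Write an initial MULTI_AGENT_PLAN.md scaffold for the four agents."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    sections = []
    for task in tasks:
        sections.append(
            f"## Task: {task}\n"
            "- **Assigned To**: Unassigned\n"
            "- **Status**: Pending\n"
            f"- **Last Updated**: {stamp} by Architect\n"
        )
    Path(path).write_text("# Multi-Agent Plan\n\n" + "\n".join(sections))
    return path

init_plan(tasks=("Implement User Authentication", "Write Integration Tests"))
```

The Architect can then fill in assignments on its first pass instead of inventing the file format from scratch.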

How They Communicate

The Shared Planning Document

All agents read/write to MULTI_AGENT_PLAN.md:

```markdown
## Task: Implement User Authentication
- **Assigned To**: Builder
- **Status**: In Progress
- **Notes**: Using JWT tokens, coordinate with Validator for test cases
- **Last Updated**: 2024-11-30 14:32 by Architect

## Task: Write Integration Tests
- **Assigned To**: Validator
- **Status**: Pending
- **Dependencies**: Waiting for Builder to complete auth module
- **Last Updated**: 2024-11-30 14:35 by Validator
```
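Because the entries follow a fixed `- **Field**: value` layout, they're easy to inspect mechanically, e.g. to list what's still pending. A rough sketch under that assumption (the parser is a hypothetical helper, not something the workflow requires):

```python
import re

def parse_plan(text):
    """Parse '## Task:' sections of MULTI_AGENT_PLAN.md into dicts."""
    tasks = []
    for block in re.split(r"^## Task: ", text, flags=re.M)[1:]:
        lines = block.strip().splitlines()
        task = {"task": lines[0].strip()}
        for line in lines[1:]:
            m = re.match(r"- \*\*(.+?)\*\*: (.+)", line.strip())
            if m:
                task[m.group(1)] = m.group(2)
        tasks.append(task)
    return tasks

plan = """## Task: Implement User Authentication
- **Assigned To**: Builder
- **Status**: In Progress

## Task: Write Integration Tests
- **Assigned To**: Validator
- **Status**: Pending
"""
pending = [t["task"] for t in parse_plan(plan) if t["Status"] == "Pending"]
print(pending)  # ['Write Integration Tests']
```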

Inter-Agent Messages

When agents need to communicate directly:

```markdown
# Architect Reply to Builder

The authentication flow should follow this pattern:
1. User submits credentials
2. Validate against database
3. Generate JWT token
4. Return token with refresh token

Please implement according to the diagram in /architecture/auth-flow.png

— Architect (14:45)
```
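If you want these messages ordered and timestamped automatically, a tiny helper can append them to a shared log file. A sketch with invented names (`post_message` and `AGENT_MESSAGES.md` are assumptions, not part of the original setup):

```python
from datetime import datetime

def post_message(sender, recipient, body, path="AGENT_MESSAGES.md"):
    """Append a timestamped inter-agent message to a shared log file."""
    stamp = datetime.now().strftime("%H:%M")
    entry = f"# {sender} Reply to {recipient}\n\n{body}\n\n— {sender} ({stamp})\n\n"
    with open(path, "a") as f:
        f.write(entry)
    return entry

post_message("Architect", "Builder", "Use JWT tokens for the auth flow.")
```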

Real-World Example: Building a Health Compliance Checker

Here's how my agents built a supplement-medication interaction checker:

Architect (Agent 1):

  • Researched FDA guidelines and CYP450 pathways

  • Created system architecture diagrams

  • Defined data models for supplements and medications

Builder (Agent 2):

  • Implemented the interaction algorithm

  • Built the API endpoints

  • Created the database schema

Validator (Agent 3):

  • Wrote comprehensive test suites

  • Created edge case scenarios

  • Validated against known interactions

Scribe (Agent 4):

  • Generated API documentation

  • Created user guides

  • Built example implementations

The entire project was completed in 2 days instead of the week it would have taken with a single-agent approach.

Pro Tips

  1. Customize Your Agents: Adjust roles based on your project. For a web app, you might want Frontend, Backend, Database, and DevOps agents.

  2. Use Branch-Per-Agent: Keep work organized with Git branches:

    • agent1/planning

    • agent2/implementation

    • agent3/testing

    • agent4/documentation

  3. Regular Sync Points: Have agents check the planning document every 30 minutes

  4. Clear Boundaries: Define what each agent owns to avoid conflicts

  5. Version Control Everything: Including the MULTI_AGENT_PLAN.md file

Common Issues & Solutions

**Issue**: Agents losing context
**Solution**: Have them re-read MULTI_AGENT_PLAN.md and check recent commits

**Issue**: Conflicting implementations
**Solution**: The Architect agent acts as tie-breaker and design authority

**Issue**: Agents duplicating work
**Solution**: More granular task assignment in the planning document

Why This Works

Coming from healthcare, I've seen how specialized teams outperform generalists in complex scenarios. The same principle applies here:

  • Each agent develops expertise in their domain

  • Parallel processing speeds up development

  • Multiple perspectives catch more issues

  • Clear roles reduce confusion

Getting Started Today

  1. Install Claude Code (if you haven't already)

  2. Copy the template to your memory files

  3. Start with a small project to get comfortable

  4. Scale up as you see the benefits

Questions?

Happy to answer any questions about the setup! This approach has transformed how I build complex systems, and I hope it helps you too.

The key is adapting the agent roles to your needs.

Note: I'm still learning and refining this approach. If you have suggestions or improvements, please share! We're all in this together.

🌐
Reddit
reddit.com › r/claudeai › what coding agent have you settled on?
r/ClaudeAI on Reddit: What coding agent have you settled on?
April 24, 2025 -

I've tried all these coding agents. I've been using Cursor since day one, and at this point, I've just locked into the Claude Code $200 Max plan. I tried the Roo Code/Cline hype but was spending like $100 a day, so it wasn't sustainable, although I know you can get free Gemini credits now. I also have an Augment Code subscription, but I don't use it much; I'm keeping it because it's the grandfathered $30-a-month plan. Besides that, I still run Cursor as my IDE because I still think Cursor Tab is good and it's basically free, so I use it. But yeah, I feel like most of these tools will die, and Claude Code will become the de facto tool for professionals.

🌐
Reddit
reddit.com › r/claudeai › claude code is the best coding agent in the market and it's not close
r/ClaudeAI on Reddit: Claude Code is the best coding agent in the market and it's not close
1 month ago -

Claude Code just feels different. It's the only setup where the best coding model and the product are tightly integrated. "Taste" is thrown around a lot these days, but the UX here genuinely earns it: minimalist, surfaces just the right information at the right time, never overwhelms you.

Cursor can't match it because its harness bends around wildly different models, so even the same model doesn't perform as well there.

Gemini 3 Pro overthinks everything, and Gemini CLI is just a worse product. I'd bet far fewer Google engineers use it compared to Anthropic employees "antfooding" Claude Code.

Codex (GPT-5.1 Codex Max) is a powerful sledgehammer and amazing value at $20, but too slow for real agentic loops where you need quick tool calls and tight back-and-forth. In my experience, it also gets stuck more often.

Claude Code with Opus 4.5 is the premium developer experience right now. As the makers of CC put it in this interview, you can tell it's built by people who use it every day and are laser focused on winning the "premium" developer market.

I haven't tried Opencode or Factory Droid yet, though. Has anyone else tried them and preferred them to CC?

🌐
Reddit
reddit.com › r/claudeai › the claude code divide: those who know vs those who don’t
r/ClaudeAI on Reddit: The Claude Code Divide: Those Who Know vs Those Who Don’t
July 3, 2025 -

I’ve been watching my team use Claude Code for a few months now, and there’s this weird pattern. Two developers with similar experience working on similar tasks, but one consistently ships features in hours while the other is still debugging. At first I thought it was just luck or skill differences. Then I realized what was actually happening: it’s their instruction library.

I’ve been lurking in Discord servers and GitHub repos, and there’s this underground collection of power users sharing CLAUDE.md templates and slash commands; we’ve seen many in this subreddit already. They’re hoarding workflows like trading cards:

  • Commands that automatically debug and fix entire codebases

  • CLAUDE.md files that turn Claude into domain experts for specific frameworks

  • Prompt templates that trigger hidden thinking modes

Meanwhile, most people are still typing “help me fix this bug” and wondering why their results suck. One person mentioned their C++ colleague solved a 4-year-old bug in minutes using a custom debugging workflow. Another has slash commands that turn 45-minute manual processes into 2-minute automated ones.

The people building these instruction libraries aren’t necessarily better programmers - they just understand that Claude Code inherits your bash environment and can leverage complex tools through MCP. It’s like having cheat codes while everyone else plays on hard mode.

As one developer put it: “90% of traditional programming skills are becoming commoditized while the remaining 10% becomes worth 1000x more.” That 10% isn’t coding; it’s knowing how to design distributed systems and how to architect AI workflows. The people building powerful instruction sets today are creating an unfair advantage that compounds over time. Every custom command they write, every CLAUDE.md pattern they discover, widens the productivity gap.

Are we seeing the emergence of a new class of developer? The ones who can orchestrate AI vs. those who just prompt it?

Are you generous enough to share your secret sauce?

Edit: Sorry if I didn’t make myself clear. I wasn’t asking you to share your instructions; my post is more a philosophical question about the future, when CC becomes generally available and the only edge will be secret/powerful instructions.

🌐
Reddit
reddit.com › r/claudeai › learnings from building ai agents with claude agent
r/ClaudeAI on Reddit: Learnings from building AI Agents with Claude Agent
2 days ago -

I started building AI Agents with the library shortly before it got renamed from Claude Code to the Claude Agent SDK.

My thinking was pretty straightforward:

  • The Claude Code agent has been validated by 115K+ developers.

  • Claude Agent is a robust library for building AI Agents that can be repurposed for non-coding tasks.

  • It provides built-in state management, e.g., conversation management and `/compact`.

  • Built-in tools like reading files, web search, and visiting URLs.

  • Easily adapts custom tools via MCP.

  • Other Claude Code utilities, like skills and subagents.

I'm automating 2 kinds of workflows: customer service and back office operations. Example:

  1. Customer reaches out over the website, WhatsApp, email, etc.

  2. The agent answers repetitive questions, like description of services, pricing, special cases, etc.

  3. The agent updates the CRM with whatever information it manages to collect up to that point.

  4. When the customer is ready, the agent shares a payment link.

  5. The customer shares an image/document proof of payment, which the agent parses.

  6. The agent places an order.

This requires connecting the following tools/integrations: knowledge bases, CRM, inventory systems, pricing calculators, Stripe, PayPal, reading files, email, etc.
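The six steps above boil down to a loop: receive a message, pick the applicable tool, call it, record the result. Here is a deliberately simplified sketch of that dispatch; every tool name and the keyword routing are invented for illustration (the real system uses the Claude Agent SDK with MCP tools, where the model itself chooses the tool):

```python
def answer_faq(msg):
    """Stub for a knowledge-base lookup."""
    return "Our pricing starts at..."

def update_crm(msg):
    """Stub for a CRM write."""
    return "CRM updated"

def send_payment_link(msg):
    """Stub for generating a Stripe/PayPal link."""
    return "https://pay.example.com/link"

# Hypothetical routing table: intent keyword -> tool
TOOLS = {
    "pricing": answer_faq,
    "address": update_crm,
    "pay": send_payment_link,
}

def handle(message):
    """Route an incoming customer message to the first matching tool."""
    for keyword, tool in TOOLS.items():
        if keyword in message.lower():
            return tool(message)
    return "Let me connect you with a human."

print(handle("Can I pay now?"))  # prints https://pay.example.com/link
```

In practice the routing is done by the model rather than keywords, but the tool surface (knowledge base, CRM, payments) looks much like this.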

I've been able to build the workflow above using the Claude Agent SDK, but I'm already considering switching away. Here are the reasons:

Pricing

Anthropic's models are the most expensive among the popular options.

Claude Code/Agent is stateful

Anthropic is modeling its agentic architecture around the file system. And while that has proven to work, especially for coding, it goes against the modern practice of stateless/distributed hosting.

Claude Code stores conversations "internally", ie, in the file system. Unless you want to use the file system to persist conversations, you'll likely end up duplicating them.

More annoying is the fact that the agent itself is stateful. Unlike the stateless model APIs, the Claude Agent has to start up and load its resources. So you have to keep it running somewhere or absorb that latency.
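One way around the statefulness described here is to persist conversation turns in your own store and rehydrate them per request. A rough sketch of the pattern, with a plain JSON file standing in for a real database (all names are hypothetical):

```python
import json
from pathlib import Path

STORE = Path("conversations.json")

def load_history(conversation_id):
    """Rehydrate prior turns from external storage, not the agent's file system."""
    if STORE.exists():
        return json.loads(STORE.read_text()).get(conversation_id, [])
    return []

def append_turn(conversation_id, role, content):
    """Persist a single turn so any stateless worker can pick up the thread."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    data.setdefault(conversation_id, []).append({"role": role, "content": content})
    STORE.write_text(json.dumps(data))

append_turn("cust-42", "user", "What are your prices?")
append_turn("cust-42", "assistant", "Here is our price list...")
print(len(load_history("cust-42")))
```

With history externalized like this, each request can be served by any instance, at the cost of re-feeding the turns to the model every time.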

Dependency on Node.js

Claude Agent depends on Claude Code, which is an `npm` library. If you use JavaScript, great. For anything else, you'll adopt that dependency.

Claude Code is really good for coding, but is it the best at anything else?

If I were building a coding agent, I would know that Claude Agent is state of the art. But I'm not. I also use ChatGPT, and I like it for non-coding tasks. While I haven't tested it properly, I just wonder if OpenAI would be better (and cheaper).

The other utilities are not as useful (yet)

I rewrote conversation management and summarization, as well as file reading, web search, and URL fetching. I also don't have a need for skills or subagents (yet). The reason, again, is the coding focus of Claude Agent.

While all of this works great if you're writing code, it's unnecessary overhead for anything else.

The only thing that I absolutely love is how easy it is to add custom tools via MCP. But even then, that is a crazy thin wrapper.

--

There you go. Think twice before using Claude Agent for anything not related to coding.

🌐
Reddit
reddit.com › r/claudeai › continuously impressed by claude code -- sub-agents (tasks) are insane
r/ClaudeAI on Reddit: Continuously impressed by Claude Code -- Sub-agents (Tasks) Are Insane
June 23, 2025 -

I had seen these "tasks" launched before, and I had heard of people talking about sub-agents, but never really put the two together for whatever reason.

I just really learned how to leverage them a short while ago for a refactoring project on a test GraphRAG implementation I am doing in Neo4j, and my god, it's amazing!

I probably spun up maybe 40 sub-agents total in this one context window, all with roughly the level of token use that you see in the picture.

The productivity is absolutely wild.

My mantra is always "plan plan plan, and when you're done planning--do more planning about each part of your plan."

Which is exactly how you get the most out of these sub-agents, it seems! PLAN and utilize sub-agents, people!

🌐
Reddit
reddit.com › r/singularity › claude code is the next-gen agent
r/singularity on Reddit: Claude Code is the next-gen agent
March 5, 2025 -

At first, I thought Sonnet and Opus 4 would only be like 3.8 since their benchmark scores are meh. But since I bought a Claude Max subscription, I got to try their coding agent, Claude Code. I'm genuinely shocked by how good it is after some days of use. It really gives me the vibe of the first GPT-4: it's like an actual coworker instead of an advanced autocomplete machine.

The Opus 4 in Claude Code knows how to handle medium-sized jobs really well. For example, if I ask Cursor to add a neural network pipeline from a git repo, it will first search, then clone the repo, write code and run.

And boom—missing dependencies, failed GPU config, wrong paths, reinventing wheels, mock data, and my code is a mess.

But Opus 4 in Claude Code nails it just like an engineer would. It first reviews its memory of my codebase, then fetches the repo to a temporary dir, reads the README, checks that dependencies exist and GPU versions match, and maintains a todo list. It then looks into the repo's main script to properly set up a script that invokes the function correctly.

Even when I interrupted it midway to tell it to use uv instead of conda, it removed the previous setup and switched to uv while keeping everything working. Wow.

I really think Anthropic nailed it and Opus 4 is a huge jump that's totally underrated by this sub.