What You Will Learn
- How Claude Code Agent Teams work and when to use them
- Concrete use cases for solo developers using Agent Teams
- Real examples of task lists and team configurations
- Differences between regular subagents and Agent Teams, and how to choose
The Wall of Solo Development
In Part 6, I wrote about developing a 220,000-line SaaS application alone using Claude Code. AI functions as “ten extra pairs of hands,” rapidly handling everything from specification drafting to implementation and testing.
However, the more I used Claude Code, the more one limitation became apparent.
Claude Code can only do one thing at a time in a single session.
In human team development, Person A implements the API while Person B writes tests and Person C does code review. Claude Code, by contrast, works strictly sequentially within a single session; even merging 5 PRs meant processing them one at a time.
Agent Teams break this constraint.
What Are Agent Teams?
Agent Teams enable multiple Claude Code instances to work cooperatively as a single team. They were released as a Research Preview in February 2026.
Regular Claude Code:
1 session ──→ sequential processing
Agent Team:
Team Lead ──┬→ Teammate A ──→ parallel processing
            ├→ Teammate B ──→ parallel processing
            └→ Teammate C ──→ parallel processing
The differences from regular subagents (the Task tool) are clear:
| Feature | Subagent (Task) | Agent Team |
|---|---|---|
| Communication | Reports to parent only | Members can talk directly to each other |
| Context | Shares parent’s context | Each has independent context |
| Task management | None | Shared task list (with dependencies) |
| Human intervention | Through parent only | Can instruct each member directly |
| Use case | Research, one-off delegation | Multi-track parallel work |
If subagents are “delegating work to a subordinate and getting results back,” Agent Teams feel more like “assembling a team and running a project.”
Example 1: PR Triage Team
Background
In Saru’s development, feature implementation and CI fixes proceed in parallel, so PRs tend to pile up. One day, I faced this situation:
- PRs awaiting merge: 5 (requiring rebase)
- Stale worktrees: 12+ (leftovers from merged PRs)
- PRs with CI failures: 2 (requiring investigation)
Processing these one by one would take an entire day. I used Agent Teams to clear them all at once.
Task List
The core of Agent Teams is the shared task list. Each task can have dependencies (blocks/blockedBy), allowing team members to autonomously pick up the next available task.
The task list I used followed roughly the structure sketched below.
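A minimal TypeScript rendering of that structure. The field names (id, description, blocks, blockedBy) mirror the concepts described above but are illustrative, not the actual Agent Teams schema.

```typescript
// Illustrative only: a dependency-aware task list for the PR triage example.
// Field names are assumptions, not the real Agent Teams task format.
interface TeamTask {
  id: number;
  description: string;  // must spell out the completion criteria
  blocks?: number[];    // tasks that cannot start until this one finishes
  blockedBy?: number[]; // tasks that must finish before this one starts
}

const prTriageTasks: TeamTask[] = [
  {
    id: 1,
    description:
      "For each open PR: rebase onto main, investigate any CI failure, fix it, " +
      "push, and request review. Done when CI is green and review is requested.",
  },
  {
    id: 2,
    description:
      "List all git worktrees and remove those whose PRs are already merged. " +
      "Done when only active worktrees remain.",
    blockedBy: [1], // run after the PR work so newly merged PRs are also cleaned up
  },
];
```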
The key point is to write clear completion criteria in each task’s description. Not just “rebase” but “rebase onto main, investigate CI failure, fix issues, push, request review.” Vague instructions lead to half-finished work from AI.
Result
The 5 PRs were processed and the 12 stale worktrees cleaned up without human intervention. All I did was give the initial instructions and press the merge button at the end.
Example 2: CI Stabilization Merge Team
Background
In the final stage of CI stabilization described in Part 7, a series of tasks with complex dependencies emerged:
- Merge PR #550 (E2E stability fix)
- Merge PR #547 (signup fix) — would conflict without merging #550 first
- Rebase 7 PR branches onto main — after #550 and #547 are merged
- Force push rebased branches — after rebase completes
- Verify CI runs triggered on all PRs — after push
- Clean up worktrees — after push
Order matters, and a single mistake cascades into further problems. When doing this manually, it is easy to lose track: “Wait, did I already merge #547?”
Team Configuration
I set up a Team Lead plus a handful of teammates, all working from the shared, dependency-aware task list below.
Task List with Dependencies
Task 1: Merge PR #550 ── blocks → [2, 3]
Task 2: Merge PR #547 ── blocks → [3], blockedBy → [1]
Task 3: Rebase 7 PR branches ── blocks → [4], blockedBy → [1, 2]
Task 4: Force push all ── blocks → [5, 6], blockedBy → [3]
Task 5: Verify CI triggers ── blockedBy → [4]
Task 6: Clean up worktrees ── blockedBy → [4]
Tasks 5 and 6 are independent and can run in parallel. Agent Teams see the dependencies and automatically pick up tasks as their blockers are resolved.
This DAG (Directed Acyclic Graph) structure is Agent Teams’ greatest weapon. No human needs to think about “what should I do next.”
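To make "pick up whatever is unblocked" concrete, here is a small sketch of the scheduling idea behind such a task list. The readyTasks helper is hypothetical, not Claude Code internals, but it captures why Tasks 5 and 6 become available to two teammates at the same moment.

```typescript
// Illustrative sketch of dependency-driven scheduling (not Claude Code internals).
interface TeamTask {
  id: number;
  description: string;
  blockedBy?: number[];
  done: boolean;
}

// A task is ready when it is not done and every one of its blockers is done.
function readyTasks(tasks: TeamTask[]): TeamTask[] {
  const finished = new Set(tasks.filter((t) => t.done).map((t) => t.id));
  return tasks.filter(
    (t) => !t.done && (t.blockedBy ?? []).every((id) => finished.has(id))
  );
}

const ciStabilization: TeamTask[] = [
  { id: 1, description: "Merge PR #550 (E2E stability fix)", done: true },
  { id: 2, description: "Merge PR #547 (signup fix)", blockedBy: [1], done: true },
  { id: 3, description: "Rebase 7 PR branches onto main", blockedBy: [1, 2], done: true },
  { id: 4, description: "Force push rebased branches", blockedBy: [3], done: true },
  { id: 5, description: "Verify CI runs on all PRs", blockedBy: [4], done: false },
  { id: 6, description: "Clean up worktrees", blockedBy: [4], done: false },
];

console.log(readyTasks(ciStabilization).map((t) => t.id)); // [ 5, 6 ]
```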
Example 3: Multi-Layered Quality Verification
Separately from Agent Teams, I routinely use specialist agents at each stage of the development workflow.
Orchestration via CLAUDE.md
In Saru, quality verification workflows are defined in CLAUDE.md (the instruction file for Claude Code):
Specification → /qa.verify-spec → Spec verification
Design → /qa.verify-design → Design verification
Implementation → /qa.verify-impl → Implementation verification
Testing → /qa.verify-test → Test verification
Specialist agents are automatically launched at each verification step:
| Agent | Role | Trigger |
|---|---|---|
| security-engineer | Review RLS policies, authentication/authorization | Spec review, implementation verification |
| backend-architect | Review API design, data models | Design verification |
| quality-engineer | Check test coverage and comprehensiveness | Test verification |
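The stage-to-agent mapping is simple enough to write down as data. The sketch below is only an illustration of the table above; in practice these rules live as natural-language instructions in CLAUDE.md, not as code.

```typescript
// Illustrative only: the stage -> command -> reviewer mapping from the table above.
// In the real project this is written as prose in CLAUDE.md.
const verificationPipeline = [
  { stage: "Specification",  command: "/qa.verify-spec",   agents: ["security-engineer"] },
  { stage: "Design",         command: "/qa.verify-design", agents: ["backend-architect"] },
  { stage: "Implementation", command: "/qa.verify-impl",   agents: ["security-engineer"] },
  { stage: "Testing",        command: "/qa.verify-test",   agents: ["quality-engineer"] },
] as const;
```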
Value for Solo Developers
In team development, code reviews are done by colleagues. In solo development, there is no one else.
“Writing code and reviewing it yourself” is difficult for humans. Code you just wrote looks correct to you. You are biased.
Specialist agents bring a different perspective. The security-engineer points out “this input is not validated,” and the backend-architect notes “this API design violates REST principles.”
This is a different kind of value from Agent Teams’ parallel execution, but it is an important component of the “alone yet a team” experience.
Example 4: Blog Article Pre-Publication Check
For this blog article and others, I run multiple verifications in parallel before publication:
Article draft complete
├→ security-engineer: Information leakage & security risk check
├→ Tavily search: Similar article search (plagiarism check)
└→ Codex: Technical accuracy verification
These are independent of each other, so they can be run as parallel subagent calls. There is no need for Agent Teams — parallel Task tool (subagent) calls are sufficient.
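Conceptually this is plain fan-out/fan-in: three independent checks launched at once and awaited together. The sketch below uses hypothetical runSecurityCheck, runPlagiarismSearch, and runTechnicalReview stand-ins rather than the real Task tool API, just to show why no shared task list or inter-member messaging is needed here.

```typescript
// Illustrative fan-out/fan-in for independent pre-publication checks.
// The three helpers are hypothetical stand-ins for parallel subagent calls.
async function prePublicationChecks(draft: string): Promise<string[]> {
  return Promise.all([
    runSecurityCheck(draft),    // information leakage & security risks
    runPlagiarismSearch(draft), // similar-article search
    runTechnicalReview(draft),  // technical accuracy
  ]);
}

// Stubs so the sketch is self-contained.
async function runSecurityCheck(_draft: string): Promise<string> { return "security: ok"; }
async function runPlagiarismSearch(_draft: string): Promise<string> { return "plagiarism: none found"; }
async function runTechnicalReview(_draft: string): Promise<string> { return "accuracy: ok"; }
```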
Decision Criteria for Choosing
After hands-on experience, I settled on these decision criteria:
| Condition | Choice |
|---|---|
| Independent research/analysis tasks | Subagent (Task) |
| Parallel work requiring coordination | Agent Team |
| Ordered task groups with dependencies | Agent Team (using task list) |
| One-off specialist review | Subagent |
| Large-scale refactoring | Agent Team (separate frontend/backend/test) |
When in doubt, start with subagents. Agent Teams have overhead (team creation, task list management, inter-member messaging). Using Agent Teams for simple tasks wastes more time on setup than it saves.
Caveats and Limitations
1. High Token Consumption
Agent Teams consume several times the usual number of tokens because each member has an independent context. A 3-person team uses roughly 3x the tokens. Cost awareness is necessary.
2. Context Fragmentation
The Team Lead’s conversation history is not inherited by members. All necessary context must be included in the initial instructions to each member. “Continuing from our earlier discussion” does not work.
3. File Conflict Risk
If multiple members edit the same file simultaneously, conflicts arise. You need to either isolate working directories with git worktrees or clearly divide file ownership. In Saru, worktrees are mandatory, so this problem is naturally avoided.
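One lightweight way to "clearly divide file ownership" is to declare which path prefixes each teammate may touch and keep those sets disjoint. The sketch below (with made-up paths) illustrates the convention; it is not a built-in Agent Teams feature.

```typescript
// Illustrative ownership map: disjoint path prefixes per teammate (paths are made up).
const ownership = {
  teammateA: ["apps/web/"],                  // frontend only
  teammateB: ["services/api/"],              // backend only
  teammateC: ["e2e/", ".github/workflows/"], // tests and CI config only
} as const;

// A file may be edited by a member only if it falls under one of their prefixes.
function mayEdit(member: keyof typeof ownership, file: string): boolean {
  return ownership[member].some((prefix) => file.startsWith(prefix));
}

// mayEdit("teammateB", "apps/web/page.tsx") -> false: outside teammateB's territory
```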
4. Experimental Feature
Agent Teams are in Research Preview as of February 2026. There are rough edges: session resumption is not supported, task status updates are sometimes forgotten, and so on. It is too early to integrate into production deployment pipelines.
Solo Development x Agent Teams = ?
In Part 6, I wrote that “AI provides ten extra pairs of hands, not ten extra brains.” With Agent Teams, those hands now move in parallel.
In the world of solo development, Agent Teams function not as replacements for team members but as a work parallelization tool.
- Process PR backlogs automatically with dependency tracking
- Run quality verification from multiple perspectives simultaneously
- Execute complex operations like CI stabilization in the correct order
“One-person team” may sound like a contradiction. But in practice, the most tedious parts of team development (communication, alignment, scheduling) disappear, leaving only the most valuable parts (parallel processing, specialist reviews, dependency management).
This may be the ideal form for solo developers.
Summary
| Point | Content |
|---|---|
| What are Agent Teams | A system where multiple Claude Code instances work cooperatively |
| Difference from subagents | Direct inter-member communication, shared task list, independent context |
| Solo development use cases | PR triage, CI stabilization, parallel quality verification |
| Decision criteria | Subagent (Task) when no coordination is needed; Agent Team when tasks depend on each other |
| Caveats | Increased token cost, context fragmentation, Research Preview |
Series Articles
- Part 1: Fighting Unmaintainable Complexity with Automation
- Part 2: Automating WebAuthn Tests in CI
- Part 3: Next.js x Go Monorepo Architecture
- Part 4: Multi-Tenant Isolation with PostgreSQL RLS
- Part 5: Multi-Portal Authentication Pitfalls
- Part 6: Developing a 200K-Line SaaS Alone with Claude Code
- Part 7: Landmines and Solutions in Self-Hosted CI/CD
- Part 8: Turning Solo Development into Team Development with Claude Code Agent Teams (this article)