# Multi-Agent Patterns

This guide covers techniques for building sophisticated multi-agent workflows. The focus is on the patterns themselves, so you can combine them to fit your needs.
## Agent Variants
The standard agent loop (call LLM → execute tools → repeat) can be customized in several ways.
### Custom Approval Logic

Control when tools require approval using a `mode` input and conditional edges:
```yaml
edges:
  - from: call_llm
    cases:
      - to: approval
        condition: size(nodes.call_llm.tool_calls) > 0 && inputs.mode == 'manual'
      - to: execute_tools
        condition: size(nodes.call_llm.tool_calls) > 0 && inputs.mode != 'manual'
```

When to use: Human oversight for certain operations, or letting users toggle between autonomous and supervised modes.
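A note on `mode`: it is a workflow input, and declaration syntax varies by engine. A minimal sketch, assuming the `{ type, default }` form used in the `groups` example later in this guide:

```yaml
# Hypothetical input declaration; the type name is illustrative.
inputs:
  mode: { type: string, default: "auto" }  # set to 'manual' for supervised runs
```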
### Context Management

Long-running agents accumulate large contexts. Two techniques help:

**Compaction** summarizes the conversation when it exceeds a threshold:
```yaml
- id: compact
  action: Compact
  inputs:
    thread: "{{thread.id}}"
edges:
  - from: execute_tools
    cases:
      - to: compact
        condition: nodes.execute_tools.thread_token_count > inputs.compaction_threshold
```

**Result filtering** uses a secondary LLM call to extract only the relevant information from large tool results:
```yaml
- id: filter_results
  action: CallLLM
  inputs:
    tools: false
    ephemeral: true
    system_prompt: |
      Extract only information relevant to completing the task.
      Keep: file paths, line numbers, error messages, relevant code.
      Remove: verbose output, boilerplate, irrelevant content.
    messages:
      - role: user
        content: |
          Tool calls: {{toJson(nodes.call_llm.tool_calls)}}
          Results: {{toJson(nodes.execute_tools.tool_results)}}
edges:
  - from: execute_tools
    cases:
      - to: filter_results
        condition: nodes.execute_tools.total_result_chars > 4000
```

When to use: Compaction for long-running agents; result filtering when tools frequently return large outputs.
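To keep the agent loop going after filtering, route the condensed result back to the LLM step. A sketch of the remaining edges, assuming the engine surfaces the filtered text to the next `CallLLM` turn in place of the raw results:

```yaml
# Hypothetical loop-closing edges; small results skip the filter entirely.
edges:
  - from: filter_results
    cases:
      - to: call_llm
  - from: execute_tools
    cases:
      - to: call_llm
        condition: nodes.execute_tools.total_result_chars <= 4000
```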
### Oversight and Auditing
Add a secondary agent that reviews the primary agent’s actions before execution:
```yaml
steps:
  - id: main_agent
    action: CallLLM
    inputs:
      thread: "{{thread.id}}"
  - id: audit_check
    action: CallLLM
    inputs:
      tool_filter: [audit_result]
      response_tools:
        - name: audit_result
          description: Report audit findings
          parameters:
            type: object
            properties:
              passed: { type: boolean }
              guidance: { type: string }
            required: [passed]
      messages:
        - role: user
          content: |
            Task: {{inputs.task}}
            Agent response: {{nodes.main_agent.response_text}}
            Use audit_result to report whether the agent is on track.
  - id: execute_audit
    action: ExecuteTools
    inputs:
      tool_calls: "{{nodes.audit_check.tool_calls}}"
edges:
  - from: execute_audit
    cases:
      - to: execute_tools
        condition: responseData(nodes.execute_audit.tool_results, 'audit_result').passed == true
      - to: guidance
        condition: responseData(nodes.execute_audit.tool_results, 'audit_result').passed == false
```

The `response_tools` feature creates structured output you can branch on. If the audit fails, inject guidance and let the primary agent try again.
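What the failure branch does is up to you. A minimal sketch of the `guidance` step named in the edge above (everything beyond the step id is illustrative), feeding the auditor's feedback back to the primary agent:

```yaml
# Hypothetical guidance step: re-prompts the primary agent on its own
# thread with the auditor's feedback before the next attempt.
- id: guidance
  action: CallLLM
  inputs:
    thread: "{{thread.id}}"
    messages:
      - role: user
        content: |
          An auditor flagged your last response. Adjust course:
          {{responseData(nodes.execute_audit.tool_results, 'audit_result').guidance}}
```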
When to use: High-stakes tasks, compliance requirements, or when a cheaper model should validate an expensive model’s decisions.
### Tool Restrictions

Control which tools are available in each mode using `tool_filter`:
```yaml
- id: call_llm
  action: CallLLM
  inputs:
    tool_filter: "{{inputs.mode == 'plan' ? ['tag:readonly'] : inputs.tools}}"
```

Filter options: tags (`['tag:default']`), specific tools (`['view', 'grep']`), or exclusions.
When to use: Planning modes (read-only), sandboxed exploration, role-specific tool access.
## Pipelines

Sequential multi-step workflows where each step builds on previous results.

### Running Steps After an Agent Completes
Use edges to route from an agent’s completion to the next step:
```yaml
nodes:
  - id: implement
    workflow: builtin://agent
  - id: lint
    run: make lints
  - id: test
    run: make test
edges:
  - from: implement
    cases:
      - to: lint
  - from: lint
    cases:
      - to: test
```

### Chaining Outputs
Reference previous node outputs using `nodes.<node_id>.<field>`:
```yaml
- id: implement
  workflow: builtin://agent
  thread:
    inject:
      role: user
      content: |
        TASK: {{nodes.improve_prompt.message.text}}
        Working directory: {{nodes.create_worktree.path}}
```

Common fields: `message.text`, `exit_code`, `stdout`, `stderr`, `path`, `tool_results`.
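For example, a follow-up agent can fold a prior command's output into its prompt; a short sketch using `stdout` (node ids are illustrative):

```yaml
# Hypothetical diagnosis step that reads the test node's output.
- id: diagnose
  workflow: builtin://agent
  thread:
    inject:
      role: user
      content: |
        The test run produced this output:
        {{nodes.test.stdout}}
        Summarize the failures and their likely causes.
```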
### Conditional Next Steps
Branch based on results:
```yaml
edges:
  - from: lint
    cases:
      - to: test
        condition: nodes.lint.exit_code == 0
      - to: fix_lint
        condition: nodes.lint.exit_code != 0
```

Branch on: exit codes, tool calls (`size(nodes.X.tool_calls) > 0`), loop success (`nodes.X.succeeded`), custom outputs.
### Verification Loops

Repeat until a condition is met using `loop`:
```yaml
- id: implement_loop
  loop:
    max: 3
    until: outputs.exit_code == 0
  inline:
    entry: implement
    outputs:
      exit_code: "{{nodes.verify.exit_code}}"
    steps:
      - id: implement
        workflow: builtin://agent
        thread:
          mode: inherit
          inject:
            role: user
            content: "{{iter.iteration == 0 ? trigger.message.content : 'Fix: ' + iter.previous.stderr}}"
      - id: verify
        run: make test
    edges:
      - from: implement
        cases:
          - to: verify
```

Key loop features: `until` (exit condition), `max` (safety limit), `iter.iteration` (current iteration, 0-indexed), `iter.previous` (the previous iteration's outputs), and `outputs.succeeded` (true if the loop exited via `until` rather than exhausting `max`).
When to use: Test-driven development, retry-until-success, iterative refinement.
### Escalation After Failures
When a loop exhausts its retries, escalate to a different agent:
```yaml
edges:
  - from: implement_loop
    cases:
      - to: announce_success
        condition: nodes.implement_loop.succeeded == true
      - to: escalate
        condition: nodes.implement_loop.succeeded == false
nodes:
  - id: escalate
    workflow: builtin://agent
    thread:
      mode: fork
      inject:
        role: user
        content: |
          ESCALATION: {{nodes.implement_loop.max}} attempts failed.
          Last error: {{nodes.implement_loop.stderr}}
    inputs:
      system_prompt: |
        You are a SENIOR AGENT. Previous attempts failed.
        Take a different approach if needed.
```

When to use: Automatic escalation when initial attempts fail.
## Multi-Agent Coordination

Multiple agents working together, either in parallel or alternating.

### Parallel Execution with Join
Launch multiple agents simultaneously, then wait for all to complete:
```yaml
nodes:
  - id: impl_1
    workflow: builtin://agent
    thread:
      mode: new()
      inject:
        role: user
        content: "Implement in {{nodes.create_worktree_1.path}}"
  - id: impl_2
    workflow: builtin://agent
    thread:
      mode: new()
  - id: implementations_done
    join: all
edges:
  - from: start
    cases:
      - to: impl_1
      - to: impl_2
  - from: impl_1
    cases:
      - to: implementations_done
  - from: impl_2
    cases:
      - to: implementations_done
```

The `join: all` node waits until all incoming edges complete. Use worktrees for isolated working directories.
When to use: Competitive implementations, exploring multiple approaches, reducing wall-clock time.
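A natural follow-on to competitive implementations is a judge step after the join; a sketch, reusing the `message.text` field from Chaining Outputs (the `pick_winner` id and prompt are illustrative):

```yaml
# Hypothetical judge: compares the parallel implementations after the join.
- id: pick_winner
  workflow: builtin://agent
  thread:
    mode: new()
    inject:
      role: user
      content: |
        Compare these two implementations and pick the stronger one:
        A: {{nodes.impl_1.message.text}}
        B: {{nodes.impl_2.message.text}}
edges:
  - from: implementations_done
    cases:
      - to: pick_winner
```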
### Turn-Taking (Proposer/Critic)
Alternating agents on the same thread see each other’s work:
```yaml
- id: debate_loop
  loop:
    max: "{{inputs.rounds}}"
  inline:
    entry: proposer_turn
    steps:
      - id: proposer_turn
        workflow: builtin://agent
        thread: { mode: inherit }
        inputs:
          system_prompt: "You are the PROPOSER. Create and refine plans."
      - id: critic_turn
        workflow: builtin://agent
        thread:
          mode: inherit
          inject:
            role: user
            content: "Challenge this plan: What could go wrong?"
        inputs:
          system_prompt: "You are the CRITIC. Find flaws and risks."
    edges:
      - from: proposer_turn
        cases:
          - to: critic_turn
```

When to use: Planning/design review, stress-testing ideas, multi-perspective quality improvement.
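Once the rounds finish, a wrap-up step can consolidate the debate; a minimal sketch, assuming a node placed after the loop can also `inherit` the shared thread (verify this against your engine):

```yaml
# Hypothetical wrap-up step that runs after the debate rounds complete.
- id: synthesize
  workflow: builtin://agent
  thread:
    mode: inherit
    inject:
      role: user
      content: "Merge the proposal and critiques above into a final plan."
edges:
  - from: debate_loop
    cases:
      - to: synthesize
```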
### Thread Isolation vs Shared Context

| Mode | Behavior | Use Case |
|---|---|---|
| `new()` | Fresh thread, isolated | Independent parallel work |
| `inherit` | Same thread, sees history | Turn-taking, shared context |
| `fork` | Copy of thread, diverges | Review with context, won't pollute original |
The `inject` option adds a message when entering the step:

```yaml
thread:
  mode: inherit
  inject:
    role: user
    content: "Now review what was done above."
```

### Different Models Per Agent
Use `groups` to configure different settings for different roles:
```yaml
groups:
  Implementer:
    inputs:
      model: { type: model, default: "claude-4-sonnet" }
  Reviewer:
    inputs:
      model: { type: model, default: "claude-4-opus" }
nodes:
  - id: impl
    workflow: builtin://agent
    inputs:
      model: "{{inputs.Implementer.model}}"
  - id: review
    workflow: builtin://agent
    inputs:
      model: "{{inputs.Reviewer.model}}"
```

When to use: Cheaper models for routine work, expensive models for complex decisions.
## Combining Techniques

These techniques compose naturally. A sophisticated workflow might combine a pipeline (improve prompt → implement → verify), parallel execution in isolated worktrees, retry loops until tests pass, escalation to a senior agent on failure, and multi-model review.
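As an illustration only, a skeleton of one such composition, reusing constructs shown above (parallel worktrees omitted for brevity; all ids are hypothetical):

```yaml
# Hypothetical composition: pipeline + verification loop + escalation + review.
nodes:
  - id: improve_prompt
    workflow: builtin://agent
  - id: implement_loop           # retry-until-green, as in Verification Loops
    loop:
      max: 3
      until: outputs.exit_code == 0
    inline:
      entry: implement
      outputs:
        exit_code: "{{nodes.verify.exit_code}}"
      steps:
        - id: implement
          workflow: builtin://agent
        - id: verify
          run: make test
      edges:
        - from: implement
          cases:
            - to: verify
  - id: escalate                 # senior agent on failure, as in Escalation
    workflow: builtin://agent
    thread: { mode: fork }
  - id: review                   # second model reviews the result, as in groups
    workflow: builtin://agent
    inputs:
      model: "{{inputs.Reviewer.model}}"
edges:
  - from: improve_prompt
    cases:
      - to: implement_loop
  - from: implement_loop
    cases:
      - to: review
        condition: nodes.implement_loop.succeeded == true
      - to: escalate
        condition: nodes.implement_loop.succeeded == false
```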
Start simple. A basic agent with compaction handles most tasks. Add verification loops for reliability, parallelism for exploration, auditing for oversight.
See Examples for complete workflow files.