Workflow Examples
Copy-and-adapt snippets for building workflows. Each example shows the minimal YAML needed for a specific technique.
Basic Patterns
Simple Agent Loop with Auto-Approve
The standard agent pattern: call LLM, execute tools, repeat until done.
- id: agent_loop
loop:
max: 100
until: outputs.tool_calls == null || size(outputs.tool_calls) == 0
inline:
entry: call_llm
outputs:
tool_calls: "{{nodes.call_llm.tool_calls}}"
steps:
- id: call_llm
action: CallLLM
inputs:
thread: "{{thread.id}}"
model: "{{inputs.model}}"
- id: execute_tools
action: ExecuteTools
inputs:
thread: "{{thread.id}}"
tool_calls: "{{nodes.call_llm.tool_calls}}"
edges:
- from: call_llm
cases:
- to: execute_tools
condition: nodes.call_llm.tool_calls != null && size(nodes.call_llm.tool_calls) > 0

Key points:
- The `until` condition checks for empty/null tool_calls to exit
- No approval step means tools execute automatically
Manual Approval Mode
Add an approval gate before tool execution.
steps:
- id: call_llm
action: CallLLM
inputs:
thread: "{{thread.id}}"
- id: approval
action: Approval
inputs:
title: Approve tool execution?
description: "The agent wants to execute tool(s)"
timeout: 1h
actions:
- type: approve
label: "Approve"
- type: deny
label: "Deny"
- id: execute_tools
action: ExecuteTools
inputs:
thread: "{{thread.id}}"
tool_calls: "{{nodes.call_llm.tool_calls}}"
edges:
- from: call_llm
cases:
- to: approval
condition: nodes.call_llm.tool_calls != null && size(nodes.call_llm.tool_calls) > 0
- from: approval
cases:
- to: execute_tools
condition: nodes.approval.status == 'approved'

Key points:
- Approval blocks until the user responds or the timeout expires
- Check `nodes.approval.status == 'approved'` before proceeding
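A denial can also be routed somewhere useful instead of silently ending the run. A minimal sketch using the label-only default case; the `save_denial` node id is illustrative:

```yaml
steps:
  - id: save_denial
    action: SaveMessage
    inputs:
      thread: "{{thread.id}}"
      role: user
      content: "Tool execution was denied. Propose an alternative approach."
edges:
  - from: approval
    cases:
      - to: execute_tools
        condition: nodes.approval.status == 'approved'
      # Label-only case acts as the default when approval was not granted
      - to: save_denial
        label: denied
```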
Plan Mode (Read-Only Tools)
Restrict agent to read-only tools for planning without modifications.
inputs:
mode:
type: enum
enum: ["agent", "plan", "manual"]
default: "agent"
nodes:
- id: call_llm
action: CallLLM
inputs:
thread: "{{thread.id}}"
# Filter tools based on mode
tool_filter: "{{inputs.mode == 'plan' ? ['tag:readonly', 'tag:mcp'] : ['tag:default', 'tag:mcp']}}"
# Optionally add planning-specific system prompt
system_prompt: "{{inputs.mode == 'plan' ? inputs.planning_prompt : inputs.system_prompt}}"

Key points:
- Use `tool_filter` to restrict available tools
- `tag:readonly` includes view, grep, glob, etc.
- Pair with a planning-specific system prompt
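The same `inputs.mode` value can also drive edge routing, e.g. gating tool execution behind an approval step in manual mode. A sketch reusing the approval pattern shown earlier (node ids assumed):

```yaml
edges:
  - from: call_llm
    cases:
      # Manual mode: require approval before executing tools
      - to: approval
        condition: inputs.mode == 'manual' && nodes.call_llm.tool_calls != null && size(nodes.call_llm.tool_calls) > 0
      # Agent/plan mode: execute tools directly
      - to: execute_tools
        condition: nodes.call_llm.tool_calls != null && size(nodes.call_llm.tool_calls) > 0
```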
Conditional Logic
Branch Based on Exit Code
Route workflow based on command success/failure.
steps:
- id: run_tests
run: make test
- id: on_success
action: SaveMessage
inputs:
thread: "{{thread.id}}"
role: assistant
content: "Tests passed!"
- id: on_failure
action: SaveMessage
inputs:
thread: "{{thread.id}}"
role: assistant
content: "Tests failed: {{nodes.run_tests.stderr}}"
edges:
- from: run_tests
cases:
- to: on_success
condition: nodes.run_tests.exit_code == 0
label: success
- to: on_failure
condition: nodes.run_tests.exit_code != 0
label: failure

Key points:
- `run` nodes expose `exit_code`, `stdout`, `stderr`
- Use conditions to branch on `exit_code`
Branch Based on Tool Calls Present
Check whether the LLM made tool calls to decide the next step.
edges:
- from: call_llm
cases:
# LLM wants to use tools
- to: execute_tools
condition: nodes.call_llm.tool_calls != null && size(nodes.call_llm.tool_calls) > 0
label: has_tools
# LLM responded without tools (done)
- to: complete
label: no_tools

Key points:
- Always check both `!= null` and `size() > 0`
- The label-only case acts as the default (no condition)
Loop Until Condition Met
Retry until tests pass or max attempts reached.
- id: implement_loop
loop:
max: 5
until: outputs.exit_code == 0
inline:
entry: implement
outputs:
exit_code: "{{nodes.verify.exit_code}}"
stderr: "{{nodes.verify.stderr}}"
steps:
- id: implement
workflow: builtin://agent
thread:
mode: inherit
inject:
role: user
content: |
{{iter.iteration == 0
? trigger.message.content
: 'Fix these errors:\n' + iter.previous.stderr}}
- id: verify
run: make test
edges:
- from: implement
cases:
- to: verify

Key points:
- The `until` condition is evaluated after each iteration
- Access previous iteration data via `iter.previous.*`
- `iter.iteration` is 0-indexed
Multi-Agent
Two Agents Taking Turns on Same Thread
Agents alternate on shared context (e.g., proposer/critic debate).
- id: debate_loop
loop:
max: 3
inline:
entry: proposer
steps:
- id: proposer
workflow: builtin://agent
thread:
mode: inherit
inputs:
system_prompt: "You are the PROPOSER. Create or refine the plan."
- id: critic
workflow: builtin://agent
thread:
mode: inherit
inject:
role: user
content: "Challenge this plan. What could go wrong?"
inputs:
system_prompt: "You are the CRITIC. Find flaws and edge cases."
edges:
- from: proposer
cases:
- to: critic
thread:
mode: inherit

Key points:
- Both agents use `thread: mode: inherit` to share context
- Each agent sees what the other wrote
- Use `inject` to add turn-specific instructions
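After the debate loop exits, a final pass can consolidate the shared thread. A sketch, assuming an edge leaving a loop node works like an edge from any other node:

```yaml
- id: synthesize
  workflow: builtin://agent
  thread:
    mode: inherit
  inject:
    role: user
    content: "Write the final plan, incorporating the critic's objections."
edges:
  - from: debate_loop
    cases:
      - to: synthesize
```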
Parallel Agents with Join
Launch multiple agents simultaneously, wait for all to complete.
nodes:
- id: impl_1
workflow: builtin://agent
thread:
mode: fork
key: impl_1
inject:
role: user
content: "Implement approach A"
- id: impl_2
workflow: builtin://agent
thread:
mode: fork
key: impl_2
inject:
role: user
content: "Implement approach B"
- id: implementations_done
join: all
- id: review
workflow: builtin://agent
thread:
mode: inherit
inject:
role: user
content: "Review both implementations and pick the winner."
edges:
# Start both in parallel
- from: some_previous_step
cases:
- to: impl_1
- from: some_previous_step
cases:
- to: impl_2
# Both feed into join
- from: impl_1
cases:
- to: implementations_done
- from: impl_2
cases:
- to: implementations_done
# After join
- from: implementations_done
cases:
- to: review

Key points:
- `thread: mode: fork` creates isolated threads that inherit parent context
- Use unique `key` values for each parallel branch
- `join: all` waits for all incoming edges
Groups for Different Model Configs
Configure different settings for each agent type.
inputs:
model:
type: model
default: ""
description: Default model for all agents
groups:
Implementer:
tag: agent
inputs:
model:
type: model
default: ""
temperature:
type: number
default: 1.0
system_prompt:
type: string
default: "You are an implementation agent."
Reviewer:
tag: agent
inputs:
model:
type: model
default: ""
temperature:
type: number
default: 0.7
system_prompt:
type: string
default: "You are a code reviewer."
nodes:
- id: implement
workflow: builtin://agent
inputs:
# Fall back to workflow default if group model is empty
model: "{{inputs.Implementer.model != '' ? inputs.Implementer.model : inputs.model}}"
temperature: "{{inputs.Implementer.temperature}}"
system_prompt: "{{inputs.Implementer.system_prompt}}"
- id: review
workflow: builtin://agent
inputs:
model: "{{inputs.Reviewer.model != '' ? inputs.Reviewer.model : inputs.model}}"
temperature: "{{inputs.Reviewer.temperature}}"
system_prompt: "{{inputs.Reviewer.system_prompt}}"

Key points:
- Groups appear as collapsible sections in the UI
- Use `tag: agent` to enable the preset picker
- Reference group inputs as `inputs.GroupName.field`
Context Management
Filter Large Tool Results with CallLLM
Reduce context bloat by summarizing large outputs.
steps:
- id: execute_tools
action: ExecuteTools
inputs:
thread: "{{thread.id}}"
tool_calls: "{{nodes.call_llm.tool_calls}}"
# Filter large results before saving
- id: filter_results
action: CallLLM
save_message:
role: tool
content: "{{output.response_text}}"
tool_results: "{{nodes.execute_tools.tool_results}}"
inputs:
thread: "{{thread.id}}"
model: claude-sonnet-4.1
tools: false
ephemeral: true
system_prompt: |
Extract only relevant information from tool results:
- Keep file paths and line numbers
- Keep error messages
- Remove verbose/redundant output
messages:
- role: user
content: |
Filter these tool results:
{{toJson(nodes.execute_tools.tool_results)}}
# Save small results directly
- id: save_results
action: SaveMessage
inputs:
thread: "{{thread.id}}"
role: tool
tool_results: "{{nodes.execute_tools.tool_results}}"
edges:
- from: execute_tools
cases:
- to: filter_results
condition: nodes.execute_tools.total_result_chars > 4000
label: filter_large
- to: save_results
label: save_small

Key points:
- Check `total_result_chars` to decide whether to filter
- Use `ephemeral: true` so the filter call doesn't add to the thread
- Save filtered content with the original tool_results for proper UI display
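Both branches typically rejoin the main loop so the agent can continue with the (possibly filtered) results. A sketch, assuming a `call_llm` node as in the basic agent-loop example:

```yaml
edges:
  - from: filter_results
    cases:
      - to: call_llm
  - from: save_results
    cases:
      - to: call_llm
```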
Compact When Tokens Exceed Threshold
Trigger context compaction after tool execution.
- id: compact
action: Compact
timeout: "10m"
save_message:
condition: "{{output.compacted}}"
role: "{{output.message.role}}"
content: "{{output.message.text}}"
context_sequence: "{{output.context_sequence}}"
inputs:
thread: "{{thread.id}}"
edges:
- from: execute_tools
cases:
- to: compact
condition: nodes.execute_tools.thread_token_count > 160000
label: compact_needed

Key points:
- `thread_token_count` is available after ExecuteTools or SaveMessage
- Compact creates a new context_sequence with a summary
- Only save a message if compaction actually occurred
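To keep the agent loop running, route both the compacted and the not-compacted paths back to the LLM call. A sketch with node ids assumed from the basic loop example:

```yaml
edges:
  - from: execute_tools
    cases:
      - to: compact
        condition: nodes.execute_tools.thread_token_count > 160000
        label: compact_needed
      # Label-only default: token count is fine, continue the loop
      - to: call_llm
        label: continue
  - from: compact
    cases:
      - to: call_llm
```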
Conditional Message Saving
Save messages only under certain conditions.
- id: verify
run: go test ./...
save_message:
condition: "{{output.exit_code != 0}}"
role: user
content: |
Tests failed. Please fix:
```
{{output.stderr}}
```

Key points:
- `save_message.condition` controls whether the message is saved
- Useful for feedback loops (only inject on failure)
- Message content can reference step outputs via `output.*`
Approvals and Oversight
Custom Approval with Multiple Actions
Offer multiple response options beyond approve/deny.
- id: approval
action: Approval
inputs:
title: "Review proposed changes"
description: "The agent wants to modify files"
timeout: 30m
actions:
- type: approve
label: "Approve All"
- type: approve
label: "Approve with Caution"
value: "caution"
- type: deny
label: "Reject"
- type: custom
label: "Modify Request"
value: "modify"
edges:
- from: approval
cases:
- to: execute_tools
condition: nodes.approval.status == 'approved'
- to: get_modifications
condition: nodes.approval.action_value == 'modify'

Key points:
- Multiple approve actions can have different `value` fields
- Access the chosen action via `nodes.approval.action_value`
- Use `type: custom` for non-standard responses
Audit Check Before Tool Execution
Run an auditor agent before allowing tool execution.
steps:
- id: main_agent
action: CallLLM
inputs:
thread: "{{thread.id}}"
- id: audit_check
action: CallLLM
inputs:
thread: "{{thread.id}}"
model: claude-sonnet-4.1
tool_filter: [audit_result]
response_tools:
- name: audit_result
description: Report audit findings
parameters:
type: object
properties:
passed:
type: boolean
guidance:
type: string
required: [passed]
messages:
- role: user
content: |
Review this action. Is the agent on track?
Response: {{nodes.main_agent.response_text}}
Tool calls: {{size(nodes.main_agent.tool_calls)}}
- id: execute_audit
action: ExecuteTools
inputs:
tool_calls: "{{nodes.audit_check.tool_calls}}"
- id: execute_tools
action: ExecuteTools
inputs:
tool_calls: "{{nodes.main_agent.tool_calls}}"
edges:
- from: main_agent
cases:
- to: audit_check
condition: nodes.main_agent.tool_calls != null && size(nodes.main_agent.tool_calls) > 0
- from: audit_check
cases:
- to: execute_audit
condition: nodes.audit_check.tool_calls != null
- from: execute_audit
cases:
- to: execute_tools
condition: responseData(nodes.execute_audit.tool_results, 'audit_result').passed == true

Key points:
- Use `response_tools` to force structured output
- Extract data with `responseData(tool_results, 'tool_name')`
- The auditor can use a cheaper/faster model
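When the audit fails, the auditor's `guidance` field can be fed back to the main agent instead of executing its tool calls. A sketch using only the constructs above; the `save_guidance` node id is illustrative:

```yaml
- id: save_guidance
  action: SaveMessage
  inputs:
    thread: "{{thread.id}}"
    role: user
    content: "Auditor feedback: {{responseData(nodes.execute_audit.tool_results, 'audit_result').guidance}}"
edges:
  - from: execute_audit
    cases:
      - to: execute_tools
        condition: responseData(nodes.execute_audit.tool_results, 'audit_result').passed == true
      # Label-only default: audit failed, surface the guidance instead
      - to: save_guidance
        label: audit_failed
```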
Response Tools for Structured Feedback
Force LLM to provide structured responses via a “response tool.”
- id: validation_report
action: CallLLM
inputs:
thread: "{{thread.id}}"
tool_filter: [validation_result]
response_tools:
- name: validation_result
description: Report validation results
parameters:
type: object
properties:
passed:
type: boolean
description: True if validation passed
issues:
type: string
description: Issues found (required if passed is false)
confidence:
type: number
minimum: 0
maximum: 1
required: [passed]
- id: execute_response
action: ExecuteTools
inputs:
tool_calls: "{{nodes.validation_report.tool_calls}}"
# Access structured data
outputs:
passed: "{{responseData(nodes.execute_response.tool_results, 'validation_result').passed}}"
issues: "{{responseData(nodes.execute_response.tool_results, 'validation_result').issues}}"

Key points:
- `response_tools` are synthetic tools only the LLM can call
- You must execute the tools to capture the structured data
- Use the `responseData()` helper to extract results by tool name
Worktrees
Create Worktree for Isolated Work
Give an agent its own working directory.
- id: create_worktree
action: CreateWorktree
inputs:
name: "feature-{{workflow.id}}"
base_branch: "{{has(workflow.current_branch) ? workflow.current_branch : 'main'}}"
force: true
- id: implement
workflow: builtin://agent
thread:
mode: inherit
inject:
role: user
content: |
Implement the feature in: {{nodes.create_worktree.path}}

Key points:
- Include `workflow.id` in the name for uniqueness
- `force: true` overwrites an existing worktree with the same name
- Reference the worktree path via `nodes.create_worktree.path`
Copy Env Files to Worktree
Include configuration files in new worktree.
- id: create_worktree
action: CreateWorktree
inputs:
name: "impl-{{workflow.id}}"
base_branch: main
copy_files:
- .env
- .env.local
- config/secrets.yaml
force: true

Key points:
- `copy_files` searches recursively for matching filenames
- Directory structure is preserved (e.g., `frontend/.env` → `worktree/frontend/.env`)
- Files are copied from the source repo, not the current worktree
Multiple Parallel Worktrees
Create isolated environments for competing implementations.
nodes:
- id: create_wt_1
action: CreateWorktree
inputs:
name: "compete-1-{{workflow.id}}"
base_branch: main
copy_files: [.env]
force: true
- id: create_wt_2
action: CreateWorktree
inputs:
name: "compete-2-{{workflow.id}}"
base_branch: main
copy_files: [.env]
force: true
- id: worktrees_ready
join: all
- id: impl_1
workflow: builtin://agent
thread:
mode: new()
key: impl_1
inject:
role: user
content: "Work in: {{nodes.create_wt_1.path}}"
- id: impl_2
workflow: builtin://agent
thread:
mode: new()
key: impl_2
inject:
role: user
content: "Work in: {{nodes.create_wt_2.path}}"
edges:
# Create worktrees in parallel
- from: start
cases:
- to: create_wt_1
- from: start
cases:
- to: create_wt_2
# Wait for both
- from: create_wt_1
cases:
- to: worktrees_ready
- from: create_wt_2
cases:
- to: worktrees_ready
# Launch implementations in parallel
- from: worktrees_ready
cases:
- to: impl_1
- from: worktrees_ready
cases:
- to: impl_2

Key points:
- Create worktrees in parallel for faster setup
- Use `thread: mode: new()` so implementations don't share context
- Each parallel edge needs its own `- from:` block
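Once both implementations finish, a comparison step can pick a winner, reusing the join pattern from the parallel-agents example (the `impls_done` and `compare` node ids are assumed):

```yaml
- id: impls_done
  join: all
- id: compare
  workflow: builtin://agent
  thread:
    mode: inherit
  inject:
    role: user
    content: |
      Compare the implementations in:
      - {{nodes.create_wt_1.path}}
      - {{nodes.create_wt_2.path}}
edges:
  - from: impl_1
    cases:
      - to: impls_done
  - from: impl_2
    cases:
      - to: impls_done
  - from: impls_done
    cases:
      - to: compare
```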