Agent Variants
The standard agent loop (call LLM → execute tools → repeat) can be customized in several ways.

Custom Approval Logic
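The configuration schema is not reproduced in this document, so the following YAML sketch is illustrative only (node and field names are assumptions); it shows the general shape of gating tool execution on mode via conditional edges:

```yaml
# Illustrative sketch; field names are assumptions, not the real schema.
nodes:
  agent:
    mode: plan                 # read-only planning mode
edges:
  - from: agent
    to: execute_tools
    if: mode == 'plan'         # safe mode: execute without approval
  - from: agent
    to: request_approval
    if: mode != 'plan'         # otherwise route through an approval step
```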
Control when tools require approval using mode and conditional edges.

Context Management
Long-running agents accumulate large contexts. Two techniques help. Compaction summarizes the conversation when it exceeds a threshold.

Oversight and Auditing
Add a secondary agent that reviews the primary agent’s actions before execution. The response_tool feature creates structured output you can branch on. The tool defines a JSON schema for the expected output format; a common pattern uses a choice (enum) property and a value (explanation) property. Response data is available via nodes.<execute_tools_node>.response_data.<tool_name>. If the audit fails, use .value to get the guidance and inject it for the primary agent to try again.
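As a hedged sketch (the exact workflow schema may differ; node names are invented), the audit wiring might look like this, using the response_data access path described above:

```yaml
# Illustrative only: an auditor gating the primary agent's actions.
nodes:
  auditor:
    response_tool:
      name: audit_result
      schema:
        type: object
        properties:
          choice: { enum: [approve, reject] }
          value: { type: string }   # guidance used on rejection
edges:
  - from: auditor_execute_tools
    to: primary_continue
    if: nodes.auditor_execute_tools.response_data.audit_result.choice == 'approve'
  - from: auditor_execute_tools
    to: primary_retry             # inject .value as guidance for another attempt
    if: nodes.auditor_execute_tools.response_data.audit_result.choice == 'reject'
```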
When to use: High-stakes tasks, compliance requirements, or when a cheaper model should validate an expensive model’s decisions.
Tool Restrictions
Control available tools based on mode using tool_filter. Filters can select tag-based groups (['tag:default']), specific tools (['view', 'grep']), or exclusions.
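For example (node names are illustrative; the filter values follow the forms listed above):

```yaml
nodes:
  planner:
    mode: plan
    tool_filter: ['tag:default']    # a tag-based group of tools
  reviewer:
    tool_filter: ['view', 'grep']   # explicit read-only allow-list
```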
When to use: Planning modes (read-only), sandboxed exploration, role-specific tool access.
Structured Output with Response Tools
Response tools force the LLM to produce structured output you can programmatically use for routing, classification, or data extraction. The simplest pattern is an options-based response. Key points:

- ExecuteTools required — you must execute the tool call to access response_data
- Access path — nodes.<execute_node>.response_data.<tool_name>.<field>
- LLM must call it — the response tool is the only way to complete; the LLM cannot just respond with text
- Use for routing — perfect for decisions that control workflow branching
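A minimal options-based routing sketch, with assumed node names and edge syntax:

```yaml
# Illustrative: classify a request, then branch on the structured verdict.
nodes:
  classify:
    response_tool:
      name: triage
      schema:
        type: object
        properties:
          choice: { enum: [bug, feature, question] }
edges:
  - from: classify_execute          # the tool call must be executed first
    to: fix_bug
    if: nodes.classify_execute.response_data.triage.choice == 'bug'
```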
Pipelines
Sequential multi-node workflows where each node builds on previous results.

Running Steps After Agent Completes
Use edges to route from an agent’s completion to the next node.

Chaining Outputs
Reference previous node outputs using nodes.<node_id>.<field>. Available fields include message.text, exit_code, stdout, stderr, path, and tool_results.
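A short pipeline sketch; the interpolation syntax shown is an assumption, and only the nodes.<node_id>.<field> path comes from this document:

```yaml
nodes:
  build:
    command: make test
  summarize:
    prompt: |
      The tests exited with code ${nodes.build.exit_code}.
      Stdout: ${nodes.build.stdout}
edges:
  - from: build
    to: summarize
```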
Conditional Next Steps
Branch based on results. Conditions can test tool calls (size(nodes.X.tool_calls) > 0), loop output conditions, or custom outputs.
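For instance, a pair of conditional edges might look like this (edge syntax assumed):

```yaml
edges:
  - from: agent
    to: run_tests
    if: size(nodes.agent.tool_calls) > 0    # the agent did work; verify it
  - from: agent
    to: done
    if: size(nodes.agent.tool_calls) == 0   # nothing to verify
```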
Verification Loops
Repeat while a condition is true using loop. Key fields: while (the continue condition, with iteration limits), iter.iteration (the current iteration, 0-indexed), and outputs.* (the current iteration’s results, available in the while condition).
Outputs requirement: When using outputs.* in the while condition, you must declare an outputs section in the inline workflow that maps inner node outputs to named fields. This creates a clear contract between the loop body and the while condition.
Without an outputs section, the while condition receives raw inner node outputs keyed by node ID (for example, outputs.call_llm.tool_calls instead of outputs.tool_calls), which is fragile and makes refactoring difficult.
Do-while semantics: The loop always runs at least once. After each iteration, iter.iteration increments before the while condition is checked.
Iteration counting: In the loop body, iter.iteration is 0-indexed (0, 1, 2…). In the while check, it reflects completed iterations (1 after first, 2 after second). Use iter.iteration < N to run exactly N iterations.
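Combining these rules, a verification loop might be sketched as follows (schema details are illustrative; the while, outputs, and iter.iteration fields come from this document):

```yaml
loop:
  while: outputs.exit_code != 0 && iter.iteration < 3   # at most 3 iterations
  workflow:
    nodes:
      fix:
        prompt: Fix the failing tests.
      test:
        command: make test
    outputs:
      exit_code: nodes.test.exit_code   # the contract read by the while condition
```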
When to use: Test-driven development, retry-while-failing, iterative refinement.
Multi-Agent Coordination
Multiple agents working together, either in parallel or alternating.

Parallel Execution with Join
Launch multiple agents simultaneously, then wait for all to complete. A join: all node waits until all incoming edges complete. Use worktrees for isolated working directories.
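A fan-out/fan-in sketch (node shapes are assumptions; join: all and worktrees come from this document):

```yaml
nodes:
  impl_a: { worktree: true }   # isolated working directory
  impl_b: { worktree: true }
  compare:
    join: all                  # waits for all incoming edges to complete
edges:
  - { from: impl_a, to: compare }
  - { from: impl_b, to: compare }
```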
When to use: Competitive implementations, exploring multiple approaches, reducing wall-clock time.
Turn-Taking (Proposer/Critic)
Alternating agents on the same thread see each other’s work.

Thread Isolation vs Shared Context
See Threads for complete documentation on thread modes (new, inherit, fork).
The inject option adds a message when entering the node.
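Putting thread modes and inject together, a proposer/critic pair might be configured like this (field placement is illustrative; the inherit mode and inject option come from this document):

```yaml
nodes:
  proposer:
    thread: inherit      # shared thread: each agent sees the other's messages
  critic:
    thread: inherit
    inject: Review the proposal above and list concrete problems.
edges:
  - from: proposer
    to: critic
```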
Different Models Per Agent
Use groups to configure different settings for different roles.
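For example (group and model names are placeholders):

```yaml
groups:
  primary:
    model: large-model        # expensive model does the real work
  auditor:
    model: small-model        # cheaper model validates its decisions
nodes:
  agent:    { group: primary }
  reviewer: { group: auditor }
```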