A router node presents a list of candidates to an LLM classifier, which picks the best match for the current conversation. In workflow routing mode the candidates are external workflows — the router picks one and executes it as a sub-workflow. In node routing mode the candidates are other nodes in the same workflow — the router picks one and the engine dispatches execution to it. Think of it as a `switch` for workflows (or nodes), where the case selector is an LLM reading your thread history.
Routers are useful when the right workflow depends on the user’s intent rather than on deterministic state. A triage router reading the first user message can decide between a researcher, an implementer, and a debugger far more naturally than a hand-written condition expression.
## When to use a router
Routers shine in a few specific situations:
- Triage. Route an incoming chat to the right specialist workflow (coder, researcher, debugger, reviewer) based on what the user actually asked for.
- Intent detection. Pick a workflow per user intent without hand-writing CEL conditions over message content — the LLM classifier handles the messy natural-language bit.
- Dynamic dispatch. When the set of candidate workflows changes based on project, preset visibility, or available presets.
- Intra-workflow routing. Pick which phase or step to jump to within a single workflow — brainstorm vs. implement vs. review — without splitting logic across separate workflow files.
A router is not the right tool when:
- A plain `condition` on a workflow node would do — deterministic routing on exit codes, tool outputs, or input values is faster and cheaper than a `CallLLM` round trip.
- The set of workflows is small and stable, and you’d prefer explicit branches in an `edges` block.
- You need repeatable, auditable routing decisions. Routers depend on an LLM and may pick differently across runs.
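For contrast, deterministic routing fits in a plain edge condition. A minimal sketch, using the edge syntax shown later in this page (the node ids and the `exit_code` output here are hypothetical):

```yaml
edges:
  - from: run_tests
    cases:
      - to: fix_failures
        # Deterministic: branch on a concrete output value, no LLM call.
        condition: nodes.run_tests.exit_code != 0
    default: publish
```

If you can write the decision as an expression like this, a router adds latency and nondeterminism for no benefit.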
## Minimal example
Here’s a triage router with two candidates. It reads the conversation history, picks between a general agent and a code-review specialist, and runs whichever it selects:
```yaml
nodes:
  - id: triage
    type: router
    system_prompt: |
      Pick the best agent for this task. Prefer the general agent for
      implementation and investigation. Pick the code-review specialist only
      when the user is asking for a review, audit, or quality analysis.
    model:
      tags: [fast]
    thread:
      mode: new
    workflows:
      - ref: builtin://agent
        presets:
          - general
          - researcher
      - ref: builtin://agent
        presets:
          - code_reviewer
        description: Code review specialist — use when the user wants code reviewed, audited, or analyzed for quality.
```
Notice a few things about this shape:
- `workflows` is a list of `RouterWorkflowCandidate` entries, not a single value. The router requires at least one candidate and will fail validation with zero.
- `model` configures the classifier LLM, not the selected child workflow. The child workflow runs with whatever model its own preset configures. A small, fast model is usually fine here — classification is a short call.
- `thread.mode` controls the child workflow’s thread, just like a regular workflow or loop node. Defaults to `new`.
- Two candidates can share the same `ref` (here both are `builtin://agent`) and differ only in `presets` and `description`. This is how you expose multiple “personalities” of the same base workflow to the classifier.
Router nodes are structural, so the fields above sit directly on the node. You can also wrap them under an explicit `args:` key if you prefer that style — both are parsed identically.
## Candidate structure
Each entry in `workflows` is a `RouterWorkflowCandidate` with three fields:

| Field | Required | Description |
|---|---|---|
| `ref` | Yes | Workflow reference string, like `builtin://agent` or `project://my-flow`. Empty refs fail validation. |
| `presets` | No | List of preset slugs to restrict the candidate to. Empty means “all valid presets for this workflow.” |
| `description` | No | Override string shown to the classifier LLM. Falls back to the workflow’s own `description:` when empty. |
The candidate list must be non-empty — a router without candidates fails structural validation at load time with `router node requires at least one candidate workflow`.
The `description` field is what the LLM actually sees when deciding. Treat it as user-facing copy for an audience of one (the classifier). A vague description like “the agent” will produce vague routing. A sharp description like “Code review specialist — use when the user wants code reviewed, audited, or analyzed for quality” gives the classifier something to latch onto.
A router node supports these fields. Exactly one of `workflows` or `nodes` is required — they are mutually exclusive.

- `workflows`: The candidate workflow list described above. Cannot be combined with `nodes`.
- `nodes`: A list of `RouterNodeCandidate` entries for node routing. Each entry references another node in the same workflow by `id`. Cannot be combined with `workflows`.
- `system_prompt`: Custom system prompt for the classifier LLM. When set, it replaces the default routing system prompt and is prepended to the automatically generated workflow catalog. Use it to steer the classifier (“prefer the researcher when in doubt”, “never pick the debugger unless the user mentions an error”).
- `model`: A model selector for the classifier. Defaults to whatever the platform considers a fast model. The classifier runs one short `CallLLM`, so you usually want speed over capability here.
- `thread`: Thread config for the selected child workflow — same `ThreadConfig` shape used by workflow and loop nodes. Defaults to `mode: new`. When the child thread mode creates a new thread, the router automatically injects the rewritten prompt as a user message on that thread so the selected workflow has conversation context to work with.
- `fallback`: A preset name to fall back to when the classifier picks a preset that is not allowed for the selected workflow. See Fallback behavior below for the exact semantics — `fallback` does not cover every failure mode.
- `project`: Working directory override for the selected sub-workflow, same shape as on a workflow node.
- `outputs`: A map of output name to CEL expression. Lets you expose computed values as top-level node outputs. See Custom outputs.
For the full auto-generated input/output table, see the Router entry in the node reference. Note that the reference’s auto-generated input list currently omits `workflows` because the doc generator drops message-typed repeated fields — this narrative is the authoritative source for that field.
## Outputs
After the router runs, the available fields on `nodes.<router_id>` depend on the routing mode.
Workflow routing exposes five fields:
| Output | Type | Description |
|---|---|---|
| `selected_workflow` | string | The workflow ref the classifier picked, e.g. `builtin://agent`. |
| `selected_preset` | string | The preset name applied to the selected workflow. |
| `prompt` | string | The rewritten prompt the classifier produced for the child. This is what got injected into the child thread when the thread mode created a new thread. |
| `reasoning` | string | Short natural-language explanation from the classifier for why it picked this workflow and preset. Useful for logging, UI, and debugging misroutes. |
| `outputs` | object | The selected child workflow’s own declared outputs, nested under this key. |
Node routing exposes two fields:
| Output | Type | Description |
|---|---|---|
| `selected_node` | string | The id of the node the classifier picked. |
| `reasoning` | string | Short natural-language explanation from the classifier for why it picked this node. |
Because `outputs` holds whatever the child workflow exposed, its shape depends on which candidate ran. For the built-in agent workflow, that gives you access to things like `nodes.triage.outputs.response_text` and `nodes.triage.outputs.message`. The validator allows dynamic sub-field access on `outputs` and, when it can load the candidate workflows, unions their declared output keys for type hints.
Downstream nodes can branch on the routing decision directly:
```yaml
edges:
  - from: triage
    cases:
      - to: run_reviewer_followup
        condition: nodes.triage.selected_preset == 'code_reviewer'
    default: default_handoff
```
## Custom outputs
The `outputs` field on `RouterArgs` lets you declare named outputs that get evaluated against the router’s execution context and then exposed as top-level fields on `nodes.<router_id>`. Each value is a CEL expression — no `{{ }}` delimiters, because it’s already a CEL context.
The CEL context for each expression contains exactly the five fields listed above: `selected_workflow`, `selected_preset`, `prompt`, `reasoning`, and `outputs`. You can reach into `outputs` to surface specific child fields as first-class:
```yaml
- id: triage
  type: router
  workflows:
    - ref: builtin://agent
      presets: [general, researcher]
    - ref: builtin://agent
      presets: [code_reviewer]
      description: Code review specialist
  outputs:
    chosen: "selected_workflow"
    why: "reasoning"
    final_answer: "outputs.response_text"
```
After this node runs, `nodes.triage.chosen`, `nodes.triage.why`, and `nodes.triage.final_answer` are all available to edges and downstream nodes as if they were native router outputs. This is handy when the rest of your workflow shouldn’t need to know — or care — that a node happens to be a router.
Declared outputs are merged with the fixed ones, not replaced. `nodes.triage.selected_workflow` still resolves even when you declare `chosen: "selected_workflow"` — you’re adding an alias, not renaming.
## Node routing
Node routing uses the same `type: router` but swaps `workflows` for `nodes`. Instead of dispatching to an external workflow, the router picks another node in the same workflow and the engine continues execution there. This is useful when a single workflow has distinct phases — brainstorming, implementation, review — and the right starting point depends on user intent.
```yaml
- id: classify
  type: router
  model:
    tags: [fast]
  system_prompt: |
    Pick which phase to start based on the user's request.
    Prefer brainstorm for open-ended tasks.
  nodes:
    - id: brainstorm
      description: "New feature — start from problem understanding"
    - id: implement
      description: "Clear task — skip straight to implementation"
    - id: review
      description: "Code review request — jump to review phase"
  fallback: brainstorm
```
### Node candidate structure
Each entry in `nodes` is a `RouterNodeCandidate` with two fields:

| Field | Required | Description |
|---|---|---|
| `id` | Yes | Must reference an existing node in the same workflow. Invalid ids fail validation. |
| `description` | No | Text shown to the classifier LLM. As with workflow routing, sharp descriptions produce better routing — the classifier only sees `id` and `description`. |
The `thread`, `project`, and `outputs` fields on the router node don’t apply to node routing — there is no child workflow to configure. If you set them alongside `nodes`, they are ignored.
### Node routing outputs
After the router runs, `nodes.<router_id>.selected_node` contains the id of the chosen node and `nodes.<router_id>.reasoning` contains the classifier’s explanation.
### Dispatch behavior
If the router has explicit edges, they work normally — you can use `nodes.<router_id>.selected_node` in edge conditions to branch however you like:
```yaml
edges:
  - from: classify
    cases:
      - to: brainstorm
        condition: nodes.classify.selected_node == 'brainstorm'
      - to: implement
        condition: nodes.classify.selected_node == 'implement'
    default: review
```
If the router has no outgoing edges, the engine dispatches directly to the selected node — no extra wiring needed. This keeps simple routers compact: define the candidates and let the engine handle the jump.
Node routing is a lightweight alternative to workflow routing when all the logic already lives in one workflow. If you find yourself creating single-node wrapper workflows just to use a router, switch to node routing instead.
### Phase skipping in pipelines
Node routers shine in multi-phase pipelines where users may want to start at different points. Several builtin workflows use this pattern:
- `one-ring` — routes to `planning`, `write_tests`, or `impl_loop`
- `bmad-lite` — routes to `ideate`, `requirements`, `architecture`, or `implement`
Example:
```yaml
- id: classify
  type: router
  nodes:
    - id: planning
      description: "Complex task needing research and planning"
    - id: implement
      description: "Clear task — skip straight to implementation"
  fallback: planning
```
With no outgoing edges, the engine dispatches directly to the selected node. This lets users say “just implement it” and skip research/planning phases entirely.
## Classification reliability
The router’s decision quality is dominated by two things: the descriptions you give it and the model you pick.
**Descriptions are doing all the work.** The classifier prompt is built by concatenating each candidate’s `ref`, `description`, input schema, available presets, and full workflow YAML. Of those, the description and preset descriptions are the pieces an LLM actually reads carefully. Vague descriptions (“the agent”, “general purpose”) leave the classifier guessing; concrete descriptions (“Code review specialist — use when the user wants code reviewed, audited, or analyzed for quality”) produce consistent routing.
**The fast-model default is a trade-off.** Routers default to a fast model because classification is a short, structured call and latency matters at the top of a conversation. Fast models handle obvious routing fine but can misclassify borderline prompts. If you find the router consistently picking the wrong workflow on ambiguous input, either tighten your descriptions or bump the classifier model:
```yaml
- id: triage
  type: router
  model:
    tags: [smart]  # or a specific model ref
  workflows: [...]
```
**Use `system_prompt` to steer.** When descriptions alone aren’t enough, a short custom system prompt is usually faster than redesigning the candidate list:
```yaml
system_prompt: |
  When the user's intent is ambiguous, prefer the researcher over the
  implementer — exploration is cheaper than a wrong edit.
```
## Fallback behavior
The `fallback` field is narrower than it might look. It’s a preset name, and it’s only consulted when the classifier picks a valid workflow but an invalid preset for that workflow — for example, if the LLM hallucinates a preset name or picks one that isn’t in the candidate’s allowed list. When that happens, the router retries the selection with the fallback preset on the same workflow the LLM picked.
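As a sketch, a workflow router can declare a safe default preset this way (the preset names here are illustrative; `fallback` must name a preset that is valid for the selected workflow):

```yaml
- id: triage
  type: router
  workflows:
    - ref: builtin://agent
      presets: [general, researcher]
  # Consulted only when the classifier picks a preset outside the
  # allowed list; the selection is retried with this preset on the
  # same workflow the classifier chose.
  fallback: general
```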
`fallback` does not rescue other failure modes:
- If the `CallLLM` activity itself fails (network, rate limit, model error), the router errors out.
- If the classifier picks a workflow that isn’t in the candidate list, the router errors out.
- If `fallback` itself isn’t valid for the selected workflow, the router errors out.
In short, `fallback` is a correctness guard against hallucinated presets, not a general-purpose error handler. If you need to recover from classification failures, wrap the router in an edge that catches the error and routes to a default workflow.
## Interaction with threads
Routers create a child execution just like workflow and loop nodes, and the child gets its own thread controlled by the `thread` field. The default is `mode: new`, which is almost always what you want for triage — the selected workflow starts with a clean thread and receives the classifier’s rewritten prompt as its first user message.
When `thread.mode` produces a new or forked thread, the router saves the rewritten prompt (`decision.prompt`) to the child thread as a user message before the child workflow starts executing. This is how the selected workflow “sees” the routed request even when it has no dedicated input field for it. If you use `thread.mode: inherit`, the child shares the parent thread and no injection happens — the child reads the existing conversation history directly.
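The two common configurations can be sketched as config fragments (field shapes as described above):

```yaml
# Clean child thread (the default): the router injects the rewritten
# prompt as the child's first user message.
thread:
  mode: new

# Shared thread: no prompt injection; the child reads the parent's
# conversation history directly.
thread:
  mode: inherit
```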
For the full list of thread modes and inject semantics, see Threads.
## Routers vs workflow and loop nodes
The three sub-workflow node types look similar but serve different jobs:
- `workflow` node: Runs a single, statically chosen workflow. Use when you know at authoring time which workflow should run.
- `loop` node: Repeats a workflow while a condition holds (or over an item list for parallel execution). Use for retry-until-passing, iterative refinement, and fan-out over inputs.
- `router` node: Picks one workflow (or node) at runtime from a candidate list using an LLM. Use when the choice depends on user intent and would be painful to encode as deterministic conditions. In workflow routing mode (`workflows`) it selects and executes an external workflow. In node routing mode (`nodes`) it selects another node in the same workflow and the engine jumps to it.
A router is essentially a “workflow node whose `ref` is decided by an LLM right before it runs” — or, in node routing mode, a “goto whose target is decided by an LLM.” If you ever find yourself writing a `CallLLM` node followed by a workflow node that reads the result, you probably want a router instead.