Creating Custom Workflows
Reliant workflows let you automate complex multi-step tasks, enforce development processes, and coordinate multiple agents. This guide walks you through creating your own workflows, starting with simple examples and building up to more advanced patterns.
When to Create Custom Workflows
Before building a custom workflow, consider whether you actually need one. Workflows shine in specific scenarios:
Automate repetitive multi-step tasks: If you find yourself repeatedly running the same sequence of agent interactions—like “analyze code, write tests, run tests, fix failures”—a workflow captures that pattern and makes it repeatable.
Enforce specific processes: Workflows can encode your team’s practices. A TDD workflow that requires tests to fail before allowing implementation. A code review workflow that requires two agents to approve changes. A security audit that runs after every feature implementation.
Create specialized agents: Sometimes you need an agent with specific tools, prompts, or behaviors. A workflow can define a “documentation writer” or “security auditor” persona with appropriate constraints.
Build multi-agent coordination: When you need multiple agents working together—whether in debate, parallel competition, or sequential handoff—workflows provide the orchestration.
If you just need to run a single agent with a specific prompt, consider using presets instead. Workflows are for when you need control flow, loops, or multiple agents.
Workflow File Location
Reliant automatically discovers workflow files in your project’s .reliant/workflows/ directory.
your-project/
├── .reliant/
│ └── workflows/
│ ├── code-review.yaml
│ ├── security-audit.yaml
│ └── release-prep.yaml
└── src/
Naming convention: Use lowercase with hyphens (e.g., code-review.yaml). The filename becomes the workflow identifier.
Discovery: When you start Reliant, it scans for .yaml files in .reliant/workflows/. Changes require restarting Reliant or reloading workflows.
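If the directory doesn’t exist yet, you can create it with a standard shell command before adding your first workflow file:
mkdir -p .reliant/workflows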
Anatomy of a Workflow
A workflow file has five key sections. Here’s the minimal structure:
# 1. Metadata - identifies the workflow
name: my-workflow
version: v0.0.1
description: A simple custom workflow
status: published
tag: agent
# 2. Inputs - parameters the workflow accepts
inputs:
model:
type: model
default: ""
description: LLM model to use
# 3. Entry point - where execution starts
entry: main_step
# 4. Nodes - the actual steps
nodes:
- id: main_step
workflow: builtin://agent
args:
model: "{{inputs.model}}"
# 5. Edges - flow control (optional for single-step workflows)
# edges: []
Let’s walk through each section.
Metadata
The top of your workflow file contains identification metadata:
| Field | Required | Description |
|---|---|---|
| name | Yes | Unique identifier for the workflow |
| version | No | Semantic version (e.g., v0.0.1) |
| description | No | Human-readable description shown in UI |
| status | No | Visibility: draft, published, or internal |
| tag | No | Category for preset matching (typically agent) |
Use status: draft while developing—draft workflows don’t appear in the workflow picker but can still be tested directly.
Inputs
Inputs define what parameters your workflow accepts. Every input needs either a default value or required: true:
inputs:
model:
type: model
default: ""
description: LLM model to use
temperature:
type: number
default: 1.0
min: 0
max: 2
description: Response randomness
mode:
type: enum
enum: ["manual", "agent"]
default: "agent"
description: Execution mode
review_areas:
type: string
required: true
description: What aspects to review
Common input types: string, number, integer, boolean, enum, model, tools.
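Types not shown above follow the same pattern. For example, a boolean flag (a hypothetical input used for illustration) might look like:
inputs:
  require_approval:
    type: boolean
    default: false
    description: Pause for human review before applying changes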
For complete input type documentation, see the Workflow Schema Reference.
Entry Point
The entry field specifies which node starts execution:
entry: main_step
For parallel starts, use an array:
entry: [agent_1, agent_2, agent_3]
Nodes
Nodes are the execution units. Each node has an id and a type that determines what it does:
Workflow nodes run a child workflow:
- id: run_agent
workflow: builtin://agent
args:
model: "{{inputs.model}}"Action nodes execute built-in activities:
- id: save_result
action: SaveMessage
args:
thread: "{{thread.id}}"
role: assistant
content: "Analysis complete!"Run nodes execute shell commands:
- id: run_tests
run: npm test
Loop nodes repeat a sub-workflow:
- id: retry_loop
loop:
max: 5
until: outputs.exit_code == 0
inline:
# ... inline workflow definition
Edges
Edges define how execution flows between nodes. They’re only required when you have multiple nodes or need conditional routing:
edges:
- from: step_one
cases:
- to: step_two
label: next
For conditional routing:
edges:
- from: run_tests
cases:
- to: success_handler
condition: nodes.run_tests.exit_code == 0
label: passed
- to: failure_handler
condition: nodes.run_tests.exit_code != 0
label: failed
Building Your First Custom Workflow
Let’s build a practical workflow step by step: a code review workflow that analyzes code and provides structured feedback.
Step 1: Create the File
Create .reliant/workflows/code-review.yaml:
name: code-review
version: v0.0.1
description: Analyzes code and provides structured review feedback
status: draft
tag: agent
entry: review
Step 2: Define Inputs
Think about what the user should be able to configure:
inputs:
model:
type: model
default: ""
description: LLM model to use
focus_areas:
type: string
default: "code quality, potential bugs, security concerns, performance"
description: What aspects of the code to review
Step 3: Add the Review Node
The simplest approach uses the built-in agent workflow with a custom system prompt:
nodes:
- id: review
workflow: builtin://agent
thread:
mode: inherit
args:
model: "{{inputs.model}}"
mode: agent
system_prompt: |
You are a senior code reviewer. Analyze the code thoroughly and provide
actionable feedback.
Focus on: {{inputs.focus_areas}}
Structure your review as:
1. **Summary**: Brief overview of what the code does
2. **Strengths**: What's done well
3. **Issues**: Problems found (with severity: Critical/Major/Minor)
4. **Suggestions**: Recommended improvements
Be specific. Reference exact line numbers and code snippets.
Explain *why* something is an issue, not just *what* is wrong.
Step 4: The Complete Workflow
Here’s the full workflow file:
name: code-review
version: v0.0.1
description: Analyzes code and provides structured review feedback
status: published
tag: agent
entry: review
inputs:
model:
type: model
default: ""
description: LLM model to use
focus_areas:
type: string
default: "code quality, potential bugs, security concerns, performance"
description: What aspects of the code to review
nodes:
- id: review
workflow: builtin://agent
thread:
mode: inherit
args:
model: "{{inputs.model}}"
mode: agent
system_prompt: |
You are a senior code reviewer. Analyze the code thoroughly and provide
actionable feedback.
Focus on: {{inputs.focus_areas}}
Structure your review as:
1. **Summary**: Brief overview of what the code does
2. **Strengths**: What's done well
3. **Issues**: Problems found (with severity: Critical/Major/Minor)
4. **Suggestions**: Recommended improvements
Be specific. Reference exact line numbers and code snippets.
Explain *why* something is an issue, not just *what* is wrong.
Step 5: Test It
Run your workflow to test it. Start a chat and select your workflow, or use the CLI:
reliant run --workflow code-review
Change status: draft to status: published once you’re satisfied with the behavior.
Adding Loops
Loops let a workflow repeat until a condition is met. This is essential for patterns like “keep trying until tests pass” or “iterate until the agent has no more tool calls.”
When to Use Loops
Use loops when you need:
- Retry logic: Run tests, if they fail have the agent fix issues, repeat until tests pass
- Agent cycles: Continue calling the LLM until it stops requesting tool calls
- Iterative refinement: Keep improving output until quality threshold is met
Loop Configuration
A loop node wraps a sub-workflow that runs repeatedly:
- id: fix_tests
loop:
max: 5 # Maximum iterations
until: outputs.exit_code == 0 # Exit condition
inline:
# The sub-workflow definition
entry: attempt_fix
inputs:
# Sub-workflow inputs
outputs:
exit_code: "{{nodes.run_tests.exit_code}}"
nodes:
- id: attempt_fix
workflow: builtin://agent
# ...
- id: run_tests
run: npm test
| Field | Required | Description |
|---|---|---|
| max | Yes | Maximum iterations before giving up |
| until | No | CEL expression using outputs.* to exit early |
| inline | Yes* | Inline sub-workflow definition |
| workflow | Yes* | External workflow reference (alternative to inline) |
*One of inline or workflow is required.
Accessing Loop Context
Inside loops, you have access to the iter.* namespace:
| Variable | Description |
|---|---|
| iter.iteration | Current iteration (0-indexed) |
| iter.max | Maximum iterations configured |
| iter.previous | Outputs from the previous iteration |
This is useful for adjusting behavior based on iteration:
thread:
inject:
role: user
content: |
{{iter.iteration == 0 ?
'Please implement this feature: ' + trigger.message.content :
'Tests are still failing. Previous error:\n' + iter.previous.stderr + '\n\nPlease fix.'}}
Loop Outputs
After a loop completes, you can access:
| Output | Description |
|---|---|
| nodes.<loop_id>.iterations | Number of iterations that ran |
| nodes.<loop_id>.succeeded | true if exited via until, false if hit max |
| nodes.<loop_id>.<output> | Any outputs from the final iteration |
Example: Fix Until Tests Pass
Here’s a workflow that keeps trying to fix test failures:
name: fix-tests
version: v0.0.1
description: Attempts to fix failing tests
status: published
tag: agent
entry: fix_loop
inputs:
model:
type: model
default: ""
max_attempts:
type: integer
default: 5
min: 1
max: 10
description: Maximum fix attempts
nodes:
- id: fix_loop
loop:
max: "{{inputs.max_attempts}}"
until: outputs.exit_code == 0
inline:
entry: fix_code
inputs:
model:
type: model
default: ""
outputs:
exit_code: "{{nodes.run_tests.exit_code}}"
stderr: "{{nodes.run_tests.stderr}}"
nodes:
- id: fix_code
workflow: builtin://agent
thread:
mode: inherit
inject:
role: user
content: |
{{iter.iteration == 0 ?
'Run the tests and fix any failures.' :
'Tests still failing:\n```\n' + iter.previous.stderr + '\n```\nPlease fix the issues.'}}
args:
model: "{{inputs.model}}"
mode: agent
- id: run_tests
run: npm test
edges:
- from: fix_code
cases:
- to: run_tests
label: verify
thread:
mode: inherit
args:
model: "{{inputs.model}}"
# Announce result
- id: report
action: SaveMessage
args:
thread: "{{thread.id}}"
role: assistant
content: |
{{nodes.fix_loop.succeeded ?
'✅ Tests passing after ' + string(nodes.fix_loop.iterations) + ' attempt(s)!' :
'❌ Could not fix tests after ' + string(nodes.fix_loop.iterations) + ' attempts.'}}
edges:
- from: fix_loop
cases:
- to: report
label: done
Conditional Routing
Edges can include conditions to route execution based on step outputs. This enables workflows that handle success and failure differently.
Basic Conditional Edges
Use CEL expressions in the condition field:
edges:
- from: run_tests
cases:
- to: celebrate
condition: nodes.run_tests.exit_code == 0
label: success
- to: debug
condition: nodes.run_tests.exit_code != 0
label: failure
Cases are evaluated in order—the first matching condition wins. A case without a condition acts as a default fallback.
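For example, a final case with no condition catches anything the earlier cases didn’t match (a minimal sketch; node names are illustrative):
edges:
- from: run_tests
  cases:
  - to: celebrate
    condition: nodes.run_tests.exit_code == 0
    label: passed
  - to: investigate
    label: fallback   # no condition, so this matches whenever the case above doesn't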
Available Context in Conditions
Edge conditions can access:
| Namespace | Description |
|---|---|
| inputs.* | Workflow input values |
| nodes.<id>.* | Outputs from completed nodes |
| workflow.* | Workflow metadata |
# Check node output
condition: nodes.call_llm.tool_calls != null && size(nodes.call_llm.tool_calls) > 0
# Check input value
condition: inputs.mode == 'agent'
# Combine conditions
condition: nodes.verify.exit_code == 0 && inputs.require_approval == true
Example: Different Handling for Pass/Fail
name: test-and-report
version: v0.0.1
description: Runs tests and reports results differently based on outcome
status: published
tag: agent
entry: run_tests
nodes:
- id: run_tests
run: npm test
- id: success_report
action: SaveMessage
inputs:
thread: "{{thread.id}}"
role: assistant
content: |
✅ All tests passing!
```
{{nodes.run_tests.stdout}}
```
- id: failure_analysis
workflow: builtin://agent
thread:
mode: inherit
inject:
role: user
content: |
Tests failed. Analyze the failures and suggest fixes:
```
{{nodes.run_tests.stderr}}
```
inputs:
mode: agent
system_prompt: |
You are a debugging assistant. Analyze test failures and provide
specific, actionable fixes. Reference exact error messages and
suggest code changes.
edges:
- from: run_tests
cases:
- to: success_report
condition: nodes.run_tests.exit_code == 0
label: passed
- to: failure_analysis
condition: nodes.run_tests.exit_code != 0
label: failed
Multi-Agent Workflows
Complex tasks often benefit from multiple agents with different roles. Reliant supports several multi-agent patterns.
Using Groups for Agent Configuration
Groups let you organize inputs for different agents in your workflow:
groups:
Reviewer:
tag: agent
description: Settings for the code reviewer agent
inputs:
model:
type: model
default: ""
system_prompt:
type: string
default: |
You are a thorough code reviewer. Focus on correctness and maintainability.
Fixer:
tag: agent
description: Settings for the agent that fixes issues
inputs:
model:
type: model
default: ""
system_prompt:
type: string
default: |
You fix code issues identified by reviewers. Make minimal changes.
Access group inputs with the inputs.GroupName.field syntax:
- id: review
workflow: builtin://agent
inputs:
model: "{{inputs.Reviewer.model}}"
system_prompt: "{{inputs.Reviewer.system_prompt}}"Groups appear as expandable sections in the workflow configuration UI, making it easy for users to customize each agent’s behavior.
Thread Modes for Coordination
Thread configuration controls how agents share context:
| Mode | Description | Use Case |
|---|---|---|
| inherit | Use parent’s thread | Agents that should see each other’s work |
| new() | Create isolated thread | Independent parallel agents |
| fork | Copy parent thread at start | Agents that need initial context but work independently |
# Shared context - critic sees proposer's work
- id: critic
workflow: builtin://agent
thread:
mode: inherit
# Isolated - parallel agents don't interfere
- id: racer_1
workflow: builtin://agent
thread:
mode: new()
key: racer_1
# Forked - starts with context, then independent
- id: alternative
workflow: builtin://agent
thread:
mode: fork
Message Injection
Use thread.inject to add context when an agent starts:
- id: critic
workflow: builtin://agent
thread:
mode: inherit
inject:
role: user
content: |
Now act as a devil's advocate. Challenge the above plan:
- What could go wrong?
- What assumptions might be incorrect?
Example: Two-Agent Review
Here’s a workflow where one agent reviews code and another validates the review:
name: double-review
version: v0.0.1
description: Code review with validation
status: published
tag: agent
entry: initial_review
inputs:
model:
type: model
default: ""
groups:
Reviewer:
tag: agent
description: Primary code reviewer
inputs:
system_prompt:
type: string
default: |
You are a senior code reviewer. Analyze code for bugs, security issues,
and maintainability concerns. Provide specific, actionable feedback.
Validator:
tag: agent
description: Validates the review quality
inputs:
system_prompt:
type: string
default: |
You validate code reviews. Check that the reviewer:
- Identified real issues (not false positives)
- Provided actionable suggestions
- Didn't miss obvious problems
Provide a brief assessment and any additional issues missed.
nodes:
- id: initial_review
workflow: builtin://agent
thread:
mode: inherit
inputs:
model: "{{inputs.model}}"
mode: agent
system_prompt: "{{inputs.Reviewer.system_prompt}}"
- id: validate_review
workflow: builtin://agent
thread:
mode: inherit
inject:
role: user
content: |
Please validate the above code review. Is it thorough and accurate?
inputs:
model: "{{inputs.model}}"
mode: agent
system_prompt: "{{inputs.Validator.system_prompt}}"
edges:
- from: initial_review
cases:
- to: validate_review
label: validate
For more multi-agent patterns including parallel execution, debate, and auditing, see Multi-Agent Patterns.
Testing Workflows
Running Your Workflow
Test a workflow by selecting it in the Reliant UI or using the CLI:
# Run interactively
reliant run --workflow my-workflow
# With specific inputs
reliant run --workflow my-workflow --input model=claude-4-sonnet
Validation Errors
Reliant validates workflows on load. Common errors:
“input X has no default and is not required”: Every input must have either default or required: true.
“node X not found”: An edge references a non-existent node ID. Check spelling.
“entry node X not found”: The entry field references a node that doesn’t exist.
“loop must have either ‘workflow’ or ‘inline’”: Loop nodes need either an external workflow reference or an inline definition.
Common Mistakes
Forgetting thread configuration: If agents don’t seem to see each other’s work, check that each agent node’s thread mode is inherit (not new()).
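A quick way to check (a sketch with illustrative node IDs):
- id: proposer
  workflow: builtin://agent
  thread:
    mode: inherit
- id: critic
  workflow: builtin://agent
  thread:
    mode: inherit   # both agents share the parent thread, so the critic sees the proposer's messages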
Wrong CEL syntax: Template expressions use {{}} for interpolation. Edge conditions are bare CEL without the braces.
# Input value - uses template syntax
model: "{{inputs.model}}"
# Edge condition - bare CEL
condition: nodes.test.exit_code == 0
Missing outputs in loops: The until condition uses outputs.*. Make sure your loop’s inline workflow defines the outputs you’re checking.
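In other words, the name referenced in until must be declared in the inline workflow’s outputs block, as in this minimal sketch:
- id: retry_loop
  loop:
    max: 3
    until: outputs.exit_code == 0      # refers to the output declared below
    inline:
      entry: run_tests
      outputs:
        exit_code: "{{nodes.run_tests.exit_code}}"
      nodes:
      - id: run_tests
        run: npm test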
Step reference timing: You can only reference a step’s outputs after that step has completed. Edge conditions can only use steps that are upstream from the current node.
Config-as-Code
Version Control
Workflow files are meant to be checked into git alongside your code:
your-project/
├── .reliant/
│ └── workflows/
│ ├── code-review.yaml
│ ├── deploy-check.yaml
│ └── security-audit.yaml
├── src/
└── .gitignore
This gives you:
- History: Track who changed what and when
- Review: Workflow changes go through code review
- Consistency: Everyone on the team uses the same workflows
- Rollback: Easily revert problematic workflow changes
Team Sharing
When workflows are in your repository:
- Team members get workflows automatically when they clone/pull
- Workflow changes can be reviewed alongside code changes
- Branch-specific workflows are possible (feature branches can experiment)
Workflow Versioning
Use the version field to track workflow iterations:
name: code-review
version: v1.2.0
Follow semantic versioning:
- Patch (v1.0.1): Bug fixes, prompt tweaks
- Minor (v1.1.0): New optional inputs, additional steps
- Major (v2.0.0): Breaking changes to inputs or behavior
Next Steps
Now that you understand workflow fundamentals:
- Multi-Agent Patterns: Learn advanced orchestration patterns like debate, parallel competition, and auditing
- Presets: Create reusable parameter bundles for your workflows
- Workflow Schema Reference: Complete field-by-field documentation
- CEL Expressions Reference: Master the expression language for dynamic values
Start simple—a single-node workflow with a custom system prompt—and add complexity as needed. The best workflows solve real problems you encounter repeatedly.