FAQ
Quick answers to workflow and feature questions. For getting started, see Quick Start.
Workflow Control
How do I pause a running workflow?
Click the pause button in the chat header or press Cmd+P. The workflow stops at its current point with all state preserved. Send a new message or click resume to continue.
Can I change parameters while a workflow is running?
Yes. Open the parameters panel (click the gear icon in the chat header) and change values. Changes are signaled to the running workflow and take effect on the next iteration. Common updates: switching mode from agent to manual, adjusting temperature, or changing the model.
What happens when a workflow fails?
Reliant automatically pauses the workflow instead of failing completely. You’ll see what went wrong in the chat. Fix the underlying issue (permissions, network, etc.) and click resume to retry the failed step.
Threads and Branching
How do I branch a conversation?
Click the branch icon on any message. A new chat is created with all messages up to that point. The branched chat becomes your new conversation—the original stays unchanged.
Can I switch workflows after branching?
Yes, but only before you send the first message. After branching, select a different workflow from the workflow selector, then send your message to start. Once the first message is sent, the workflow is locked.
How do I message a specific sub-agent thread?
Pause the workflow first, then select the target thread from the thread selector and send your message. The workflow resumes with your message added to that thread. You can only message sub-threads while paused—this prevents conflicts with agents that are actively writing to the thread.
What’s the difference between inherit, new(), and fork?
These are thread modes that control how child workflows interact with conversation history:
- inherit: Use the parent’s thread. Messages appear in the same conversation.
- new(): Start with an empty thread. The sub-workflow has no history.
- fork: Copy parent’s messages to a new thread. Sub-workflow sees history but writes separately.
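As an illustrative sketch only (the field names and node layout are assumptions, not Reliant's documented schema), the three modes might appear in a workflow like this:

```yaml
# Hypothetical sketch: exact keys depend on your workflow schema.
# Each child node chooses how it receives conversation history.
nodes:
  reviewer:
    workflow: builtin://agent
    thread: inherit   # shares the parent's thread; messages interleave
  researcher:
    workflow: builtin://agent
    thread: new()     # starts with an empty thread; no parent history
  summarizer:
    workflow: builtin://agent
    thread: fork      # copies parent history, then writes separately
```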
Response Tools
What are response tools?
Response tools are synthetic tools that force the LLM to provide structured output. Instead of free-form text, the LLM “calls” a response tool with specific parameters you define.
response_tools:
  - name: review_result
    description: Report code review findings
    parameters:
      type: object
      properties:
        approved:
          type: boolean
        issues:
          type: array
          items:
            type: string
      required: [approved]

When should I use response tools?
Use them when you need to branch on structured data. For example:
- Code review that needs a pass/fail decision
- Validation that reports specific issues
- Classification that picks from fixed options
The workflow can then use responseData() to extract the structured data and branch accordingly.
How do I access response tool data?
Execute the tool calls from the LLM response, then use the responseData() CEL function:
condition: responseData(nodes.execute.tool_results, 'review_result').approved == true

Workflows and Presets
What’s the difference between a workflow and a preset?
A workflow defines the execution pattern—what steps run, in what order, with what logic. Workflows are YAML files that control agent loops, multi-agent coordination, and branching.
A preset is a bundle of parameter values that configures a workflow. Presets set things like model, temperature, tools, and system prompt. Select a preset to quickly configure a workflow for a specific task.
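For intuition, a preset could look something like the sketch below. The keys shown are illustrative assumptions based on the parameters named above (model, temperature, tools, system prompt), not Reliant's actual preset format:

```yaml
# Hypothetical preset file: key names are illustrative assumptions.
name: code-reviewer
parameters:
  model: claude-sonnet
  temperature: 0.2
  tools: [read_file, grep]
  system_prompt: |
    You are a meticulous code reviewer.
    Report findings using the review_result response tool.
```

Selecting this preset would apply all of these values to the chosen workflow at once, instead of setting each parameter by hand.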
Can I use the spawn tool with any workflow?
Currently, spawn only supports builtin://agent as the target workflow. You select which preset configures the spawned agent. This limitation keeps spawn predictable—the parent knows the child follows the standard agent pattern.
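A hedged sketch of what a spawn invocation could look like. The argument names here are assumptions for illustration; the only constraint confirmed above is that the target workflow must be builtin://agent:

```yaml
# Illustrative only: argument names are assumed, not documented.
tool: spawn
arguments:
  workflow: builtin://agent   # currently the only supported target
  preset: code-reviewer       # preset that configures the child agent
  message: Review the changes in the parser module
```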
Why can’t I change workflows mid-conversation?
Workflows define the execution structure—changing mid-run would be like swapping the engine while driving. If you need a different approach, branch from an earlier point and select the new workflow before sending your first message.
Data and Privacy
Where is my data stored?
All data stays on your machine:
- Conversations: ~/Library/Application Support/reliant (macOS) or ~/.config/reliant (Linux)
- API keys: System keychain (secure storage)
- No cloud sync, no telemetry
Only your messages and relevant code context go to your AI provider’s API.
Are my conversations sent to Reliant?
No. Reliant Labs has no servers receiving your data. Everything runs locally. The only external communication is with your chosen AI provider (Anthropic, OpenAI, etc.) for LLM inference.
Troubleshooting
The workflow seems stuck
- Check if it’s waiting for approval (manual mode)
- Check if it’s paused (look for pause indicator)
- Try canceling (Cmd+.) and starting fresh
- Check the console for errors (Cmd+Option+I)
AI responses are slow
- Check your API provider’s status page
- Try a different model (smaller models are faster)
- Check that you haven’t exceeded your provider’s rate limits
- Ensure a stable internet connection
More help: Troubleshooting • support@reliantlabs.com