Activities Reference

Activities are the building blocks of Reliant workflows. Each activity is a strongly-typed operation that performs a specific task—calling an LLM, executing tools, saving messages, or managing git operations.

This reference documents every activity’s inputs, outputs, and behavior.


CallLLM

Sends a prompt to a language model and streams back a response. This is the primary activity for interacting with LLMs in workflows.

The activity:

  • Loads conversation history from the specified thread
  • Applies system prompts and tool configurations
  • Streams the response in real-time
  • Returns tool calls if the model requests them

Inputs

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| thread | string | Yes | - | Thread path for conversation context. Must be a valid thread identifier. |
| model | string | No | From workflow | Model ID to use (e.g., claude-4-sonnet, gpt-5). |
| temperature | float | No | Model default | Sampling temperature (0.0-1.0). Lower values produce more deterministic output. |
| max_tokens | int | No | Model default | Maximum tokens in the response. |
| thinking_level | string | No | - | Extended thinking effort: low, medium, or high. Only supported by Claude models. |
| tools | bool | No | true | Whether to enable tool use. Set to false for pure text generation. |
| tool_filter | string[] | No | All tools | List of tool names to enable. Empty means all available tools. |
| system_prompt | string | No | - | Override the default system prompt. Used for ad-hoc LLM calls. |
| messages | object[] | No | - | Ephemeral messages to append to the conversation (not persisted). See Message Injection. |
| response_tools | object[] | No | - | Custom response tools for structured output. |
| context_sequence | int | No | Auto | Explicit context sequence (typically set after compaction). |

Outputs

| Field | Type | Description |
| --- | --- | --- |
| message.role | string | Always "assistant" |
| message.text | string | The text content of the response |
| response_text | string | Same as message.text (for convenience) |
| tool_calls | ToolCall[] | Array of tool calls requested by the model |
| input_tokens | int | Number of input tokens consumed |
| output_tokens | int | Number of output tokens generated |
| cache_creation_tokens | int | Tokens written to prompt cache |
| cache_read_tokens | int | Tokens read from prompt cache |
| thinking | object | Extended thinking content and signature (if thinking_level was set) |

ToolCall Structure

id: "toolu_01abc..."      # Unique identifier for this tool call
name: "bash"              # Name of the tool to execute
input: "{\"command\":...}" # JSON-encoded tool arguments

Example

- id: ask_llm
  action: CallLLM
  inputs:
    thread: "{{thread}}"
    model: "claude-4-sonnet"
    temperature: 0.7
    tools: true

Example with Extended Thinking

- id: deep_reasoning
  action: CallLLM
  inputs:
    thread: "{{thread}}"
    model: "claude-4-opus"
    thinking_level: high
    tools: false

Message Injection

The messages field allows injecting ephemeral messages into the conversation without persisting them to the database. This is useful for ad-hoc LLM calls like filtering or summarization.

- id: filter_results
  action: CallLLM
  inputs:
    thread: "{{thread}}"
    system_prompt: "You are a result filter. Return only relevant items."
    messages:
      - role: user
        content: "Filter these results: {{results}}"

SaveMessage

Saves a message to the conversation thread. This activity handles all message types: user input, assistant responses, tool results, and system messages.

The activity:

  • Validates message structure based on role
  • Assigns ordinal position within the thread
  • Tracks token counts for context management
  • Creates content blocks (text, images, tool calls, etc.)

Inputs

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| thread | string | Yes | - | Thread path to save the message to |
| role | string | Yes | - | Message role: user, assistant, tool, or system |
| content | string | Conditional | - | Text content. Required for user messages (unless attachments provided). |
| attachments | string[] | No | - | Array of attachment IDs for images or files |
| tool_calls | ToolCall[] | No | - | Tool calls from an assistant message |
| tool_results | ToolResult[] | Conditional | - | Tool results. Required for tool messages. |
| input_tokens | int | No | 0 | Input tokens (for assistant messages) |
| output_tokens | int | No | 0 | Output tokens (for assistant messages) |
| cache_creation_tokens | int | No | 0 | Cache write tokens |
| cache_read_tokens | int | No | 0 | Cache read tokens |
| context_sequence | int | No | Auto | Explicit context sequence (set after compaction) |
| thinking | object | No | - | Extended thinking content to save |

Outputs

| Field | Type | Description |
| --- | --- | --- |
| message.role | string | The saved message’s role |
| message.text | string | The saved message’s text content |
| message_id | string | Unique identifier of the created message |
| thread | string | Thread the message was saved to (for chaining) |
| tool_calls | ToolCall[] | Pass-through of tool calls (for routing) |
| tool_results | ToolResult[] | Pass-through of tool results (for routing) |
| thread_token_count | int | Total tokens in the thread (for compaction decisions) |
| message_count | int | Total messages in the thread |
| context_sequence | int | Current context sequence number |

Example: Save User Message

- id: save_user_input
  action: SaveMessage
  inputs:
    thread: "{{thread}}"
    role: user
    content: "{{user_prompt}}"

Example: Save Assistant Response

- id: save_response
  action: SaveMessage
  inputs:
    thread: "{{thread}}"
    role: assistant
    content: "{{nodes.ask_llm.outputs.response_text}}"
    tool_calls: "{{nodes.ask_llm.outputs.tool_calls}}"
    input_tokens: "{{nodes.ask_llm.outputs.input_tokens}}"
    output_tokens: "{{nodes.ask_llm.outputs.output_tokens}}"

Example: Save Tool Results

- id: save_tool_results
  action: SaveMessage
  inputs:
    thread: "{{thread}}"
    role: tool
    tool_results: "{{nodes.execute_tools.outputs.tool_results}}"
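The three examples above are pieces of a single agent turn: call the model, persist its reply, execute any requested tools, then persist the results. A sketch of one full turn, assembled only from inputs and outputs documented in this reference:

```yaml
# One agent turn: ask → save reply → run tools → save results.
- id: ask_llm
  action: CallLLM
  inputs:
    thread: "{{thread}}"

- id: save_response
  action: SaveMessage
  inputs:
    thread: "{{thread}}"
    role: assistant
    content: "{{nodes.ask_llm.outputs.response_text}}"
    tool_calls: "{{nodes.ask_llm.outputs.tool_calls}}"

- id: run_tools
  action: ExecuteTools
  inputs:
    thread: "{{thread}}"
    tool_calls: "{{nodes.ask_llm.outputs.tool_calls}}"

- id: save_tool_results
  action: SaveMessage
  inputs:
    thread: "{{thread}}"
    role: tool
    tool_results: "{{nodes.run_tools.outputs.tool_results}}"
```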

ExecuteTools

Executes tool calls requested by the LLM. Tools run in parallel (up to 10 concurrent executions) for performance.

The activity:

  • Validates each tool call’s JSON input
  • Loads project context (working directory, worktree path)
  • Executes tools with proper timeout handling
  • Emits status updates for UI feedback
  • Returns structured results for saving to the thread

Inputs

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| thread | string | Yes | - | Thread context for tool execution |
| tool_calls | ToolCall[] | Yes | - | Array of tool calls to execute |

Outputs

| Field | Type | Description |
| --- | --- | --- |
| message.role | string | Always "tool" |
| message.text | string | Summary of tool results |
| tool_results | ToolResult[] | Array of execution results |
| thread_token_count | int | Current token count (for compaction decisions) |
| total_result_chars | int | Total characters across all results |

ToolResult Structure

tool_call_id: "toolu_01abc..."  # Matches the ToolCall.id
name: "bash"                     # Tool that was executed
content: "output here..."        # Tool output (stdout for bash, etc.)
metadata: "{...}"                # Optional JSON metadata
is_error: false                  # Whether execution failed

Example

- id: run_tools
  action: ExecuteTools
  inputs:
    thread: "{{thread}}"
    tool_calls: "{{nodes.ask_llm.outputs.tool_calls}}"

Parallelism

ExecuteTools runs up to 10 tool calls concurrently. This is capped to prevent resource exhaustion from runaway LLM responses. Each tool execution:

  • Has independent error handling (one failure doesn’t stop others)
  • Recovers from panics gracefully
  • Respects context cancellation

Approval

Pauses the workflow and waits for user approval before continuing. Creates an approval record visible in the UI and polls for resolution.

The activity:

  • Creates a persistent approval record in the database
  • Updates chat state to “needs attention” (triggers UI notification)
  • Polls for user response (approve/deny)
  • Supports configurable timeout

Inputs

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| title | string | Yes | - | Approval prompt title shown in UI |
| description | string | No | - | Detailed description of what’s being approved |
| timeout | string | No | "1h" | Duration to wait before timing out (e.g., "30m", "2h") |
| actions | object[] | No | - | Custom action buttons (type: approve/deny/modify, label) |
| metadata | object | No | - | Additional context stored with the approval |

Outputs

| Field | Type | Description |
| --- | --- | --- |
| approval_id | string | Unique identifier for this approval |
| status | string | Resolution status: approved, denied, or timeout |
| data | object | Approval data including title, description, and any denial reason |

Example

- id: confirm_deploy
  action: Approval
  inputs:
    title: "Deploy to Production"
    description: "This will deploy version {{version}} to production servers."
    timeout: "30m"

Example with Custom Actions

- id: review_changes
  action: Approval
  inputs:
    title: "Review Code Changes"
    description: "{{changes_summary}}"
    actions:
      - type: approve
        label: "Approve & Merge"
      - type: deny
        label: "Request Changes"
      - type: modify
        label: "Edit Before Merge"

Conditional Branching

Use the approval status to branch the workflow:

- id: check_approval
  if: "{{nodes.confirm_deploy.outputs.status == 'approved'}}"
  action: RunStep
  inputs:
    command: "deploy.sh"
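The other resolution statuses can be handled the same way. A sketch that records a denial or timeout back to the thread, assuming the template syntax supports != alongside the == shown above:

```yaml
# Runs when the approval resolved to denied or timeout.
- id: handle_rejection
  if: "{{nodes.confirm_deploy.outputs.status != 'approved'}}"
  action: SaveMessage
  inputs:
    thread: "{{thread}}"
    role: system
    content: "Deployment was not approved (status: {{nodes.confirm_deploy.outputs.status}})."
```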

Compact

Compresses conversation context when approaching token limits. Generates an LLM-powered summary of the conversation history and starts a new context sequence.

The activity:

  • Loads all messages in the current context sequence
  • Generates a comprehensive summary using a capable model
  • Creates a system message with the summary in a new context sequence
  • Preserves important details: files modified, errors encountered, pending tasks

Inputs

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| thread | string | Yes | - | Thread to compact |
| session_id | string | No | - | Session identifier (preserved through compaction) |
| triggering_message | string | No | - | Message ID that triggered compaction (moved to new context) |

Outputs

| Field | Type | Description |
| --- | --- | --- |
| message.role | string | Always "system" |
| message.text | string | Generated conversation summary |
| new_session_id | string | Session ID (unchanged) |
| thread | string | Thread path (unchanged) |
| context_sequence | int | New context sequence number (incremented) |

Example

- id: compress_context
  action: Compact
  inputs:
    thread: "{{thread}}"
    session_id: "{{session_id}}"

Summary Structure

The generated summary follows a structured format:

  1. Summary - High-level overview
  2. Key Points - Important concepts and decisions
  3. Errors and Fixes - Problems encountered and solutions
  4. Problem Solving - Ongoing troubleshooting
  5. User Messages - All user requests (preserved verbatim)
  6. Pending Tasks - Work remaining
  7. Current Work - What was being done
  8. Next Step - Suggested continuation

Triggering Compaction

Compaction is typically triggered when thread_token_count exceeds a threshold (default: 160,000 tokens, which is 80% of a 200k context window):

- id: check_context_size
  if: "{{nodes.save_response.outputs.thread_token_count > 160000}}"
  action: Compact
  inputs:
    thread: "{{thread}}"
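After compaction, the incremented sequence can be fed to the next model call via the context_sequence input documented for CallLLM. A sketch, wiring the output of the compaction node above:

```yaml
# Continue the conversation in the new (compacted) context sequence.
- id: continue_conversation
  action: CallLLM
  inputs:
    thread: "{{thread}}"
    context_sequence: "{{nodes.check_context_size.outputs.context_sequence}}"
```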

RunStep (ExecuteRunStep)

Executes a shell command in the project’s working directory. This is the activity behind the run: shorthand in workflows.

The activity:

  • Resolves working directory from chat → project → worktree
  • Executes commands in a stateless shell (no environment persistence)
  • Captures stdout, stderr, and exit code
  • Supports timeout and interrupt handling

Inputs

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| command | string | Yes | - | Shell command to execute |
| timeout | int | No | 300000 | Timeout in milliseconds (300000 = 5 minutes) |

Outputs

| Field | Type | Description |
| --- | --- | --- |
| stdout | string | Standard output from the command |
| stderr | string | Standard error from the command |
| output | string | Combined stdout + stderr |
| exit_code | int | Process exit code (0 = success) |
| interrupted | bool | Whether execution was interrupted (timeout/cancel) |
| duration | int | Execution time in milliseconds |
| working_dir | string | Directory where command executed |
| worktree_id | string | Worktree ID if used |
| worktree_path | string | Worktree path if used |

Example: Using Action

- id: run_tests
  action: ExecuteRunStep
  inputs:
    command: "npm test"
    timeout: 600000  # 10 minutes

Example: Using Shorthand

The run: shorthand is more concise for simple commands:

- id: run_tests
  run: npm test

Shell Behavior

  • Commands run in the user’s default shell ($SHELL or /bin/bash)
  • For zsh: sources ~/.zshrc before execution
  • For bash: uses login shell (-l flag)
  • Environment variables do NOT persist between steps
  • Working directory is always the project root or worktree path
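Because the shell is stateless, anything that depends on an environment variable must set and use it within a single command. A sketch of the difference (npm commands are illustrative):

```yaml
# Works: the variable is set and used in one command.
- id: build
  run: export NODE_ENV=production && npm run build

# Does NOT work as intended: NODE_ENV is gone by the second step.
- id: set_env
  run: export NODE_ENV=production
- id: build_without_env
  run: npm run build
```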

Idempotency

Shell commands are NOT re-executed on activity retry. If Temporal retries the activity (due to worker failure), it returns an error rather than re-running the command. This prevents unintended side effects from duplicate execution.


CreateWorktree

Creates a new git worktree for isolated development. Worktrees allow parallel work on the same repository without branch switching.

Inputs

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| name | string | Yes | - | Name for the worktree (used in path) |
| branch | string | No | Auto-generated | Branch name for the worktree |
| base_branch | string | No | Default branch | Branch to base the new worktree on |
| copy_files | string[] | No | - | Files to copy from source repo (e.g., .env) |
| force | bool | No | false | Recreate if worktree already exists |
| session_id | string | No | Chat ID | Session to associate with worktree |

Outputs

| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique worktree identifier |
| name | string | Worktree name |
| path | string | Absolute filesystem path |
| branch | string | Git branch name |
| base_branch | string | Branch it was created from |
| repo_id | string | Repository identifier |
| status | string | Worktree status |

Example

- id: create_feature_worktree
  action: CreateWorktree
  inputs:
    name: "feature-auth"
    base_branch: "main"
    copy_files:
      - ".env"
      - ".env.local"

File Copying

The copy_files parameter searches recursively for matching files. Directory structure is preserved:

  • Source: frontend/.env → Worktree: <worktree_path>/frontend/.env

GitCommit

Commits staged changes to git with a commit message.

Inputs

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| message | string | Yes | - | Commit message |
| files | string[] | No | All changes | Specific files to commit (defaults to git add -A) |

Outputs

| Field | Type | Description |
| --- | --- | --- |
| commit_hash | string | SHA of the created commit |
| success | bool | Whether commit succeeded |
| error | string | Error message if failed |

Example

- id: commit_changes
  action: GitCommit
  inputs:
    message: "feat: add user authentication"

Example: Specific Files

- id: commit_config
  action: GitCommit
  inputs:
    message: "chore: update configuration"
    files:
      - "config/settings.yaml"
      - "config/defaults.json"

Behavior

  • Automatically stages files before committing
  • Returns success: true with empty commit_hash if “nothing to commit”
  • Does NOT re-execute on retry (idempotency protection)

Common Types

ToolCall

Represents a tool invocation request from the LLM.

id: string          # Unique identifier (e.g., "toolu_01abc123...")
name: string        # Tool name (e.g., "bash", "edit", "grep")
input: string       # JSON-encoded arguments

ToolResult

Represents the result of executing a tool.

tool_call_id: string  # Matches the originating ToolCall.id
name: string          # Tool that was executed
content: string       # Output content
metadata: string      # Optional JSON metadata
is_error: bool        # True if execution failed

MessageOutput

Standardized message format returned by message-producing activities.

role: string   # "user", "assistant", "tool", or "system"
text: string   # Message content

Error Handling

Activities handle errors in two ways:

Activity Errors

If an activity returns an error, Temporal may retry the activity (depending on retry policy). Some activities like RunStep and GitCommit refuse to re-execute on retry for safety.

Soft Errors

Some activities return success with error information in the output:

  • GitCommit returns success: false with error field
  • ExecuteTools returns ToolResult.is_error: true for failed tools

Example: Handling Soft Errors

- id: commit
  action: GitCommit
  inputs:
    message: "Update"

- id: handle_failure
  if: "{{!nodes.commit.outputs.success}}"
  action: SaveMessage
  inputs:
    thread: "{{thread}}"
    role: system
    content: "Commit failed: {{nodes.commit.outputs.error}}"

Activity Categories

Activities are grouped by category for organization:

| Category | Activities |
| --- | --- |
| Message Processing | CallLLM, SaveMessage, Compact |
| Tool Execution | ExecuteTools |
| Run Step | ExecuteRunStep |
| Approval | Approval |
| Git | GitCommit, CreateWorktree |

See Also