This reference lists all available tools organized by category.

Tool Tags

Tools are organized by tags for filtering:
  • tag:readonly - Read-only tools (safe for planning mode)
  • tag:file - File operations
  • tag:search - Search operations
  • tag:execution - Command execution
  • tag:web - Web operations
  • tag:planning - Planning and task management tools
  • tag:analysis - Analysis tools
  • tag:workflow - Workflow builder tools
  • tag:mcp - All MCP tools
  • tag:default - Default toolset (commonly used tools)

Categories


Planning & Task Management

Tools for creating plans, managing tasks, and tracking progress.
  • add_dependency (planning, plan, default) - Create a dependency between two tasks in the current plan.
  • add_task (planning, plan, default) - Add a new task to the current plan. This is your primary tool for dynamic planning and sub-planning.
  • bash_list (execution, readonly, plan, default) - Lists background processes in the current workspace.
  • bash_output (execution, readonly, plan, default) - Retrieves output from a background process with pagination and regex filtering support.
  • create_plan (planning, plan, default) - Create a comprehensive plan with tasks for implementing a feature or solving a problem.
  • create_subtask (planning, plan) - Create a subtask under an existing task.
  • fetch (web, readonly, plan, default) - Fetches content from a URL and returns it in the specified format.
  • get_plan (planning, readonly, plan, default) - Retrieve the current plan for this session.
  • glob (search, readonly, plan, default) - Fast file pattern matching tool that works with any codebase size.
  • grep (search, readonly, plan, default) - A powerful search tool built on ripgrep.
  • layout_library (readonly, plan) - Layout library tool that provides pre-built, accessible, and responsive HTML/CSS layout templates…
  • list_ready_tasks (planning, readonly, plan, default) - List tasks that are ready to work on — no unresolved blockers.
  • list_tasks (planning, readonly, plan, default) - List all tasks for the current plan.
  • project_analyzer (analysis, readonly, plan) - Analyzes project structure, detects languages, build systems, and test frameworks.
  • remove_dependency (planning, plan, default) - Remove a dependency between two tasks.
  • sourcegraph (analysis, readonly, plan) - Search code across public repositories using Sourcegraph’s GraphQL API.
  • update_plan (planning, plan, default) - Update an existing plan’s details or status.
  • update_task (planning, plan, default) - Update a task’s status, details, or metadata.
  • view (file, readonly, plan, default) - File viewing tool that reads and displays the contents of files with line numbers, allowing you t…
  • websearch (web, readonly, plan, default) - Search the web using DuckDuckGo’s HTML search.

add_dependency

Tags: planning, plan, default Create a dependency between two tasks in the current plan. DEPENDENCY TYPES:
  • blocks: from_task must complete before to_task can start
  • related: informational link, no execution constraint
  • parallel_with: explicitly marks tasks as parallelizable
EXAMPLES:
  • Task A blocks Task B: add_dependency(from_task="A-id", to_task="B-id", type="blocks") Means B cannot start until A completes.
  • Tasks can run together: add_dependency(from_task="A-id", to_task="B-id", type="parallel_with") Explicitly marks A and B as safe to run in parallel.
  • Informational link: add_dependency(from_task="A-id", to_task="B-id", type="related") No execution constraint, just documents a relationship.
USE WITH list_ready_tasks: After adding ‘blocks’ dependencies, use list_ready_tasks to see which tasks have no unresolved blockers and are ready to work on.

add_task

Tags: planning, plan, default Add a new task to the current plan. This is your primary tool for dynamic planning and sub-planning. WHEN TO USE:
  • When you discover additional work that needs to be done
  • When you encounter missing dependencies or prerequisites
  • When breaking down complex work into more steps
  • When pivoting approach requires new tasks
SUB-PLANNING WITH parent_id:
  • Use parent_id to create subtasks under an existing task when you discover complexity
  • Break down tasks that prove more complex than initially planned
  • Create hierarchical task structures for better organization
  • Each subtask inherits context from parent but can have specialized metadata
COMPLEXITY DISCOVERY PATTERNS:
  • Implementation agent finds task needs research → add subtask with preferred_agent: "research"
  • Research agent discovers multiple integration points → add subtasks for each integration
  • Any agent finds unfamiliar tech/patterns → add subtask with tool_hints: ["search_first", "use_subagent"]
  • Task requires multiple phases → add sequential subtasks with position ordering
METADATA OPTIONS:
  • preferred_agent: Which agent should handle this (planning/research/implementation/debugging/tdd/finalize)
  • tool_hints: Suggested tools to use ["use_bash", "use_subagent", "search_first", "test_first"]
  • dependencies: What this task depends on (packages, files, other tasks)
  • notes: Important context or discoveries
  • priority: high/medium/low
SUB-PLANNING EXAMPLES:
  1. Complex Implementation Discovery: parent_id: "task_123", title: "Research authentication patterns", preferred_agent: "research", notes: "Found unfamiliar OAuth flow"
  2. Multi-Step Breakdown: parent_id: "task_456", title: "Setup database schema", position: 1; parent_id: "task_456", title: "Create migration scripts", position: 2
  3. Cross-Agent Coordination: parent_id: "task_789", title: "Write integration tests", preferred_agent: "tdd", dependencies: ["API endpoints complete"]
BEST PRACTICES:
  • Add tasks as soon as you discover they’re needed
  • Use parent_id when expanding existing tasks that prove complex
  • Include metadata hints for better execution
  • Position subtasks logically in sequence
  • Use descriptive titles and comprehensive descriptions
  • Create subtasks for different agent specializations when needed

bash_list

Tags: execution, readonly, plan, default Lists background processes in the current workspace. WORKSPACE SCOPING:
  • Processes are scoped to the current workspace (worktree)
  • Multiple chats in the same workspace share the same process list
  • This enables coordination: one chat can start a server, another can check its status
  • Use BashOutput to view output and BashKill to terminate any workspace process
Usage notes:
  • By default, shows only running processes in the current workspace
  • Use 'all: true' to include completed, failed, and killed processes
  • Process IDs can be used with BashOutput and BashKill tools
Example outputs:
  • Running processes: Shows ID, command, and how long they’ve been running
  • Completed processes: Shows ID, command, exit code, and duration
  • Failed processes: Shows ID, command, exit code, and error indication
Examples:
  1. List running processes: bash_list()
  2. List all processes including completed: bash_list(all=true)

bash_output

Tags: execution, readonly, plan, default Retrieves output from a background process with pagination and regex filtering support. WORKSPACE SCOPING:
  • Can read output from any process in the current workspace, regardless of which chat started it
  • Multiple chats in the same workspace share process visibility
  • This enables monitoring: check on servers or builds started by other chats
This tool allows you to check the stdout and stderr output of a process running in the background, with support for reading in chunks to handle large outputs efficiently and filtering with regex. Usage notes:
  • Process IDs are provided when you start a background process with run_in_background: true
  • The tool will indicate if the process is still running or has completed
  • If the process has completed, the exit code will be provided
  • Output is not cleared after reading - you can re-read from any position
MODES OF OPERATION:
  1. Standard Pagination (default):
    • offset: Start reading from byte N (default: 0)
    • limit: Read up to N bytes (default: 16000)
    • Can be combined: offset + limit
  2. Tail Mode:
    • tail: Get last N lines
    • Cannot be combined with: regex, offset, limit
  3. Regex Filter Mode:
    • regex: Filter output to lines matching pattern
    • When set, tool filters FIRST, then applies offset/limit to filtered results
    • Can be combined with: offset, limit, regex_case_insensitive, regex_context_before, regex_context_after
    • Cannot be combined with: tail
    • Optional parameters:
      • regex_case_insensitive: Case-insensitive matching
      • regex_context_before: Include N lines before match (like grep -B)
      • regex_context_after: Include N lines after match (like grep -A)
PARAMETER COMPATIBILITY: Valid combinations:
  • offset + limit (standard pagination)
  • tail (alone)
  • regex (alone)
  • regex + offset + limit (filtered pagination)
  • regex + regex_case_insensitive + regex_context_before + regex_context_after
Invalid combinations (will error):
  • tail + regex
  • tail + offset
  • tail + limit
  • regex_case_insensitive without regex
  • regex_context_before/after without regex
Examples:
  1. Start a background process: bash(command="npm run dev", run_in_background=true)
  2. Get first chunk: bash_output(process_id="<id>")
  3. Get next chunk: bash_output(process_id="<id>", offset=16000)
  4. Get last 100 lines: bash_output(process_id="<id>", tail=100)
  5. Filter for errors: bash_output(process_id="<id>", regex="ERROR|FATAL")
  6. Filter with context: bash_output(process_id="<id>", regex="ERROR", regex_context_after=3)
  7. Filter and paginate: bash_output(process_id="<id>", regex="WARN", offset=0, limit=10000)
The response includes metadata:
  • has_more: true if more output is available
  • next_offset: where to start reading for the next chunk
  • total_available: total bytes available in the (filtered or original) output
  • filter_applied: true if regex was used
  • total_matches: number of matching lines (when filtered)
  • matches_in_response: number of matches in this chunk
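The filter-then-paginate behavior and the response metadata above can be sketched as follows. This is a minimal illustration of the documented contract, not the tool's actual implementation; for simplicity it counts characters where the real tool counts bytes.

```python
import re

def paginate(output, offset=0, limit=16000, regex=None):
    # Mimics bash_output: when a regex is given, filter FIRST,
    # then apply offset/limit to the filtered output.
    if regex is not None:
        matching = [ln for ln in output.splitlines(keepends=True) if re.search(regex, ln)]
        output = "".join(matching)
    chunk = output[offset:offset + limit]
    return {
        "chunk": chunk,
        "has_more": offset + limit < len(output),
        "next_offset": offset + len(chunk),
        "total_available": len(output),
        "filter_applied": regex is not None,
    }
```

A consumer reads in a loop, passing next_offset back as offset until has_more is false.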

create_plan

Tags: planning, plan, default Create a comprehensive plan with tasks for implementing a feature or solving a problem. WHEN TO USE:
  • AFTER you perform your initial research and analyze the problem.
  • Use this tool when you need to organize complex work into structured steps
  • Only available in the ‘research’ state
  • You should typically create plans AFTER completing your research. Avoid creating tasks to research, explore, identify, or search through the codebase; perform your research first so you can create an informed plan.
PLAN STRUCTURE:
  • Title: Clear, concise title for the plan
  • Description: Detailed description including:
    • Main objective
    • Approach/strategy
    • Alternative approaches (if applicable)
    • Success criteria
  • Complexity: simple|moderate|complex
  • Tasks: List of tasks with title, description, optional metadata, and optional dependencies
  • The plan will be associated with the current session
INLINE DEPENDENCIES: You can specify dependencies between tasks at creation time using 1-indexed task positions. Each task can have a “dependencies” array where each entry specifies:
  • task_position: The 1-indexed position of another task in the tasks array
  • type: “blocks” (must complete first), “related” (informational), or “parallel_with” (safe to run together)
The dependency means: “the task at task_position has this relationship TO the current task.” For example, if task 3 has dependencies: [{task_position: 1, type: "blocks"}], it means task 1 blocks task 3. Example with dependencies:
tasks: [
{title: "Design schema"},
{title: "Write migrations", dependencies: [{task_position: 1, type: "blocks"}]},
{title: "Implement API", dependencies: [{task_position: 2, type: "blocks"}]},
{title: "Write tests", dependencies: [{task_position: 3, type: "blocks"}]}
]
TASK METADATA: Each task can optionally include metadata with agent hints:
  • preferred_agent: Which agent should handle this task
  • tool_hints: Suggested tools to use
  • dependencies: Informational dependency notes (free-form text)
  • notes: Important context
  • priority: high/medium/low
BEST PRACTICES:
  • Break down work into clear, actionable tasks
  • Order tasks logically
  • Use inline dependencies to define the task graph upfront
  • Include a mini-roadmap in the description
  • Document alternative approaches for pivoting
  • Be specific about what needs to be done
  • Consider edge cases and potential blockers
  • Consider changing state in parallel with plan creation, if states are available.

create_subtask

Tags: planning, plan Create a subtask under an existing task. WHEN TO USE:
  • When breaking down a complex task into smaller steps
  • To add more granular tracking
  • When discovering additional work while implementing
BEST PRACTICES:
  • Keep subtasks focused and specific
  • Use subtasks for logical groupings of work
  • Don’t create too many levels of nesting

fetch

Tags: web, readonly, plan, default Fetches content from a URL and returns it in the specified format. Uses Mozilla Readability to automatically extract main page content, stripping navigation, footers, ads, and other chrome. Returns only the readable content for text and markdown formats. WHEN TO USE THIS TOOL:
  • Use when you need to download content from a URL
  • Helpful for retrieving documentation, API responses, or web content
  • Useful for getting external information to assist with tasks
HOW TO USE:
  • Provide the URL to fetch content from
  • Specify the desired output format (text, markdown, or html)
  • Optionally set a timeout for the request
FEATURES:
  • Automatic content extraction using Mozilla Readability (strips nav, ads, footers)
  • Supports three output formats: text, markdown, and html
  • Automatically handles HTTP redirects
  • Detects likely JavaScript-rendered pages and warns you
  • Sets reasonable timeouts to prevent hanging
PARAMETERS:
  • max_size: Maximum bytes to fetch (default: 16000, ~16KB) Prevents downloading huge files that could overwhelm context
IMPORTANT LIMITATIONS:
  • Cannot render JavaScript. Single-page apps (SPAs) will return little or no content. The response metadata will include possible_js_rendered=true when this is detected. For JS-heavy sites, consider using browser tools instead.
  • Default maximum response size is 16KB (use max_size to adjust)
  • Only supports HTTP and HTTPS protocols
  • Cannot handle authentication or cookies
  • Some websites may block automated requests
TIPS FOR BETTER RESULTS:
  • For GitHub repos, use raw.githubusercontent.com URLs instead of github.com (e.g., https://raw.githubusercontent.com/org/repo/main/README.md)
  • For API docs that are JS-rendered SPAs, look for the OpenAPI/Swagger JSON spec URL instead
  • Use text or markdown format for documentation (html returns raw markup with all chrome)
  • If the response says possible_js_rendered=true, the page needs JavaScript to render. Try finding an alternative URL, a raw content source, or use browser tools.
  • Adjust max_size for larger documents (but consider context limits)
RESPONSE METADATA:
  • content_length: Size of the extracted content
  • raw_html_size: Size of the original HTML before extraction (for HTML pages)
  • truncated: Whether content was truncated to fit max_size
  • encoding_used: The format that was applied
  • page_title: Page title extracted by Readability (when available)
  • possible_js_rendered: True if the page appears to be JavaScript-rendered (very little content extracted)
  • used_readability: True if Readability content extraction was applied

get_plan

Tags: planning, readonly, plan, default Retrieve the current plan for this session. WHEN TO USE:
  • When you need to review the current plan
  • To check plan status and progress
  • To understand what needs to be done
RETURNS:
  • Plan details including title, description, status, and complexity
  • Returns error if no plan exists for the session

glob

Tags: search, readonly, plan, default
  • Fast file pattern matching tool that works with any codebase size
  • Supports glob patterns like "**/*.js" or "src/**/*.ts"
  • Returns matching file paths sorted by modification time
  • Use this tool when you need to find files by name patterns
  • When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead
  • You have the capability to call multiple tools in a single response. It is always better to speculatively perform multiple searches as a batch that are potentially useful.
IGNORED DIRECTORIES: By default, the following noisy directories are excluded from results:
  • Dependencies: node_modules, vendor, bower_components, jspm_packages
  • Build outputs: dist, build, target, bin, obj, out, generated
  • Cache/temp: __pycache__, coverage, tmp, temp, logs
  • Internal: .git, .reliant
Hidden files/directories (starting with .) like .github, .vscode are INCLUDED by default. Gitignored files are INCLUDED by default (we don’t respect .gitignore). Use include_ignored=true to also search the noisy directories listed above.
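The ** semantics above can be demonstrated with the stdlib glob module (the tool itself additionally sorts results by modification time and applies the ignore list):

```python
import glob
import os
import tempfile

# Build a small throwaway tree to match against.
root = tempfile.mkdtemp()
for rel in ("src/app.ts", "src/util/helpers.ts", "README.md"):
    path = os.path.join(root, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "w").close()

# "src/**/*.ts" matches .ts files at any depth under src/, including
# directly inside it, because ** matches zero or more directories.
matches = glob.glob(os.path.join(root, "src", "**", "*.ts"), recursive=True)
names = sorted(os.path.basename(m) for m in matches)
```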

grep

Tags: search, readonly, plan, default A powerful search tool built on ripgrep. ALWAYS use Grep for search tasks. NEVER use grep or rg via Bash. OUTPUT MODES:
  • “files_with_matches” (default): Returns file paths sorted by modification time
  • “content”: Shows matching lines with line numbers
  • “count”: Shows match counts per file
KEY PARAMETERS:
  • pattern: Regex pattern (use fixed_strings=true for literal matching)
  • glob: Filter files (e.g., "*.js", "**/*.tsx")
  • type: File type filter (e.g., "js", "ts", "py", "go")
  • word_boundary: Match whole words only (e.g., "foo" won't match "foobar")
  • fixed_strings: Treat pattern as literal text, no regex escaping needed
  • head_limit: Limit number of results
  • -C/-A/-B: Context lines (content mode only)
  • include_ignored: Search commonly ignored directories
IGNORED DIRECTORIES: By default, the following noisy directories are excluded from results:
  • Dependencies: node_modules, vendor, bower_components, jspm_packages
  • Build outputs: dist, build, target, bin, obj, out, generated
  • Cache/temp: __pycache__, coverage, tmp, temp, logs
  • Internal: .git, .reliant
Hidden files/directories (starting with .) like .github, .vscode are INCLUDED by default. Use include_ignored=true to also search the noisy directories listed above. REGEX SYNTAX: Patterns use ripgrep regex syntax (Rust regex / ERE-style). This is NOT the same as GNU grep (BRE).
  • Alternation: | (NOT \|)
  • Grouping: () (NOT \( \))
  • Quantifiers: +, ?, {n,m} work without escaping
  • Character classes: [a-z], \d, \w, \s work as expected
COMMON MISTAKES:
  • WRONG: pattern="foo\|bar" — in ripgrep, \| matches a literal pipe, NOT alternation. RIGHT: pattern="foo|bar" — | is alternation in ripgrep
  • WRONG: pattern="\(group\)" — \( matches a literal parenthesis. RIGHT: pattern="(group)" — () is grouping in ripgrep
  • WRONG: pattern="foo\+" — \+ matches a literal plus. RIGHT: pattern="foo+" — + means one-or-more in ripgrep
  • To match literal special characters, use fixed_strings=true instead of backslash escaping
EXAMPLES:
  • Find function definitions: pattern="func\s+\w+", type="go"
  • Find exact text with special chars: pattern="interface{}", fixed_strings=true
  • Find whole word: pattern="Error", word_boundary=true
  • Multiline patterns: pattern="struct {[\s\S]*?}", multiline=true
  • Search in node_modules: pattern="lodash", include_ignored=true
  • Alternation (match any of several): pattern="foo|bar|baz"
  • Grouped alternation: pattern="(get|set)Value"
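ripgrep uses Rust's regex engine, which treats these constructs the same way Python's re module does, so the ERE-vs-BRE pitfalls above can be demonstrated directly:

```python
import re

assert re.search(r"foo|bar", "a bar here")            # | is alternation, unescaped
assert re.fullmatch(r"(get|set)Value", "getValue")    # () groups without escaping
assert re.search(r"foo+", "fooo")                     # + is one-or-more
assert re.search(r"foo\|bar", "see foo|bar here")     # \| matches a literal pipe
assert not re.search(r"foo\|bar", "bar")              # ...so \| is NOT alternation
```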

layout_library

Tags: readonly, plan Layout library tool that provides pre-built, accessible, and responsive HTML/CSS layout templates for UX design. WHEN TO USE:
  • Creating new UI layouts quickly
  • Getting responsive, accessible layout templates
  • Prototyping user interfaces
  • Establishing consistent layout patterns
HOW TO USE:
  • action: "list" - Get all available layouts with descriptions
  • action: "get" with layout name - Retrieve specific layout HTML/CSS
FEATURES:
  • 10 pre-built responsive layouts
  • Accessibility-first design
  • Mobile-responsive
  • Semantic HTML structure

list_ready_tasks

Tags: planning, readonly, plan, default List tasks that are ready to work on — no unresolved blockers. A task is “ready” when:
  1. Its status is “pending” (not started yet)
  2. All tasks that block it (via ‘blocks’ dependencies) have status “completed”
This is the deterministic way for agents to know what to pick up next. Tasks with no blocking dependencies are always ready (if pending). RETURNS:
  • List of ready tasks with their details
  • Total count of ready tasks vs total pending

list_tasks

Tags: planning, readonly, plan, default List all tasks for the current plan. WHEN TO USE:
  • When you need to see all tasks in the plan
  • To check task progress and status
  • To understand what work needs to be done
RETURNS:
  • List of all tasks with their status, title, and hierarchy
  • Tasks are ordered by position and show parent-child relationships

project_analyzer

Tags: analysis, readonly, plan Analyzes project structure, detects languages, build systems, and test frameworks

remove_dependency

Tags: planning, plan, default Remove a dependency between two tasks. Specify from_task, to_task, and type to identify which dependency to remove.

sourcegraph

Tags: analysis, readonly, plan Search code across public repositories using Sourcegraph’s GraphQL API. WHEN TO USE THIS TOOL:
  • Use when you need to find code examples or implementations across public repositories
  • Helpful for researching how others have solved similar problems
  • Useful for discovering patterns and best practices in open source code
HOW TO USE:
  • Provide a search query using Sourcegraph’s query syntax
  • Optionally specify the number of results to return (default: 10)
  • Optionally set a timeout for the request
QUERY SYNTAX:
  • Basic search: "fmt.Println" searches for exact matches
  • File filters: "file:.go fmt.Println" limits to Go files
  • Repository filters: "repo:^github.com/golang/go$ fmt.Println" limits to specific repos
  • Language filters: "lang:go fmt.Println" limits to Go code
  • Boolean operators: "fmt.Println AND log.Fatal" for combined terms
  • Regular expressions: "fmt.(Print|Printf|Println)" for pattern matching
  • Quoted strings: "exact phrase" for exact phrase matching
  • Exclude filters: "-file:test" or "-repo:forks" to exclude matches
ADVANCED FILTERS:
  • Repository filters:
    • "repo:name" - Match repositories with name containing "name"
    • "repo:^github.com/org/repo$" - Exact repository match
    • "repo:org/repo@branch" - Search specific branch
    • "repo:org/repo rev:branch" - Alternative branch syntax
    • "-repo:name" - Exclude repositories
    • "fork:yes" or "fork:only" - Include or only show forks
    • "archived:yes" or "archived:only" - Include or only show archived repos
    • "visibility:public" or "visibility:private" - Filter by visibility
  • File filters:
    • "file:.js$" - Files with .js extension
    • "file:internal/" - Files in internal directory
    • "-file:test" - Exclude test files
    • "file:has.content(Copyright)" - Files containing "Copyright"
    • "file:has.contributor([email protected])" - Files with specific contributor
  • Content filters:
    • content:"exact string" - Search for exact string
    • -content:"unwanted" - Exclude files with unwanted content
    • "case:yes" - Case-sensitive search
  • Type filters:
    • "type:symbol" - Search for symbols (functions, classes, etc.)
    • "type:file" - Search file content only
    • "type:path" - Search filenames only
    • "type:diff" - Search code changes
    • "type:commit" - Search commit messages
  • Commit/diff search:
    • after:"1 month ago" - Commits after date
    • before:"2023-01-01" - Commits before date
    • "author:name" - Commits by author
    • message:"fix bug" - Commits with message
  • Result selection:
    • "select:repo" - Show only repository names
    • "select:file" - Show only file paths
    • "select:content" - Show only matching content
    • "select:symbol" - Show only matching symbols
  • Result control:
    • "count:100" - Return up to 100 results
    • "count:all" - Return all results
    • "timeout:30s" - Set search timeout
EXAMPLES:
  • "file:.go context.WithTimeout" - Find Go code using context.WithTimeout
  • "lang:typescript useState type:symbol" - Find TypeScript React useState hooks
  • "repo:^github.com/kubernetes/kubernetes$ pod list type:file" - Find Kubernetes files related to pod listing
  • repo:sourcegraph/sourcegraph$ after:"3 months ago" type:diff database - Recent changes to database code
  • "file:Dockerfile (alpine OR ubuntu) -content:alpine:latest" - Dockerfiles with specific base images
  • "repo:has.path(.py) file:requirements.txt tensorflow" - Python projects using TensorFlow
BOOLEAN OPERATORS:
  • "term1 AND term2" - Results containing both terms
  • "term1 OR term2" - Results containing either term
  • "term1 NOT term2" - Results with term1 but not term2
  • "term1 and (term2 or term3)" - Grouping with parentheses
LIMITATIONS:
  • Only searches public repositories
  • Rate limits may apply
  • Complex queries may take longer to execute
  • Maximum of 20 results per query
TIPS:
  • Use specific file extensions to narrow results
  • Add repo: filters for more targeted searches
  • Use type:symbol to find function/method definitions
  • Use type:file to find relevant files

update_plan

Tags: planning, plan, default Update an existing plan’s details or status. WHEN TO USE:
  • When you need to modify the plan based on new information
  • When pivoting to a different approach
  • When marking a plan as completed or cancelled
UPDATES ALLOWED:
  • Title: Update the plan title
  • Description: Add new information, document pivots
  • Status: pending|in_progress|completed|cancelled
  • Complexity: simple|moderate|complex
BEST PRACTICES:
  • Document why changes are being made
  • Keep the description updated with current approach
  • Use this to track progress and pivots

update_task

Tags: planning, plan, default Update a task’s status, details, or metadata. WHEN TO USE:
  • When starting work on a task (mark as in_progress)
  • When completing a task (mark as completed)
  • When a task is blocked or failed
  • To update task description with findings
  • To add notes, hints, or discoveries to metadata
  • To claim a task by setting assignee + in_progress
STATUS OPTIONS:
  • pending: Not started yet
  • in_progress: Currently working on it
  • completed: Successfully finished
  • failed: Could not complete
  • blocked: Waiting on something (add blocker to notes)
  • skipped: Decided not to do
  • cancelled: No longer needed
ASSIGNEE:
  • Free-form text identifying who is working on this task
  • Use a descriptive label: spawn title, role name, or agent identifier
  • Claim pattern: update_task(task_id="X", status="in_progress", assignee="researcher-auth")
  • Other agents see assignments in list_tasks and skip claimed work
METADATA OPTIONS:
  • notes: Add discoveries, blockers, or important context
  • preferred_agent: Suggest which agent should handle this
  • tool_hints: Suggest tools to use ["use_bash", "search_first"]
  • dependencies: Document what this depends on
  • next_steps: What to do after this task
BEST PRACTICES:
  • Update status when you start and finish tasks
  • Add descriptions to document what was done
  • Use metadata.notes for blockers when marking as blocked
  • Add tool_hints for complex tasks to guide future execution
  • Set assignee when claiming a task to prevent duplicate work

view

Tags: file, readonly, plan, default File viewing tool that reads and displays the contents of files with line numbers, allowing you to examine code, logs, or text data. WHEN TO USE:
  • Reading contents of specific files (source code, configs, logs)
  • Examining text-based file formats
HOW TO USE:
  • Provide the file path
  • Optional: offset (starting line) and limit (number of lines)
  • Issue multiple view tools in a single request for improved performance
FEATURES:
  • Displays file contents with line numbers for easy reference
  • Can read from any position in a file using the offset parameter
  • Handles large files by limiting the number of lines read
  • Automatically truncates very long lines for better display
  • Suggests similar file names when the requested file isn’t found
LIMITATIONS:
  • Maximum output size is 16KB (~4K tokens) - larger files are truncated with head+tail
  • Default reading limit is 300 lines
  • Lines longer than 500 characters are truncated
  • Cannot display binary files or images
  • Images can be identified but not displayed
TIPS:
  • Use with Glob tool to first find files you want to view
  • For code exploration, first use Grep to find relevant files, then View to examine them
  • When viewing large files, use the offset parameter to read specific sections
  • If output is truncated, use offset to read the middle section
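The numbered offset/limit behavior described above can be sketched as follows (an illustration assuming a 1-based line offset, not the tool's code):

```python
def view_lines(text, offset=1, limit=300):
    # Return up to `limit` lines starting at 1-based line `offset`,
    # each prefixed with its line number for easy reference.
    lines = text.splitlines()
    window = lines[offset - 1 : offset - 1 + limit]
    return [f"{offset + i}: {line}" for i, line in enumerate(window)]
```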

websearch

Tags: web, readonly, plan, default Search the web using DuckDuckGo’s HTML search. WHEN TO USE THIS TOOL:
  • Finding current information not available in the assistant’s training data
  • Researching documentation, tutorials, or examples online
  • Looking up error messages or debugging information
  • Finding libraries, tools, or frameworks
  • Checking current status of services or projects
  • Discovering recent developments or news about technologies
HOW TO USE:
  • Provide a search query as you would in a web browser
  • Optionally specify the number of results (default: 10, max: 20)
QUERY GUIDELINES:
  • Use simple, natural language queries with specific keywords
  • Keep queries short and focused (3-8 words works best)
  • Use minus (-) to exclude terms: python tutorial -django
  • Quotes work for simple exact phrases: "react hooks" tutorial
IMPORTANT - UNSUPPORTED QUERY SYNTAX: DuckDuckGo’s HTML search does NOT support these features (they will return zero results):
  • Boolean operators: OR, AND (e.g. "PUT" OR "POST" will FAIL)
  • Complex quoted phrase combinations (multiple quoted phrases with operators)
  • site: operator is unreliable and often returns no results
  • filetype: operator is unreliable If you need boolean-style searches, run multiple simple queries instead.
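One way to run boolean-style searches despite this limitation is to expand the query into independent simple queries. The helper below is hypothetical (not part of the websearch tool) and only handles OR-joined terms:

```python
import re

def split_boolean_query(query):
    # Split on OR into separate simple queries and strip surrounding
    # double quotes, since the HTML search has no boolean operators.
    parts = re.split(r"\s+OR\s+", query)
    return [p.strip().strip('"') for p in parts if p.strip()]
```

For example, split_boolean_query('"PUT" OR "POST" OR "PATCH"') yields three queries to run one at a time.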
EXAMPLES OF GOOD QUERIES:
  • “golang context timeout example” - Simple keywords
  • “anthropic claude api documentation” - Natural language
  • “Customer.io transactional API” - Product + feature
  • “react hooks tutorial 2024” - Topic + timeframe
EXAMPLES OF BAD QUERIES (will return 0 results):
  • "PUT" OR "POST" OR "PATCH" email content - Boolean operators don't work
  • site:customer.io/docs/api specific-page - site: is unreliable
  • "exact phrase 1" OR "exact phrase 2" - Complex boolean combos fail
RESEARCH STRATEGY:
  • Start with broad queries, then narrow based on results
  • If a search returns 0 results, SIMPLIFY the query - don’t add complexity
  • After 3-4 searches on the same topic, synthesize what you have rather than keep searching
  • For API documentation, search for the official SDK/client library on GitHub instead
  • For GitHub content, prefer fetching raw.githubusercontent.com URLs over github.com
RESPONSE FORMAT: Returns a list of search results with:
  • Title: The title of the search result
  • Description: A brief description/snippet from the page
  • URL: The link to the resource
LIMITATIONS:
  • Maximum of 20 results per query
  • May have rate limits if used excessively
  • DuckDuckGo HTML search has limited query syntax (see above)
  • Results may be less comprehensive than Google for niche technical queries
TIPS:
  • Start with fewer results (5-10) for faster responses
  • Use specific queries to get more relevant results
  • Combine with the fetch tool to retrieve full page content from results

File Operations

Tools for reading, writing, and modifying files.
ToolTagsDescription
editfile, defaultMake precise text replacements in files or create new files. All edit operations must be provided…
find_replacefile, defaultPerforms find and replace operations across multiple files matching a glob pattern.
move_codefile, defaultMove or copy a block of code from one location to another, within the same file or across files.
writefile, defaultFile writing tool that creates or updates files in the filesystem.

edit

Tags: file, default Make precise text replacements in files or create new files. All edit operations must be provided in the edits array. NOTE: You can parallelize multiple edit tool calls in one message, even if they involve the same files. WHEN TO USE:
  • Precise text replacements
  • Creating new files (empty old_string)
  • Deleting specific content (empty new_string)
  • Renaming variables/functions (with replace_all)
  • Coordinated changes across multiple files
WHEN NOT TO USE:
  • Complete file rewrite: Use Write tool
  • Moving/renaming files: Use Bash mv command
COMMON MISTAKES TO AVOID:
  • Insufficient context in old_string (needs 3-5 lines)
  • Forgetting whitespace/indentation must match exactly
  • Not checking if text appears multiple times
USAGE PATTERNS:

One Edit

edits: [
  {
    file_path: "/path/to/file.go",
    old_string: "Include 3-5 lines before AND after",
    new_string: "Your replacement text",
    replace_all: false
  }
]

Multiple Edits

edits: [
  {
    file_path: "/path/to/file1.go",
    old_string: "old text 1",
    new_string: "new text 1"
  },
  {
    file_path: "/path/to/file2.go",
    old_string: "old text 2",
    new_string: "new text 2"
  }
]

Create New File

edits: [
  {
    file_path: "/path/to/new/file.go",
    old_string: "",
    new_string: "file contents"
  }
]

Delete Content

edits: [
  {
    file_path: "/path/to/file.go",
    old_string: "text to remove",
    new_string: ""
  }
]

Rename Variable

edits: [
  {
    file_path: "/path/to/file.go",
    old_string: "oldName",
    new_string: "newName",
    replace_all: true
  }
]

CRITICAL REQUIREMENTS

  1. UNIQUENESS (when replace_all=false):
    • Include 3-5 lines of context BEFORE
    • Include 3-5 lines of context AFTER
    • Match whitespace/indentation EXACTLY
  2. VERIFICATION CHECKLIST:
    • Check how many times the text appears
    • Include enough context for uniqueness
    • Verify parent directories exist (new files)
  3. FAILURE CONDITIONS:
    • old_string not found → FAILS
    • Multiple matches (without replace_all) → FAILS
    • Whitespace mismatch → FAILS
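
The matching rules above can be sketched in Python. This is an illustrative model of a single edit operation, not the tool's actual implementation:

```python
def apply_edit(content: str, old_string: str, new_string: str,
               replace_all: bool = False) -> str:
    """Model of one edit operation's matching rules (illustrative only)."""
    count = content.count(old_string)
    if count == 0:
        raise ValueError("old_string not found -> edit FAILS")
    if count > 1 and not replace_all:
        raise ValueError("multiple matches without replace_all -> edit FAILS")
    if replace_all:
        return content.replace(old_string, new_string)   # every occurrence
    return content.replace(old_string, new_string, 1)    # the unique occurrence
```

Including several lines of surrounding context in old_string is what drives the match count down to exactly one.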

BEST PRACTICES

  • Include ample context
  • All operations are atomic (all succeed or all fail)
  • Use replace_all for systematic renames
  • Verify edits don’t break code

WORKS WELL WITH

  • AFTER: Bash (test changes)
  • ALTERNATIVE: Write (complete rewrite)

PARAMETERS

  • edits: Array of edit operations (required)
    • file_path: Absolute path (required)
    • old_string: Text to find (exact match)
    • new_string: Replacement text
    • replace_all: Replace all occurrences (optional)
Remember: This tool requires EXACT text matching including all whitespace and indentation.

find_replace

Tags: file, default Performs find and replace operations across multiple files matching a glob pattern. WHEN TO USE:
  • Renaming variables, functions, or classes across multiple files
  • Updating imports or module references project-wide
  • Fixing consistent typos or naming conventions
  • Batch updating configuration values
  • Refactoring patterns across the codebase
WHEN NOT TO USE:
  • Single file edits: Use Edit tool instead
  • Complex structural changes: Use Patch tool
  • Context-dependent replacements: Use Edit or Patch for precision
FEATURES:
  • Glob pattern file filtering (e.g., "**/*.js", "src/**/*.{ts,tsx}")
  • Regular expression support with capture groups
  • Case-insensitive matching option
  • Preview mode to see changes before applying
  • Automatic file history tracking
USAGE PATTERNS:
Use preview=true to see what would change before committing. Preview mode does not require permission, making it ideal for scoping changes.

find_pattern: "oldFunction"
replace_text: "newFunction"
file_glob: "**/*.js"
preview: true

Simple Text Replacement

find_pattern: "oldFunction"
replace_text: "newFunction"
file_glob: "**/*.js"

Regex with Capture Groups

find_pattern: "import (.*) from 'old-module'"
replace_text: "import $1 from 'new-module'"
use_regex: true
file_glob: "**/*.ts"
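
For reference, the same capture-group rewrite can be previewed locally with Python's re module (note that Python writes the group backreference as \1 where this tool uses $1):

```python
import re

line = "import { parse } from 'old-module'"
# Capture the imported names, keep them, and swap only the module path
result = re.sub(r"import (.*) from 'old-module'",
                r"import \1 from 'new-module'",
                line)
print(result)  # import { parse } from 'new-module'
```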

Case-Insensitive Replacement

find_pattern: "TODO"
replace_text: "FIXME"
ignore_case: true
file_glob: "**/*.{js,ts,jsx,tsx}"

CRITICAL REQUIREMENTS

  1. PREVIEW FIRST:
    • ALWAYS call with preview=true first to verify the pattern matches and scope
    • Preview shows diffs of what would change without modifying any files
    • After reviewing the preview, call again without preview to apply
  2. FILE READING:
    • Checks modification times to prevent conflicts
    • Validates file access permissions
  3. PATTERN MATCHING:
    • Literal text matching by default
    • Regex patterns with use_regex=true
    • Case sensitivity controlled by ignore_case
  4. SAFETY CHECKS:
    • Atomic operation (all or nothing)
    • Preserves file history

BEST PRACTICES

  • ALWAYS use preview=true first to see what changes would be made
  • Use specific file globs to limit scope
  • Review the preview diffs carefully before applying
  • Test regex patterns with preview before committing

WORKS WELL WITH

  • BEFORE: preview=true (see changes first)
  • BEFORE: Grep (find occurrences)
  • BEFORE: View (read files)
  • AFTER: Bash (run tests)
  • ALTERNATIVE: Edit (single file)
  • ALTERNATIVE: Patch (complex multi-file edits)

PARAMETERS

  • find_pattern: Text or regex pattern to find (required)
  • replace_text: Replacement text (required)
  • file_glob: File filter pattern (optional, defaults to all files)
  • ignore_case: Case-insensitive matching (optional, default false)
  • use_regex: Treat pattern as regex (optional, default false)
  • preview: Preview mode - show what would change without applying (optional, default false)
Remember: Use preview=true first, then apply.

move_code

Tags: file, default Move or copy a block of code from one location to another, within the same file or across files. WHEN TO USE:
  • Reorganizing code within a file
  • Moving a function/method to a different file
  • Extracting code into a new location
  • Copying code snippets between files
  • Reordering functions in a file
WHEN NOT TO USE:
  • For simple cut/paste of small text - use edit tool
  • For renaming across files - use find_replace tool
  • For complex multi-file refactors - consider using the refactor agent
HOW IT WORKS:
  1. Extracts lines source_start to source_end from source_file
  2. Inserts the extracted code AFTER target_line in target_file
  3. If operation is "move" (default), deletes the original lines from source
  4. If operation is "copy", keeps the original lines
EXAMPLES:

Move function to end of file

{
  "source_file": "/path/to/file.go",
  "source_start": 50,
  "source_end": 75,
  "target_file": "/path/to/file.go",
  "target_line": 200,
  "operation": "move"
}

Copy code block to another file

{
  "source_file": "/path/to/original.go",
  "source_start": 10,
  "source_end": 30,
  "target_file": "/path/to/new_file.go",
  "target_line": 15,
  "operation": "copy"
}

Insert at beginning of file

{
  "source_file": "/path/to/source.go",
  "source_start": 100,
  "source_end": 120,
  "target_file": "/path/to/target.go",
  "target_line": 0,
  "operation": "move"
}
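
The line arithmetic described above can be modeled in a few lines of Python. This is a sketch of the semantics, not the tool's code; it shows why same-file moves must account for the deleted block shifting later line numbers up:

```python
def move_block(lines, source_start, source_end, target_line, operation="move"):
    """Extract lines[source_start..source_end] (1-indexed, inclusive) and
    insert them after target_line; 'move' deletes the originals, 'copy' keeps them."""
    block = lines[source_start - 1:source_end]
    if operation == "copy":
        rest, insert_at = list(lines), target_line
    else:
        rest = lines[:source_start - 1] + lines[source_end:]
        # Deleting the block shifts any later target upward by the block's length
        insert_at = target_line - len(block) if target_line >= source_end else target_line
    return rest[:insert_at] + block + rest[insert_at:]
```

For example, moving lines 2-3 of a five-line file to the end yields the original lines 1, 4, 5 followed by 2, 3.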
CRITICAL REQUIREMENTS:
  1. Line numbers are 1-indexed
  2. source_start and source_end are INCLUSIVE
  3. target_line = 0 inserts at the very beginning
  4. For same-file moves, the tool handles line number shifts automatically
BEST PRACTICES:
  • For same-file moves, be aware that line numbers shift after the operation
  • Add blank lines in the extracted code if needed for proper spacing
  • Check for any imports/dependencies that might need to be added to target file

write

Tags: file, default File writing tool that creates or updates files in the filesystem. WHEN TO USE:
  • Creating new files or updating existing files
  • Saving generated code, configurations, or text data
HOW TO USE:
  • Provide the file path and content to write
  • Parent directories are created automatically
FEATURES:
  • Creates new files or overwrites existing ones
  • Auto-creates parent directories
  • Checks for external modifications for safety
  • Avoids unnecessary writes when content unchanged
LIMITATIONS:
  • Cannot append (rewrites entire file)
  • Existing files must be read first (View tool) since Write replaces the entire file
TIPS:
  • Use the LS tool to verify the correct location when creating new files
  • Combine with Glob and Grep tools to find and modify multiple files
  • Always include descriptive comments when making changes to existing code

Workflow Management

Tools for managing and inspecting workflows, presets, and scenarios.
ToolTagsDescription
create_workflowworkflowCreate a new workflow draft.
delete_scenarioworkflowDelete a test scenario.
edit_scenarioworkflowMake precise text replacements in a scenario’s YAML definition.
edit_workflowworkflowMake precise text replacements in the workflow YAML.
get_cel_referenceworkflow, readonlyGets the CEL expression reference for workflow development.
get_presetworkflow, readonlyGets the full configuration of a preset.
get_schemaworkflow, readonlyLook up schema documentation for any workflow type by name.
get_workflowworkflow, readonlyGets the full YAML definition of a workflow draft.
get_workflow_suggestionsworkflow, readonlyReturns static design suggestions for building workflows.
list_presetsworkflow, readonlyLists available presets for agent nodes.
list_scenariosworkflow, readonlyList all test scenarios for the current workflow.
list_workflowsworkflow, readonlyLists all available workflows (builtin, project, and user-created).
run_scenarioworkflowRun an existing test scenario by name.
view_scenarioworkflow, readonlyView a specific test scenario’s full definition.
write_scenarioworkflowCreate or update a test scenario with YAML content.
write_workflowworkflowReplace an existing workflow draft with YAML content.

create_workflow

Tags: workflow Create a new workflow draft. Returns the draft UUID which you can then use with get_workflow, edit_workflow, and write_workflow. Parameters:
  • name: (optional) Workflow name. A random name is generated if omitted.
  • content: (optional) Complete workflow YAML. The default agent template is used if omitted.
Response: Returns JSON with id, name, and slug. Example — create with defaults:
{}
Example — create with name and content:
{
  "name": "my-review-workflow",
  "content": "name: my-review-workflow\nentry: [agent]\nnodes:\n  - id: agent\n    type: call_llm"
}

delete_scenario

Tags: workflow Delete a test scenario. Permanently removes the scenario from the workflow.

edit_scenario

Tags: workflow Make precise text replacements in a scenario’s YAML definition. Use this for small changes like updating expected values or modifying events. The old_string must match exactly (including whitespace and indentation). Example:
{
  "name": "happy_path",
  "old_string": "outcome: completed",
  "new_string": "outcome: error"
}

edit_workflow

Tags: workflow Make precise text replacements in the workflow YAML. Use this for small changes like:
  • Adding or modifying a node
  • Updating an edge condition
  • Changing input parameters
The old_string must match exactly (including whitespace and indentation). Include enough context to ensure a unique match. Conflict Detection: If you provide expected_version (from get_workflow), the edit will fail if the workflow was modified since you last viewed it. Example:
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "old_string": "  - id: agent\n    type: call_llm",
  "new_string": "  - id: agent\n    type: call_llm\n    model: \"{{inputs.model}}\""
}

get_cel_reference

Tags: workflow, readonly Gets the CEL expression reference for workflow development. WHEN TO USE:
  • When writing conditions, args, or dynamic values in workflows
  • To understand available namespaces and their fields
  • To see available custom functions
RETURNS: Complete CEL reference including:
  • All namespaces (inputs, workflow, nodes, iter, output, outputs)
  • Field documentation for each namespace
  • Custom functions (parseJson, coalesce, etc.)
  • Common patterns and examples

get_preset

Tags: workflow, readonly Gets the full configuration of a preset. WHEN TO USE:
  • To view preset configurations
  • To understand what a preset provides
  • To copy and adapt preset settings
PARAMETERS:
  • name: The preset name (from list_presets)
RETURNS: The complete preset YAML configuration.

get_schema

Tags: workflow, readonly Look up schema documentation for any workflow type by name. WHEN TO USE:
  • When you see a field type like “thread: ThreadConfig” and need details
  • When you need to understand node output structure (e.g., CallLLMOutput)
  • To explore top-level types (Workflow, Edge)
  • To get full field documentation for any type
WHAT YOU CAN QUERY:
  • Node types: call_llm, loop, workflow, run, execute_tools, join, etc.
  • Input types: string, number, boolean, enum, model, message, etc.
  • Config types: ThreadConfig, SaveMessageConfig, ProjectConfig, ResponseTool, etc.
  • Output types: CallLLMOutput, ExecuteToolsOutput, RunOutput, LoopOutput, etc.
  • Top-level: Workflow, Edge, EdgeCase
Type detection is automatic - just provide the name. EXAMPLES:
  • get_schema(name="call_llm") // Node type
  • get_schema(name="ThreadConfig") // Config type
  • get_schema(name="CallLLMOutput") // Output structure
  • get_schema(name="Workflow") // Top-level workflow structure
  • get_schema(name="Edge") // Edge routing structure

get_workflow

Tags: workflow, readonly Gets the full YAML definition of a workflow draft. WHEN TO USE:
  • To view the current state of the workflow you’re editing
  • Before making edits to understand the structure
PARAMETERS:
  • id: (required) Workflow draft UUID
RETURNS: The complete workflow YAML definition with validation status, version, and timestamps. Use the version for conflict detection in edit_workflow/write_workflow.

get_workflow_suggestions

Tags: workflow, readonly Returns static design suggestions for building workflows. WHEN TO USE:
  • Before starting a new workflow design
  • When encountering complexity or unexpected behavior
  • To learn best practices for edges, joins, loops, and conditions
RETURNS: Markdown document with categorized suggestions covering:
  • Structure and organization
  • Edge routing patterns
  • Node vs edge conditions
  • Join behavior with conditional sources
  • Loop patterns and outputs
  • Testing strategies
Note: These are static suggestions. Future calls yield the same results.

list_presets

Tags: workflow, readonly Lists available presets for agent nodes. WHEN TO USE:
  • To discover available presets for agent configurations
  • To find the right preset for a specific task
  • Before using get_preset to view details
RETURNS: List of preset names with descriptions.

list_scenarios

Tags: workflow, readonly List all test scenarios for the current workflow. Returns a summary of each scenario including name, description, and last run status. Use this to see what scenarios exist and their current state. No parameters needed - the workflow is determined from the current chat context.

list_workflows

Tags: workflow, readonly Lists all available workflows (builtin, project, and user-created). WHEN TO USE:
  • To discover available workflows
  • To find workflow patterns for common use cases
  • Before using get_workflow to view details
RETURNS: List of workflow names with descriptions and source (builtin, project, or user).

run_scenario

Tags: workflow Run an existing test scenario by name. Executes the scenario against the current workflow and returns the results. Use this after making changes to verify scenarios still pass. Use list_scenarios to see available scenario names.

view_scenario

Tags: workflow, readonly View a specific test scenario’s full definition. Returns the complete scenario YAML including events, expectations, and last run results. Use this to examine a scenario’s configuration or debug test failures.

write_scenario

Tags: workflow Create or update a test scenario with YAML content. Creates or updates a scenario with the given YAML definition and runs it. Scenario YAML structure:

name: scenario_name
description: What this scenario tests
events:
  - node: node_id        # Optional: target specific node
    output:              # Mock output for the node
      message:
        role: assistant
        text: "Hello!"
      response_text: "Hello!"
expect:
  outcome: completed     # or "error"
  reached: ["node1", "node2"]
  not_reached: ["node3"]
Targeting nodes:
  • Top-level nodes: node: "call_llm"
  • Inner loop nodes: node: "agent_loop.call_llm" (dot-separated)
  • Nested loops: node: "outer_loop.inner_loop.call_llm"
Example:
{
  "id": "workflow-uuid",
  "name": "happy_path",
  "content": "name: happy_path\ndescription: Test happy path\nevents:\n  - output:\n      message:\n        role: assistant\n        text: Hello!\n      response_text: Hello!\nexpect:\n  outcome: completed"
}

write_workflow

Tags: workflow Replace an existing workflow draft with YAML content. Usage:
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "content": "name: my-workflow\nentry: [agent]\nnodes:\n  - id: agent\n    type: call_llm"
}
The content must be valid workflow YAML with at minimum:
  • name: Workflow name
  • entry: List of entry point node IDs
  • nodes: Array of node definitions
  • edges: Array of edge definitions (optional for single-node workflows)
Parameters:
  • id: (required) Workflow draft UUID.
  • name: (optional) Overrides the name in YAML. Used for display name.
  • content: (required) Complete workflow YAML content.
  • expected_version: (optional) Version number for conflict detection.
Response: Returns JSON with id, name, slug, and created (false for updates). The slug can be used in ref: fields to reference this workflow.

System & Execution

Tools for executing shell commands and managing system processes.
ToolTagsDescription
bashexecution, shell, defaultExecute bash commands for building, testing, and system operations in a stateless shell.
bash_killexecution, defaultTerminates a background process in the current workspace.

bash

Tags: execution, shell, default Execute bash commands for building, testing, and system operations in a stateless shell. Uses bash -c to execute commands on Unix/macOS/Linux.

WHEN NOT TO USE THIS TOOL

  • File editing → Use Edit/Write tools
  • File reading → Use View tool
  • Searching files → Use Grep/Glob tools
  • Directory listing → Use LS tool

Output Processing

  • Default output limit is 16000 bytes (use max_output to customize)
  • Use tail_lines to get only the last N lines of output
  • Output metadata includes truncation info and original size
Usage notes:
  • The command argument is required.
  • You can specify an optional timeout in milliseconds (up to 600000ms / 10 minutes). If not specified, commands will time out after 60 seconds.
  • Use 'run_in_background: true' to run long-running commands in the background. You can then use BashOutput to check output, BashKill to terminate, and BashList to see all running processes.
  • VERY IMPORTANT: You MUST avoid using the shell in favor of other tools whenever possible, e.g. commands like 'find' and 'grep'. Instead use Grep, Glob, or Agent tools to search. You MUST avoid read commands like 'cat', 'head', 'tail', and 'ls'; use the FileRead and LS tools to read files.
  • VERY IMPORTANT: YOU MUST AVOID WRITING FILES USING SHELL. Please use the appropriate edit and create tools.
  • When issuing multiple commands, use the ';' or '&&' operator to separate them. DO NOT use newlines (newlines are ok in quoted strings).
STATELESS EXECUTION:
  • IMPORTANT: Each command runs in a fresh, stateless shell. Environment variables and directory changes from previous commands do NOT persist.
  • IMPORTANT: The current working directory is ALWAYS automatically set to the current worktree. There is NO need to cd to the worktree before running commands - just run them directly.
  • To change to a subdirectory within the worktree, use 'cd' as part of a compound command (e.g., 'cd subdir && npm test').
  • Environment variables set in prior shell sessions will NOT be included. Use the 'env' parameter to set environment variables for a specific command execution.
  • If you need to maintain state across commands (e.g., activating a virtual environment), combine commands with && or ; operators.
  • Background processes run in separate shell instances and also start fresh without inherited state.
Examples:
  • pytest /foo/bar/tests
  • cd /foo/bar && pytest tests
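
The stateless model can be demonstrated from Python, where each call below spawns a fresh bash -c shell just as each invocation of this tool does (illustrative sketch):

```python
import subprocess

def run(cmd: str) -> str:
    """Run cmd in a fresh, stateless `bash -c` shell and return stdout."""
    return subprocess.run(["bash", "-c", cmd],
                          capture_output=True, text=True).stdout.strip()

run("export FOO=bar")             # FOO exists only inside this shell
print(run("echo ${FOO:-unset}"))  # prints "unset" - nothing persisted
print(run("FOO=bar; echo $FOO"))  # prints "bar" - state kept via a compound command
```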
Important:
  • Return an empty response - the user will see the output directly
  • Never update git config

bash_kill

Tags: execution, default Terminates a background process in the current workspace. WORKSPACE SCOPING:
  • Can kill any process in the current workspace, regardless of which chat started it
  • Multiple chats in the same workspace share process visibility
  • This enables coordination: one chat can stop a server started by another
This tool sends a termination signal and gives the process time to clean up. If it doesn’t stop gracefully, it will be forcefully killed.

Usage notes:
  • Process IDs are provided when you start a background process with run_in_background: true
  • You can only kill processes that are currently running
  • After killing a process, its output is still available via BashOutput
  • Use BashList to see all running background processes in the workspace
Example:
  1. Start a background process: bash(command="npm run dev", run_in_background=true)
  2. Kill the process: bash_kill(process_id="<id-from-step-1>")
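
The graceful-then-forceful sequence described above can be sketched with Python's subprocess module (an illustration of the pattern, not the tool's implementation):

```python
import subprocess

def terminate_gracefully(proc: subprocess.Popen, grace_seconds: float = 5.0) -> None:
    """Send SIGTERM and wait; escalate to SIGKILL if the process lingers."""
    proc.terminate()                      # polite signal, lets the process clean up
    try:
        proc.wait(timeout=grace_seconds)  # grace period for a clean exit
    except subprocess.TimeoutExpired:
        proc.kill()                       # forceful termination
        proc.wait()                       # reap the process
```

For instance, terminate_gracefully(subprocess.Popen(["sleep", "60"])) returns within the grace period rather than waiting out the full minute.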

Other Tools

Miscellaneous tools and utilities.
ToolTagsDescription
metadata_writer-Writes and updates project metadata YAML file
worktreedefaultManage git worktrees for parallel development workflows.

metadata_writer

Writes and updates project metadata YAML file

worktree

Tags: default Manage git worktrees for parallel development workflows. WHEN TO USE:
  • Creating isolated development environments for features/bugs
  • Setting up parallel workspaces for agents
  • Managing multiple concurrent work streams
ACTIONS:
  1. create - Create a new git worktree. Required: name. Optional: branch, base_branch, copy_files, force, session_id
  2. list - List all worktrees. No parameters required
  3. get - Get details of a specific worktree. Required: name
  4. delete - Delete a worktree. Required: name
WORKTREE DATA STORAGE:
  • Worktree information is automatically stored in CEL context as 'worktree_data'
  • Available fields: id, name, path, branch, base_branch, repo_id
  • Use in subsequent steps: worktree_data.path, worktree_data.branch, etc.
FILE COPYING:
  • copy_files: Searches recursively for matching files (e.g., ".env" finds all .env files in any directory)
  • Directory structure is preserved (frontend/.env -> worktree/frontend/.env)
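
The recursive search with structure preservation can be sketched with pathlib. The helper name copy_matching is hypothetical; the tool's actual matching rules may differ:

```python
import shutil
from pathlib import Path

def copy_matching(repo: Path, worktree: Path, patterns: list[str]) -> None:
    """Copy every file matching each pattern, anywhere under repo,
    to the same relative path inside worktree."""
    for pattern in patterns:
        for src in repo.rglob(pattern):
            if src.is_file():
                dest = worktree / src.relative_to(repo)  # frontend/.env -> worktree/frontend/.env
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)
```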
EXAMPLES:

Create a worktree with recursive file copy:
{
  "action": "create",
  "name": "feature-auth",
  "base_branch": "main",
  "copy_files": [".env", ".env.local"]
}
List all worktrees:
{
  "action": "list"
}
NOTES:
  • Worktree paths are stored in ~/.reliant/worktrees/<repo_id>/<name>
  • Each worktree gets its own branch and working directory
  • Use force=true to recreate existing worktrees
  • Worktree data is stored globally for cleanup tracking