Tool Tags
Tools are organized by tags for filtering:

| Tag | Description |
|---|---|
| tag:readonly | Read-only tools (safe for planning mode) |
| tag:file | File operations |
| tag:search | Search operations |
| tag:execution | Command execution |
| tag:web | Web operations |
| tag:planning | Planning and task management tools |
| tag:analysis | Analysis tools |
| tag:workflow | Workflow builder tools |
| tag:mcp | All MCP tools |
| tag:default | Default toolset (commonly used tools) |
Categories
- Planning & Task Management (20 tools)
- File Operations (4 tools)
- Workflow Management (16 tools)
- System & Execution (2 tools)
- Other Tools (2 tools)
Planning & Task Management
Tools for creating plans, managing tasks, and tracking progress.

| Tool | Tags | Description |
|---|---|---|
| add_dependency | planning, plan, default | Create a dependency between two tasks in the current plan. |
| add_task | planning, plan, default | Add a new task to the current plan. This is your primary tool for dynamic planning and sub-planning. |
| bash_list | execution, readonly, plan, default | Lists background processes in the current workspace. |
| bash_output | execution, readonly, plan, default | Retrieves output from a background process with pagination and regex filtering support. |
| create_plan | planning, plan, default | Create a comprehensive plan with tasks for implementing a feature or solving a problem. |
| create_subtask | planning, plan | Create a subtask under an existing task. |
| fetch | web, readonly, plan, default | Fetches content from a URL and returns it in the specified format. |
| get_plan | planning, readonly, plan, default | Retrieve the current plan for this session. |
| glob | search, readonly, plan, default | Fast file pattern matching tool that works with any codebase size. |
| grep | search, readonly, plan, default | A powerful search tool built on ripgrep. |
| layout_library | readonly, plan | Layout library tool that provides pre-built, accessible, and responsive HTML/CSS layout templates… |
| list_ready_tasks | planning, readonly, plan, default | List tasks that are ready to work on — no unresolved blockers. |
| list_tasks | planning, readonly, plan, default | List all tasks for the current plan. |
| project_analyzer | analysis, readonly, plan | Analyzes project structure, detects languages, build systems, and test frameworks. |
| remove_dependency | planning, plan, default | Remove a dependency between two tasks. |
| sourcegraph | analysis, readonly, plan | Search code across public repositories using Sourcegraph’s GraphQL API. |
| update_plan | planning, plan, default | Update an existing plan’s details or status. |
| update_task | planning, plan, default | Update a task’s status, details, or metadata. |
| view | file, readonly, plan, default | File viewing tool that reads and displays the contents of files with line numbers, allowing you t… |
| websearch | web, readonly, plan, default | Search the web using DuckDuckGo’s HTML search. |
add_dependency
Tags:planning, plan, default
Create a dependency between two tasks in the current plan.
DEPENDENCY TYPES:
- blocks: from_task must complete before to_task can start
- related: informational link, no execution constraint
- parallel_with: explicitly marks tasks as parallelizable
EXAMPLES:
- Task A blocks Task B: add_dependency(from_task="A-id", to_task="B-id", type="blocks"). Means B cannot start until A completes.
- Tasks can run together: add_dependency(from_task="A-id", to_task="B-id", type="parallel_with"). Explicitly marks A and B as safe to run in parallel.
- Informational link: add_dependency(from_task="A-id", to_task="B-id", type="related"). No execution constraint, just documents a relationship.
add_task
Tags:planning, plan, default
Add a new task to the current plan. This is your primary tool for dynamic planning and sub-planning.
WHEN TO USE:
- When you discover additional work that needs to be done
- When you encounter missing dependencies or prerequisites
- When breaking down complex work into more steps
- When pivoting approach requires new tasks
SUB-PLANNING:
- Use parent_id to create subtasks under an existing task when you discover complexity
- Break down tasks that prove more complex than initially planned
- Create hierarchical task structures for better organization
- Each subtask inherits context from its parent but can have specialized metadata
COMMON SCENARIOS:
- Implementation agent finds a task needs research → add a subtask with preferred_agent: "research"
- Research agent discovers multiple integration points → add subtasks for each integration
- Any agent finds unfamiliar tech/patterns → add a subtask with tool_hints: ["search_first", "use_subagent"]
- Task requires multiple phases → add sequential subtasks with position ordering
METADATA OPTIONS:
- preferred_agent: Which agent should handle this (planning/research/implementation/debugging/tdd/finalize)
- tool_hints: Suggested tools to use ["use_bash", "use_subagent", "search_first", "test_first"]
- dependencies: What this task depends on (packages, files, other tasks)
- notes: Important context or discoveries
- priority: high/medium/low
EXAMPLES:
- Complex Implementation Discovery: parent_id: "task_123", title: "Research authentication patterns", preferred_agent: "research", notes: "Found unfamiliar OAuth flow"
- Multi-Step Breakdown: parent_id: "task_456", title: "Setup database schema", position: 1; parent_id: "task_456", title: "Create migration scripts", position: 2
- Cross-Agent Coordination: parent_id: "task_789", title: "Write integration tests", preferred_agent: "tdd", dependencies: ["API endpoints complete"]
BEST PRACTICES:
- Add tasks as soon as you discover they're needed
- Use parent_id when expanding existing tasks that prove complex
- Include metadata hints for better execution
- Position subtasks logically in sequence
- Use descriptive titles and comprehensive descriptions
- Create subtasks for different agent specializations when needed
bash_list
Tags:execution, readonly, plan, default
Lists background processes in the current workspace.
WORKSPACE SCOPING:
- Processes are scoped to the current workspace (worktree)
- Multiple chats in the same workspace share the same process list
- This enables coordination: one chat can start a server, another can check its status
- Use BashOutput to view output and BashKill to terminate any workspace process
- By default, shows only running processes in the current workspace
- Use 'all: true' to include completed, failed, and killed processes
- Process IDs can be used with BashOutput and BashKill tools
- Running processes: Shows ID, command, and how long they’ve been running
- Completed processes: Shows ID, command, exit code, and duration
- Failed processes: Shows ID, command, exit code, and error indication
- List running processes: bash_list()
- List all processes including completed: bash_list(all=true)
bash_output
Tags:execution, readonly, plan, default
Retrieves output from a background process with pagination and regex filtering support.
WORKSPACE SCOPING:
- Can read output from any process in the current workspace, regardless of which chat started it
- Multiple chats in the same workspace share process visibility
- This enables monitoring: check on servers or builds started by other chats
- Process IDs are provided when you start a background process with run_in_background: true
- The tool will indicate if the process is still running or has completed
- If the process has completed, the exit code will be provided
- Output is not cleared after reading - you can re-read from any position
Standard Pagination (default):
- offset: Start reading from byte N (default: 0)
- limit: Read up to N bytes (default: 16000)
- Can be combined: offset + limit
Tail Mode:
- tail: Get last N lines
- Cannot be combined with: regex, offset, limit
Regex Filter Mode:
- regex: Filter output to lines matching pattern
- When set, the tool filters FIRST, then applies offset/limit to the filtered results
- Can be combined with: offset, limit, regex_case_insensitive, regex_context_before, regex_context_after
- Cannot be combined with: tail
- Optional parameters:
  - regex_case_insensitive: Case-insensitive matching
  - regex_context_before: Include N lines before match (like grep -B)
  - regex_context_after: Include N lines after match (like grep -A)
VALID COMBINATIONS:
- offset + limit (standard pagination)
- tail (alone)
- regex (alone)
- regex + offset + limit (filtered pagination)
- regex + regex_case_insensitive + regex_context_before + regex_context_after
INVALID COMBINATIONS:
- tail + regex
- tail + offset
- tail + limit
- regex_case_insensitive without regex
- regex_context_before/after without regex
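These combination rules can be expressed as a small validator. This is an illustrative sketch, not the tool's actual implementation; the parameter names simply mirror the ones documented above.

```python
def validate_bash_output_params(offset=None, limit=None, tail=None, regex=None,
                                regex_case_insensitive=False,
                                regex_context_before=None,
                                regex_context_after=None):
    """Return None if the parameter combination is valid, else an error message."""
    # Tail mode is exclusive: it cannot be combined with regex, offset, or limit.
    if tail is not None and any(p is not None for p in (regex, offset, limit)):
        return "tail cannot be combined with regex, offset, or limit"
    # The regex_* options only make sense when a regex filter is given.
    if regex is None and (regex_case_insensitive
                          or regex_context_before is not None
                          or regex_context_after is not None):
        return "regex_case_insensitive and regex_context_* require regex"
    return None
```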
EXAMPLES:
- Start a background process: bash(command="npm run dev", run_in_background=true)
- Get first chunk: bash_output(process_id="<id>")
- Get next chunk: bash_output(process_id="<id>", offset=16000)
- Get last 100 lines: bash_output(process_id="<id>", tail=100)
- Filter for errors: bash_output(process_id="<id>", regex="ERROR|FATAL")
- Filter with context: bash_output(process_id="<id>", regex="ERROR", regex_context_after=3)
- Filter and paginate: bash_output(process_id="<id>", regex="WARN", offset=0, limit=10000)
RESPONSE METADATA:
- has_more: true if more output is available
- next_offset: where to start reading for the next chunk
- total_available: total bytes available in the (filtered or original) output
- filter_applied: true if regex was used
- total_matches: number of matching lines (when filtered)
- matches_in_response: number of matches in this chunk
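The filter-first ordering matters: offset and limit index into the filtered output, not the raw log. A simplified model of that behavior (the real tool paginates bytes of process output and returns richer metadata):

```python
import re

def read_output(text, offset=0, limit=16000, regex=None):
    """Optionally filter lines by regex, then apply offset/limit to the result."""
    filtered = text
    if regex is not None:
        pattern = re.compile(regex)
        # Filter FIRST: keep only lines matching the pattern.
        filtered = "\n".join(line for line in text.splitlines()
                             if pattern.search(line))
    # THEN paginate over the (possibly filtered) text.
    chunk = filtered[offset:offset + limit]
    return {
        "output": chunk,
        "has_more": offset + limit < len(filtered),
        "next_offset": offset + len(chunk),
        "total_available": len(filtered),
        "filter_applied": regex is not None,
    }
```

Reading the next chunk just means passing the returned next_offset back in as offset.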
create_plan
Tags:planning, plan, default
Create a comprehensive plan with tasks for implementing a feature or solving a problem.
WHEN TO USE:
- AFTER you perform your initial research and analyze the problem.
- Use this tool when you need to organize complex work into structured steps
- Only available in the ‘research’ state
- You typically should create plans AFTER your findings. Avoid creating tasks to research, explore, identify, or search through the codebase. You should first perform your research so you can create an informed plan.
- Title: Clear, concise title for the plan
- Description: Detailed description including:
- Main objective
- Approach/strategy
- Alternative approaches (if applicable)
- Success criteria
- Complexity: simple|moderate|complex
- Tasks: List of tasks with title, description, optional metadata, and optional dependencies
- The plan will be associated with the current session
- task_position: The 1-indexed position of another task in the tasks array
- type: “blocks” (must complete first), “related” (informational), or “parallel_with” (safe to run together)
- Example with dependencies: if task 3 declares dependencies: [{task_position: 1, type: "blocks"}], it means task 1 blocks task 3.
- preferred_agent: Which agent should handle this task
- tool_hints: Suggested tools to use
- dependencies: Informational dependency notes (free-form text)
- notes: Important context
- priority: high/medium/low
- Break down work into clear, actionable tasks
- Order tasks logically
- Use inline dependencies to define the task graph upfront
- Include a mini-roadmap in the description
- Document alternative approaches for pivoting
- Be specific about what needs to be done
- Consider edge cases and potential blockers
- Consider changing state in parallel with plan creation, if states are available.
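The 1-indexed task_position convention can be made concrete with a small resolver. The data shapes here are hypothetical; the actual plan schema may differ.

```python
def resolve_inline_dependencies(tasks):
    """Turn inline 1-indexed task_position refs into (from_pos, to_pos, type) edges.

    A task at position i carrying dependencies like
    [{"task_position": 1, "type": "blocks"}] yields the edge (1, i, "blocks"),
    i.e. the task at position 1 blocks the task at position i.
    """
    edges = []
    for i, task in enumerate(tasks, start=1):  # positions are 1-indexed
        for dep in task.get("dependencies", []):
            edges.append((dep["task_position"], i, dep["type"]))
    return edges
```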
create_subtask
Tags:planning, plan
Create a subtask under an existing task.
WHEN TO USE:
- When breaking down a complex task into smaller steps
- To add more granular tracking
- When discovering additional work while implementing
- Keep subtasks focused and specific
- Use subtasks for logical groupings of work
- Don’t create too many levels of nesting
fetch
Tags:web, readonly, plan, default
Fetches content from a URL and returns it in the specified format.
Uses Mozilla Readability to automatically extract main page content, stripping navigation,
footers, ads, and other chrome. Returns only the readable content for text and markdown formats.
WHEN TO USE THIS TOOL:
- Use when you need to download content from a URL
- Helpful for retrieving documentation, API responses, or web content
- Useful for getting external information to assist with tasks
- Provide the URL to fetch content from
- Specify the desired output format (text, markdown, or html)
- Optionally set a timeout for the request
- Automatic content extraction using Mozilla Readability (strips nav, ads, footers)
- Supports three output formats: text, markdown, and html
- Automatically handles HTTP redirects
- Detects likely JavaScript-rendered pages and warns you
- Sets reasonable timeouts to prevent hanging
- max_size: Maximum bytes to fetch (default: 16000, ~16KB). Prevents downloading huge files that could overwhelm context.
- Cannot render JavaScript. Single-page apps (SPAs) will return little or no content. The response metadata will include possible_js_rendered=true when this is detected. For JS-heavy sites, consider using browser tools instead.
- Default maximum response size is 16KB (use max_size to adjust)
- Only supports HTTP and HTTPS protocols
- Cannot handle authentication or cookies
- Some websites may block automated requests
- For GitHub repos, use raw.githubusercontent.com URLs instead of github.com (e.g., https://raw.githubusercontent.com/org/repo/main/README.md)
- For API docs that are JS-rendered SPAs, look for the OpenAPI/Swagger JSON spec URL instead
- Use text or markdown format for documentation (html returns raw markup with all chrome)
- If the response says possible_js_rendered=true, the page needs JavaScript to render. Try finding an alternative URL, a raw content source, or use browser tools.
- Adjust max_size for larger documents (but consider context limits)
- content_length: Size of the extracted content
- raw_html_size: Size of the original HTML before extraction (for HTML pages)
- truncated: Whether content was truncated to fit max_size
- encoding_used: The format that was applied
- page_title: Page title extracted by Readability (when available)
- possible_js_rendered: True if the page appears to be JavaScript-rendered (very little content extracted)
- used_readability: True if Readability content extraction was applied
get_plan
Tags:planning, readonly, plan, default
Retrieve the current plan for this session.
WHEN TO USE:
- When you need to review the current plan
- To check plan status and progress
- To understand what needs to be done
- Plan details including title, description, status, and complexity
- Returns error if no plan exists for the session
glob
Tags:search, readonly, plan, default
- Fast file pattern matching tool that works with any codebase size
- Supports glob patterns like "**/*.js" or "src/**/*.ts"
- Returns matching file paths sorted by modification time
- Use this tool when you need to find files by name patterns
- When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Agent tool instead
- You have the capability to call multiple tools in a single response. It is always better to speculatively perform multiple searches as a batch that are potentially useful.
IGNORED BY DEFAULT:
- Dependencies: node_modules, vendor, bower_components, jspm_packages
- Build outputs: dist, build, target, bin, obj, out, generated
- Cache/temp: __pycache__, coverage, tmp, temp, logs
- Internal: .git, .reliant
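The default-ignore behavior amounts to checking whether any parent directory component is on the list above. A rough sketch (illustrative, not the tool's code):

```python
# Directory names skipped unless include_ignored is requested.
IGNORED_DIRS = {
    "node_modules", "vendor", "bower_components", "jspm_packages",  # dependencies
    "dist", "build", "target", "bin", "obj", "out", "generated",    # build outputs
    "__pycache__", "coverage", "tmp", "temp", "logs",               # cache/temp
    ".git", ".reliant",                                             # internal
}

def is_ignored(rel_parts):
    """True if any parent directory in a relative path is on the ignore list.

    rel_parts is a tuple like ("node_modules", "pkg", "index.js"); the final
    component is the file name itself and is not checked.
    """
    return bool(IGNORED_DIRS.intersection(rel_parts[:-1]))
```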
grep
Tags:search, readonly, plan, default
A powerful search tool built on ripgrep.
ALWAYS use Grep for search tasks. NEVER use grep or rg via Bash.
OUTPUT MODES:
- “files_with_matches” (default): Returns file paths sorted by modification time
- “content”: Shows matching lines with line numbers
- “count”: Shows match counts per file
- pattern: Regex pattern (use fixed_strings=true for literal matching)
- glob: Filter files (e.g., "*.js", "**/*.tsx")
- type: File type filter (e.g., "js", "ts", "py", "go")
- word_boundary: Match whole words only (e.g., "foo" won't match "foobar")
- fixed_strings: Treat pattern as literal text, no regex escaping needed
- head_limit: Limit number of results
- -C/-A/-B: Context lines (content mode only)
- include_ignored: Search commonly ignored directories
IGNORED BY DEFAULT:
- Dependencies: node_modules, vendor, bower_components, jspm_packages
- Build outputs: dist, build, target, bin, obj, out, generated
- Cache/temp: __pycache__, coverage, tmp, temp, logs
- Internal: .git, .reliant
RIPGREP REGEX SYNTAX:
- Alternation: | (NOT \|)
- Grouping: () (NOT \( \))
- Quantifiers: +, ?, {n,m} work without escaping
- Character classes: [a-z], \d, \w, \s work as expected
COMMON MISTAKES:
- WRONG: pattern="foo\|bar" (\| matches a literal pipe, NOT alternation). RIGHT: pattern="foo|bar" (| is alternation in ripgrep)
- WRONG: pattern="\(group\)" (\( matches a literal parenthesis). RIGHT: pattern="(group)" (() is grouping in ripgrep)
- WRONG: pattern="foo\+" (\+ matches a literal plus). RIGHT: pattern="foo+" (+ means one-or-more in ripgrep)
- To match literal special characters, use fixed_strings=true instead of backslash escaping
EXAMPLES:
- Find function definitions: pattern="func\s+\w+", type="go"
- Find exact text with special chars: pattern="interface{}", fixed_strings=true
- Find whole word: pattern="Error", word_boundary=true
- Multiline patterns: pattern="struct \{[\s\S]*?\}", multiline=true
- Search in node_modules: pattern="lodash", include_ignored=true
- Alternation (match any of several): pattern="foo|bar|baz"
- Grouped alternation: pattern="(get|set)Value"
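ripgrep's regex engine agrees with Python's re module on these basics, so the options can be modeled per line. This is a rough sketch of the semantics; the real tool matches across whole files.

```python
import re

def grep_line(pattern, line, fixed_strings=False, word_boundary=False):
    """Minimal per-line model of the grep tool's pattern options."""
    if fixed_strings:
        pattern = re.escape(pattern)            # metacharacters become literal text
    if word_boundary:
        pattern = r"\b(?:" + pattern + r")\b"   # whole-word match
    return re.search(pattern, line) is not None
```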
layout_library
Tags:readonly, plan
Layout library tool that provides pre-built, accessible, and responsive HTML/CSS layout templates for UX design.
WHEN TO USE:
- Creating new UI layouts quickly
- Getting responsive, accessible layout templates
- Prototyping user interfaces
- Establishing consistent layout patterns
- action: “list” - Get all available layouts with descriptions
- action: “get” with layout name - Retrieve specific layout HTML/CSS
- 10 pre-built responsive layouts
- Accessibility-first design
- Mobile-responsive
- Semantic HTML structure
list_ready_tasks
Tags:planning, readonly, plan, default
List tasks that are ready to work on — no unresolved blockers.
A task is “ready” when:
- Its status is “pending” (not started yet)
- All tasks that block it (via ‘blocks’ dependencies) have status “completed”
- List of ready tasks with their details
- Total count of ready tasks vs total pending
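That rule is a straightforward graph check. A sketch with hypothetical data shapes (the actual task and dependency records are richer):

```python
def ready_tasks(tasks, dependencies):
    """Return ids of pending tasks whose 'blocks' dependencies are all completed.

    tasks: dict of task_id -> status
    dependencies: list of (from_task, to_task, type) tuples
    """
    blockers = {}
    for from_task, to_task, dep_type in dependencies:
        if dep_type == "blocks":                    # only 'blocks' gates execution
            blockers.setdefault(to_task, []).append(from_task)
    return [
        tid for tid, status in tasks.items()
        if status == "pending"
        and all(tasks[b] == "completed" for b in blockers.get(tid, []))
    ]
```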
list_tasks
Tags:planning, readonly, plan, default
List all tasks for the current plan.
WHEN TO USE:
- When you need to see all tasks in the plan
- To check task progress and status
- To understand what work needs to be done
- List of all tasks with their status, title, and hierarchy
- Tasks are ordered by position and show parent-child relationships
project_analyzer
Tags:analysis, readonly, plan
Analyzes project structure, detects languages, build systems, and test frameworks
remove_dependency
Tags:planning, plan, default
Remove a dependency between two tasks.
Specify from_task, to_task, and type to identify which dependency to remove.
sourcegraph
Tags:analysis, readonly, plan
Search code across public repositories using Sourcegraph’s GraphQL API.
WHEN TO USE THIS TOOL:
- Use when you need to find code examples or implementations across public repositories
- Helpful for researching how others have solved similar problems
- Useful for discovering patterns and best practices in open source code
- Provide a search query using Sourcegraph’s query syntax
- Optionally specify the number of results to return (default: 10)
- Optionally set a timeout for the request
- Basic search: "fmt.Println" searches for exact matches
- File filters: "file:\.go fmt.Println" limits to Go files
- Repository filters: "repo:^github.com/golang/go$ fmt.Println" limits to specific repos
- Language filters: "lang:go fmt.Println" limits to Go code
- Boolean operators: "fmt.Println AND log.Fatal" for combined terms
- Regular expressions: "fmt.(Print|Printf|Println)" for pattern matching
- Quoted strings: "exact phrase" for exact phrase matching
- Exclude filters: "-file:test" or "-repo:forks" to exclude matches
Repository filters:
- "repo:name" - Match repositories with name containing "name"
- "repo:^github.com/org/repo$" - Exact repository match
- "repo:org/repo@branch" - Search specific branch
- "repo:org/repo rev:branch" - Alternative branch syntax
- "-repo:name" - Exclude repositories
- "fork:yes" or "fork:only" - Include or only show forks
- "archived:yes" or "archived:only" - Include or only show archived repos
- "visibility:public" or "visibility:private" - Filter by visibility
File filters:
- "file:\.js$" - Files with .js extension
- "file:internal/" - Files in internal directory
- "-file:test" - Exclude test files
- "file:has.content(Copyright)" - Files containing "Copyright"
- "file:has.contributor([email protected])" - Files with specific contributor
Content filters:
- content:"exact string" - Search for exact string
- -content:"unwanted" - Exclude files with unwanted content
- "case:yes" - Case-sensitive search
Type filters:
- "type:symbol" - Search for symbols (functions, classes, etc.)
- "type:file" - Search file content only
- "type:path" - Search filenames only
- "type:diff" - Search code changes
- "type:commit" - Search commit messages
Commit/diff search:
- after:"1 month ago" - Commits after date
- before:"2023-01-01" - Commits before date
- "author:name" - Commits by author
- message:"fix bug" - Commits with message
Result selection:
- "select:repo" - Show only repository names
- "select:file" - Show only file paths
- "select:content" - Show only matching content
- "select:symbol" - Show only matching symbols
Result control:
- "count:100" - Return up to 100 results
- "count:all" - Return all results
- "timeout:30s" - Set search timeout
EXAMPLES:
- "file:\.go context.WithTimeout" - Find Go code using context.WithTimeout
- "lang:typescript useState type:symbol" - Find TypeScript React useState hooks
- "repo:^github.com/kubernetes/kubernetes$ pod list type:file" - Find Kubernetes files related to pod listing
- repo:sourcegraph/sourcegraph$ after:"3 months ago" type:diff database - Recent changes to database code
- "file:Dockerfile (alpine OR ubuntu) -content:alpine:latest" - Dockerfiles with specific base images
- "repo:has.path(\.py) file:requirements.txt tensorflow" - Python projects using TensorFlow
BOOLEAN OPERATORS:
- "term1 AND term2" - Results containing both terms
- "term1 OR term2" - Results containing either term
- "term1 NOT term2" - Results with term1 but not term2
- "term1 and (term2 or term3)" - Grouping with parentheses
- Only searches public repositories
- Rate limits may apply
- Complex queries may take longer to execute
- Maximum of 20 results per query
- Use specific file extensions to narrow results
- Add repo: filters for more targeted searches
- Use type:symbol to find function/method definitions
- Use type:file to find relevant files
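Since filters are just space-separated tokens in the query string, they compose well programmatically. A hypothetical helper, not part of the tool itself:

```python
def build_query(terms, repo=None, lang=None, file=None, result_type=None, count=None):
    """Assemble a Sourcegraph query string from common filters."""
    parts = []
    if repo:
        parts.append("repo:%s" % repo)
    if lang:
        parts.append("lang:%s" % lang)
    if file:
        parts.append("file:%s" % file)
    if result_type:
        parts.append("type:%s" % result_type)
    if count:
        parts.append("count:%d" % count)
    parts.append(terms)  # search terms go last
    return " ".join(parts)
```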
update_plan
Tags:planning, plan, default
Update an existing plan’s details or status.
WHEN TO USE:
- When you need to modify the plan based on new information
- When pivoting to a different approach
- When marking a plan as completed or cancelled
- Title: Update the plan title
- Description: Add new information, document pivots
- Status: pending|in_progress|completed|cancelled
- Complexity: simple|moderate|complex
- Document why changes are being made
- Keep the description updated with current approach
- Use this to track progress and pivots
update_task
Tags:planning, plan, default
Update a task’s status, details, or metadata.
WHEN TO USE:
- When starting work on a task (mark as in_progress)
- When completing a task (mark as completed)
- When a task is blocked or failed
- To update task description with findings
- To add notes, hints, or discoveries to metadata
- To claim a task by setting assignee + in_progress
- pending: Not started yet
- in_progress: Currently working on it
- completed: Successfully finished
- failed: Could not complete
- blocked: Waiting on something (add blocker to notes)
- skipped: Decided not to do
- cancelled: No longer needed
- Free-form text identifying who is working on this task
- Use a descriptive label: spawn title, role name, or agent identifier
- Claim pattern: update_task(task_id=“X”, status=“in_progress”, assignee=“researcher-auth”)
- Other agents see assignments in list_tasks and skip claimed work
- notes: Add discoveries, blockers, or important context
- preferred_agent: Suggest which agent should handle this
- tool_hints: Suggest tools to use [“use_bash”, “search_first”]
- dependencies: Document what this depends on
- next_steps: What to do after this task
- Update status when you start and finish tasks
- Add descriptions to document what was done
- Use metadata.notes for blockers when marking as blocked
- Add tool_hints for complex tasks to guide future execution
- Set assignee when claiming a task to prevent duplicate work
view
Tags:file, readonly, plan, default
File viewing tool that reads and displays the contents of files with line numbers, allowing you to examine code, logs, or text data.
WHEN TO USE:
- Reading contents of specific files (source code, configs, logs)
- Examining text-based file formats
- Provide the file path
- Optional: offset (starting line) and limit (number of lines)
- Issue multiple view tools in a single request for improved performance
- Displays file contents with line numbers for easy reference
- Can read from any position in a file using the offset parameter
- Handles large files by limiting the number of lines read
- Automatically truncates very long lines for better display
- Suggests similar file names when the requested file isn’t found
- Maximum output size is 16KB (~4K tokens) - larger files are truncated with head+tail
- Default reading limit is 300 lines
- Lines longer than 500 characters are truncated
- Cannot display binary files or images
- Images can be identified but not displayed
- Use with Glob tool to first find files you want to view
- For code exploration, first use Grep to find relevant files, then View to examine them
- When viewing large files, use the offset parameter to read specific sections
- If output is truncated, use offset to read the middle section
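Head+tail truncation keeps the start and end of an oversized file and drops the middle. Roughly (an illustrative sketch; the tool's exact byte accounting may differ):

```python
def truncate_output(text, max_bytes=16000, marker="\n... [truncated] ...\n"):
    """Keep the head and tail of oversized output, dropping the middle.

    Returns (output, truncated_flag).
    """
    if len(text) <= max_bytes:
        return text, False
    keep = (max_bytes - len(marker)) // 2  # budget split between head and tail
    return text[:keep] + marker + text[-keep:], True
```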
websearch
Tags:web, readonly, plan, default
Search the web using DuckDuckGo’s HTML search.
WHEN TO USE THIS TOOL:
- Finding current information not available in the assistant’s training data
- Researching documentation, tutorials, or examples online
- Looking up error messages or debugging information
- Finding libraries, tools, or frameworks
- Checking current status of services or projects
- Discovering recent developments or news about technologies
- Provide a search query as you would in a web browser
- Optionally specify the number of results (default: 10, max: 20)
- Use simple, natural language queries with specific keywords
- Keep queries short and focused (3-8 words works best)
- Use minus (-) to exclude terms: python tutorial -django
- Quotes work for simple exact phrases: "react hooks" tutorial
DOES NOT WORK:
- Boolean operators: OR, AND (e.g. "PUT" OR "POST" will FAIL)
- Complex quoted phrase combinations (multiple quoted phrases with operators)
- site: operator is unreliable and often returns no results
- filetype: operator is unreliable
If you need boolean-style searches, run multiple simple queries instead.
GOOD QUERIES:
- "golang context timeout example" - Simple keywords
- "anthropic claude api documentation" - Natural language
- "Customer.io transactional API" - Product + feature
- "react hooks tutorial 2024" - Topic + timeframe
BAD QUERIES:
- "PUT" OR "POST" OR "PATCH" email content - Boolean operators don't work
- site:customer.io/docs/api specific-page - site: is unreliable
- "exact phrase 1" OR "exact phrase 2" - Complex boolean combos fail
- Start with broad queries, then narrow based on results
- If a search returns 0 results, SIMPLIFY the query - don’t add complexity
- After 3-4 searches on the same topic, synthesize what you have rather than keep searching
- For API documentation, search for the official SDK/client library on GitHub instead
- For GitHub content, prefer fetching raw.githubusercontent.com URLs over github.com
- Title: The title of the search result
- Description: A brief description/snippet from the page
- URL: The link to the resource
- Maximum of 20 results per query
- May have rate limits if used excessively
- DuckDuckGo HTML search has limited query syntax (see above)
- Results may be less comprehensive than Google for niche technical queries
- Start with fewer results (5-10) for faster responses
- Use specific queries to get more relevant results
- Combine with the fetch tool to retrieve full page content from results
File Operations
Tools for reading, writing, and modifying files.

| Tool | Tags | Description |
|---|---|---|
| edit | file, default | Make precise text replacements in files or create new files. All edit operations must be provided… |
| find_replace | file, default | Performs find and replace operations across multiple files matching a glob pattern. |
| move_code | file, default | Move or copy a block of code from one location to another, within the same file or across files. |
| write | file, default | File writing tool that creates or updates files in the filesystem. |
edit
Tags:file, default
Make precise text replacements in files or create new files. All edit operations must be provided in the edits array.
NOTE: You can parallelize multiple edit tool calls in one message, even if they involve the same files.
WHEN TO USE:
- Precise text replacements
- Creating new files (empty old_string)
- Deleting specific content (empty new_string)
- Renaming variables/functions (with replace_all)
- Coordinated changes across multiple files
WHEN NOT TO USE:
- Complete file rewrite: Use Write tool
- Moving/renaming files: Use Bash mv command
COMMON MISTAKES:
- Insufficient context in old_string (needs 3-5 lines)
- Forgetting that whitespace/indentation must match exactly
- Not checking whether the text appears multiple times
CRITICAL REQUIREMENTS
- UNIQUENESS (when replace_all=false):
  - Include 3-5 lines of context BEFORE
  - Include 3-5 lines of context AFTER
  - Match whitespace/indentation EXACTLY
- VERIFICATION CHECKLIST:
  - Check how many times the text appears
  - Include enough context for uniqueness
  - Verify parent directories exist (new files)
- FAILURE CONDITIONS:
  - old_string not found → FAILS
  - Multiple matches (without replace_all) → FAILS
  - Whitespace mismatch → FAILS
BEST PRACTICES
- Include ample context
- All operations are atomic (all succeed or all fail)
- Use replace_all for systematic renames
- Verify edits don’t break code
WORKS WELL WITH
- AFTER: Bash (test changes)
- ALTERNATIVE: Write (complete rewrite)
PARAMETERS
- edits: Array of edit operations (required)
- file_path: Absolute path (required)
- old_string: Text to find (exact match)
- new_string: Replacement text
- replace_all: Replace all occurrences (optional)
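The failure conditions and atomicity can be modeled on a plain string. This is a simplified in-memory sketch; the real tool operates on files and validates context.

```python
def apply_edits(content, edits):
    """Apply edits atomically: any failure raises before anything is returned.

    Each edit is a tuple (old_string, new_string, replace_all).
    """
    for old, new, replace_all in edits:
        count = content.count(old)
        if count == 0:
            raise ValueError("old_string not found: %r" % old)
        if count > 1 and not replace_all:
            raise ValueError("old_string matches %d times; add context "
                             "or set replace_all" % count)
        content = content.replace(old, new)
    return content
```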
find_replace
Tags:file, default
Performs find and replace operations across multiple files matching a glob pattern.
WHEN TO USE:
- Renaming variables, functions, or classes across multiple files
- Updating imports or module references project-wide
- Fixing consistent typos or naming conventions
- Batch updating configuration values
- Refactoring patterns across the codebase
WHEN NOT TO USE:
- Single file edits: Use Edit tool instead
- Complex structural changes: Use Patch tool
- Context-dependent replacements: Use Edit or Patch for precision
FEATURES:
- Glob pattern file filtering (e.g., "**/*.js", "src/**/*.{ts,tsx}")
- Regular expression support with capture groups
- Case-insensitive matching option
- Preview mode to see changes before applying
- Automatic file history tracking
USAGE PATTERNS:
Preview First (Recommended)
Use preview=true to see what would change before committing. Preview mode does not require permission, making it ideal for scoping changes.
find_pattern: "oldFunction" replace_text: "newFunction" file_glob: "**/*.js" preview: true
Simple Text Replacement
find_pattern: "oldFunction" replace_text: "newFunction" file_glob: "**/*.js"
Regex with Capture Groups
find_pattern: "import (.*) from 'old-module'" replace_text: "import $1 from 'new-module'" use_regex: true file_glob: "**/*.ts"
Case-Insensitive Replacement
find_pattern: "TODO" replace_text: "FIXME" ignore_case: true file_glob: "**/*.{js,ts,jsx,tsx}"
CRITICAL REQUIREMENTS
- PREVIEW FIRST:
- ALWAYS call with preview=true first to verify the pattern matches and scope
- Preview shows diffs of what would change without modifying any files
- After reviewing the preview, call again without preview to apply
- FILE READING:
- Checks modification times to prevent conflicts
- Validates file access permissions
- PATTERN MATCHING:
- Literal text matching by default
- Regex patterns with use_regex=true
- Case sensitivity controlled by ignore_case
- SAFETY CHECKS:
- Atomic operation (all or nothing)
- Preserves file history
BEST PRACTICES
- ALWAYS use preview=true first to see what changes would be made
- Use specific file globs to limit scope
- Review the preview diffs carefully before applying
- Test regex patterns with preview before committing
WORKS WELL WITH
- BEFORE: preview=true (see changes first)
- BEFORE: Grep (find occurrences)
- BEFORE: View (read files)
- AFTER: Bash (run tests)
- ALTERNATIVE: Edit (single file)
- ALTERNATIVE: Patch (complex multi-file edits)
PARAMETERS
- find_pattern: Text or regex pattern to find (required)
- replace_text: Replacement text (required)
- file_glob: File filter pattern (optional, defaults to all files)
- ignore_case: Case-insensitive matching (optional, default false)
- use_regex: Treat pattern as regex (optional, default false)
- preview: Preview mode - show what would change without applying (optional, default false)
Remember: Use preview=true first, then apply.
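A capture-group rewrite like the one in the examples above can be dry-run in plain Python before invoking the tool; note that Python's `re` uses `\1` where the tool's replace_text uses `$1` (this is a sketch of the pattern, not the tool itself):

```python
import re

# Dry-run a capture-group rewrite, using Python's re module.
# The tool writes $1 in replace_text; re.sub uses \1 for the same group.
pattern = r"import (.*) from 'old-module'"
line = "import { helper } from 'old-module'"
updated = re.sub(pattern, r"import \1 from 'new-module'", line)
print(updated)  # import { helper } from 'new-module'
```

Testing the pattern this way serves the same purpose as preview=true: confirming the regex captures what you expect before applying it broadly.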
move_code
Tags:file, default
Move or copy a block of code from one location to another, within the same file or across files.
WHEN TO USE:
- Reorganizing code within a file
- Moving a function/method to a different file
- Extracting code into a new location
- Copying code snippets between files
- Reordering functions in a file
WHEN NOT TO USE:
- For simple cut/paste of small text: use the Edit tool
- For renaming across files: use the find_replace tool
- For complex multi-file refactors: consider the refactor agent
HOW IT WORKS:
- Extracts lines source_start to source_end from source_file
- Inserts the extracted code AFTER target_line in target_file
- If operation is "move" (default), deletes the original lines from source
- If operation is "copy", keeps the original lines
EXAMPLES:
- Move a function to the end of a file
- Copy a code block to another file
- Insert at the beginning of a file
IMPORTANT NOTES:
- Line numbers are 1-indexed
- source_start and source_end are INCLUSIVE
- target_line = 0 inserts at the very beginning
- For same-file moves, the tool adjusts for line number shifts automatically during the operation, but line numbers will have shifted afterward, so re-read the file before making further edits
- Add blank lines in the extracted code if needed for proper spacing
- Check for any imports/dependencies that might need to be added to the target file
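The line-range semantics above (1-indexed, inclusive, insert after target_line) can be modeled for a cross-file move; this is an illustrative sketch, not the tool's implementation:

```python
def move_block(source_lines, source_start, source_end, target_lines, target_line):
    """Model of move_code semantics for a cross-file move:
    1-indexed, inclusive range; block lands AFTER target_line (0 = top)."""
    block = source_lines[source_start - 1:source_end]        # inclusive slice
    remaining = source_lines[:source_start - 1] + source_lines[source_end:]
    updated = target_lines[:target_line] + block + target_lines[target_line:]
    return remaining, updated

src = ["def a():", "def b():", "    pass", "def c():"]
tgt = ["# target file"]
remaining, updated = move_block(src, 2, 3, tgt, 1)
print(remaining)  # ['def a():', 'def c():']
print(updated)    # ['# target file', 'def b():', '    pass']
```

The off-by-one adjustments (subtracting 1 from source_start, slicing to source_end unchanged) are what make the 1-indexed, inclusive convention line up with Python's 0-indexed, exclusive slices.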
write
Tags:file, default
File writing tool that creates or updates files in the filesystem.
WHEN TO USE:
- Creating new files or updating existing files
- Saving generated code, configurations, or text data
HOW IT WORKS:
- Provide the file path and content to write
- Parent directories are created automatically
FEATURES:
- Creates new files or overwrites existing ones
- Auto-creates parent directories
- Checks for external modifications for safety
- Avoids unnecessary writes when content is unchanged
- Cannot append (rewrites the entire file)
BEST PRACTICES:
- Existing files must be read first (View tool) since Write replaces the entire file
- Use the LS tool to verify the correct location when creating new files
- Combine with Glob and Grep tools to find and modify multiple files
- Always include descriptive comments when making changes to existing code
Workflow Management
Tools for managing and inspecting workflows, presets, and scenarios.| Tool | Tags | Description |
|---|---|---|
create_workflow | workflow | Create a new workflow draft. |
delete_scenario | workflow | Delete a test scenario. |
edit_scenario | workflow | Make precise text replacements in a scenario’s YAML definition. |
edit_workflow | workflow | Make precise text replacements in the workflow YAML. |
get_cel_reference | workflow, readonly | Gets the CEL expression reference for workflow development. |
get_preset | workflow, readonly | Gets the full configuration of a preset. |
get_schema | workflow, readonly | Look up schema documentation for any workflow type by name. |
get_workflow | workflow, readonly | Gets the full YAML definition of a workflow draft. |
get_workflow_suggestions | workflow, readonly | Returns static design suggestions for building workflows. |
list_presets | workflow, readonly | Lists available presets for agent nodes. |
list_scenarios | workflow, readonly | List all test scenarios for the current workflow. |
list_workflows | workflow, readonly | Lists all available workflows (builtin, project, and user-created). |
run_scenario | workflow | Run an existing test scenario by name. |
view_scenario | workflow, readonly | View a specific test scenario’s full definition. |
write_scenario | workflow | Create or update a test scenario with YAML content. |
write_workflow | workflow | Replace an existing workflow draft with YAML content. |
create_workflow
Tags:workflow
Create a new workflow draft.
Returns the draft UUID which you can then use with get_workflow, edit_workflow, and write_workflow.
Parameters:
- name: (optional) Workflow name. A random name is generated if omitted.
- content: (optional) Complete workflow YAML. The default agent template is used if omitted.
delete_scenario
Tags:workflow
Delete a test scenario.
Permanently removes the scenario from the workflow.
edit_scenario
Tags:workflow
Make precise text replacements in a scenario’s YAML definition.
Use this for small changes like updating expected values or modifying events.
The old_string must match exactly (including whitespace and indentation).
edit_workflow
Tags:workflow
Make precise text replacements in the workflow YAML.
Use this for small changes like:
- Adding or modifying a node
- Updating an edge condition
- Changing input parameters
get_cel_reference
Tags:workflow, readonly
Gets the CEL expression reference for workflow development.
WHEN TO USE:
- When writing conditions, args, or dynamic values in workflows
- To understand available namespaces and their fields
- To see available custom functions
PROVIDES:
- All namespaces (inputs, workflow, nodes, iter, output, outputs)
- Field documentation for each namespace
- Custom functions (parseJson, coalesce, etc.)
- Common patterns and examples
get_preset
Tags:workflow, readonly
Gets the full configuration of a preset.
WHEN TO USE:
- To view preset configurations
- To understand what a preset provides
- To copy and adapt preset settings
Parameters:
- name: The preset name (from list_presets)
get_schema
Tags:workflow, readonly
Look up schema documentation for any workflow type by name.
WHEN TO USE:
- When you see a field type like “thread: ThreadConfig” and need details
- When you need to understand node output structure (e.g., CallLLMOutput)
- To explore top-level types (Workflow, Edge)
- To get full field documentation for any type
SUPPORTED TYPES:
- Node types: call_llm, loop, workflow, run, execute_tools, join, etc.
- Input types: string, number, boolean, enum, model, message, etc.
- Config types: ThreadConfig, SaveMessageConfig, ProjectConfig, ResponseTool, etc.
- Output types: CallLLMOutput, ExecuteToolsOutput, RunOutput, LoopOutput, etc.
- Top-level: Workflow, Edge, EdgeCase
EXAMPLES:
- get_schema(name="call_llm") // Node type
- get_schema(name="ThreadConfig") // Config type
- get_schema(name="CallLLMOutput") // Output structure
- get_schema(name="Workflow") // Top-level workflow structure
- get_schema(name="Edge") // Edge routing structure
get_workflow
Tags:workflow, readonly
Gets the full YAML definition of a workflow draft.
WHEN TO USE:
- To view the current state of the workflow you’re editing
- Before making edits to understand the structure
Parameters:
- id: (required) Workflow draft UUID
get_workflow_suggestions
Tags:workflow, readonly
Returns static design suggestions for building workflows.
WHEN TO USE:
- Before starting a new workflow design
- When encountering complexity or unexpected behavior
- To learn best practices for edges, joins, loops, and conditions
COVERS:
- Structure and organization
- Edge routing patterns
- Node vs edge conditions
- Join behavior with conditional sources
- Loop patterns and outputs
- Testing strategies
list_presets
Tags:workflow, readonly
Lists available presets for agent nodes.
WHEN TO USE:
- To discover available presets for agent configurations
- To find the right preset for a specific task
- Before using get_preset to view details
list_scenarios
Tags:workflow, readonly
List all test scenarios for the current workflow.
Returns a summary of each scenario including name, description, and last run status.
Use this to see what scenarios exist and their current state.
No parameters needed - the workflow is determined from the current chat context.
list_workflows
Tags:workflow, readonly
Lists all available workflows (builtin, project, and user-created).
WHEN TO USE:
- To discover available workflows
- To find workflow patterns for common use cases
- Before using get_workflow to view details
run_scenario
Tags:workflow
Run an existing test scenario by name.
Executes the scenario against the current workflow and returns the results.
Use this after making changes to verify scenarios still pass.
Use list_scenarios to see available scenario names.
view_scenario
Tags:workflow, readonly
View a specific test scenario’s full definition.
Returns the complete scenario YAML including events, expectations, and last run results.
Use this to examine a scenario’s configuration or debug test failures.
write_scenario
Tags:workflow
Create or update a test scenario with YAML content.
Creates or updates a scenario with the given YAML definition and runs it.
Scenario YAML structure:
name: scenario_name
description: What this scenario tests
events:
  - node: node_id  # Optional: target specific node
    output:        # Mock output for the node
      message:
        role: assistant
        text: "Hello!"
      response_text: "Hello!"
expect:
  outcome: completed  # or "error"
  reached: ["node1", "node2"]
  not_reached: ["node3"]
Node targeting:
- Top-level nodes: node: "call_llm"
- Inner loop nodes: node: "agent_loop.call_llm" (dot-separated)
- Nested loops: node: "outer_loop.inner_loop.call_llm"
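The dot-separated targeting above could be resolved as follows; the nested `nodes` mapping here is a hypothetical structure for illustration, not the actual workflow schema:

```python
def resolve_node(path, workflow):
    """Walk a dot-separated node path such as 'agent_loop.call_llm'.
    The nested 'nodes' mapping is a hypothetical illustrative structure."""
    node = workflow
    for part in path.split("."):
        node = node["nodes"][part]
    return node

# Hypothetical workflow with a loop node containing an inner call_llm node:
wf = {"nodes": {"agent_loop": {"nodes": {"call_llm": {"type": "call_llm"}}}}}
print(resolve_node("agent_loop.call_llm", wf))  # {'type': 'call_llm'}
```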
write_workflow
Tags:workflow
Replace an existing workflow draft with YAML content.
Workflow YAML structure:
- name: Workflow name
- entry: List of entry point node IDs
- nodes: Array of node definitions
- edges: Array of edge definitions (optional for single-node workflows)
Parameters:
- id: (required) Workflow draft UUID.
- name: (optional) Overrides the name in YAML. Used for display name.
- content: (required) Complete workflow YAML content.
- expected_version: (optional) Version number for conflict detection.
System & Execution
Tools for executing shell commands and managing system processes.
bash
Tags:execution, shell, default
Execute bash commands for building, testing, and system operations in a stateless shell.
Uses bash -c to execute commands on Unix/macOS/Linux.
WHEN NOT TO USE THIS TOOL
- File editing → Use Edit/Write tools
- File reading → Use View tool
- Searching files → Use Grep/Glob tools
- Directory listing → Use LS tool
Output Processing
- Default output limit is 16000 bytes (use max_output to customize)
- Use tail_lines to get only the last N lines of output
- Output metadata includes truncation info and original size
- The command argument is required.
- You can specify an optional timeout in milliseconds (up to 600000ms / 10 minutes). If not specified, commands will timeout after 60 seconds.
- Use ‘run_in_background: true’ to run long-running commands in the background. You can then use BashOutput to check output, BashKill to terminate, and BashList to see all running processes.
- VERY IMPORTANT: You MUST avoid using the shell in favor of other tools whenever possible, e.g. for commands like 'find' and 'grep'. Instead use Grep, Glob, or Agent tools to search. You MUST avoid read tools like 'cat', 'head', 'tail', and 'ls'; use the FileRead and LS tools to read files.
- VERY IMPORTANT: YOU MUST AVOID WRITING FILES USING SHELL. Please use the appropriate edit and create tools.
- When issuing multiple commands, use the ';' or '&&' operator to separate them. DO NOT use newlines (newlines are ok in quoted strings).
- IMPORTANT: Each command runs in a fresh, stateless shell. Environment variables and directory changes from previous commands do NOT persist.
- IMPORTANT: The current working directory is ALWAYS automatically set to the current worktree. There is NO need to cd to the worktree before running commands - just run them directly.
- To change to a subdirectory within the worktree, use ‘cd’ as part of a compound command (e.g., ‘cd subdir && npm test’).
- Environment variables set in prior shell sessions will NOT be included. Use the ‘env’ parameter to set environment variables for a specific command execution.
- If you need to maintain state across commands (e.g., activating a virtual environment), combine commands with && or ; operators.
- Background processes run in separate shell instances and also start fresh without inherited state.
- Return an empty response - the user will see the output directly
- Never update git config
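The stateless-shell behavior described above can be demonstrated with plain `subprocess` calls, each equivalent to a single `bash -c` invocation (this sketch assumes bash is available on the system):

```python
import subprocess

def run(cmd):
    """One stateless `bash -c` invocation, like a single bash tool call."""
    return subprocess.run(["bash", "-c", cmd], capture_output=True,
                          text=True).stdout.strip()

run("MY_VAR=hello")                      # sets a variable... then the shell exits
print(run("echo ${MY_VAR:-unset}"))      # prints "unset": nothing persisted

# State must instead be carried inside one compound command:
print(run("MY_VAR=hello && echo $MY_VAR"))  # prints "hello"
```

The same applies to `cd`: a directory change in one invocation has no effect on the next, which is why compound commands like `cd subdir && npm test` are the recommended pattern.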
bash_kill
Tags:execution, default
Terminates a background process in the current workspace.
WORKSPACE SCOPING:
- Can kill any process in the current workspace, regardless of which chat started it
- Multiple chats in the same workspace share process visibility
- This enables coordination: one chat can stop a server started by another
- Process IDs are provided when you start a background process with run_in_background: true
- You can only kill processes that are currently running
- After killing a process, its output is still available via BashOutput
- Use BashList to see all running background processes in the workspace
EXAMPLE:
- Start a background process: bash(command="npm run dev", run_in_background=true)
- Kill the process: bash_kill(process_id="<id-from-step-1>")
Other Tools
Miscellaneous tools and utilities.| Tool | Tags | Description |
|---|---|---|
metadata_writer | - | Writes and updates project metadata YAML file |
worktree | default | Manage git worktrees for parallel development workflows. |
metadata_writer
Writes and updates project metadata YAML file.
worktree
Tags:default
Manage git worktrees for parallel development workflows.
WHEN TO USE:
- Creating isolated development environments for features/bugs
- Setting up parallel workspaces for agents
- Managing multiple concurrent work streams
ACTIONS:
- create: Create a new git worktree. Required: name. Optional: branch, base_branch, copy_files, force, session_id
- list: List all worktrees. No parameters required
- get: Get details of a specific worktree. Required: name
- delete: Delete a worktree. Required: name
- Worktree information is automatically stored in CEL context as ‘worktree_data’
- Available fields: id, name, path, branch, base_branch, repo_id
- Use in subsequent steps: worktree_data.path, worktree_data.branch, etc.
- copy_files: Searches recursively for matching files (e.g., ".env" finds all .env files in any directory)
- Directory structure is preserved (frontend/.env -> worktree/frontend/.env)
- Worktree paths are stored in ~/.reliant/worktrees/<repo_id>/<name>
- Each worktree gets its own branch and working directory
- Use force=true to recreate existing worktrees
- Worktree data is stored globally for cleanup tracking