Reliant supports models from multiple providers. Use the model ID in workflow configurations and presets.
Note: Reliant has been most extensively tested with Anthropic and OpenAI models. You may encounter issues when using other providers.

Model Tags

Models are tagged to help with selection. You can also bring your own model with your own provider.
| Tag | Description |
| --- | --- |
| flagship | Most capable models, best quality |
| moderate | Balance of capability and cost |
| fast | Optimized for speed |
| cheap | Low cost per token |
| reasoning | Extended thinking capabilities |

Capabilities Legend

| Symbol | Meaning |
| --- | --- |
| πŸ› οΈ | Supports tool use |
| πŸ“Ž | Supports attachments (images, PDFs) |
| πŸ’Ύ | Supports prompt caching |
| ⚑ | Supports streaming |
| πŸ’‘ | Extended reasoning/thinking |

Anthropic (Claude)

| Model ID | Name | Tags | Context | Capabilities | Providers |
| --- | --- | --- | --- | --- | --- |
| claude-4.5-haiku | Claude 4.5 Haiku | fast, cheap | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ | anthropic, openrouter |
| claude-4.5-opus | Claude 4.5 Opus | flagship, reasoning | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | anthropic, openrouter |
| claude-4.5-sonnet | Claude 4.5 Sonnet | moderate, reasoning | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | anthropic, openrouter |
| claude-4.6-opus | Claude 4.6 Opus | flagship, reasoning | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | anthropic, openrouter |
| claude-4.6-sonnet | Claude 4.6 Sonnet | moderate, reasoning | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | anthropic, openrouter |
Note: All Claude models support attachments (images, PDFs) and prompt caching.

OpenAI (GPT)

| Model ID | Name | Tags | Context | Capabilities | Providers |
| --- | --- | --- | --- | --- | --- |
| gpt-5-mini | GPT-5 Mini | fast, cheap | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | openai, openrouter |
| gpt-5.2 | GPT-5.2 | moderate, reasoning | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | openai, openrouter |
| gpt-5.2-codex | GPT-5.2 Codex | flagship, reasoning | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | codex, openai, openrouter |
| gpt-5.2-pro | GPT-5.2 Pro | moderate, reasoning | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | openai, openrouter |
| gpt-5.3-codex | GPT-5.3 Codex | flagship, reasoning | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | codex, openai, openrouter |
| gpt-5.3-codex-spark | GPT-5.3 Codex Spark | fast, reasoning | 128K | πŸ› οΈ πŸ’Ύ ⚑ πŸ’‘ | codex |
| gpt-5.4 | GPT-5.4 | flagship, reasoning | 1M | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | codex, openai, openrouter |
| gpt-5.4-pro | GPT-5.4 Pro | flagship, reasoning | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ | openai |
Note: OpenAI’s GPT models vary in context window size and capabilities; check the table above before assuming attachment or caching support.

Google (Gemini)

| Model ID | Name | Tags | Context | Capabilities | Providers |
| --- | --- | --- | --- | --- | --- |
| gemini-2.5-flash | Gemini 2.5 Flash | fast, cheap | 1M | πŸ› οΈ πŸ“Ž ⚑ | gemini, openrouter |
| gemini-2.5-pro | Gemini 2.5 Pro | moderate, reasoning | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ | gemini, openrouter |
| gemini-3-flash-preview | Gemini 3 Flash Preview | fast | 1M | πŸ› οΈ πŸ“Ž ⚑ | gemini, openrouter |
| gemini-3-pro-preview | Gemini 3 Pro Preview | flagship, reasoning | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ | gemini, openrouter |
| gemini-3.1-pro-preview | Gemini 3.1 Pro Preview | flagship, reasoning | 1M | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | gemini, openrouter |
| gemini-3.1-pro-preview-customtools | Gemini 3.1 Pro Preview (Custom Tools) | reasoning | 1M | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ | gemini |
Note: All Gemini models offer large (1M) context windows.

Models by Tag

Find models by capability. Use these tables to answer β€œwhat are my options for X?”

Flagship

Most capable models for complex tasks
| Model | Provider | Context | Capabilities |
| --- | --- | --- | --- |
| claude-4.6-opus | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| claude-4.5-opus | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.4 | OpenAI | 1M | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.4-pro | OpenAI | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ |
| gpt-5.3-codex | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.2-codex | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gemini-3.1-pro-preview | Google | 1M | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gemini-3-pro-preview | Google | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ |

Moderate

Balance of capability and cost
| Model | Provider | Context | Capabilities |
| --- | --- | --- | --- |
| claude-4.6-sonnet | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| claude-4.5-sonnet | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.2-pro | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.2 | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gemini-2.5-pro | Google | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ |

Fast

Optimized for quick responses
| Model | Provider | Context | Capabilities |
| --- | --- | --- | --- |
| gpt-5.3-codex-spark | OpenAI | 128K | πŸ› οΈ πŸ’Ύ ⚑ πŸ’‘ |
| claude-4.5-haiku | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ |
| gpt-5-mini | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gemini-3-flash-preview | Google | 1M | πŸ› οΈ πŸ“Ž ⚑ |
| gemini-2.5-flash | Google | 1M | πŸ› οΈ πŸ“Ž ⚑ |

Cheap

Low cost per token
| Model | Provider | Context | Capabilities |
| --- | --- | --- | --- |
| claude-4.5-haiku | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ |
| gpt-5-mini | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gemini-2.5-flash | Google | 1M | πŸ› οΈ πŸ“Ž ⚑ |

Reasoning

Extended thinking capabilities
| Model | Provider | Context | Capabilities |
| --- | --- | --- | --- |
| claude-4.6-opus | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| claude-4.5-opus | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.4 | OpenAI | 1M | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.4-pro | OpenAI | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ |
| gpt-5.3-codex | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.3-codex-spark | OpenAI | 128K | πŸ› οΈ πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.2-codex | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gemini-3.1-pro-preview | Google | 1M | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gemini-3-pro-preview | Google | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ |
| gemini-3.1-pro-preview-customtools | Google | 1M | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| claude-4.6-sonnet | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| claude-4.5-sonnet | Anthropic | 200K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.2-pro | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gpt-5.2 | OpenAI | 400K | πŸ› οΈ πŸ“Ž πŸ’Ύ ⚑ πŸ’‘ |
| gemini-2.5-pro | Google | 1M | πŸ› οΈ πŸ“Ž ⚑ πŸ’‘ |
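The lookups these tables support ("what are my options for X?") amount to a tag filter. The sketch below is purely illustrative, not Reliant's actual resolver; the catalog reproduces only a few rows from the tables above.

```python
# Tiny excerpt of the model catalog above: (model_id, provider, tags).
CATALOG = [
    ("claude-4.6-opus", "anthropic", {"flagship", "reasoning"}),
    ("claude-4.5-haiku", "anthropic", {"fast", "cheap"}),
    ("gpt-5-mini", "openai", {"fast", "cheap"}),
    ("gemini-2.5-flash", "gemini", {"fast", "cheap"}),
]

def models_with_tags(required):
    """Return IDs of models carrying every requested tag."""
    required = set(required)
    return [mid for mid, _provider, tags in CATALOG if required <= tags]

print(models_with_tags(["fast", "cheap"]))
# → ['claude-4.5-haiku', 'gpt-5-mini', 'gemini-2.5-flash']
```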

Using Models

Select models by capability tags for provider-agnostic workflows:
# In presets or workflow nodes
model:
  tags: [flagship]      # Best quality model available
  
model:
  tags: [fast, cheap]   # Prefer fast+cheap, falls back to fast-only or cheap-only

model:
  tags: [local, fast]   # Prefer local, use fast cloud model as fallback
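The fallback noted in the comments above (prefer all tags, then settle for fewer) can be approximated as below. This is a hypothetical sketch of the semantics for illustration only; Reliant's real resolution order may differ.

```python
from itertools import combinations

def resolve(preferred_tags, catalog):
    """Try the full tag set first, then progressively smaller subsets.

    `catalog` is a list of (model_id, tag_set) pairs. Returns the first
    model matching the largest satisfiable subset of tags, or None.
    """
    for size in range(len(preferred_tags), 0, -1):
        for subset in combinations(preferred_tags, size):
            for model_id, tags in catalog:
                if set(subset) <= tags:
                    return model_id
    return None

catalog = [("haiku", {"fast"}), ("mini", {"cheap"})]
print(resolve(["fast", "cheap"], catalog))  # no fast+cheap model → falls back to "haiku"
```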

Explicit Model ID

Select a specific model by ID:
model:
  id: claude-4.5-sonnet

In Workflow Inputs

Workflow inputs take model ID strings (rendered as a dropdown in the UI):
inputs:
  model:
    type: model
    default: claude-4.5-sonnet  # Model ID string for input default

In Presets

Presets can use tag-based selection:
name: my-preset
tag: agent
params:
  model:
    tags: [flagship]  # Works with any provider

Provider Configuration

Models require the appropriate provider to be configured in Settings β†’ AI:
| Provider | Models |
| --- | --- |
| Anthropic | claude-* |
| OpenAI | gpt-* |
| Google | gemini-* |
See API Keys & Providers for setup instructions.
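The prefix patterns in the table map model IDs to the provider you need to configure. A minimal sketch (the pattern table comes from this page; the helper itself is hypothetical, not part of Reliant):

```python
from fnmatch import fnmatch

# Prefix patterns from the table above.
PROVIDER_PATTERNS = [
    ("claude-*", "Anthropic"),
    ("gpt-*", "OpenAI"),
    ("gemini-*", "Google"),
]

def provider_for(model_id):
    """Map a model ID to the provider that must be configured."""
    for pattern, provider in PROVIDER_PATTERNS:
        if fnmatch(model_id, pattern):
            return provider
    return None

print(provider_for("claude-4.5-sonnet"))  # → Anthropic
```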