# Models Reference
Reliant supports models from multiple providers. Use the model ID in workflow configurations and presets.
## Anthropic (Claude)
| Model ID | Name | Context | Notes |
|---|---|---|---|
| claude-4.5-opus | Claude 4.5 Opus | 200K | Latest flagship, best quality |
| claude-4.5-sonnet | Claude 4.5 Sonnet | 200K | Fast and capable |
| claude-4.5-haiku | Claude 4.5 Haiku | 200K | Fastest, most economical |
| claude-4-opus | Claude 4 Opus | 200K | Previous flagship |
| claude-4-sonnet | Claude 4 Sonnet | 200K | Balanced performance |
| claude-3.7-sonnet | Claude 3.7 Sonnet | 200K | Extended thinking support |
Features: All Claude models support attachments (images, PDFs) and prompt caching.
## OpenAI (GPT)
### GPT-5.2 Series (Latest)
| Model ID | Name | Context | Notes |
|---|---|---|---|
| gpt-5.2 | GPT-5.2 | 400K | Latest flagship |
| gpt-5.2-pro | GPT-5.2 Pro | 400K | Enhanced reasoning |
| gpt-5.2-mini | GPT-5.2 Mini | 400K | Fast and economical |
### GPT-5.1 Series
| Model ID | Name | Context | Notes |
|---|---|---|---|
| gpt-5.1 | GPT-5.1 | 256K | |
| gpt-5.1-mini | GPT-5.1 Mini | 256K | |
| gpt-5.1-nano | GPT-5.1 Nano | 128K | Most economical |
### GPT-5 Series
| Model ID | Name | Context | Notes |
|---|---|---|---|
| gpt-5 | GPT-5 | 256K | |
| gpt-5-mini | GPT-5 Mini | 256K | |
| gpt-5-nano | GPT-5 Nano | 128K | |
### GPT-4.1 Series
| Model ID | Name | Context | Notes |
|---|---|---|---|
| gpt-4.1 | GPT-4.1 | 1M | Million-token context |
| gpt-4.1-mini | GPT-4.1 Mini | 1M | |
| gpt-4.1-nano | GPT-4.1 Nano | 1M | |
### Legacy GPT-4
| Model ID | Name | Context | Notes |
|---|---|---|---|
| gpt-4o | GPT-4o | 128K | |
| gpt-4o-mini | GPT-4o Mini | 128K | |
| gpt-4-turbo | GPT-4 Turbo | 128K | |
| gpt-3.5-turbo | GPT-3.5 Turbo | 16K | |
## OpenAI O-Series (Reasoning)
O-series models are optimized for complex reasoning tasks.
| Model ID | Name | Context | Notes |
|---|---|---|---|
| o4-mini | o4-mini | 200K | Latest reasoning model |
| o3 | o3 | 200K | |
| o3-pro | o3-pro | 200K | Enhanced reasoning |
| o3-mini | o3-mini | 200K | Fast reasoning |
| o1 | o1 | 200K | |
| o1-pro | o1-pro | 128K | |
| o1-mini | o1-mini | 128K | |
| o1-preview | o1-preview | 128K | |
## Google (Gemini)
### Gemini 3 Series (Preview)
| Model ID | Name | Context | Notes |
|---|---|---|---|
| gemini-3-pro-preview | Gemini 3 Pro | 1M | Latest flagship |
| gemini-3-flash-preview | Gemini 3 Flash | 1M | Fast |
| gemini-3-pro-image-preview | Gemini 3 Pro Image | 64K | Image generation |
### Gemini 2.5 Series (Stable)
| Model ID | Name | Context | Notes |
|---|---|---|---|
| gemini-2.5-pro | Gemini 2.5 Pro | 1M | |
| gemini-2.5-flash | Gemini 2.5 Flash | 1M | |
| gemini-2.5-flash-lite | Gemini 2.5 Flash Lite | 1M | Most economical |
## Codex
| Model ID | Name | Context | Notes |
|---|---|---|---|
| codex-mini-latest | Codex Mini | 192K | Code-optimized |
## Using Models
### In Workflows
```yaml
inputs:
  model:
    type: model
    default: claude-4.5-sonnet
```
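The declared input can then be referenced wherever a model parameter is accepted, using the same templating syntax shown under Dynamic Selection below. This is only a sketch: the surrounding step and `params` keys are assumptions, and only the `{{inputs.model}}` expression comes from this page.

```yaml
# Hypothetical enclosing keys shown for context; your workflow schema may differ.
params:
  model: "{{inputs.model}}"
```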
### In Presets

```yaml
name: my-preset
tag: agent
params:
  model: gpt-5.2
```
### Dynamic Selection

Use CEL to select models dynamically:
model: "{{inputs.model != '' ? inputs.model : 'claude-4.5-sonnet'}}"Provider Configuration
## Provider Configuration

Models require the appropriate provider to be configured in Settings → AI:
| Provider | Models |
|---|---|
| Anthropic | claude-* |
| OpenAI | gpt-*, o1-*, o3-*, o4-*, codex-* |
| Google | gemini-* |
| OpenRouter | All models (via routing) |
See API Keys & Providers for setup instructions.