Monolith mode runs every Reliant component — Temporal workflow engine, SQLite databases, HTTP API, gRPC server, and tools daemon — inside a single OS process. This is the deployment mode the Electron desktop app uses: one binary, no external dependencies, everything local.
reliant monolith [flags]
| Flag | Default | Description |
| --- | --- | --- |
| `--data-dir` | `./data` (or `RELIANT_DATA_DIR`) | Base directory for databases, logs, and certificates |
When the process starts, it initializes each embedded component in sequence, prints a status banner with the active ports, and blocks until it receives SIGTERM, SIGINT, or detects its parent process has exited (the “suicide pact” pattern used when launched by Electron).

Embedded components

The monolith bundles six subsystems that would be separate services in a distributed deployment.

Temporal workflow engine

An embedded Temporal server backed by SQLite at data/temporal.db. It registers two namespaces — default and reliant — and runs Temporal workers in-process so workflow activities execute without network round-trips. The frontend port is auto-assigned by default (finds a free port at startup) but can be pinned with the TEMPORAL_FRONTEND_PORT environment variable. Auto-assignment is the normal mode; pinning is only needed when external tooling must connect to a known port.
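Auto-assignment typically works by binding to port 0 and letting the OS hand back a free port. A minimal sketch of that pattern in Python (the `pick_frontend_port` helper is illustrative, not the backend's actual code):

```python
import socket

def pick_frontend_port(pinned=None):
    """Return the Temporal frontend port.

    If a port is pinned (e.g. via TEMPORAL_FRONTEND_PORT), use it;
    otherwise bind to port 0 so the OS assigns any free port.
    """
    if pinned is not None:
        return int(pinned)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))  # port 0 = "pick any free port"
        return s.getsockname()[1]
```

Note the small race window inherent to this trick: the socket is closed before the real server rebinds the port, so another process could grab it in between. That is acceptable at startup on a local machine, which is why pinning is only needed for external tooling.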

Application database

A SQLite database at data/reliant.db stores all application state: chats, messages, content blocks, workflows, projects, and worktrees. The schema is managed by embedded migrations that run automatically on startup.
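One common way to make embedded migrations idempotent on startup is to record the last-applied version inside the database file itself. The sketch below uses SQLite's `PRAGMA user_version` for that; the table names are hypothetical stand-ins, and the real migration mechanism may differ:

```python
import sqlite3

# Hypothetical migration list; the real schema ships inside the binary.
MIGRATIONS = [
    "CREATE TABLE chats (id INTEGER PRIMARY KEY, title TEXT)",
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, chat_id INTEGER, body TEXT)",
]

def migrate(conn):
    """Apply any migrations newer than the file's recorded version.

    PRAGMA user_version persists in the database file, so running this
    on every startup is a no-op once the schema is current.
    """
    (version,) = conn.execute("PRAGMA user_version").fetchone()
    for i, stmt in enumerate(MIGRATIONS[version:], start=version + 1):
        conn.execute(stmt)
        conn.execute(f"PRAGMA user_version = {i}")  # pragmas can't be parametrized
    conn.commit()
    return len(MIGRATIONS) - version  # how many were applied
```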

HTTP API server

Serves REST endpoints for the web frontend. Defaults to port 8080 (API_PORT). The API server handles CORS, JWT authentication, and optionally TLS — the same TLS configuration described below applies to both the HTTP and gRPC servers.
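The origin check implied by the `CORS_ALLOWED_ORIGINS` variable (wildcard `*` or a comma-separated list) can be sketched as a pair of small helpers — this is inferred behavior, not the server's actual code:

```python
def parse_allowed_origins(raw):
    """Parse CORS_ALLOWED_ORIGINS: '*' means any origin, else a comma list."""
    raw = (raw or "*").strip()
    if raw == "*":
        return None  # None = wildcard, allow everything
    return {o.strip() for o in raw.split(",") if o.strip()}

def origin_allowed(origin, allowed):
    return allowed is None or origin in allowed
```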

gRPC / ConnectRPC server

Provides real-time streaming and RPC for the desktop client. Defaults to port 9090 (GRPC_PORT). Uses ConnectRPC for browser compatibility (gRPC-Web and Connect protocols over HTTP/2). This is where chat streaming, workflow execution updates, and user/chat update subscriptions are served.

Tools daemon

Runs in-process on port 9190 (TOOLS_DAEMON_PORT). The daemon hosts the tool execution runtime — file operations, shell commands, LSP integration, MCP server connections — and exposes them over gRPC so workflow activities can invoke tools. The daemon uses a LazyDaemonStarter lifecycle: if a user is already signed in when the monolith boots, the daemon starts eagerly. If no user session exists yet (first launch, signed-out state), it starts lazily on the first authenticated request. A LocalDaemonRouter handles in-process tool execution routing, avoiding the network hop that a distributed deployment would require.
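The eager-vs-lazy split can be captured with a guarded start-once wrapper. The class name comes from the doc, but the internals below are an illustrative guess, not the actual implementation:

```python
import threading

class LazyDaemonStarter:
    """Start the daemon eagerly if a session already exists at boot,
    otherwise defer the (expensive) start to the first authenticated
    request. Safe to call ensure_started() from many threads."""

    def __init__(self, start_fn, has_session):
        self._start = start_fn
        self._lock = threading.Lock()
        self._started = False
        if has_session:  # signed-in at boot: start right away
            self.ensure_started()

    def ensure_started(self):
        with self._lock:
            if not self._started:
                self._start()
                self._started = True
        return self._started
```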

Memory-based streaming

Real-time updates for chat events and user state changes flow through MemoryUpdateHub instances — in-memory fan-out channels that push events to subscribed gRPC streams. No external message broker (Redis, NATS, etc.) is involved. This keeps the monolith self-contained but means streaming state is ephemeral and scoped to the lifetime of the process.
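The fan-out pattern is simple: each subscriber owns a queue, and publishing copies the event to every live subscriber. A minimal sketch (the class name matches the doc; the internals are assumed):

```python
import queue
import threading

class MemoryUpdateHub:
    """In-memory fan-out channel. Every subscriber gets its own queue;
    publish() delivers to all of them. Nothing is persisted, so all
    streaming state dies with the process."""

    def __init__(self):
        self._lock = threading.Lock()
        self._subs = []

    def subscribe(self):
        q = queue.Queue()
        with self._lock:
            self._subs.append(q)
        return q

    def publish(self, event):
        with self._lock:
            subs = list(self._subs)  # snapshot so delivery runs unlocked
        for q in subs:
            q.put(event)
```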

TLS configuration

TLS is evaluated in priority order. The first matching condition wins:
| Priority | Condition | Behavior |
| --- | --- | --- |
| 1 | `DISABLE_TLS=true` | Plain HTTP, no TLS. Useful behind a reverse proxy or in trusted networks. |
| 2 | `TLS_CERT_FILE` and `TLS_KEY_FILE` set | Uses the provided certificate and key files. Typical for local development with mkcert certificates. |
| 3 | Neither of the above | Auto-generates a self-signed certificate in `data/certs/` and uses it for all servers. The certificate is persisted so subsequent restarts reuse it. |
Both the HTTP API and gRPC servers share the same TLS configuration. When TLS is enabled, the gRPC server runs HTTP/2 over TLS; when disabled, it falls back to h2c (HTTP/2 cleartext).
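The priority order above reduces to a small decision function. Here is a sketch of that resolution, operating on an environment dict (the return shape is illustrative):

```python
def resolve_tls(env):
    """Return (mode, extra) following the priority table:
    1. DISABLE_TLS=true      -> plain HTTP
    2. cert + key both set   -> use the provided files
    3. otherwise             -> self-signed cert persisted under data/certs/
    """
    if env.get("DISABLE_TLS") == "true":
        return ("disabled", None)
    cert, key = env.get("TLS_CERT_FILE"), env.get("TLS_KEY_FILE")
    if cert and key:
        return ("files", (cert, key))
    return ("self-signed", "data/certs/")
```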

PID locking

The monolith acquires a file-based PID lock (data/.reliant-backend.lock) before starting any component. This prevents multiple backend processes from running against the same data directory simultaneously, which would corrupt the SQLite and Temporal databases. If the lock is already held, the process retries up to 3 times with a 2-second delay between attempts. If the holder is a stale process (no longer running), the lock is reclaimed automatically. If a live process holds the lock, startup fails with an error message identifying the conflicting PID.

Logging

Structured logs are written to data/logs/reliant.log using automatic rotation:
| Setting | Value |
| --- | --- |
| Max file size | 50 MB |
| Rotated backups kept | 3 |
| Max age | 30 days |
| Compression | Enabled (gzip) |
The log level is controlled by the standard LOG_LEVEL environment variable. In production mode, the embedded Temporal server’s log level is reduced to warn to cut noise.

Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| `RELIANT_DATA_DIR` | `./data` | Base data directory for databases, logs, and certs |
| `API_PORT` | `8080` | HTTP API server port |
| `GRPC_PORT` | `9090` | gRPC/ConnectRPC server port |
| `TOOLS_DAEMON_PORT` | `9190` | In-process tools daemon port |
| `TEMPORAL_FRONTEND_PORT` | Auto-assigned | Temporal frontend port. Set to pin a specific port. |
| `BIND_ADDRESS` | `127.0.0.1` | Network bind address for all servers |
| `CORS_ALLOWED_ORIGINS` | `*` | Comma-separated list of allowed CORS origins, or `*` for wildcard |
| `DISABLE_TLS` | (unset) | Set to `true` to disable TLS across all servers |
| `TLS_CERT_FILE` | (unset) | Path to a PEM-encoded TLS certificate |
| `TLS_KEY_FILE` | (unset) | Path to the corresponding TLS private key |
| `PPROF_PORT` | `6060` | Port for the pprof debug/profiling server (localhost only) |

Data directory layout

data/
├── reliant.db                          # Application database (SQLite)
├── temporal.db                         # Temporal workflow engine state (SQLite)
├── .reliant-backend.lock               # PID lock file
├── thinking_capability_matrix.json     # Model thinking capability metadata
├── certs/                              # Auto-generated TLS certificates
│   ├── cert.pem
│   └── key.pem
└── logs/
    └── reliant.log                     # Structured application logs

Shutdown behavior

The monolith shuts down gracefully when it receives SIGINT, SIGTERM, or detects its parent process has exited (stdin EOF). The shutdown sequence runs with a 20-second timeout and proceeds in order:
  1. Kill all background shell processes to prevent orphans
  2. Stop the process monitor
  3. Stop the gRPC server (drains active streams)
  4. Shut down the tools daemon
  5. Stop the HTTP API server
  6. Stop the integration server (Temporal engine and workers)
  7. Flush analytics and error reporting
  8. Release the PID lock
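The sequence above can be sketched as an ordered runner that shares one overall deadline across all steps, so a hung step cannot stall shutdown past the 20-second budget. The step names and the skip-on-timeout behavior are illustrative assumptions:

```python
import threading
import time

def graceful_shutdown(steps, timeout=20.0):
    """Run (name, fn) shutdown steps in order under one shared deadline.

    Each step runs on a daemon thread and is joined against whatever time
    remains; a step that misses the deadline aborts the rest so the
    process still terminates. Returns the names of completed steps.
    """
    deadline = time.monotonic() + timeout
    completed = []
    for name, fn in steps:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        t = threading.Thread(target=fn, daemon=True)
        t.start()
        t.join(remaining)
        if t.is_alive():
            break  # step hung past the deadline: give up on the rest
        completed.append(name)
    return completed
```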