Configuration
Hierarchical configuration with TOML files and environment variable overrides.
Overview
Appam uses a layered configuration system where each layer can override the previous:
- Defaults -- Sensible built-in values
- TOML file -- Global or per-agent configuration file
- Environment variables -- Highest priority, override everything
This hierarchy lets you define base settings in a TOML file, then override specific values per environment without changing code.
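The precedence rule can be sketched in plain Rust. This is a toy illustration of the layering concept, not Appam's internals: each layer is a key-value map, and a lookup walks from the highest-priority layer down.

```rust
use std::collections::HashMap;

// Resolve a key across ordered layers: later layers have higher priority.
fn resolve(layers: &[HashMap<&str, &str>], key: &str) -> Option<String> {
    layers
        .iter()
        .rev() // walk from highest priority (last) to lowest
        .find_map(|layer| layer.get(key).map(|v| v.to_string()))
}

fn main() {
    let defaults = HashMap::from([("log_level", "info"), ("port", "3000")]);
    let toml_file = HashMap::from([("port", "8080")]);
    let env_vars = HashMap::from([("log_level", "debug")]);

    let layers = [defaults, toml_file, env_vars];
    assert_eq!(resolve(&layers, "log_level").as_deref(), Some("debug")); // env wins
    assert_eq!(resolve(&layers, "port").as_deref(), Some("8080"));       // TOML wins over default
    println!("ok");
}
```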
AppConfig
The top-level configuration struct contains nested configs for each subsystem:
```rust
pub struct AppConfig {
    pub provider: LlmProvider,           // Which LLM provider to use
    pub openrouter: OpenRouterConfig,    // OpenRouter-specific settings
    pub anthropic: AnthropicConfig,      // Anthropic-specific settings
    pub openai: OpenAIConfig,            // OpenAI-specific settings
    pub openai_codex: OpenAICodexConfig, // OpenAI Codex-specific settings
    pub vertex: VertexConfig,            // Google Vertex-specific settings
    pub logging: LoggingConfig,          // Logging configuration
    pub history: HistoryConfig,          // Session history configuration
    pub web: Option<WebConfig>,          // Web server configuration (optional)
}
```

Loading Configuration
From Environment Only
For programmatic agent creation where you do not want automatic file loading:
```rust
use appam::config::load_config_from_env;

let config = load_config_from_env()?;
```

This starts from defaults and applies environment variable overrides.
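As a toy illustration of that behavior, an override can be modeled as "take the environment value when present, otherwise keep the built-in default." The helper below is hypothetical, not part of the Appam API:

```rust
use std::env;

// Hypothetical helper: given a built-in default and the result of an
// environment lookup, let the environment win when the variable is set.
fn apply_env_override(default: &str, env_value: Result<String, env::VarError>) -> String {
    env_value.unwrap_or_else(|_| default.to_string())
}

fn main() {
    // A missing variable leaves the default in place.
    assert_eq!(apply_env_override("info", Err(env::VarError::NotPresent)), "info");
    // A set variable overrides it.
    assert_eq!(apply_env_override("info", Ok("debug".into())), "debug");
    // In real code the lookup would be: env::var("APPAM_LOG_LEVEL")
    println!("ok");
}
```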
From TOML File
Load from a specific file, then apply environment overrides:
```rust
use appam::config::load_global_config;
use std::path::Path;

let config = load_global_config(Path::new("appam.toml"))?;
```

TOML File Format
A complete appam.toml example:
```toml
provider = "anthropic"

[anthropic]
model = "claude-sonnet-4-5"
max_tokens = 8192

[openai]
model = "gpt-4o"
max_output_tokens = 4096

[openai_codex]
model = "gpt-5.4"

[openrouter]
model = "openai/gpt-5"
max_output_tokens = 9000

[vertex]
model = "gemini-2.5-flash"
location = "us-central1"

[logging]
level = "info"
logs_dir = "logs"
enable_logs = false
enable_traces = false
log_format = "both"        # plain | json | both
trace_format = "detailed"  # compact | detailed
human_console = true

[history]
enabled = true
db_path = "data/sessions.db"
auto_save = true
max_sessions = 1000

[web]
host = "0.0.0.0"
port = 3000
cors = true

[web.rate_limit]
requests_per_minute = 60
burst = 10
```

Only include the sections you need. Unspecified values use defaults.
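That defaults behavior can be sketched with a simplified stand-in for the [web] section. This is illustrative only; Appam's real deserialization may differ. The idea: start from Default and overwrite only the fields the file actually provided.

```rust
#[derive(Debug, PartialEq)]
struct WebConfig {
    host: String,
    port: u16,
    cors: bool,
}

impl Default for WebConfig {
    fn default() -> Self {
        // Matches the documented defaults for the [web] section.
        WebConfig { host: "0.0.0.0".into(), port: 3000, cors: true }
    }
}

// Merge optional file-provided values over the defaults.
fn merge(file_host: Option<String>, file_port: Option<u16>) -> WebConfig {
    let mut cfg = WebConfig::default();
    if let Some(h) = file_host { cfg.host = h; }
    if let Some(p) = file_port { cfg.port = p; }
    cfg
}

fn main() {
    // A file that only sets `port` keeps the default host and cors.
    let cfg = merge(None, Some(8080));
    assert_eq!(cfg.port, 8080);
    assert_eq!(cfg.host, "0.0.0.0");
    assert!(cfg.cors);
    println!("ok");
}
```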
AppConfigBuilder
Build configuration programmatically with the fluent builder:
```rust
use appam::prelude::*;
use appam::config::AppConfigBuilder;

let config = AppConfigBuilder::new()
    .openrouter_api_key("sk-or-v1-...")
    .model("openai/gpt-5")
    .log_level("debug")
    .logs_dir("./my-logs")
    .enable_history("data/sessions.db")
    .history_max_sessions(500)
    .web_host("127.0.0.1")
    .web_port(8080)
    .web_cors(true)
    .rate_limit_rpm(100)
    .rate_limit_burst(20)
    .build();
```

Builder Methods
| Method | Description |
|---|---|
| openrouter_api_key(str) | Set the OpenRouter API key |
| openrouter_base_url(str) | Set the OpenRouter base URL |
| model(str) | Set the default model |
| log_level(str) | Set the logging level (trace, debug, info, warn, error) |
| logs_dir(path) | Set the logs directory |
| human_console(bool) | Enable/disable human-readable console output |
| log_format(LogFormat) | Set the log file format (Plain, Json, Both) |
| enable_logs(bool) | Enable/disable framework log files |
| enable_traces(bool) | Enable/disable agent session trace files |
| trace_format(TraceFormat) | Set the trace detail level (Compact, Detailed) |
| enable_history(path) | Enable session history with a database path |
| history_auto_save(bool) | Enable/disable automatic session saving |
| history_max_sessions(usize) | Set the maximum number of sessions to retain |
| web_host(str) | Set the web server bind address |
| web_port(u16) | Set the web server port |
| web_cors(bool) | Enable/disable CORS |
| rate_limit_rpm(u64) | Requests per minute per IP |
| rate_limit_burst(u32) | Burst request allowance |
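The rate_limit_rpm and rate_limit_burst pair suggests token-bucket-style limiting. The sketch below is an assumption about the semantics, not Appam's actual implementation: a bucket holds up to burst tokens and refills at requests_per_minute / 60 tokens per second.

```rust
// Simplified token bucket: capacity = burst, steady refill from the rpm rate.
struct TokenBucket {
    tokens: f64,
    burst: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(requests_per_minute: u64, burst: u32) -> Self {
        TokenBucket {
            tokens: burst as f64, // start full
            burst: burst as f64,
            refill_per_sec: requests_per_minute as f64 / 60.0,
        }
    }

    // Advance time by `elapsed_secs`, then try to consume one token.
    fn allow(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(60, 2); // 1 token/sec, burst of 2
    assert!(bucket.allow(0.0));  // first burst request
    assert!(bucket.allow(0.0));  // second burst request
    assert!(!bucket.allow(0.0)); // bucket empty, rejected
    assert!(bucket.allow(1.0));  // one second later, refilled
    println!("ok");
}
```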
AgentConfigBuilder
Build per-agent TOML configurations programmatically:
```rust
use appam::config::AgentConfigBuilder;

let config = AgentConfigBuilder::new("my-agent")
    .model("openai/gpt-5")
    .system_prompt("prompts/assistant.txt")
    .description("A helpful coding assistant")
    .version("1.0.0")
    .add_python_tool("echo", "tools/echo.json", "tools/echo.py")
    .add_rust_tool("bash", "tools/bash.json", "appam::tools::builtin::bash")
    .build()?;

// Or save directly to a TOML file
AgentConfigBuilder::new("my-agent")
    .model("anthropic/claude-sonnet-4-5")
    .system_prompt("prompts/assistant.txt")
    .save_to_file("agents/my-agent.toml")?;
```

Environment Variable Overrides
Environment variables take highest priority and override both defaults and TOML settings. All Appam-specific variables are prefixed with APPAM_.
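A loader following this convention might first gather every APPAM_-prefixed variable from an environment snapshot before applying the overrides. The helper below is illustrative, not Appam's actual code:

```rust
use std::collections::BTreeMap;

// Collect APPAM_-prefixed variables from an environment snapshot.
fn appam_overrides(vars: impl Iterator<Item = (String, String)>) -> BTreeMap<String, String> {
    vars.filter(|(key, _)| key.starts_with("APPAM_")).collect()
}

fn main() {
    let snapshot = vec![
        ("APPAM_LOG_LEVEL".to_string(), "debug".to_string()),
        ("PATH".to_string(), "/usr/bin".to_string()),
        ("APPAM_PROVIDER".to_string(), "anthropic".to_string()),
    ];
    let overrides = appam_overrides(snapshot.into_iter());
    assert_eq!(overrides.len(), 2); // PATH is ignored
    assert_eq!(overrides["APPAM_PROVIDER"], "anthropic");
    // In real code the snapshot would come from std::env::vars()
    println!("ok");
}
```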
Provider Selection
| Variable | Description | Example |
|---|---|---|
| APPAM_PROVIDER | Override the LLM provider | anthropic, openrouter, openrouter-completions, openrouter-responses, openai, openai-codex, vertex, azure-openai, azure-anthropic, bedrock |
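Mapping the provider string onto an enum might look like the following sketch. The variant names are illustrative (not the actual LlmProvider definition), and only a few of the accepted values are shown:

```rust
#[derive(Debug, PartialEq)]
enum Provider {
    Anthropic,
    OpenRouter,
    OpenAi,
    Vertex,
}

// Parse an APPAM_PROVIDER value; unknown strings yield None.
fn parse_provider(s: &str) -> Option<Provider> {
    match s {
        "anthropic" => Some(Provider::Anthropic),
        "openrouter" => Some(Provider::OpenRouter),
        "openai" => Some(Provider::OpenAi),
        "vertex" => Some(Provider::Vertex),
        _ => None, // remaining values from the table omitted for brevity
    }
}

fn main() {
    assert_eq!(parse_provider("anthropic"), Some(Provider::Anthropic));
    assert_eq!(parse_provider("bogus"), None);
    println!("ok");
}
```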
Provider API Keys
| Variable | Provider |
|---|---|
| ANTHROPIC_API_KEY | Anthropic |
| OPENAI_API_KEY | OpenAI |
| OPENROUTER_API_KEY | OpenRouter |
| GOOGLE_VERTEX_API_KEY | Google Vertex (API key auth) |
| GOOGLE_VERTEX_ACCESS_TOKEN | Google Vertex (OAuth bearer) |
| AZURE_OPENAI_API_KEY | Azure OpenAI |
| AZURE_API_KEY | Azure Anthropic fallback credential |
| AZURE_ANTHROPIC_API_KEY | Azure Anthropic API key |
| AZURE_ANTHROPIC_AUTH_TOKEN | Azure Anthropic bearer token |
| AWS_ACCESS_KEY_ID | AWS Bedrock (SigV4) |
| AWS_SECRET_ACCESS_KEY | AWS Bedrock (SigV4) |
| AWS_BEARER_TOKEN_BEDROCK | AWS Bedrock (bearer) |
Provider Models and Endpoints
| Variable | Description |
|---|---|
| OPENAI_MODEL | OpenAI model identifier |
| OPENAI_BASE_URL | OpenAI API base URL |
| OPENAI_ORGANIZATION | Optional OpenAI organization header |
| OPENAI_PROJECT | Optional OpenAI project header |
| OPENAI_CODEX_MODEL | OpenAI Codex model identifier |
| OPENAI_CODEX_BASE_URL | OpenAI Codex backend base URL |
| OPENAI_CODEX_ACCESS_TOKEN | Explicit ChatGPT OAuth access token |
| OPENAI_CODEX_AUTH_FILE | OpenAI Codex auth cache file path |
| AZURE_OPENAI_MODEL | Azure deployment/model override |
| OPENROUTER_MODEL | OpenRouter model identifier |
| OPENROUTER_BASE_URL | OpenRouter API base URL |
| ANTHROPIC_MODEL | Anthropic model identifier |
| ANTHROPIC_BASE_URL | Anthropic API base URL |
| AZURE_ANTHROPIC_MODEL | Azure Anthropic deployment/model override |
| AZURE_ANTHROPIC_BASE_URL | Azure Anthropic base URL |
| AZURE_ANTHROPIC_RESOURCE | Azure Anthropic resource name used to derive the base URL |
| AZURE_ANTHROPIC_AUTH_METHOD | Azure Anthropic auth method (x_api_key or bearer) |
| GOOGLE_VERTEX_MODEL | Vertex/Gemini model identifier |
| GOOGLE_VERTEX_LOCATION | Vertex region (e.g., us-central1) |
| GOOGLE_VERTEX_PROJECT | Google Cloud project ID |
| GOOGLE_VERTEX_BASE_URL | Vertex API base URL |
| GOOGLE_VERTEX_INCLUDE_THOUGHTS | Enable thought blocks (true/false) |
| GOOGLE_VERTEX_THINKING_LEVEL | Thinking level hint (LOW, MEDIUM, HIGH) |
| AZURE_OPENAI_RESOURCE | Azure resource name |
| AZURE_OPENAI_API_VERSION | Azure API version |
| AWS_REGION | AWS region for Bedrock |
| AWS_BEDROCK_MODEL_ID | Bedrock model identifier |
Logging
| Variable | Description | Default |
|---|---|---|
| APPAM_LOG_LEVEL | Log level | info |
| APPAM_LOGS_DIR | Logs directory path | logs |
| APPAM_LOG_FORMAT | Log file format (plain, json, both) | both |
| APPAM_TRACE_FORMAT | Trace detail level (compact, detailed) | detailed |
| APPAM_ENABLE_LOGS | Enable framework log files | false |
| APPAM_ENABLE_TRACES | Enable agent session traces | false |
Session History
| Variable | Description | Default |
|---|---|---|
| APPAM_HISTORY_ENABLED | Enable session persistence | false |
| APPAM_HISTORY_DB_PATH | SQLite database path | data/sessions.db |
Nested Configuration Structs
LoggingConfig
```rust
pub struct LoggingConfig {
    pub logs_dir: PathBuf,         // Default: "logs"
    pub human_console: bool,       // Default: true
    pub level: String,             // Default: "info"
    pub log_format: LogFormat,     // Default: Both
    pub enable_logs: bool,         // Default: false
    pub enable_traces: bool,       // Default: false
    pub trace_format: TraceFormat, // Default: Detailed
}
```

HistoryConfig
```rust
pub struct HistoryConfig {
    pub enabled: bool,               // Default: false
    pub db_path: PathBuf,            // Default: "data/sessions.db"
    pub auto_save: bool,             // Default: true
    pub max_sessions: Option<usize>, // Default: None (unlimited)
}
```

WebConfig
```rust
pub struct WebConfig {
    pub host: String,                        // Default: "0.0.0.0"
    pub port: u16,                           // Default: 3000
    pub cors: bool,                          // Default: true
    pub rate_limit: Option<RateLimitConfig>, // Default: None
}
```

RateLimitConfig
```rust
pub struct RateLimitConfig {
    pub requests_per_minute: u64, // Default: 60
    pub burst: u32,               // Default: 10
}
```

Provider Feature Validation
Appam validates provider-specific features at startup and emits warnings for incompatible configurations. For example:
- Using Anthropic extended thinking with OpenRouter emits a warning suggesting OpenRouter's reasoning config instead
- Using OpenRouter attribution headers with Anthropic emits a warning that they will be ignored
- Using Anthropic prompt caching with OpenAI emits a warning about the different caching mechanism
These are warnings, not errors. The configuration still loads, but incompatible provider-specific settings are ignored by the selected backend.
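A validation pass like this can be modeled as a function that inspects the provider/feature combination and returns warnings rather than errors. The sketch below uses invented names and covers only the first example above; it is not the actual Appam API:

```rust
#[derive(PartialEq)]
enum Provider {
    Anthropic,
    OpenRouter,
}

// Collect warnings for provider-incompatible features instead of failing.
fn validate(provider: &Provider, uses_extended_thinking: bool) -> Vec<String> {
    let mut warnings = Vec::new();
    if uses_extended_thinking && *provider == Provider::OpenRouter {
        warnings.push(
            "extended thinking is Anthropic-specific; use OpenRouter's reasoning config"
                .to_string(),
        );
    }
    warnings
}

fn main() {
    // Compatible combination: no warnings, configuration proceeds.
    assert!(validate(&Provider::Anthropic, true).is_empty());
    // Incompatible combination: a warning is emitted, but nothing fails.
    assert_eq!(validate(&Provider::OpenRouter, true).len(), 1);
    println!("ok");
}
```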