Appam
Core Concepts

Configuration

Hierarchical configuration with TOML files and environment variable overrides.

Overview

Appam uses a layered configuration system in which each layer can override the one before it:

  1. Defaults -- Sensible built-in values
  2. TOML file -- Global or per-agent configuration file
  3. Environment variables -- Highest priority, override everything

This hierarchy lets you define base settings in a TOML file, then override specific values per environment without changing code.
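The precedence can be sketched as a simple resolution function. This is illustrative only, not Appam's internal code; `toml_value` stands in for a value parsed from appam.toml:

```rust
use std::env;

// Illustrative three-layer resolution: default < TOML < environment.
fn resolve(default: &str, toml_value: Option<&str>, env_key: &str) -> String {
    let mut value = default.to_string();
    if let Some(v) = toml_value {
        value = v.to_string(); // TOML overrides the built-in default
    }
    if let Ok(v) = env::var(env_key) {
        value = v; // environment variables win over everything
    }
    value
}

fn main() {
    // With no env override set, the TOML value wins over the default.
    let provider = resolve("openrouter", Some("anthropic"), "APPAM_PROVIDER_EXAMPLE");
    println!("{provider}"); // "anthropic", as long as APPAM_PROVIDER_EXAMPLE is unset
}
```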

AppConfig

The top-level configuration struct contains nested configs for each subsystem:

pub struct AppConfig {
    pub provider: LlmProvider,          // Which LLM provider to use
    pub openrouter: OpenRouterConfig,   // OpenRouter-specific settings
    pub anthropic: AnthropicConfig,     // Anthropic-specific settings
    pub openai: OpenAIConfig,           // OpenAI-specific settings
    pub openai_codex: OpenAICodexConfig, // OpenAI Codex-specific settings
    pub vertex: VertexConfig,           // Google Vertex-specific settings
    pub logging: LoggingConfig,         // Logging configuration
    pub history: HistoryConfig,         // Session history configuration
    pub web: Option<WebConfig>,         // Web server configuration (optional)
}

Loading Configuration

From Environment Only

For programmatic agent creation where you do not want automatic file loading:

use appam::config::load_config_from_env;

let config = load_config_from_env()?;

This starts from defaults and applies environment variable overrides.

From TOML File

Load from a specific file, then apply environment overrides:

use appam::config::load_global_config;
use std::path::Path;

let config = load_global_config(Path::new("appam.toml"))?;

TOML File Format

A complete appam.toml example:

provider = "anthropic"

[anthropic]
model = "claude-sonnet-4-5"
max_tokens = 8192

[openai]
model = "gpt-4o"
max_output_tokens = 4096

[openai_codex]
model = "gpt-5.4"

[openrouter]
model = "openai/gpt-5"
max_output_tokens = 9000

[vertex]
model = "gemini-2.5-flash"
location = "us-central1"

[logging]
level = "info"
logs_dir = "logs"
enable_logs = false
enable_traces = false
log_format = "both"          # plain | json | both
trace_format = "detailed"    # compact | detailed
human_console = true

[history]
enabled = true
db_path = "data/sessions.db"
auto_save = true
max_sessions = 1000

[web]
host = "0.0.0.0"
port = 3000
cors = true

[web.rate_limit]
requests_per_minute = 60
burst = 10

Only include the sections you need. Unspecified values use defaults.
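For instance, a minimal appam.toml that only switches the provider and pins its model, leaving every other subsystem at its defaults:

```toml
provider = "anthropic"

[anthropic]
model = "claude-sonnet-4-5"
```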

AppConfigBuilder

Build configuration programmatically with the fluent builder:

use appam::prelude::*;
use appam::config::AppConfigBuilder;

let config = AppConfigBuilder::new()
    .openrouter_api_key("sk-or-v1-...")
    .model("openai/gpt-5")
    .log_level("debug")
    .logs_dir("./my-logs")
    .enable_history("data/sessions.db")
    .history_max_sessions(500)
    .web_host("127.0.0.1")
    .web_port(8080)
    .web_cors(true)
    .rate_limit_rpm(100)
    .rate_limit_burst(20)
    .build();

Builder Methods

Method                         Description
openrouter_api_key(str)        Set the OpenRouter API key
openrouter_base_url(str)       Set the OpenRouter base URL
model(str)                     Set the default model
log_level(str)                 Set logging level (trace, debug, info, warn, error)
logs_dir(path)                 Set the logs directory
human_console(bool)            Enable/disable human-readable console output
log_format(LogFormat)          Set log file format (Plain, Json, Both)
enable_logs(bool)              Enable/disable framework log files
enable_traces(bool)            Enable/disable agent session trace files
trace_format(TraceFormat)      Set trace detail level (Compact, Detailed)
enable_history(path)           Enable session history with a database path
history_auto_save(bool)        Enable/disable automatic session saving
history_max_sessions(usize)    Set max sessions to retain
web_host(str)                  Set web server bind address
web_port(u16)                  Set web server port
web_cors(bool)                 Enable/disable CORS
rate_limit_rpm(u64)            Requests per minute per IP
rate_limit_burst(u32)          Burst request allowance

AgentConfigBuilder

Build per-agent TOML configurations programmatically:

use appam::config::AgentConfigBuilder;

let config = AgentConfigBuilder::new("my-agent")
    .model("openai/gpt-5")
    .system_prompt("prompts/assistant.txt")
    .description("A helpful coding assistant")
    .version("1.0.0")
    .add_python_tool("echo", "tools/echo.json", "tools/echo.py")
    .add_rust_tool("bash", "tools/bash.json", "appam::tools::builtin::bash")
    .build()?;

// Or save directly to a TOML file
AgentConfigBuilder::new("my-agent")
    .model("anthropic/claude-sonnet-4-5")
    .system_prompt("prompts/assistant.txt")
    .save_to_file("agents/my-agent.toml")?;

Environment Variable Overrides

Environment variables take highest priority and override both defaults and TOML settings. All Appam-specific variables are prefixed with APPAM_.
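The lookup pattern amounts to "use the env var if present, otherwise keep the value already resolved from TOML or defaults". A sketch of that fallback, not the framework's actual loader:

```rust
use std::env;

// Illustrative: an env var, when set, replaces the value that came
// from appam.toml or from the built-in default.
fn with_env_override(env_key: &str, current: &str) -> String {
    env::var(env_key).unwrap_or_else(|_| current.to_string())
}

fn main() {
    // Falls back to "info" unless APPAM_LOG_LEVEL is set in the environment.
    let level = with_env_override("APPAM_LOG_LEVEL", "info");
    println!("log level: {level}");
}
```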

Provider Selection

Variable          Description                Example
APPAM_PROVIDER    Override the LLM provider  anthropic, openrouter, openrouter-completions, openrouter-responses, openai, openai-codex, vertex, azure-openai, azure-anthropic, bedrock

Provider API Keys

Variable                      Provider
ANTHROPIC_API_KEY             Anthropic
OPENAI_API_KEY                OpenAI
OPENROUTER_API_KEY            OpenRouter
GOOGLE_VERTEX_API_KEY         Google Vertex (API key auth)
GOOGLE_VERTEX_ACCESS_TOKEN    Google Vertex (OAuth bearer)
AZURE_OPENAI_API_KEY          Azure OpenAI
AZURE_API_KEY                 Azure Anthropic fallback credential
AZURE_ANTHROPIC_API_KEY       Azure Anthropic API key
AZURE_ANTHROPIC_AUTH_TOKEN    Azure Anthropic bearer token
AWS_ACCESS_KEY_ID             AWS Bedrock (SigV4)
AWS_SECRET_ACCESS_KEY         AWS Bedrock (SigV4)
AWS_BEARER_TOKEN_BEDROCK      AWS Bedrock (bearer)

Provider Models and Endpoints

Variable                         Description
OPENAI_MODEL                     OpenAI model identifier
OPENAI_BASE_URL                  OpenAI API base URL
OPENAI_ORGANIZATION              Optional OpenAI organization header
OPENAI_PROJECT                   Optional OpenAI project header
OPENAI_CODEX_MODEL               OpenAI Codex model identifier
OPENAI_CODEX_BASE_URL            OpenAI Codex backend base URL
OPENAI_CODEX_ACCESS_TOKEN        Explicit ChatGPT OAuth access token
OPENAI_CODEX_AUTH_FILE           OpenAI Codex auth cache file path
AZURE_OPENAI_MODEL               Azure deployment/model override
OPENROUTER_MODEL                 OpenRouter model identifier
OPENROUTER_BASE_URL              OpenRouter API base URL
ANTHROPIC_MODEL                  Anthropic model identifier
ANTHROPIC_BASE_URL               Anthropic API base URL
AZURE_ANTHROPIC_MODEL            Azure Anthropic deployment/model override
AZURE_ANTHROPIC_BASE_URL         Azure Anthropic base URL
AZURE_ANTHROPIC_RESOURCE         Azure Anthropic resource name used to derive the base URL
AZURE_ANTHROPIC_AUTH_METHOD      Azure Anthropic auth method (x_api_key or bearer)
GOOGLE_VERTEX_MODEL              Vertex/Gemini model identifier
GOOGLE_VERTEX_LOCATION           Vertex region (e.g., us-central1)
GOOGLE_VERTEX_PROJECT            Google Cloud project ID
GOOGLE_VERTEX_BASE_URL           Vertex API base URL
GOOGLE_VERTEX_INCLUDE_THOUGHTS   Enable thought blocks (true/false)
GOOGLE_VERTEX_THINKING_LEVEL     Thinking level hint (LOW, MEDIUM, HIGH)
AZURE_OPENAI_RESOURCE            Azure resource name
AZURE_OPENAI_API_VERSION         Azure API version
AWS_REGION                       AWS region for Bedrock
AWS_BEDROCK_MODEL_ID             Bedrock model identifier

Logging

Variable              Description                              Default
APPAM_LOG_LEVEL       Log level                                info
APPAM_LOGS_DIR        Logs directory path                      logs
APPAM_LOG_FORMAT      Log file format (plain, json, both)      both
APPAM_TRACE_FORMAT    Trace detail level (compact, detailed)   detailed
APPAM_ENABLE_LOGS     Enable framework log files               false
APPAM_ENABLE_TRACES   Enable agent session traces              false

Session History

Variable                 Description                  Default
APPAM_HISTORY_ENABLED    Enable session persistence   false
APPAM_HISTORY_DB_PATH    SQLite database path         data/sessions.db

Nested Configuration Structs

LoggingConfig

pub struct LoggingConfig {
    pub logs_dir: PathBuf,        // Default: "logs"
    pub human_console: bool,      // Default: true
    pub level: String,            // Default: "info"
    pub log_format: LogFormat,    // Default: Both
    pub enable_logs: bool,        // Default: false
    pub enable_traces: bool,      // Default: false
    pub trace_format: TraceFormat, // Default: Detailed
}

HistoryConfig

pub struct HistoryConfig {
    pub enabled: bool,               // Default: false
    pub db_path: PathBuf,            // Default: "data/sessions.db"
    pub auto_save: bool,             // Default: true
    pub max_sessions: Option<usize>, // Default: None (unlimited)
}

WebConfig

pub struct WebConfig {
    pub host: String,                       // Default: "0.0.0.0"
    pub port: u16,                          // Default: 3000
    pub cors: bool,                         // Default: true
    pub rate_limit: Option<RateLimitConfig>, // Default: None
}

RateLimitConfig

pub struct RateLimitConfig {
    pub requests_per_minute: u64, // Default: 60
    pub burst: u32,               // Default: 10
}
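Given the definitions above, a rate-limited web config can be assembled directly. The structs below are redeclared locally so the sketch stands alone; in real code they come from the appam crate:

```rust
// Local mirror of the documented structs (sketch only).
pub struct RateLimitConfig {
    pub requests_per_minute: u64,
    pub burst: u32,
}

pub struct WebConfig {
    pub host: String,
    pub port: u16,
    pub cors: bool,
    pub rate_limit: Option<RateLimitConfig>,
}

fn main() {
    // A web server bound locally with a 100 rpm / burst-20 limit.
    let web = WebConfig {
        host: "127.0.0.1".to_string(),
        port: 8080,
        cors: true,
        rate_limit: Some(RateLimitConfig { requests_per_minute: 100, burst: 20 }),
    };
    assert!(web.rate_limit.is_some());
    println!("serving on {}:{}", web.host, web.port);
}
```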

Provider Feature Validation

Appam validates provider-specific features at startup and emits warnings for incompatible configurations. For example:

  • Using Anthropic extended thinking with OpenRouter emits a warning suggesting OpenRouter's reasoning config instead
  • Using OpenRouter attribution headers with Anthropic emits a warning that they will be ignored
  • Using Anthropic prompt caching with OpenAI emits a warning about the different caching mechanism

These are warnings, not errors. The configuration still loads, but incompatible provider-specific settings are ignored by the selected backend.