# API Reference

## AppConfig

Root configuration struct for the Appam framework.

AppConfig aggregates provider-specific settings, logging, session history, web server, and rate limiting configuration into a single structure that can be loaded from TOML files, environment variables, or constructed programmatically with `AppConfigBuilder`.
```rust
pub struct AppConfig {
    pub provider: LlmProvider,
    pub openrouter: OpenRouterConfig,
    pub anthropic: AnthropicConfig,
    pub openai: OpenAIConfig,
    pub openai_codex: OpenAICodexConfig,
    pub vertex: VertexConfig,
    pub logging: LoggingConfig,
    pub history: HistoryConfig,
    pub web: Option<WebConfig>,
}
```
| Field | Type | Description |
|---|---|---|
| `provider` | `LlmProvider` | Active LLM provider selection (defaults to OpenRouter) |
| `openrouter` | `OpenRouterConfig` | OpenRouter API configuration |
| `anthropic` | `AnthropicConfig` | Anthropic Messages API configuration |
| `openai` | `OpenAIConfig` | OpenAI Responses API configuration |
| `openai_codex` | `OpenAICodexConfig` | OpenAI Codex subscription-backed Responses configuration |
| `vertex` | `VertexConfig` | Google Vertex AI (Gemini) configuration |
| `logging` | `LoggingConfig` | Logging and trace configuration |
| `history` | `HistoryConfig` | Session history persistence settings |
| `web` | `Option<WebConfig>` | Optional web API server configuration |
All provider configuration sections are always present (with defaults) so that switching providers only requires changing the provider field. Appam validates that provider-specific features are not misconfigured and emits warnings for incompatibilities.
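For example, because every provider section ships with defaults, switching from the default OpenRouter provider to Anthropic is a one-line change (the model value below is illustrative):

```toml
# Only the top-level provider key needs to change; the other
# provider sections keep their defaults and are ignored at runtime.
provider = "anthropic"

[anthropic]
model = "claude-sonnet-4-5"
```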
## LoggingConfig

```rust
pub struct LoggingConfig {
    pub logs_dir: PathBuf,         // default: "logs"
    pub human_console: bool,       // default: true
    pub level: String,             // default: "info"
    pub log_format: LogFormat,     // default: Both
    pub enable_logs: bool,         // default: false
    pub enable_traces: bool,       // default: false
    pub trace_format: TraceFormat, // default: Detailed
}
```
| Field | Default | Description |
|---|---|---|
| `logs_dir` | `"logs"` | Directory for log files and session transcripts |
| `human_console` | `true` | Enable human-readable console output |
| `level` | `"info"` | Log level: `trace`, `debug`, `info`, `warn`, `error` |
| `log_format` | `Both` | Log file format: `Plain` (.log), `Json` (.jsonl), or `Both` |
| `enable_logs` | `false` | Enable framework logs (`run-*.log`, `run-*.jsonl`). When disabled, logs go to console only. |
| `enable_traces` | `false` | Enable agent session traces (`session-*.jsonl`, `session-*.json`). Controls conversation traces, not framework logs. |
| `trace_format` | `Detailed` | Trace detail level: `Compact` (essentials only) or `Detailed` (full, including reasoning) |
```rust
pub enum LogFormat {
    Plain, // Human-readable .log files
    Json,  // Structured .jsonl files
    Both,  // Both formats simultaneously
}

pub enum TraceFormat {
    Compact,  // Essential information only
    Detailed, // Full details including reasoning
}
```
## HistoryConfig

```rust
pub struct HistoryConfig {
    pub db_path: PathBuf,            // default: "data/sessions.db"
    pub enabled: bool,               // default: false
    pub auto_save: bool,             // default: true
    pub max_sessions: Option<usize>, // default: None (unlimited)
}
```
| Field | Default | Description |
|---|---|---|
| `enabled` | `false` | Enable persistent session history in SQLite |
| `db_path` | `"data/sessions.db"` | Path to the SQLite database file |
| `auto_save` | `true` | Automatically save sessions after completion |
| `max_sessions` | `None` | Maximum sessions to keep (`None` = unlimited) |
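The `max_sessions` cap can be read as a keep-last-N policy. The sketch below is an illustration of those semantics, not Appam's actual implementation (`prune` is a hypothetical helper):

```rust
// Illustrative only: drop the oldest sessions once the count
// exceeds the configured limit; None means keep everything.
fn prune(sessions: &mut Vec<u64>, max_sessions: Option<usize>) {
    if let Some(limit) = max_sessions {
        if sessions.len() > limit {
            let excess = sessions.len() - limit;
            sessions.drain(..excess); // oldest entries sit at the front
        }
    }
}

fn main() {
    let mut sessions = vec![1, 2, 3, 4, 5];
    prune(&mut sessions, Some(3));
    assert_eq!(sessions, vec![3, 4, 5]); // oldest two pruned
    prune(&mut sessions, None);
    assert_eq!(sessions, vec![3, 4, 5]); // unlimited: nothing removed
    println!("ok");
}
```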
## WebConfig

```rust
pub struct WebConfig {
    pub host: String, // default: "0.0.0.0"
    pub port: u16,    // default: 3000
    pub cors: bool,   // default: true
    pub rate_limit: Option<RateLimitConfig>,
}
```
| Field | Default | Description |
|---|---|---|
| `host` | `"0.0.0.0"` | Host address to bind the web server to |
| `port` | `3000` | Port to listen on |
| `cors` | `true` | Enable CORS headers |
| `rate_limit` | `None` | Optional rate limiting configuration |
## RateLimitConfig

```rust
pub struct RateLimitConfig {
    pub requests_per_minute: u64, // default: 60
    pub burst: u32,               // default: 10
}
```
| Field | Default | Description |
|---|---|---|
| `requests_per_minute` | `60` | Maximum requests per minute per IP |
| `burst` | `10` | Burst size for the rate limiter |
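`requests_per_minute` and `burst` are the two knobs of a classic token bucket. As an illustration of what the settings mean (this is a sketch, not Appam's internal limiter), tokens refill at `requests_per_minute / 60` per second and `burst` caps how many requests can be admitted back-to-back:

```rust
// Hypothetical token bucket showing the semantics of the two fields.
struct Bucket {
    tokens: f64,
    burst: f64,
    refill_per_sec: f64,
}

impl Bucket {
    fn new(requests_per_minute: u64, burst: u32) -> Self {
        Bucket {
            tokens: burst as f64, // start full: the whole burst is available
            burst: burst as f64,
            refill_per_sec: requests_per_minute as f64 / 60.0,
        }
    }

    // Advance time; tokens accumulate but never exceed the burst cap.
    fn tick(&mut self, elapsed_secs: f64) {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.burst);
    }

    // Admit a request if a whole token is available.
    fn try_acquire(&mut self) -> bool {
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut b = Bucket::new(60, 10); // the RateLimitConfig defaults
    // The full burst of 10 is admitted immediately...
    assert!((0..10).all(|_| b.try_acquire()));
    // ...then the 11th request is rejected until tokens refill.
    assert!(!b.try_acquire());
    b.tick(1.0); // 60 rpm => one token per second
    assert!(b.try_acquire());
    println!("ok");
}
```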
## Loading configuration

### load_config_from_env

```rust
use appam::config::load_config_from_env;

let config = load_config_from_env()?;
```

Loads defaults and applies environment variable overrides. Does not read any config file. Use this for programmatic agent creation where you want full control.
### load_global_config

```rust
use appam::config::load_global_config;
use std::path::Path;

let config = load_global_config(Path::new("appam.toml"))?;
```

Reads the TOML file, then applies environment variable overrides on top. Environment variables always take precedence.
## Configuration precedence

Configuration is layered with later sources overriding earlier ones:

1. **Default values** -- hardcoded defaults for all fields
2. **TOML config file** -- values from the loaded configuration file
3. **Environment variables** -- highest priority, always wins
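The precedence chain for a single field can be sketched with `Option` chaining (illustrative only; `resolve` is not part of the Appam API):

```rust
// Default < TOML < environment: the first Some wins, scanning
// from the highest-priority source down.
fn resolve(default: &str, from_toml: Option<&str>, from_env: Option<&str>) -> String {
    from_env.or(from_toml).unwrap_or(default).to_string()
}

fn main() {
    assert_eq!(resolve("info", None, None), "info");                  // defaults only
    assert_eq!(resolve("info", Some("debug"), None), "debug");        // TOML overrides default
    assert_eq!(resolve("info", Some("debug"), Some("warn")), "warn"); // env always wins
    println!("ok");
}
```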
## Environment variables

### Provider selection

| Variable | Description |
|---|---|
| `APPAM_PROVIDER` | Override provider: `anthropic`, `openrouter`, `openrouter-completions`, `openrouter-responses`, `openai`, `openai-codex`, `vertex`, `azure-openai`, `azure-anthropic`, `bedrock` |
### OpenRouter

| Variable | Description |
|---|---|
| `OPENROUTER_API_KEY` | API key |
| `OPENROUTER_MODEL` | Model identifier |
| `OPENROUTER_BASE_URL` | API base URL |
### Anthropic / Azure Anthropic

| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | API key |
| `ANTHROPIC_MODEL` | Model identifier |
| `ANTHROPIC_BASE_URL` | API base URL |
| `AZURE_ANTHROPIC_MODEL` | Azure Anthropic deployment/model override |
| `AZURE_ANTHROPIC_BASE_URL` | Azure Anthropic base URL |
| `AZURE_ANTHROPIC_RESOURCE` | Azure Anthropic resource name used to derive the base URL |
| `AZURE_ANTHROPIC_AUTH_METHOD` | Azure Anthropic auth method override |
| `AZURE_ANTHROPIC_API_KEY` | Azure Anthropic API key for `x-api-key` auth |
| `AZURE_ANTHROPIC_AUTH_TOKEN` | Azure Anthropic bearer token |
| `AZURE_API_KEY` | Azure Anthropic fallback credential used by the client and examples |
### OpenAI / Azure OpenAI

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | OpenAI API key |
| `OPENAI_MODEL` | OpenAI model identifier |
| `OPENAI_BASE_URL` | OpenAI API base URL |
| `OPENAI_ORGANIZATION` | Optional OpenAI organization header |
| `OPENAI_PROJECT` | Optional OpenAI project header |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key |
| `AZURE_OPENAI_RESOURCE` | Azure resource name |
| `AZURE_OPENAI_API_VERSION` | Azure API version |
| `AZURE_OPENAI_MODEL` | Azure deployment/model override |
### OpenAI Codex

| Variable | Description |
|---|---|
| `OPENAI_CODEX_MODEL` | OpenAI Codex model identifier |
| `OPENAI_CODEX_BASE_URL` | OpenAI Codex backend base URL |
| `OPENAI_CODEX_ACCESS_TOKEN` | Explicit ChatGPT OAuth access token |
| `OPENAI_CODEX_AUTH_FILE` | OpenAI Codex auth cache file path override |
### Google Vertex AI

| Variable | Description |
|---|---|
| `GOOGLE_VERTEX_API_KEY` | API key for Vertex/Gemini |
| `GOOGLE_VERTEX_ACCESS_TOKEN` | OAuth bearer token |
| `GOOGLE_VERTEX_MODEL` | Model identifier |
| `GOOGLE_VERTEX_LOCATION` | Vertex location (e.g., `us-central1`) |
| `GOOGLE_VERTEX_PROJECT` | Google Cloud project ID |
| `GOOGLE_VERTEX_BASE_URL` | API base URL |
| `GOOGLE_VERTEX_INCLUDE_THOUGHTS` | Enable thought blocks (`true`/`false`) |
| `GOOGLE_VERTEX_THINKING_LEVEL` | Thinking level hint: `LOW`, `MEDIUM`, `HIGH` |
### AWS Bedrock

| Variable | Description |
|---|---|
| `AWS_ACCESS_KEY_ID` | AWS access key for SigV4 auth |
| `AWS_SECRET_ACCESS_KEY` | AWS secret key for SigV4 auth |
| `AWS_SESSION_TOKEN` | Optional session token for SigV4 auth |
| `AWS_REGION` | Bedrock region |
| `AWS_DEFAULT_REGION` | Fallback Bedrock region |
| `AWS_BEDROCK_MODEL_ID` | Bedrock model identifier override |
| `AWS_BEARER_TOKEN_BEDROCK` | Bearer-token auth for non-streaming Bedrock requests |
### Logging

| Variable | Description |
|---|---|
| `APPAM_LOG_LEVEL` | Logging level (`trace`, `debug`, `info`, `warn`, `error`) |
| `APPAM_LOGS_DIR` | Logs directory path |
| `APPAM_LOG_FORMAT` | Log format (`plain`, `json`, `both`) |
| `APPAM_TRACE_FORMAT` | Trace format (`compact`, `detailed`) |
| `APPAM_ENABLE_LOGS` | Enable framework logs (`true`/`false`) |
| `APPAM_ENABLE_TRACES` | Enable session traces (`true`/`false`) |
### History

| Variable | Description |
|---|---|
| `APPAM_HISTORY_ENABLED` | Enable session history (`true`/`false`) |
| `APPAM_HISTORY_DB_PATH` | Database file path |
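Putting the tables above together, a typical launch with environment overrides might look like this (the binary name and key value are placeholders):

```shell
# Select the Anthropic provider and turn on verbose logging and traces.
export APPAM_PROVIDER=anthropic
export ANTHROPIC_API_KEY="your-api-key"   # placeholder value
export APPAM_LOG_LEVEL=debug
export APPAM_ENABLE_TRACES=true
# ./my-appam-app                          # hypothetical binary name
```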
## Example appam.toml

```toml
provider = "anthropic"

[anthropic]
model = "claude-sonnet-4-5"
max_tokens = 8192

[openrouter]
model = "openai/gpt-5"
max_output_tokens = 9000

[openai]
model = "gpt-4o"
max_output_tokens = 4096

[openai_codex]
model = "gpt-5.4"

[vertex]
model = "gemini-2.5-flash"
location = "us-central1"

[logging]
level = "info"
logs_dir = "logs"
log_format = "both"
enable_logs = false
enable_traces = true
trace_format = "detailed"

[history]
enabled = true
db_path = "data/sessions.db"
auto_save = true
max_sessions = 500

[web]
host = "0.0.0.0"
port = 3000
cors = true

[web.rate_limit]
requests_per_minute = 60
burst = 10
```
## Validation

Appam validates configuration at load time and emits warnings when provider-specific features are configured for the wrong provider. For example:

- Anthropic thinking configuration with an OpenRouter provider triggers a warning suggesting OpenRouter's reasoning configuration instead
- OpenRouter reasoning configuration with an Anthropic provider triggers a warning suggesting Anthropic's thinking configuration
- Anthropic caching with an OpenAI provider warns about the difference in caching mechanisms

These are warnings, not errors -- the configuration still loads successfully, but the incompatible settings are ignored at runtime.