All configuration lives in a single TOML file. Default location: `~/.denkeeper/denkeeper.toml`.
[telegram]

| Key | Type | Default | Description |
|---|---|---|---|
| `token` | string | required | Bot token from @BotFather |
| `allowed_users` | int[] | required | Telegram user IDs allowed to interact |
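A minimal `[telegram]` section might look like the following sketch (the token and user ID are placeholders):

```toml
[telegram]
token = "123456:ABC-placeholder"   # token issued by @BotFather
allowed_users = [123456789]        # your numeric Telegram user ID
```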
[discord]

| Key | Type | Default | Description |
|---|---|---|---|
| `token` | string | required | Discord bot token |
| `allowed_users` | string[] | required | Discord user snowflake IDs |
[llm]

| Key | Type | Default | Description |
|---|---|---|---|
| `default_provider` | string | `"openrouter"` | Name of the provider instance to use by default (must match a configured instance name) |
| `default_model` | string | — | Model identifier (format depends on provider) |
| `cost_limit_soft` | float | 0 | Soft cost limit per session in USD (warns but continues) |
| `cost_limit_hard` | float | 1.0 | Hard cost limit per session in USD (stops generation) |
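Put together, a typical `[llm]` block could look like this (the model identifier is a placeholder; use whatever your chosen provider accepts):

```toml
[llm]
default_provider = "openrouter"
default_model = "anthropic/claude-sonnet"   # placeholder; format depends on provider
cost_limit_soft = 0.50                      # warn at $0.50 per session
cost_limit_hard = 1.00                      # stop generation at $1.00
```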
[[llm.providers]]

Named provider instances. Multiple entries of the same type are allowed, enabling e.g. OpenAI and a local LM Studio endpoint simultaneously. Each instance is addressable by its unique name.

| Key | Type | Description |
|---|---|---|
| `name` | string | Unique instance name (used in `default_provider` and per-agent `llm_provider`) |
| `type` | string | Provider type: `"anthropic"`, `"openai"`, `"openrouter"`, or `"ollama"` |
| `api_key` | string | API key (required for all types except ollama) |
| `base_url` | string | API endpoint override (useful for Azure, vLLM, LM Studio, etc.) |
| `organization` | string | OpenAI organization ID (openai type only) |
```toml
[[llm.providers]]
name = "openai"
type = "openai"
api_key = "sk-..."

[[llm.providers]]
name = "lmstudio"
type = "openai"
base_url = "http://localhost:1234/v1"
api_key = "lm-studio"
```
Legacy single-slot syntax (`[llm.openai]`, `[llm.anthropic]`, etc.) is still supported and auto-converted at startup. The two styles can coexist; an explicit `[[llm.providers]]` entry with the same name takes precedence.
[llm.openrouter] (legacy)

| Key | Type | Default | Description |
|---|---|---|---|
| `api_key` | string | required | OpenRouter API key |
[llm.anthropic] (legacy)

| Key | Type | Default | Description |
|---|---|---|---|
| `api_key` | string | required | Anthropic API key (`sk-ant-...`) |
| `base_url` | string | `"https://api.anthropic.com"` | API endpoint override |
[llm.ollama] (legacy)

| Key | Type | Default | Description |
|---|---|---|---|
| `base_url` | string | `"http://localhost:11434"` | Ollama server URL |
[llm.openai] (legacy)

| Key | Type | Default | Description |
|---|---|---|---|
| `api_key` | string | required | OpenAI API key |
| `base_url` | string | `"https://api.openai.com/v1"` | API endpoint override (for Azure OpenAI, vLLM, LiteLLM, etc.) |
| `organization` | string | — | OpenAI organization ID (optional) |
Compatible with any endpoint that speaks the OpenAI Chat Completions API format.
[[llm.fallback]]

| Key | Type | Description |
|---|---|---|
| `trigger` | string | `"cost_limit"`, `"rate_limit"`, or `"error"` |
| `action` | string | `"switch_provider"`, `"switch_model"`, or `"wait_and_retry"` |
| `provider` | string | Target provider (for switch_provider) |
| `model` | string | Target model (for switch_model) |
| `scope` | string | `"soft"` or `"hard"` (for cost_limit): which agent cost limit triggers the swap |
| `max_retries` | int | Max retry count (for wait_and_retry) |
| `backoff` | string | `"exponential"` (default) or `"constant"` |
`cost_limit` rules use the agent's `cost_limit_soft` / `cost_limit_hard` (resolved via `[[agents]]` overrides or the global `[llm]` defaults). Legacy `low_funds` rules with a `threshold` field auto-migrate to `cost_limit` with `scope = "soft"` on load.
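Combining the fields above, a hypothetical pair of fallback rules might read (the model name is a placeholder):

```toml
[[llm.fallback]]
trigger = "cost_limit"
scope = "soft"                 # fire on the soft limit, before hard stop
action = "switch_model"
model = "cheaper-model"        # placeholder target model

[[llm.fallback]]
trigger = "rate_limit"
action = "wait_and_retry"
max_retries = 3
backoff = "exponential"
```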
[[agents]]

| Key | Type | Default | Description |
|---|---|---|---|
| `llm_provider` | string | — | Override default provider (must match a configured provider instance name) |
| `llm_model` | string | — | Override default model |
| `session_tier` | string | — | Override default permission tier |
| `cost_limit_soft` | float | — | Per-agent soft cost limit in USD (overrides global) |
| `cost_limit_hard` | float | — | Per-agent hard cost limit in USD (overrides global) |
| `supervisor` | string | — | Name of another agent that auto-reviews tool calls before they reach you (supervised tier only; the supervisor must be autonomous or restricted, not itself supervised) |
| `supervisor_timeout` | string | `"30s"` | Max wait for the supervisor's LLM review. Go duration format (`30s`, `1m`, `90s`). On timeout, falls through to human approval. |
| `supervisor_context_messages` | int | 5 | Number of recent conversation messages passed to the supervisor as context |
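A hypothetical supervised agent entry might look like the sketch below. The agent names are placeholders, and the `name` key is an assumption for illustration (this excerpt does not document how `[[agents]]` entries are named):

```toml
[[agents]]
name = "researcher"            # assumed key; placeholder agent name
llm_provider = "lmstudio"      # must match a [[llm.providers]] name
cost_limit_hard = 0.25         # tighter per-agent hard limit
session_tier = "supervised"
supervisor = "guard"           # placeholder; an autonomous or restricted agent
supervisor_timeout = "45s"
```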
[memory]

| Key | Type | Default | Description |
|---|---|---|---|
| `db_path` | string | `"~/.denkeeper/data/memory.db"` | SQLite database path |
[log]

| Key | Type | Default | Description |
|---|---|---|---|
| `level` | string | `"info"` | `"debug"`, `"info"`, `"warn"`, `"error"` |
| `format` | string | `"text"` | `"text"` or `"json"` |
[voice]

| Key | Type | Default | Description |
|---|---|---|---|
| `stt_provider` | string | — | Speech-to-text provider (`"openai"`) |
| `tts_provider` | string | — | Text-to-speech provider (`"openai"`) |
| `tts_voice` | string | `"alloy"` | Voice name |
| `auto_voice_reply` | bool | false | Reply with voice when the user sends voice |
[voice.openai]

| Key | Type | Default | Description |
|---|---|---|---|
| `api_key` | string | required | OpenAI API key for STT/TTS |
[api]

| Key | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | true | Enable the REST API server and web dashboard |
| `listen` | string | `":8080"` | Bind address |
| `tls` | bool | false | Enable HTTPS |
| `cert_file` | string | — | TLS certificate path |
| `key_file` | string | — | TLS private key path |
| `cors_origins` | string[] | — | Allowed CORS origins |
| `rate_limit` | float | 0 | Max requests/sec per API key |
| `websocket_enabled` | bool | true | Enable the WebSocket endpoint (`GET /api/v1/ws`) |
| `websocket_max_connections` | int | 0 | Maximum concurrent WebSocket connections (0 = unlimited) |
| `websocket_replay_buffer_ttl` | string | `"5m"` | How long to buffer events for replay after a client disconnects |
| `external_url` | string | — | Publicly-reachable base URL (used for OAuth callback URLs; defaults to `http(s)://<listen>`) |
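As one possible configuration, an API server bound to localhost behind a reverse proxy might look like this sketch (the hostnames are placeholders):

```toml
[api]
enabled = true
listen = "127.0.0.1:8080"                        # only reachable via the proxy
cors_origins = ["https://dashboard.example.com"] # placeholder origin
rate_limit = 10.0                                # 10 requests/sec per API key
external_url = "https://denkeeper.example.com"   # public URL for OAuth callbacks
```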
[[schedules]]

| Key | Type | Default | Description |
|---|---|---|---|
| `name` | string | required | Unique schedule name |
| `type` | string | required | `"system"` or `"agent"` |
| `schedule` | string | required | Cron expression, interval, or named schedule |
| `skill` | string | — | Skill to invoke |
| `agent` | string | `"default"` | Target agent |
| `session_tier` | string | `"supervised"` | Permission tier for this schedule |
| `channel` | string | — | Delivery channel (e.g., `"telegram:12345"`) |
| `tags` | string[] | — | Freeform labels |
| `enabled` | bool | true | Enable/disable without removing |
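A hypothetical daily schedule could be declared like this (the schedule name, skill name, and channel ID are placeholders):

```toml
[[schedules]]
name = "morning-briefing"
type = "agent"
schedule = "0 8 * * *"          # cron: every day at 08:00
skill = "daily_summary"         # placeholder skill name
agent = "default"
channel = "telegram:123456789"  # deliver to this Telegram user
tags = ["daily", "summary"]
```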
[plugins.*]

| Key | Type | Default | Description |
|---|---|---|---|
| `type` | string | required | `"subprocess"` or `"docker"` |
| `command` | string | required | Plugin binary path (subprocess) or Docker image (docker) |
| `args` | string[] | — | Command-line arguments |
| `env` | map | — | Environment variable overrides |
| `capabilities` | string[] | required | e.g. `["tools"]` |
| `memory_limit` | string | — | Docker container memory limit (e.g., `"256m"`) |
| `cpu_limit` | string | — | Docker container CPU limit (e.g., `"0.5"`) |
| `network` | string | `"none"` | Docker network mode (`"none"`, `"bridge"`, etc.) |
| `volumes` | string[] | — | Docker bind mounts |
Subprocess plugins run as child processes with direct MCP stdio. Docker plugins run in hardened containers with `--cap-drop ALL`, `--read-only`, `--security-opt no-new-privileges`, and `--network none` by default.
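A hypothetical Docker plugin entry might look like the following sketch (the plugin name and image are placeholders):

```toml
[plugins.webfetch]                    # "webfetch" is a placeholder plugin name
type = "docker"
command = "example/webfetch:latest"   # placeholder Docker image
capabilities = ["tools"]
memory_limit = "256m"
cpu_limit = "0.5"
network = "bridge"                    # override the default "none" to allow egress
```

Note that setting `network = "bridge"` opts out of the default network isolation, so it is worth doing only for plugins that genuinely need outbound access.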
[security]

| Key | Type | Default | Description |
|---|---|---|---|
| `trusted_keys` | string[] | — | Paths to PEM-encoded Ed25519 public key files |
| `allow_unsigned` | bool | true | Allow unsigned subprocess plugin binaries |
When `allow_unsigned = false`, all subprocess plugin binaries must have a valid Ed25519 signature from one of the trusted keys.
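To enforce signing, a locked-down `[security]` section might look like this (the key path is a placeholder):

```toml
[security]
trusted_keys = ["~/.denkeeper/keys/release.pub"]  # placeholder PEM path
allow_unsigned = false                            # reject unsigned plugin binaries
```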
[kv]

| Key | Type | Default | Description |
|---|---|---|---|
| `max_keys_per_agent` | int | 1000 | Maximum keys per agent |
| `max_value_bytes` | int | 65536 | Maximum value size in bytes (64 KB) |
| `cleanup_interval` | string | `"1h"` | Background cleanup interval for expired keys |

Per-agent key-value storage with optional TTL. Exposed as Config MCP tools (`kv_get`, `kv_set`, `kv_delete`, `kv_list`, `kv_set_nx`). Useful for locks, counters, caches, and cross-session coordination.
[sandbox]

| Key | Type | Default | Description |
|---|---|---|---|
| `runtime` | string | `"docker"` | Sandbox backend: `"docker"` or `"kubernetes"` |

Selects the runtime backend for sandboxed (Docker-type) plugins.
[sandbox.kubernetes]

| Key | Type | Default | Description |
|---|---|---|---|
| `namespace` | string | `"denkeeper-sandboxes"` | Kubernetes namespace for sandbox Pods |
| `kubeconfig` | string | — | Path to kubeconfig file (empty uses in-cluster config) |
| `runtime_class` | string | — | RuntimeClassName for gVisor or Kata Containers |
The Kubernetes backend creates ephemeral Pods with init-container network isolation, dropped capabilities, read-only root filesystem, and Pod Security Admission labels. Supports both in-cluster (ServiceAccount) and out-of-cluster (kubeconfig) authentication.
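Switching the sandbox to Kubernetes might look like the following sketch (the kubeconfig path and RuntimeClass name are placeholders for your cluster):

```toml
[sandbox]
runtime = "kubernetes"

[sandbox.kubernetes]
namespace = "denkeeper-sandboxes"
kubeconfig = "~/.kube/config"   # omit to use in-cluster ServiceAccount auth
runtime_class = "gvisor"        # placeholder RuntimeClassName
```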
[mcp]

Global settings that apply to all MCP tool servers.

| Key | Type | Default | Description |
|---|---|---|---|
| `request_timeout_secs` | int | 30 | Per-request timeout for MCP calls (0 = no timeout) |
| `auto_restart` | bool | true | Automatically restart crashed stdio servers |
| `max_restart_attempts` | int | 3 | Consecutive failures before disabling a server |
| `restart_cooldown` | string | `"5m"` | Duration a server must stay connected to reset the failure counter |
| `url_allowlist` | string[] | — | Allowed hostnames/wildcards for SSE tool server URLs (empty = all non-blocked hosts) |
[tools.*]

| Key | Type | Default | Description |
|---|---|---|---|
| `transport` | string | `"stdio"` | Transport type: `"stdio"` (subprocess) or `"sse"` (remote HTTP/SSE) |
| `request_timeout_secs` | int | 0 | Per-server timeout override (0 = use global `[mcp]` value) |
| `auth` | string | `""` | Authentication method: `""` (none) or `"oauth"` (OAuth 2.1, SSE only) |
| `client_id` | string | — | OAuth2 client ID (optional; some servers use dynamic registration) |
| `client_secret` | string | — | OAuth2 client secret (optional; must be set together with `client_id`) |
| `scopes` | string[] | — | OAuth2 scopes to request (optional) |
SSE security: SSRF protection blocks localhost, link-local (169.254.x.x), and cloud metadata endpoints. `${NAME}` placeholders in `url` and `headers` are resolved from the environment, but secrets matching `DENKEEPER_*_SECRET`, `DENKEEPER_*_PASSWORD*`, and related patterns are denied. URL and header values are redacted in API responses.
Tools can also be added and removed at runtime via the REST API (`tools:write` scope) or the Config MCP server (`tool_add`/`tool_remove`). Runtime changes are persisted to the TOML config file.
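A hypothetical remote SSE tool server with OAuth might be configured like this sketch. The server name and URL are placeholders, and the `url` key is assumed from the SSE security notes above (it does not appear in this table excerpt):

```toml
[tools.issues]                        # "issues" is a placeholder server name
transport = "sse"
url = "https://mcp.example.com/sse"   # placeholder; assumed url key
auth = "oauth"                        # OAuth 2.1; client_id omitted to rely on
scopes = ["read"]                     # dynamic client registration
```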
[otel]

| Key | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | false | Enable OpenTelemetry instrumentation |
| `traces_endpoint` | string | — | OTLP HTTP endpoint for trace export (e.g. `"http://localhost:4318"`) |