Loom resolves configuration from multiple sources (highest priority wins):
  1. Environment variables (API_KEY, MODEL, BASE_URL)
  2. YAML config file (configs/loom.yaml, created by loom init)
  3. Dataclass defaults
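
The precedence above amounts to a layered merge in which later sources override earlier ones. A minimal self-contained sketch of that idea — not Loom's actual implementation; all dict names and values here are hypothetical stand-ins:

```python
# Illustrative layered merge: later sources override earlier ones.
# These dicts stand in for the three real sources; names/values are hypothetical.
dataclass_defaults = {"api_key": "", "model": "gpt-5.4",
                      "base_url": "https://api.openai.com/v1"}
yaml_file_values = {"model": "some-yaml-model"}   # e.g. parsed from configs/loom.yaml
env = {"MODEL": "some-env-model"}                 # stand-in for os.environ

# Map env var names to config field names, keeping only vars that are set.
env_map = {"API_KEY": "api_key", "MODEL": "model", "BASE_URL": "base_url"}
env_values = {field: env[var] for var, field in env_map.items() if var in env}

# Highest priority merged last: env beats the YAML file, which beats defaults.
resolved = {**dataclass_defaults, **yaml_file_values, **env_values}
# resolved["model"] == "some-env-model"
```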

Config File

Generated by loom init:

```yaml
llm:
  api_key: ""                            # or use API_KEY env var
  model: "gpt-5.4"                       # any OpenAI-compatible model
  base_url: "https://api.openai.com/v1"  # endpoint URL
  request_delay: 0.5

agent:
  max_iterations: 5
  temperature: 0.0
  max_tokens: 3000
  disable_thinking: true
  build_every_n_turns: 3                 # auto-update schema every N rounds (0 = disabled)
  listen_mode: false                     # enable via CLI --listen or set to true

chatbot:
  max_tokens: 4096
  context_rounds: 10                     # recent chat rounds as context (0 = single-turn)

persistence:
  session_dir: "./sessions"
  schemas_dir: "./schemas"               # standalone schema files for cross-session sharing

schema:
  max_depth: 5                           # hard limit on schema tree depth (-1 = unlimited, recommended ≤ 5)
  inspect_max_depth: 3                   # depth for schema overview in prompts
  inspect_show_values: false             # include values in overview (costs tokens)

templates:
  templates_dir: "./templates"           # user-defined template JSON files
  custom_templates_dir: "./templates/custom"  # user custom templates (git-ignored)
  schema_template: ""                    # auto-load on startup (e.g. "general")

helpers:
  enable_forget_handling: false          # strip forgotten phrases from recalled data
  enable_sensitive_handling: false       # add PII warning section to chatbot prompt

server:
  host: "0.0.0.0"
  port: 8000
```

Config File Resolution

| Priority | Path | Description |
|---|---|---|
| 1 | Explicit path (`-c` / `from_file("...")`) | User-specified |
| 2 | `configs/loom.yaml` | User config (created by `loom init`) |
| 3 | `configs/loom.default.yaml` | Default template (tracked by git) |
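
The fallback chain is a first-existing-path lookup. A hedged sketch of the idea — `resolve_config_path` is a hypothetical helper name, not Loom's actual code:

```python
from pathlib import Path
from typing import Optional

def resolve_config_path(explicit: Optional[str] = None) -> Optional[Path]:
    """Return the first existing config file in priority order (hypothetical helper)."""
    candidates = [explicit, "configs/loom.yaml", "configs/loom.default.yaml"]
    for cand in candidates:
        if cand is not None and Path(cand).exists():
            return Path(cand)
    return None  # nothing found; fall back to dataclass defaults
```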

Key Settings

| Config Key | Description | Default |
|---|---|---|
| `chatbot.context_rounds` | Number of recent chat rounds included as conversation context. When exceeded, schema memory is auto-injected. Set to 0 for single-turn. | 10 |
| `agent.build_every_n_turns` | Auto-update schema every N conversation rounds. Applies to `chat()`, `listen()`, and the OpenClaw plugin. Set to 0 to disable. | 3 |
| `agent.listen_mode` | Use listen mode (periodic build + every-turn recall). Enable via CLI `--listen`. | false |
| `persistence.schemas_dir` | Directory for standalone schema files shared across sessions. Backups are stored in a `backups/` subdirectory. | `./schemas` |
| `schema.max_depth` | Hard limit on schema tree depth. Paths deeper than this are rejected by `create_schema_field`. -1 = unlimited; recommended ≤ 5 to keep token costs low. | 5 |
| `schema.inspect_max_depth` | Depth for schema overview in CM prompts. | 3 |
| `helpers.enable_forget_handling` | Strip forgotten phrases from recalled data in `chat()`. | false |
| `helpers.enable_sensitive_handling` | Add a PII warning section to the chatbot prompt. | false |
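
For example, to turn on periodic schema building with a wider chat context, the relevant keys from the table can be overridden in configs/loom.yaml (the values here are illustrative, not recommendations):

```yaml
agent:
  build_every_n_turns: 5    # rebuild schema every 5 rounds
  listen_mode: true         # same effect as CLI --listen
chatbot:
  context_rounds: 20        # keep 20 recent rounds before injecting schema memory
```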

Supported LLM Providers

Any OpenAI-compatible API works:
| Provider | base_url | model example |
|---|---|---|
| OpenAI | `https://api.openai.com/v1` | `gpt-5.4` |
| OpenRouter | `https://openrouter.ai/api/v1` | `google/gemini-3.1-flash-lite-preview` |
| vLLM | `http://localhost:8000/v1` | `meta-llama/Llama-3-8b` |
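
For instance, switching to OpenRouter only requires swapping the two llm keys; the model name below is taken from the table above:

```yaml
llm:
  model: "google/gemini-3.1-flash-lite-preview"
  base_url: "https://openrouter.ai/api/v1"
  # api_key still comes from the API_KEY env var or the api_key field
```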

Loading Config in Python

```python
from loom import LoomConfig

config = LoomConfig.from_file()                      # auto-resolve + env vars
config = LoomConfig.from_file("configs/loom.yaml")   # explicit path
config = LoomConfig.from_env()                       # env vars only
config = LoomConfig(model="gpt-5.4")                 # direct construction
```