- Environment variables (`API_KEY`, `MODEL`, `BASE_URL`)
- YAML config file (`configs/loom.yaml`, created by `loom init`)
- Dataclass defaults
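The precedence above can be sketched with a hypothetical `resolve` helper (not loom's actual code; the `LLMSettings` field defaults are illustrative):

```python
import os
from dataclasses import dataclass

@dataclass
class LLMSettings:
    # Dataclass defaults: lowest priority. Values are illustrative, not loom's.
    api_key: str = ""
    model: str = "gpt-4o-mini"
    base_url: str = "https://api.openai.com/v1"

def resolve(yaml_cfg: dict) -> LLMSettings:
    """Environment variable > YAML config value > dataclass default."""
    d = LLMSettings()
    return LLMSettings(
        api_key=os.environ.get("API_KEY") or yaml_cfg.get("api_key", d.api_key),
        model=os.environ.get("MODEL") or yaml_cfg.get("model", d.model),
        base_url=os.environ.get("BASE_URL") or yaml_cfg.get("base_url", d.base_url),
    )
```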
## Config File

Generated by `loom init`:
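The generated file itself was not reproduced here; the following is an abridged sketch built only from the keys and defaults listed under Key Settings, so the real file produced by `loom init` may contain more:

```yaml
# configs/loom.yaml — abridged sketch; the actual generated file may differ
chatbot:
  context_rounds: 10
agent:
  build_every_n_turns: 3
  listen_mode: false
persistence:
  schemas_dir: ./schemas
schema:
  max_depth: 5
  inspect_max_depth: 3
helpers:
  enable_forget_handling: false
  enable_sensitive_handling: false
```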
## Config File Resolution
| Priority | Path | Description |
|---|---|---|
| 1 | Explicit path (`-c` / `from_file("...")`) | User-specified |
| 2 | `configs/loom.yaml` | User config (created by `loom init`) |
| 3 | `configs/loom.default.yaml` | Default template (tracked by git) |
## Key Settings
| Config Key | Description | Default |
|---|---|---|
| `chatbot.context_rounds` | Number of recent chat rounds included as conversation context. When exceeded, schema memory is auto-injected. Set to 0 for single-turn. | 10 |
| `agent.build_every_n_turns` | Auto-update the schema every N conversation rounds. Applies to `chat()`, `listen()`, and the OpenClaw plugin. Set to 0 to disable. | 3 |
| `agent.listen_mode` | Use listen mode (periodic build + every-turn recall). Enable via the CLI flag `--listen`. | false |
| `persistence.schemas_dir` | Directory for standalone schema files shared across sessions. Backups are stored in the `backups/` subdirectory. | `./schemas` |
| `schema.max_depth` | Hard limit on schema tree depth. Paths deeper than this are rejected by `create_schema_field`. -1 = unlimited; ≤ 5 recommended to keep token costs low. | 5 |
| `schema.inspect_max_depth` | Depth of the schema overview in CM prompts. | 3 |
| `helpers.enable_forget_handling` | Strip forgotten phrases from recalled data in `chat()`. | false |
| `helpers.enable_sensitive_handling` | Add a PII warning section to the chatbot prompt. | false |
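As an illustration of the `schema.max_depth` rule, a hypothetical depth check that counts dot-separated path segments (loom's actual validation may differ):

```python
def check_depth(path: str, max_depth: int = 5) -> bool:
    """Accept a schema path only if it is within schema.max_depth (-1 = unlimited)."""
    depth = len(path.split("."))
    return max_depth == -1 or depth <= max_depth
```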
## Supported LLM Providers

Any OpenAI-compatible API works:

| Provider | `base_url` | model example |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | gpt-5.4 |
| OpenRouter | https://openrouter.ai/api/v1 | google/gemini-3.1-flash-lite-preview |
| vLLM | http://localhost:8000/v1 | meta-llama/Llama-3-8b |
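Switching providers is then just a matter of the environment variables described above; for example, pointing at OpenRouter (the `BASE_URL` and `MODEL` values are taken from the table; the API key is a placeholder):

```shell
export BASE_URL="https://openrouter.ai/api/v1"
export MODEL="google/gemini-3.1-flash-lite-preview"
export API_KEY="..."   # your OpenRouter key
```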