
instance.yaml Reference

One YAML to rule them all — a single configuration file radiating control to every service in the constellation

The central configuration file for a Sanctum instance lives at ~/.sanctum/instance.yaml. Every instance-specific value is defined here — services, networking, paths, family members, node topology, and secrets references.

One file. Everything your house knows about itself lives here — the name, the network layout, which services run, who lives there. If this file is wrong, everything downstream is wrong. If this file is right, everything downstream has a fighting chance.

A JSON cache is auto-regenerated at ~/.sanctum/.instance.json whenever the YAML changes, via lib/yaml2json.py.

The absolute minimum to get Sanctum to acknowledge your existence:

```yaml
instance:
  slug: manoir-nepveu
  name: Manoir Nepveu
  timezone: America/Montreal
users:
  mac: bert
  vm: ubuntu
network:
  vm_ip: 10.10.10.10
  mac_bridge_ip: 10.10.10.1
  bridge_interface: bridge100
  vm_ssh_alias: openclaw
  lan_ip: 192.168.1.10
```

Top-level identity for this Sanctum deployment. Who you are. Where you are. When you are.

| Key | Type | Required | Description |
| --- | --- | --- | --- |
| `slug` | string | Yes | URL-safe identifier used in hostnames, paths, and DNS. Example: `manoir-nepveu` |
| `name` | string | Yes | Human-readable display name. Example: `Manoir Nepveu` |
| `timezone` | string | Yes | IANA timezone for scheduling and logs. Example: `America/Montreal` |

```yaml
instance:
  slug: manoir-nepveu
  name: Manoir Nepveu
  timezone: America/Montreal
```

OS-level usernames on the Mac host and the VM. Unglamorous but load-bearing — every SSH command, every file path, every LaunchAgent plist has one of these baked in. Get them wrong and nothing errors cleanly; things just silently don’t work, which is worse.

| Key | Type | Required | Description |
| --- | --- | --- | --- |
| `mac` | string | Yes | macOS username on the host machine |
| `vm` | string | Yes | Linux username inside the VM |

```yaml
users:
  mac: bert
  vm: ubuntu
```

The plumbing. Nobody admires plumbing until it breaks, and then it’s the only thing anyone can talk about. This section defines how the Mac, the VM, and the LAN find each other — a tiny private internet inside your house, inside your actual internet.

| Key | Type | Required | Description |
| --- | --- | --- | --- |
| `vm_ip` | string | Yes | Static IP of the VM on the host-only network |
| `mac_bridge_ip` | string | Yes | Mac-side IP on the bridge interface |
| `bridge_interface` | string | Yes | macOS bridge interface name (e.g., `bridge100`) |
| `vm_ssh_alias` | string | Yes | SSH config alias for the VM (e.g., `openclaw`) |
| `lan_ip` | string | Yes | Mac Mini IP on the local network |

```yaml
network:
  vm_ip: 10.10.10.10
  mac_bridge_ip: 10.10.10.1
  bridge_interface: bridge100
  vm_ssh_alias: openclaw
  lan_ip: 192.168.1.10
```

Where things live on disk. Every backup script, every log rotation, every skill loader reads from this section. Think of it as the house’s filing cabinet — except the filing cabinet is also load-bearing, so don’t move it.

| Key | Type | Required | Description |
| --- | --- | --- | --- |
| `openclaw_config` | string | Yes | Agent config directory. Default: `~/.openclaw` |
| `openclaw_skills` | string | Yes | Shared skills repo checkout |
| `logs` | string | Yes | Centralized log directory |
| `projects` | string | Yes | Projects root (Mac side) |
| `backups` | string | Yes | Backup destination directory |

```yaml
paths:
  openclaw_config: /Users/bert/.openclaw
  openclaw_skills: /Users/bert/Projects/openclaw-skills
  logs: /Users/bert/.sanctum/logs
  projects: /Users/bert/Projects
  backups: /Users/bert/.sanctum/backups
```

Here’s where it gets real. Every service your house runs — the agent gateway, the voice engine, the home automation hub, the offline library — each one gets an enabled boolean and, if it listens on a port, a port key. Flip the boolean, regenerate plists, and the service appears or vanishes. Configuration as incantation.

The enabled flag controls whether generate-plists.sh renders and loads the corresponding LaunchAgent. A disabled service isn’t just stopped — it’s unloaded. It doesn’t exist until you say it does.
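The gating idea can be sketched in a few lines. This is a hypothetical helper, not the actual internals of generate-plists.sh, but it shows the rule: a service is rendered and loaded only when its `enabled` flag is true.

```python
# Sketch of the enabled-flag gating (hypothetical; not generate-plists.sh itself):
# only services whose "enabled" flag is true make the cut.

def enabled_services(services: dict) -> list:
    """Return names of services whose 'enabled' flag is true."""
    return [name for name, cfg in services.items() if cfg.get("enabled")]

# Sample drawn from the services block on this page.
services = {
    "gateway": {"enabled": True, "port": 1977},
    "kiwix": {"enabled": True, "port": 8888},
    "signal_bridge": {"enabled": False},
}
print(enabled_services(services))  # → ['gateway', 'kiwix']
```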

| Service | Key | Default Port(s) | Description |
| --- | --- | --- | --- |
| Gateway | `gateway` | 1977 | OpenClaw/DenchClaw agent gateway |
| Home Assistant | `home_assistant` | 8123 | Home automation hub (Docker) |
| Dashboard | `dashboard` | 1111, 1111 | Command center web UI |
| Firewalla Bridge | `firewalla` | 1984 | Firewalla P2P bridge |
| VM | `vm` | — | UTM virtual machine |
| Voice Agent | `voice_agent` | — | Yoda voice interaction agent |
| XTTS | `xtts` | — | Text-to-speech server |
| MLX Server | `mlx_server` | — | Council MLX model server |
| Cloudflare Tunnel | `cloudflare` | — | Cloudflare Zero Trust tunnel |
| iCloud Filer | `icloud_filer` | — | Automatic iCloud filing daemon |
| Health Center | `health_center` | — | Health monitoring dashboard |
| Tailscale | `tailscale` | — | Mesh VPN |
| LM Studio | `lmstudio` | 1234 | Local LLM inference server |
| Watchdog | `watchdog` | — | Service health monitoring |
| Kiwix | `kiwix` | 8888 | Offline library server |
| Signal Bridge | `signal_bridge` | — | Signal messaging bridge |
| Sanctum Proxy | `proxy` | 4040 | LLM routing proxy |

Seventeen services. On a Mac Mini. Under a desk. In Quebec. Running a household.

```yaml
services:
  gateway:
    enabled: true
    port: 1977
  home_assistant:
    enabled: true
    port: 8123
  dashboard:
    enabled: true
    port: 1111
    dev_port: 1111
  firewalla:
    enabled: true
    port: 1984
  vm:
    enabled: true
  voice_agent:
    enabled: true
  xtts:
    enabled: true
  mlx_server:
    enabled: true
  cloudflare:
    enabled: true
  icloud_filer:
    enabled: true
  health_center:
    enabled: true
  tailscale:
    enabled: true
  lmstudio:
    enabled: true
    port: 1234
  watchdog:
    enabled: true
  kiwix:
    enabled: true
    port: 8888
  signal_bridge:
    enabled: false
  proxy:
    enabled: true
    port: 4040
```

From the shell library:

```sh
source ~/.sanctum/lib/config.sh

if sanctum_enabled services.gateway; then
  echo "Gateway is enabled on port $(sanctum_get services.gateway.port)"
fi
```

From TypeScript:

```ts
import { isEnabled, get } from './lib/config';

if (isEnabled('services.gateway')) {
  const port = get('services.gateway.port');
}
```

The only section of this file that’s about what isn’t here. Sanctum never stores secrets in instance.yaml directly — the config tells you where secrets live, not what they are. A treasure map that says “the chest is buried under the oak tree” without ever containing the treasure. Three secret stores, three levels of paranoia, all of them justified.

| Key | Type | Description |
| --- | --- | --- |
| `keychain_account` | string | macOS Keychain account name for stored tokens |
| `onepassword_vault` | string | 1Password vault name for credentials |
| `sops_file` | string | Path to SOPS-encrypted secrets file on the VM |

```yaml
secrets:
  keychain_account: sanctum
  onepassword_vault: Sanctum
  sops_file: /home/ubuntu/.openclaw/secrets.enc.yaml
```
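For the Keychain store, a script can shell out to macOS's `security` CLI, which looks up a generic password by account and service name and prints it to stdout with `-w`. A minimal sketch, assuming the `keychain_account` value above; the item name `github-token` is a hypothetical example, and this only runs on macOS:

```python
import subprocess

KEYCHAIN_ACCOUNT = "sanctum"  # secrets.keychain_account from instance.yaml

def keychain_cmd(service: str, account: str = KEYCHAIN_ACCOUNT) -> list:
    # `security find-generic-password -a <account> -s <service> -w` prints
    # only the stored secret to stdout (macOS).
    return ["security", "find-generic-password", "-a", account, "-s", service, "-w"]

def get_secret(service: str) -> str:
    # Raises CalledProcessError if no matching item exists in the Keychain.
    out = subprocess.run(keychain_cmd(service), capture_output=True, text=True, check=True)
    return out.stdout.rstrip("\n")
```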

Where the smart home meets reality. This section doesn’t configure Home Assistant itself — that’s configuration.yaml’s job. This section tells Sanctum what HA needs to know about the physical world: which speakers exist, which cameras are watching, which thermostat controls which zone. The kind of inventory you never think to write down until the third time you troubleshoot from memory.

| Key | Type | Description |
| --- | --- | --- |
| `sonos_speakers` | list | Known Sonos speaker IPs (required for bridge networking) |
| `cameras` | list | Camera integration entries |
| `hvac` | map | HVAC zone definitions |

```yaml
home_assistant:
  sonos_speakers:
    - 192.168.1.101
    - 192.168.1.102
    - 192.168.1.103
    - 192.168.1.104
    - 192.168.1.105
    - 192.168.1.106
    - 192.168.1.107
    - 192.168.1.108
    - 192.168.1.109
    - 192.168.1.110
  cameras:
    - name: front_door
      type: blink
  hvac:
    main_floor:
      type: ecobee
```

Your house should know who lives in it. This section defines family members for agent personalization and access control — who gets greeted by name, who can ask the agents for things, who the system considers a stranger. It’s a short list with outsized consequences.

| Key | Type | Description |
| --- | --- | --- |
| `members` | list | List of family member objects |
| `members[].name` | string | Display name |
| `members[].role` | string | Role within the household |

```yaml
family:
  members:
    - name: Bertrand
      role: admin
    - name: Partner
      role: member
```

Multi-site node topology. Each node represents a physical location running Sanctum infrastructure. Because one house wasn’t enough — you needed a distributed system. The hub runs everything; satellites run what they can; mobile devices check in when they feel like it. It’s federation for people who own property in more than one postal code.

| Key | Type | Description |
| --- | --- | --- |
| `<node_id>` | map | Node identifier (e.g., `manoir`, `chalet`) |
| `.type` | string | Node type: `hub`, `satellite`, `mobile`, or `sensor` |
| `.host` | string | LAN hostname or IP |
| `.tailscale_ip` | string | Tailscale mesh IP |
| `.tailscale_name` | string | Tailscale device name |
| `.user` | string | SSH username on this node |
| `.services` | map | Per-node service overrides (`enabled` flags) |

```yaml
nodes:
  manoir:
    type: hub
    host: 192.168.1.10
    tailscale_ip: 100.112.178.25
    tailscale_name: berts-mac-mini-m4-pro
    user: bert
    services:
      gateway:
        enabled: true
      home_assistant:
        enabled: true
      vm:
        enabled: true
  chalet:
    type: satellite
    host: chalet.local
    tailscale_ip: 100.112.203.32
    tailscale_name: berts-mac-mini-chalet
    user: bert
    services:
      gateway:
        enabled: true
      home_assistant:
        enabled: true
      vm:
        enabled: false
```
| Type | Description |
| --- | --- |
| `hub` | Primary site with full infrastructure (VM, all services) |
| `satellite` | Secondary site with reduced stack (no VM, lighter services) |
| `mobile` | Laptop or portable device |
| `sensor` | Headless monitoring device |
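A hypothetical helper for fanning out over the topology — say, "run a backup on every satellite" — would filter the nodes map by type. A sketch, using the two nodes from the example above:

```python
# Hypothetical helper: select node IDs by type from the nodes map.
NODES = {
    "manoir": {"type": "hub", "host": "192.168.1.10", "user": "bert"},
    "chalet": {"type": "satellite", "host": "chalet.local", "user": "bert"},
}

def nodes_of_type(nodes: dict, node_type: str) -> list:
    """Return node IDs matching the given type, sorted for stable output."""
    return sorted(nid for nid, cfg in nodes.items() if cfg.get("type") == node_type)

print(nodes_of_type(NODES, "satellite"))  # → ['chalet']
```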

The satellite knows it’s not the hub. It doesn’t try to be. It runs what it needs, phones home over Tailscale, and keeps the lights on when the internet goes out. Self-aware infrastructure is underrated.


An annotated example is available at ~/.sanctum/instance.yaml.example and can be used as a starting point for new instances.


The YAML config is automatically converted to a flat JSON cache at ~/.sanctum/.instance.json. This cache is used by the shell and TypeScript libraries for fast key lookups. If you edit instance.yaml manually, regenerate the cache:

```sh
python3 ~/.sanctum/lib/yaml2json.py
```
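The exact cache shape is yaml2json.py's business, but a flat cache keyed by dot-separated paths — consistent with lookups like `services.gateway.port` above — could be produced by a flattening like this (a sketch, not the real converter):

```python
def flatten(config: dict, prefix: str = "") -> dict:
    """Flatten nested mappings into dot-separated keys (sketch, not yaml2json.py)."""
    flat = {}
    for key, value in config.items():
        dotted = prefix + key
        if isinstance(value, dict):
            flat.update(flatten(value, dotted + "."))
        else:
            flat[dotted] = value
    return flat

sample = {"services": {"gateway": {"enabled": True, "port": 1977}}}
print(flatten(sample))  # → {'services.gateway.enabled': True, 'services.gateway.port': 1977}
```

A flat map trades structure for O(1) string-key lookups, which is exactly what the shell and TypeScript helpers want.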