Infrastructure atlas — auto-generated documentation
Context
A home cluster grows by accretion: a service added here, a sensor added there, an inter-container dependency added later. After a few months it becomes impossible to mentally redraw what talks to what, what depends on what, and what is still alive. Handwritten documentation does not help: you start it, then you forget it.
Constraint
Three requirements:
- Never have to maintain the doc by hand. A doc that relies on human discipline goes stale within weeks.
- Never publish a secret. .env files, absolute paths, tokens: stripped on the way out, not just kept out of the input.
- Be able to read the doc as text (Markdown, terminal) as well as a graph.
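The second requirement, scrubbing on the way out, might be sketched like this. This is a minimal illustration, not the actual implementation: field names follow the Compose format, but `scrub_service` and the `<host-path>` placeholder are hypothetical.

```python
# Hypothetical outbound scrubber: environment values and host-side
# bind paths never leave the node. Field names follow the Compose
# file format; everything else is illustrative.
def scrub_service(svc: dict) -> dict:
    out = dict(svc)
    env = out.pop("environment", None) or {}
    # export variable names only, never their values
    if isinstance(env, dict):
        out["environment_keys"] = sorted(env)
    else:  # list form: ["API_TOKEN=abc", ...]
        out["environment_keys"] = sorted(e.split("=", 1)[0] for e in env)
    # redact the host side of bind mounts, keep the container side
    vols = []
    for v in out.get("volumes", []):
        host, sep, container = v.partition(":")
        vols.append("<host-path>:" + container if sep else v)
    out["volumes"] = vols
    return out
```

The point of scrubbing at export time rather than relying on careful inputs is that a secret added later by mistake still never reaches the published report.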
Decision
A cron scanner deployed on each node (Pi4 and Pi5). It reads the docker-compose.yml files found under an agreed root, aggregates services / images / networks / volumes / healthchecks, and pushes a HostName.json to the central node.
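The per-node aggregation step could look roughly like this. It is a sketch under assumptions: the compose files are already parsed into dicts (e.g. with PyYAML, left out here), and `aggregate` is a made-up name; only the Compose field names are real.

```python
import socket

# Illustrative aggregation: given parsed compose documents (one dict
# per docker-compose.yml found under the agreed root), keep only the
# fields the atlas cares about.
def aggregate(stacks: dict) -> dict:
    node = {"host": socket.gethostname(), "stacks": {}}
    for stack_name, compose in stacks.items():
        services = {}
        for name, svc in (compose.get("services") or {}).items():
            services[name] = {
                "image": svc.get("image"),
                "networks": sorted(svc.get("networks") or []),
                "volumes": list(svc.get("volumes") or []),
                "healthcheck": "healthcheck" in svc,
            }
        node["stacks"][stack_name] = services
    return node

# The result, serialized with json.dumps, is what gets pushed
# to the central node as the per-host JSON file.
```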
On the central node, a consolidator merges the files, detects cross-references (a Pi4 service talking to a Pi5 service), and a renderer produces three outputs from the same data:
- A long Markdown with one section per node, per stack, per service.
- A compact HTML for quick reading.
- A Mermaid diagram (flowchart TB) showing the whole cluster on one page.
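The consolidator's cross-reference detection and the Mermaid output can be sketched together. Both function names and the bare-name matching are simplifications of whatever the real consolidator does; only `flowchart TB` and the subgraph-per-node layout come from the text above.

```python
# Hypothetical central-node pass: cross_refs scans each service's raw
# configuration text for the names of services hosted on other nodes
# (bare-name matching is a deliberate simplification), render_mermaid
# emits a flowchart TB with one subgraph per node.
def cross_refs(nodes: dict) -> list:
    # nodes: {node: {service: raw_config_text}}
    owner = {svc: node for node, svcs in nodes.items() for svc in svcs}
    edges = []
    for node, svcs in nodes.items():
        for svc, text in svcs.items():
            for other, other_node in owner.items():
                if other_node != node and other in text:
                    edges.append((f"{node}/{svc}", f"{other_node}/{other}"))
    return sorted(edges)

def mermaid_id(name: str) -> str:
    # Mermaid node ids cannot contain "/" or "-"
    return name.replace("/", "_").replace("-", "_")

def render_mermaid(nodes: dict, edges: list) -> str:
    lines = ["flowchart TB"]
    for node, svcs in nodes.items():
        lines.append(f"  subgraph {node}")
        for svc in svcs:
            lines.append(f"    {mermaid_id(node + '/' + svc)}[{svc}]")
        lines.append("  end")
    for src, dst in edges:
        lines.append(f"  {mermaid_id(src)} --> {mermaid_id(dst)}")
    return "\n".join(lines)
```

Because all three renderers read the same merged data, a service can never appear in the diagram but be missing from the Markdown, or vice versa.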
Cron runs at 5 a.m. The report is ready by the time I wake up.
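The schedule itself is a single crontab line per node; the script path and push mechanism below are illustrative, not the actual setup.

```shell
# Hypothetical per-node crontab entry: scan at 05:00, then push the
# per-host JSON to the central node (paths are made up for the example).
0 5 * * * /opt/atlas/scan.py && scp /tmp/$(hostname).json central:/srv/atlas/
```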
Measurement
First run: 38 healthchecks detected, 36 OK. The two flapping ones were fixed within the hour; until then they had been invisible to the naked eye. Since then the documentation has stayed up to date, by construction.
What remains
Documentation stops being a permanent backlog. When a question comes up ("is music-intel still running on Pi4?"), the answer is in this morning's report. And the Infrastructure page you can read on this site pulls from those same exports: public doc and internal doc no longer drift apart.