Eight field notes from a real repository. The receipts behind the Mech Suit Methodology.
The Mech Suit Methodology ended on a promise: when every step of your workflow produces a deterministic, stateful artifact, you can point an agent at the repository and ask it to tell you about the project. The repo can tell you about itself.
This is the evidence series for that promise. Each post takes one principle from the parent article and walks it into a specific file from a real project: Precheck, the constraint-aware planning engine I've been building on the Mech Suit workflow. These notes come from the first three rebuilds of what is now the fourth iteration. The evidence base: eleven ADRs, three agent-written retrospectives, twenty-one implementation plans, a seventeen-document system foundation, and a database of eighty-four archived runs.
Every post quotes the artifacts verbatim. The repo itself is private, but the receipts are here, in full, with file-path captions. Read them in order or jump to the one that matches the problem you're chewing on.
An agent can reconstruct the reasoning behind every decision in Precheck from the files alone. This is what it reads, and why that is the whole point of Phase 6.
Single-concern workers aren't about merge conflicts. They're about producing self-contained outputs that the next session — or the next agent — can pick up cold. The ADR is the receipt.
A five-phase linear progress bar was designed, stress-tested, and killed before a line of builder code was written. This is the Mech Suit principle of "expensive refine, cheap execute" doing its actual job.
Guardrails never caused a feature to be bounced. Every feature passed on the first try. Every epic got exactly five stories. The assess-and-retry path was dead code. Here is the five-cause diagnosis that names my own prior mistakes.
The Python-to-.NET port was not a language migration. It was a change in the trust model — from disk-as-state to database-as-truth. The only reason I recognized the abstraction was because I'd built the wrong version first.
Most planning tools treat replay as a nice-to-have. Precheck inverts it: replayability is a constraint on every other decision. Here is what that costs, what it enables, and why it is the reason the repo can tell you about itself.
When you're running three to five parallel builder pipelines, you can't afford to remember how to start each project. The runbook is the developer experience. Here is how Precheck's command surface was designed around operator cognitive load.
Binary "learning on/off" toggles hide improvement. A four-state lifecycle — new, active, decaying, inactive — makes "we fixed this" visible. This is the sharpest demonstration in the repo that you are still the pilot.
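For the curious, a four-state lifecycle like the one described can be sketched as a small state machine. The state names come from the post; the `RuleState` enum, the `advance` function, and the transition rules are illustrative assumptions, not Precheck's actual implementation:

```python
from enum import Enum, auto

class RuleState(Enum):
    """Hypothetical four-state lifecycle for a learned rule."""
    NEW = auto()       # just learned, not yet trusted
    ACTIVE = auto()    # applied on every run
    DECAYING = auto()  # hasn't fired recently; candidate for retirement
    INACTIVE = auto()  # retired -- "we fixed this" is now visible

def advance(state: RuleState, fired_recently: bool) -> RuleState:
    """Assumed transitions: promote rules that fire, demote ones that
    stop firing, and retire decaying rules that stay quiet."""
    if state is RuleState.NEW:
        return RuleState.ACTIVE if fired_recently else RuleState.NEW
    if state is RuleState.ACTIVE:
        return RuleState.ACTIVE if fired_recently else RuleState.DECAYING
    if state is RuleState.DECAYING:
        return RuleState.ACTIVE if fired_recently else RuleState.INACTIVE
    return RuleState.INACTIVE  # inactive rules stay retired
```

The point of the extra states is exactly what the teaser says: a rule that went from active to inactive is a visible record that the underlying problem was fixed, which an on/off boolean erases.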
The whole series is designed to be downloaded as a set. Save the folder, open index.html offline, walk through in order. No external dependencies, no tracking, no build step.