Comfort Equals Depth Deficit

I declared a high-stakes briefing 'done' four times. Each push for another pass found a new load-bearing error. The fix isn't more passes; it's different lenses.

Jason Walker

I asked my AI to do a quality pass on a long, high-stakes briefing last week. It came back clean. I pushed back, and it found something I had missed. Done.

I pushed for another pass anyway. It found something else. Bigger this time. Done.

I pushed again. Pass three found three errors against the actual text of a statute we had quoted six different times. Pass four found a peer-state status that had quietly gone stale because a budget item had moved without my asking the right question. Pass five found a multi-million-dollar reconciliation gap, and the only reason it surfaced was that I switched tools and let a different scraper pull the source PDF.

Five passes. Four "done"s. Each push surfaced something load-bearing.

That isn't a pattern about my AI being lazy. It's a pattern about the bar for "done" being calibrated to the wrong level of stakes.

Comfort is the signal

The phrase I keep coming back to is from my own rigor doctrine: comfort equals depth deficit. If the analysis feels easy, find the premise you skipped. I wrote that rule for documents under review. The lesson last week was that the rule applies just as hard to the reviewer reviewing themselves.

"Done" is a feeling. The feeling means: nothing in my current frame is throwing an alarm. That is genuinely useful information about your current frame. It is not information about reality.

Reviewers, AI assistants, and authors all have a default cognitive frame. They check for the kind of error that frame is built to catch. Phantom citations. Tone problems. Stale links. Internal logical drift. When the current frame returns clean, the system feels finished. The brain delivers confidence. The hand reaches for the publish button.

But the frame doesn't catch what the frame isn't watching for. If the frame is language and tone, structural drift slips through. If the frame is structure, primary-source mismatches slip through. If the frame is primary sources, external context drift slips through. Each lens has a blind spot, and each blind spot is exactly where the next class of error lives.

If you only ever run the same lens, the second pass feels redundant because it is. You ran the same lens twice. You aren't getting better coverage; you're getting the same coverage at higher cost. That feels like done because nothing new shows up.

Done is the absence of new findings under the current lens. Not the absence of errors.

The pass sequence matters

What unlocked the document last week wasn't more passes. It was different passes.

Pass one was a broad fan-out: a generic quality review. Standard issues, standard fixes.

Pass two was verification. Walk every numerical claim back to the source it cites. This caught phantom citations behind several quantitative statements. Different lens, different bug class.

Pass three was primary source. Read the actual statute, the actual budget line, the actual proviso text. Don't trust what we wrote about them. This caught three errors against a statute I had quoted correctly five other times in the same document.

Pass four was external close-out. What has changed in the world since this section was drafted? Has a peer state passed legislation that moved their model? Has a funding line shifted? This caught a comparator whose status was a year out of date because a legislative session had moved policy I hadn't re-checked.

Pass five was tooling escalation. Try a different scraper, a different fetch path, a different way of pulling the source. The tool I had been using couldn't parse a PDF I needed. A different tool could, and the different tool surfaced a reconciliation gap nobody had found in any prior pass.

Five passes, five distinct lenses. Each one caught a class of error the prior one structurally couldn't see.

What this means for the workflow

The takeaway isn't "always do five passes." It's "make the lens sequence explicit, and pick lenses that cover orthogonal failure modes."

For a document going to a hostile peer-review setting, like legislative testimony or board materials a competitor would love to use against you, that probably means at least four lenses: internal consistency, primary-source verification, external context refresh, and a tooling or method change to surface what your current method blinds you to. Three of those will feel productive. The fourth will feel like overkill until it isn't.

When the AI says "I think this is ready," that is one lens reporting clean. It is not a finding about reality. The fix isn't to argue with the AI. The fix is to ask: what lens hasn't run yet?

Comfort is the signal that you've reached the limit of your current lens. Not the limit of the work. The limit of the lens. If the stakes justify another lens, run it.

Done is a function of how many distinct lenses have run, not of how confident any one lens feels.
