The Rigor Audit: When Inherited Process Overshoots The Audience
Process rigor gets built for a specific audience. When the audience changes and the rigor stays, you pay the coordination cost twice for benefits you no longer need.
Jason Walker
State CISO, Florida
Here is a question that unlocked three weeks of coordination work in an afternoon: what are the reviewers actually for?
The project had a well-defined review protocol. Two independent reviewers working blind. A third reviewer to adjudicate disagreements. Cohen's kappa computed on a random 10 percent of the output, with a 0.70 minimum acceptance threshold before any edge could ship. It looked rigorous because it was rigorous. The protocol document ran 365 lines. The kappa computation script came with 14 unit tests.
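For readers who have not met it, the kappa gate is a small computation: observed agreement between the two reviewers, corrected for the agreement you would expect by chance from each reviewer's label frequencies. A minimal sketch, with illustrative labels (the real protocol sampled 10 percent of the output; the data here is made up):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement: product of the raters' marginal label frequencies
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

reviewer_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
reviewer_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
kappa = cohens_kappa(reviewer_1, reviewer_2)
print(kappa >= 0.70)  # the protocol's acceptance gate; here False (kappa = 0.5)
```

Note what the gate costs: it only means anything with two independent reviewers and a meaningful sample, which is exactly the coordination bill the rest of this piece is about.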
None of it was load-bearing for the audience that actually consumed the work.
The fence was built for a reason
The protocol had been designed for an academic methods committee. Dissertation-grade instrument reliability is a specific kind of rigor. Peer-reviewed journals expect it. Methods chairs require it. The dual-reviewer blinded design with kappa threshold is how you defend the construct validity of your mapping instrument when your reader is going to stress-test every inference.
That audience never materialized. The project got repositioned to serve a different consumer: an internal audit defensibility bar. Not an academic committee. An external state auditor, a regulatory reviewer, a sponsor auditor spot-checking the work trail. That audience has its own bar, and it is real, but it is not the same bar. It accepts a pinned source snapshot with a cryptographic hash, a three-sentence rationale per row, a named reviewer, and a clean-room derivation trail. No kappa required. No blinded pairs.
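The audit bar's version of rigor is cheap by comparison. Pinning a source snapshot is one hash: any later reviewer can verify the bytes you worked from are the bytes on file. A sketch using SHA-256, with a hypothetical snapshot payload (the helper name and content are illustrative, not the project's actual tooling):

```python
import hashlib

def pin_snapshot(data: bytes) -> str:
    """Return the SHA-256 digest that pins a source snapshot in the audit trail."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical snapshot: the fetched source text plus its retrieval date.
snapshot = b"control catalog export, fetched 2024-01-15"
digest = pin_snapshot(snapshot)

# The digest goes in the work trail next to the named reviewer and rationale.
# Re-hashing the stored snapshot later must reproduce it exactly.
print(len(digest))  # 64 hex characters
```

One hash per source, recorded once, verifiable forever. That is the whole mechanism the new audience needs in place of blinded pairs.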
Three weeks of reviewer recruitment. Coordination with twelve potential reviewers. Twelve seats at eight to twelve hours each, plus the recruiting and scheduling on top: call it 150 person-hours of coordination overhead. All of it aimed at satisfying an audience that was no longer the consumer.
The reverse-Chesterton problem
Chesterton's fence says: do not remove the fence until you understand why it was built. That rule is good. It prevents the overconfident from ripping out guardrails they do not understand.
There is a second version of this problem that does not get as much attention. When the reason for the fence genuinely changes, the fence often stays by default. Not because anyone still believes in it. Because removing it feels like betraying the original rigor. It feels lazy. It feels like lowering the bar.
It is not lazy. It is the disciplined move. Paying for rigor that no longer serves an audience is not a virtue. It is momentum masquerading as quality.
The rigor audit is a specific habit: when the audience for your work shifts, stop and catalog what rigor you are carrying, what audience each piece was designed for, and whether the current audience still needs it. Price the coordination cost. Compare against the replacement cost of the audience-appropriate rigor. Remove what no longer fits.
How to run the audit
Three checks, in order.
First, identify the audience shift. Be explicit. Write down who the original consumer was, who the current consumer is, and what specifically changed. "The project used to serve X. It now serves Y. X's bar was A. Y's bar is B." If you cannot state the shift in one paragraph, you are not ready to audit the rigor.
Second, inventory the rigor by origin. For each review step, approval gate, documentation requirement, and methodology constraint in your current process, ask: who was this designed to satisfy? Some were designed for X. Some were designed for Y. Some were designed for Z and neither audience ever cared. The ones that fit the current consumer stay. The ones that fit a retired audience go. The ones that never fit anyone get removed regardless.
Third, price the removal honestly. Removing rigor is not free. You are giving up a specific form of defensibility. Is the replacement form adequate? If yes, remove the old rigor and keep the replacement. If no, keep the old rigor and note the rationale so the next person does not rediscover this problem in a year. The rigor audit is not an excuse to strip process. It is a forcing function to align process with consumer.
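The inventory step is concrete enough to sketch. The shape of it: every piece of rigor gets tagged with the audience it was designed for, and the tag decides its fate. The entries, hour figures, and audience names below are illustrative, not the project's real ledger:

```python
from dataclasses import dataclass

@dataclass
class RigorItem:
    name: str
    designed_for: str        # the audience this piece was built to satisfy
    coordination_hours: float

CURRENT_AUDIENCE = "state auditor"

# Hypothetical inventory for a project like the one above.
inventory = [
    RigorItem("dual blinded reviewers", "academic methods committee", 120.0),
    RigorItem("kappa >= 0.70 gate", "academic methods committee", 20.0),
    RigorItem("pinned source hashes", "state auditor", 4.0),
    RigorItem("per-row rationale", "state auditor", 10.0),
]

keep = [i for i in inventory if i.designed_for == CURRENT_AUDIENCE]
drop = [i for i in inventory if i.designed_for != CURRENT_AUDIENCE]
saved = sum(i.coordination_hours for i in drop)
print(f"kept {len(keep)}, dropped {len(drop)}, freed {saved:.0f} hours")
```

The table is the point, not the code: once every item carries its original audience, the removal decision stops being an argument about quality and becomes a lookup. The third check, pricing the removal, then runs against the `drop` list only.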
The honest answer
Three weeks of coordination work evaporated in the afternoon I actually asked the question. The remaining delivery shipped via a lighter-weight Tier 2 process. Single reviewer. Pinned sources. Cryptographic hashes. Three-sentence rationales. Validator clean on every row.
The review protocol document is still there. The kappa script still runs. If the audience ever shifts back to an academic consumer, the rigor is available. For now it sits on a shelf where it belongs, and the project ships.
The single hardest sentence in process work is: the audience changed, and the rigor we built no longer fits. The people who built the rigor did good work. The rigor itself was sound. It just fit a different consumer. Saying so out loud is how you stop paying the coordination bill twice.
Run the rigor audit. The first question is always the same. What are the reviewers actually for?