Veteran's Lens
When Iterative Review Saturates, Expand Scope Rather Than Terminate
Convergence within a review scope does not mean ready. It means the scope is exhausted. The next move is to expand scope, not to declare done.
Jason Walker
5 min read

Run any iterative parallel-lens review long enough and you watch a curve flatten. Severity counts decline. Five lenses reviewing the same draft find fewer items each cycle. By iteration three or four you hit zero new findings. The conventional reading is that convergence equals readiness. Ship it.
The conventional reading is incomplete. Convergence within a scope tells you the scope is exhausted. It does not tell you the work is ready.
I watched this play out yesterday across a ten-iteration audit cycle on three book-length chapters. Iterations one through five drove the chapter text from four critical and fifteen high-severity findings down to zero across every lens. The iteration-five verdict declared the chapters production-ready and recommended no further iteration. The user directed five more iterations anyway. I argued, briefly, that the marginal value would sit below the regression risk on text that had already converged. I was wrong.
Iteration six pivoted scope. The same parallel-lens engine got pointed at the appendix that supported the chapters, the structural coherence between chapters, the cross-references between sections, and a sixth lens nobody had run before: an adversarial perspective explicitly modeled on the hardest questions a committee member would ask at defense. Inside two hours iteration six surfaced four new critical findings. Two of them were fabricated citations in the appendix, complete with real digital object identifiers that resolved to entirely unrelated papers. The chapter-text audit had never touched the appendix. The earlier iterations had no way to find these. Iteration eight caught two more fabrications during fix application. Iteration nine surfaced a precision mismatch in the math between two sections that had each audited cleanly in isolation but contradicted each other when read in sequence.
The expanded-scope iterations produced more substantive findings than iterations three through five combined. Not because the lenses got sharper. Because the surface got bigger.
This pattern is not specific to dissertation work. The same shape shows up in code review when the team has been iterating linting and unit tests on a feature branch for a week, severity counts hit zero, and somebody runs the integration suite for the first time. It shows up in security assessments when the team runs Nessus against an IP range, the scan comes back clean, and the social-engineering test on day one cracks the help desk in twenty minutes. It shows up in manuscript review when the copyeditors finish their pass, severity counts hit zero, and a hostile reviewer points out the structural argument never actually connects sections three and four. Same mechanism every time. The lenses you have running converge. Lenses you do not have running find what the convergent set cannot see.
Two practical implications.
First, terminate-at-convergence is the wrong stopping rule. The right stopping rule is: terminate when all reasonable scope and lens combinations have been covered, and that combined scope is also saturated. The dissertation example wasted iterations four and five running the same lenses against text those lenses had already exhausted. Iteration six should have been iteration one.
Second, enumerate scope and lens types up front. This is mechanical, not creative. List every deliverable in the artifact set. Not just the obvious ones. Appendices, figures, abstract, front matter, supporting documentation, cross-references, table of contents, the indexes. List the lens types in your standard kit: fact-check, methodology, mechanical style, cross-reference consistency, domain-technical. Then add at least one adversarial lens. Hostile reviewer simulation. Defense rehearsal. Attacker perspective. The polish lenses are convergence engines, and they are useful, but they are not the same kind of engine as a perspective that is actively trying to break what you built. Run both kinds from the start, not just the polish kind.
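If it helps to see how mechanical this is, here is a minimal Python sketch of the up-front coverage matrix. The deliverable and lens names are hypothetical placeholders drawn from the examples above, not a prescribed taxonomy; substitute your own artifact set and lens kit.

```python
from itertools import product

# Hypothetical deliverable names; substitute your own artifact set.
deliverables = [
    "chapters", "appendix", "figures", "abstract",
    "front_matter", "cross_references", "table_of_contents", "indexes",
]

# Polish / convergence lenses from the standard kit.
convergence_lenses = [
    "fact_check", "methodology", "mechanical_style",
    "cross_reference_consistency", "domain_technical",
]

# At least one surface-expansion lens, scheduled from the start.
adversarial_lenses = ["hostile_reviewer", "defense_rehearsal", "attacker_perspective"]

# The coverage matrix: every deliverable paired with every lens type.
coverage = {
    pair: "not_run"
    for pair in product(deliverables, convergence_lenses + adversarial_lenses)
}

def uncovered(coverage):
    """Pairs never run yet; each one is review surface that convergence cannot see."""
    return [pair for pair, status in coverage.items() if status == "not_run"]

print(f"{len(uncovered(coverage))} scope/lens combinations not yet covered")
```

The listing takes minutes, and every pair left marked not_run is surface the convergent set has no way to reach.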
The adversarial lens is the highest-leverage single addition. Most review cycles I have watched have plenty of convergence lenses and zero surface-expansion lenses. The teams that do this well treat hostile review as a normal review type, scheduled at the same cadence as the polish passes, not a special event reserved for milestones.
There is a reasonable objection here: surely after enough iterations, even the expanded scope saturates? Yes. The whole pattern is recursive. Each scope saturates eventually. The discipline is not "iterate forever." It is "watch for the saturation signal, then pivot scope rather than terminate." When you have covered every deliverable with every lens type including at least one adversarial perspective, and the combined scope shows zero new findings across one full cycle, you are done. That is a much stronger stopping rule than convergence within whatever scope happened to be in front of you.
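As a rough illustration of that stronger stopping rule, here is a short Python sketch of the control flow. The run_lenses(scope, lenses) hook is an assumption standing in for whatever review engine you actually run; the point is that convergence within a scope triggers a pivot, and only a clean confirmation cycle over the combined scope triggers termination.

```python
def review_until_saturated(scopes, lenses, run_lenses):
    """Pivot scope on saturation; terminate only when the combined scope
    is clean across one full cycle.

    run_lenses(scope, lenses) is an assumed hook that returns the list of
    new findings for one review cycle over that scope.
    """
    findings = []
    for scope in scopes:
        while True:
            new = run_lenses(scope, lenses)
            findings.extend(new)
            if not new:       # convergence here means this scope is exhausted,
                break         # so pivot to the next scope rather than terminate

    # Stronger stopping rule: one full confirmation cycle over every scope
    # must come back empty before the work is declared ready.
    confirmation = [f for scope in scopes for f in run_lenses(scope, lenses)]
    return findings, not confirmation
```

Terminate-at-convergence is the inner break; the rule argued for here is the outer check.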
Most review cycles I see stop at the first kind of done. They should stop at the second kind. The difference is two or three pivots of scope, surfacing the work the lenses you were running could not have found.