Measuring the Multiplier: What an AI-Augmented Strategic Plan Actually Costs

Three to seven cents on the consulting dollar is the headline. The real story is what makes the multiplier work, and what makes it collapse.

Jason Walker

5 min read

Last week a colleague asked me what the AI-augmented operating model actually costs. Not in vibes. In dollars and hours, against a defensible comparison case. I went back and measured one.

The artifact in question is a strategic plan I delivered as a state cybersecurity executive: ninety-seven thousand words, seven domain chapters, nine appendices, eight numbered versions over thirty-seven calendar days. The deliverable is the kind of strategic document that public-sector programs typically outsource to a consulting firm. The engagement runs anywhere from twelve to sixteen weeks. It involves four to eight consultants. It produces a polished bound deck and a written report at the end.

I wrote mine in-house, working with an AI-augmented operating system as the second seat. Twenty-one unique working days. Roughly one hundred focused human hours plus two hundred fifty to four hundred AI compute hours running in parallel. Total in-house cost, including a fair attribution of subscription tooling: about eight thousand seven hundred dollars.

An equivalent outsourced engagement at the rates this kind of work commands: one hundred twenty-five thousand to two hundred fifty thousand dollars.

Three to seven cents on the consulting dollar.
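The arithmetic behind that ratio is short enough to check directly. A minimal sketch using the figures above (the dollar amounts are the ones stated in this piece, not independently sourced):

```python
# Cost figures as stated in the essay
in_house_cost = 8_700      # dollars: in-house build, incl. subscription tooling
outsourced_low = 125_000   # dollars: low end of the consulting range
outsourced_high = 250_000  # dollars: high end of the consulting range

# Cents on the consulting dollar at each end of the range
ratio_high = in_house_cost / outsourced_low    # cheapest engagement -> highest ratio
ratio_low = in_house_cost / outsourced_high    # priciest engagement -> lowest ratio

print(f"{ratio_low:.3f} to {ratio_high:.3f}")  # roughly 0.035 to 0.070
```

The $8,700 build lands at about 3.5 cents against a $250,000 engagement and about 7 cents against a $125,000 one, which is where "three to seven cents on the consulting dollar" comes from.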

That ratio is the headline. The headline is also the easiest part of the story to misread. If I let you walk away with "AI is cheap," I will have wasted your time and mine.

The real claim is harder. A domain expert plus an operating system can produce consultancy-grade strategic deliverables at three to seven cents on the consulting dollar, but only if the human holds the irreducible inputs the machine cannot supply. The moment you remove those inputs, the multiplier collapses, and the artifact reverts to something closer to its consulting-equivalent cost or worse.

The irreducible inputs are four, and they are not negotiable.

The first is voice. Every audience-facing deliverable has a thesis sentence the writer either earns or fakes. In the document I produced, that sentence is one I have said out loud to other people for a year. The machine did not generate it. It picked it up, repeated it, and held it consistent across one hundred thousand words. That is the right division of labor.

The second is judgment. Strategic documents live or die on a thousand small calls. Which framing wins the legislator in the room. Which figure to defend, which to soften. Which agency to name first in a priority list, which to hold for later. Consultancies make these calls collaboratively, often with internal review committees. An AI-augmented in-house build makes them faster, but the calls themselves still belong to a human with skin in the game and the relationships to back them up. The machine can offer options. It cannot choose.

The third is relationships. Half the work in a strategic document is built on what you know about the people who will read it. The reporting chain. The political tells. The communication styles. The history of which programs the audience trusts and which they do not. None of this is research, in the formal sense. It is lived. It walks into the office every morning with the executive who is signing the deliverable. Outsourced engagements rebuild this context every time, and they rebuild it imperfectly, because the people doing the rebuilding rotate.

The fourth is the discipline to verify rather than propagate. AI-augmented drafting is fast. AI-augmented drafting is also wrong often enough that any document built without a verification gate will eventually ship a fabricated citation, a misattributed quote, or a confidently stated number that does not survive a primary-source check. The five-pass quality cycle I ran on the strategic plan, including primary-source verification through a dedicated tool and a transparent log of every open verification item, is not optional. It is the price of using the multiplier honestly.

Take any of these four away and the cost ratio breaks. A junior practitioner using the same tooling without the voice, judgment, relationships, and verification discipline will produce a document that looks like a strategic plan and reads like one until a peer reviewer reaches for the citations. A senior consultant using the same tooling without the in-house context will produce a document that does not land because the framing was researched, not lived. Neither failure mode is the tool's fault.

The honest takeaway, then, is not "outsource less." It is a more useful question to ask of your own program: where in your strategic-document production are you currently paying for capacity, and where are you paying for irreplaceable judgment? If you are paying a consultancy for capacity to write words at scale, an AI-augmented in-house build is a defensible substitution. If you are paying a consultancy for vendor-neutral framing, external benchmarking, or board-level political cover, you are buying something the in-house model does not produce, and substituting on cost alone will leave you worse off.

The strategic plan I built will be tested by something more useful than a cost ratio. It will be tested by durability. If the document continues to function as the command reference for every downstream artifact the program produces over the next two quarters, the three-to-seven percent claim earns its keep. If the program drifts back to ad-hoc production within ninety days, the artifact was expensive after all, and the multiplier was a one-time illusion.

The metric to watch is whether the next weekly briefing, the next executive update, the next external derivative pulls cleanly from the master file or starts from a blank page. The artifact has to do work, repeatedly, without supervision, for the cost claim to hold.

I will know in October.
