When You Can't Measure First: What the Military Taught Me About Risk Leadership

Risk frameworks assume you have time to measure before you act. Military operations, and real adversarial environments, prove that assumption wrong.

risk management · leadership · cybersecurity
Jason Walker

State CISO, Florida

I was a young Marine when I first understood what it felt like to make a consequential decision with incomplete information. Not in a classroom exercise. In the field, where the conditions were shifting, the communication was partial, the stakes were real, and nobody was going to pause the situation while I built a better model.

Years later, managing cybersecurity operations across dozens of state agencies and writing doctoral research on FAIR risk quantification, I kept running into the same quiet problem: every formal model of risk I had studied assumed, at some level, that you get to measure before you act.

That assumption is more dangerous than most practitioners realize.

The Measurement Trap

The canonical definitions of risk, from ISO, from NIST, from the actuarial tradition, converge on a formula: likelihood times impact. Measure the probability. Measure the consequence. Prioritize. Act.
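To make the mechanics concrete, here is a minimal sketch of measurement-first prioritization. The scenario names and numbers are invented for illustration; a real program would use calibrated estimates, not point guesses.

```python
# A minimal sketch of measurement-first prioritization:
# expected annual loss = likelihood x impact, ranked highest first.
# Scenario names and numbers are invented for illustration.

scenarios = {
    # name: (annual likelihood, impact in dollars)
    "ransomware on file servers": (0.30, 2_000_000),
    "phishing credential theft":  (0.80,   150_000),
    "insider data exfiltration":  (0.05, 5_000_000),
}

ranked = sorted(
    ((p * cost, name) for name, (p, cost) in scenarios.items()),
    reverse=True,
)

for expected_loss, name in ranked:
    print(f"${expected_loss:>10,.0f}  {name}")
```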

It is a good model. For the right environment.

The problem is that the model smuggles in a hidden premise: that your data is adequate, that your threat environment is stable enough to sample, and that time is available for analysis before commitment. In a well-instrumented enterprise with mature telemetry, those conditions hold often enough that the model earns its credibility.

But two environments consistently violate those conditions. The first is genuine adversarial pressure, the kind where a threat actor is actively changing behavior faster than your detection pipeline can characterize it. The second is operational crisis, where the variables are human, logistical, and environmental, and where the feedback loop between action and outcome is slow, noisy, and irreversible.

Both environments punish you for waiting to measure.

What Military Service Surfaces

In military aviation, you learn quickly that the conditions almost never match the training model. Weather changes. Equipment behaves outside parameters. Communication degrades. You are handed ranges, not numbers. The data you have is incomplete, lagging, and sometimes contradictory, and it stays that way. It does not improve with patience. It changes character.

What I learned is that irreducible uncertainty is not the same as unmeasured uncertainty. These are different problems, and conflating them leads to the wrong response.

Unmeasured uncertainty means you do not have the data yet. The solution is better instrumentation, faster pipelines, more rigorous methodology. Given enough time and resources, you can reduce the uncertainty and then decide.

Irreducible uncertainty means the system itself resists stable characterization. The solution is not better measurement. The solution is a decision-making posture that can operate without complete data, and a leader who can hold that posture without flinching.
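The difference shows up even in a toy simulation. Under a stationary process, more samples steadily converge on the true value. Under a drifting process, a stand-in for an adapting adversary, the estimate never settles no matter how much data you collect. The drift model below is invented purely for illustration.

```python
# Illustrative only: sampling tames unmeasured uncertainty, but a
# drifting (adversarial) process resists stable characterization.

import random
import statistics

random.seed(7)

def stationary(n):
    # Fixed behavior: the true mean is 10 and stays 10.
    return [random.gauss(10, 3) for _ in range(n)]

def drifting(n):
    # Adaptive behavior: the true mean moves as you observe it.
    return [random.gauss(10 + 0.05 * i, 3) for i in range(n)]

for label, gen in (("stationary", stationary), ("drifting", drifting)):
    for n in (50, 500, 5000):
        est = statistics.mean(gen(n))
        print(f"{label:>10} n={n:>5}: estimated mean = {est:7.2f}")
```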

The Adversarial Environment Problem

Bring this back to cybersecurity, and the parallel is exact.

A threat actor conducting a multi-stage intrusion is not going to pause while your SOC completes a formal risk assessment. The attack surface across dozens of agencies, over a hundred thousand employees, and hundreds of thousands of devices does not hold still. The threat landscape shifts faster than quarterly risk reviews can capture. When we respond to an active incident, we are not operating in measurement-first mode. We are making sequential decisions with incomplete information, each one constraining the next, all of them consequential.

The FAIR model I research is genuinely useful. Quantified risk enables better board communication, better budget justification, better prioritization of controls. I am not arguing against measurement. I am arguing against measurement as a prerequisite for action in every context.
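For readers who have not worked with FAIR directly, the core move is Monte Carlo simulation over loss event frequency and loss magnitude. The sketch below is a toy version of that idea; the distributions and parameters are placeholders, where a real analysis would use calibrated ranges elicited from experts.

```python
# A toy FAIR-style Monte Carlo: sample how many loss events occur in a
# year and how much each one costs, then summarize annualized loss.
# All parameters here are invented placeholders, not calibrated data.

import random

random.seed(1)
TRIALS = 100_000

def annual_loss():
    # Loss event frequency: a Poisson process averaging one event per
    # year, simulated via exponential inter-arrival times.
    events, t = 0, random.expovariate(1.0)
    while t < 1.0:
        events += 1
        t += random.expovariate(1.0)
    # Loss magnitude per event: lognormal, with a heavy right tail.
    return sum(random.lognormvariate(11.5, 1.0) for _ in range(events))

losses = sorted(annual_loss() for _ in range(TRIALS))
mean = sum(losses) / TRIALS
p95 = losses[int(0.95 * TRIALS)]
print(f"mean annual loss ~ ${mean:,.0f}; 95th percentile ~ ${p95:,.0f}")
```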

The practitioners I have seen struggle most in crisis are the ones who were trained exclusively in measurement-first thinking. When the model breaks down, they freeze. They go looking for more data. They wait for the uncertainty to resolve. And the adversary, or the cascading failure, or the operational emergency, moves faster than they do.

What Leadership Actually Requires

The frameworks get the mechanics right, but they underspecify the human requirement. Managing risk well, at the enterprise level and in the field, requires three things that no quantitative model provides.

First: a clear-eyed sense of what is actually irreducible versus what is just unmeasured. You have to know the difference, because the response is different. If you can get better data in time to matter, go get it. If you cannot, stop waiting and decide.

Second: calibrated confidence under incomplete information. This is a trainable skill, not a personality trait. It means being able to say "given what I know right now, this is the right call" without pretending you know more than you do and without being paralyzed by what you do not know. (One way to actually score that calibration is sketched after the third point.)

Third: the willingness to be wrong and correct fast. In adversarial and operational environments, no single decision is final. The advantage goes to whoever can iterate faster. Our incident response playbooks change after every major exercise. Flight procedures get updated after every debrief. The ability to update without ego is not a soft skill. It is an operational capability.
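Calibration, in particular, can be measured and practiced. One standard metric is the Brier score: the mean squared gap between stated confidence and what actually happened. The forecasts below are invented just to show the arithmetic.

```python
# Brier score: mean squared error between stated confidence and outcome.
# 0.0 is perfect; an always-50% guesser scores 0.25. Data is invented.

calls = [
    # (stated confidence the call was right, whether it actually was)
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.9, True), (0.5, False), (0.8, False), (0.7, True),
]

brier = sum((p - int(outcome)) ** 2 for p, outcome in calls) / len(calls)
print(f"Brier score: {brier:.3f}")
```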

The Practitioner Takeaway

If you manage risk in a high-stakes environment, build your measurement capability. Learn FAIR. Build your control frameworks. Instrument your environment. All of that matters.

And then build the other capability: the ability to lead without a complete model.

Identify the decisions in your environment that will not wait for better data. Prepare your team to make those calls with intellectual honesty about what they know and what they do not. Create after-action processes that treat fast iteration as a win, not an admission of failure.

Risk frameworks are tools. The leadership posture that knows when to use the tool and when to set it down and act anyway is the harder thing, and the more important one.

The Marines taught me that before any framework did. Not because the Corps had better models. Because when the model ran out, you still had to lead.