
When You Can't Measure Before You Move

Risk frameworks assume you have time to measure before you act. Operational experience taught me that assumption breaks exactly when leadership matters most.

Tags: risk management, leadership, cybersecurity

Jason Walker

State CISO, Florida

I learned in the Marine Corps that the most consequential decisions get made before the picture is clear. You are standing in front of a situation that is moving faster than your information, and the people around you need a direction. Nobody hands you a risk register.

I have spent years studying FAIR risk quantification as a doctoral researcher and applying it operationally as a State CISO, supporting dozens of agencies and hundreds of thousands of devices. I believe in measurement. I teach it. I defend it in front of legislative committees and cybersecurity advisory councils. Quantifying risk is not just an academic exercise. It is how you move resources rationally in an environment where everyone wants everything protected and nobody wants to fund it.

But in the field, and in every high-pressure operational moment since, I have run into the quiet assumption buried inside every risk framework I have ever used: that you have time to measure before you act.

You do not always have that time. And the moment that assumption breaks, the framework becomes furniture.

What Most People Get Wrong About Risk Measurement

Risk practitioners talk about measurement as if it is a prerequisite to management. The logic is clean: quantify likelihood, quantify impact, rank your exposures, prioritize your controls. FAIR does this elegantly. NIST SP 800-30 structures it well. The math is sound when the inputs are stable.
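The clean logic above can be made concrete with a minimal Monte Carlo sketch of a FAIR-style calculation: draw an annual event count from a loss event frequency distribution, draw a magnitude for each event, and read percentiles off the simulated annual losses. The parameters here (two events per year, a median per-event loss around $50k) are purely illustrative assumptions, not values from any real analysis.

```python
import math
import random

def poisson(rng, lam):
    """Sample a Poisson event count (Knuth's method; fine for small rates)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_annual_loss(lef_mean, loss_mu, loss_sigma, years=10_000, seed=42):
    """FAIR-style sketch: annual loss = sum of lognormal per-event losses,
    with the event count drawn from a Poisson loss event frequency."""
    rng = random.Random(seed)
    results = []
    for _ in range(years):
        events = poisson(rng, lef_mean)
        results.append(sum(rng.lognormvariate(loss_mu, loss_sigma)
                           for _ in range(events)))
    return results

# Illustrative inputs: ~2 loss events/year, median per-event loss ~$50,000.
losses = sorted(simulate_annual_loss(2.0, math.log(50_000), 1.0))
p50 = losses[len(losses) // 2]          # median annual loss
p95 = losses[int(0.95 * len(losses))]   # tail exposure for the exec briefing
```

The point of the sketch is the dependency it exposes: every number that comes out is only as good as the frequency and magnitude estimates that go in, which is exactly where the "stable inputs" assumption hides.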

The problem is that "stable inputs" is doing enormous hidden work in that sentence.

Enterprise risk models were designed for environments where you can observe, sample, and iterate. You gather historical loss data, calibrate your frequency estimates, build confidence intervals, and present a range to leadership. The model matures over multiple cycles. Governance catches up to operations. The organization learns.

That architecture assumes a relatively cooperative environment. Adversarial and operational environments do not cooperate.

In cybersecurity, the adversary actively degrades your measurement capacity. They operate in your blind spots by design. The most dangerous intrusions are precisely the ones your telemetry is not capturing yet. Your loss event frequency data is a record of what you have already detected, not what is happening. That gap between observed and actual is where campaigns live for months before anyone notices.

In military operations, the environment is dynamic and adversarial by definition. The intelligence picture is always partial. The terrain changes. The opponent adapts. The data you had five minutes ago is anchored to a prior state that may no longer exist.

Both environments share the same structural problem: the stakes are highest exactly when the measurement is least reliable.

The Real Work Is Judgment Under Incomplete Information

Here is what military service taught me about making decisions in uncertainty: good decisions under incomplete information are not the same as guesses. They are built from a different set of inputs.

Instead of quantified probability distributions, you are working with directional signals. Is the situation deteriorating or stabilizing? Is the pattern of small changes accelerating or holding? Does the briefing match what you are seeing on the ground?

Instead of a ranked risk register, you are building a mental model of the system you are managing. What it responds to, where it is fragile, what changes propagate and how fast.

Instead of waiting for the model to mature, you are acting on the best available interpretation and updating fast when new information arrives.

That is not a failure of risk management. That is what risk management actually looks like in live environments.

Back in the enterprise context, I see the same dynamic. The agencies I support do not always have the luxury of a mature measurement program before the threat arrives. A major incident does not pause while you finish your asset inventory. A ransomware pre-positioning campaign does not wait for you to complete your FAIR analysis. The decisions that matter most get made before the data is clean.

The CISO who waits for measurement confidence before acting is not being rigorous. They are being slow. And in genuinely adversarial environments, slow is a vulnerability.

What You Should Do Differently

This is not an argument against quantification. Measure everything you can. Build your FAIR models. Calibrate your estimates. Improve your data collection. All of that work creates real value and I still do it.

But separate the question "how do I measure risk well?" from the question "how do I decide well when measurement is incomplete?" Those are different skills. Most security programs invest heavily in the first and almost nothing in the second.

Here is how to build the second capability:

  • Develop pre-mortems, not just risk registers. Before a decision, ask: if this goes wrong in six months, what was the failure? That question surfaces the fragile assumptions your model is hiding.

  • Track your decision quality separately from your outcome quality. A good decision can produce a bad outcome. A bad decision can get lucky. If you only review outcomes, you are learning from noise.

  • Build your judgment on directional signals when you cannot have precise ones. Threat actors moving laterally in your environment give you behavioral signals before they give you forensic evidence. Train your analysts to act on the signal, not just document it for the report.

  • Accept that some risk will be managed through leadership, not models. The decisions that define your program are not the ones where the data was clean and the answer was obvious. They are the ones where someone had to call it with incomplete information, own the call, and adapt when reality pushed back.
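One lightweight way to operationalize the second bullet above, scoring decision quality separately from outcome quality, is a simple decision log. Everything in this sketch (the field names, the 1-5 scoring, the example entry) is an illustrative assumption, not an established standard.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class DecisionRecord:
    """One decision-log entry: the process is scored at decision time,
    the outcome only at review, and the two scores stay separate."""
    decision: str
    made_on: date
    known_at_the_time: List[str]         # signals available when the call was made
    process_score: int                   # 1-5, quality of reasoning given what was known
    outcome_score: Optional[int] = None  # 1-5, filled in at the after-action review
    review_notes: str = ""

def review(record: DecisionRecord, outcome_score: int, notes: str) -> DecisionRecord:
    # Score the outcome later without revising the process score:
    # a good decision can produce a bad outcome, and vice versa.
    record.outcome_score = outcome_score
    record.review_notes = notes
    return record

# Hypothetical entry: acting on a behavioral signal before forensic confirmation.
entry = DecisionRecord(
    decision="Isolate the affected subnet on behavioral signal alone",
    made_on=date(2024, 3, 1),
    known_at_the_time=["anomalous lateral movement", "no forensic confirmation yet"],
    process_score=4,
)
review(entry, outcome_score=2,
       notes="Isolation caused an outage; reasoning was sound given the signal.")
```

Reviewing the process scores on their own, not just the outcomes, is what lets you learn from the signal instead of the noise.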

The military taught me that before any framework did. The best leaders I served with were not the ones who waited for perfect information. They were the ones who could read a partial picture, commit to a direction, and adjust faster than the situation could outrun them.

That is the skill no framework teaches you directly. But it is the one that matters most when the environment stops cooperating.