When You Can't Measure Before You Move
Risk frameworks assume you have time to measure before you act. Caregiving taught me that assumption breaks exactly when leadership matters most.
Jason Walker
State CISO, Florida
My mother had her second stroke on a Tuesday morning. By afternoon, I was standing in a hospital hallway making decisions about her care, her housing, her medications, and her long-term rehabilitation trajectory. I had no baseline data. The neurologist was cautious with her estimates. The discharge planner was working from a checklist designed for average patients. And the clock was running.
Nobody handed me a risk register.
I've spent years studying FAIR risk quantification as a doctoral researcher and applying it operationally as the State CISO for Florida, supporting 35 agencies and over 200,000 devices. I believe in measurement. I teach it. I defend it in front of legislative committees and cybersecurity advisory councils. Quantifying risk isn't just an academic exercise - it's how you move resources rationally in an environment where everyone wants everything protected and nobody wants to fund it.
But standing in that hallway, I ran into the quiet assumption buried inside every risk framework I had ever used: that you have time to measure before you act.
You don't always have that time. And the moment that assumption breaks, the framework becomes furniture.
What Most People Get Wrong About Risk Measurement
Risk practitioners talk about measurement as if it's a prerequisite to management. The logic is clean: quantify likelihood, quantify impact, rank your exposures, prioritize your controls. FAIR does this elegantly. NIST SP 800-30 structures it well. The math is sound when the inputs are stable.
The problem is that "stable inputs" is doing enormous hidden work in that sentence.
Enterprise risk models were designed for environments where you can observe, sample, and iterate. You gather historical loss data, calibrate your frequency estimates, build confidence intervals, and present a range to leadership. The model matures over multiple cycles. Governance catches up to operations. The organization learns.
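To make the contrast concrete, here is a minimal sketch of the kind of quantification that architecture assumes - a toy Monte Carlo in the spirit of FAIR, not the full ontology. Every distribution choice and parameter below is an illustrative assumption, not a calibrated estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
years = 100_000  # simulated years

# Illustrative assumptions, not calibrated estimates:
# event frequency ~ Poisson(2/year); per-event loss ~ lognormal
# with a median of $150k.
events = rng.poisson(lam=2.0, size=years)
annual_loss = np.array([
    rng.lognormal(mean=np.log(150_000), sigma=1.0, size=n).sum()
    for n in events
])

# The clean output the architecture promises: a defensible range
# you can rank against other exposures and present to leadership.
p50, p90 = np.percentile(annual_loss, [50, 90])
print(f"median annualized loss: ${p50:,.0f}")
print(f"90th percentile:        ${p90:,.0f}")
```

The math works beautifully - as long as the frequency and magnitude inputs reflect reality.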
That architecture assumes a relatively cooperative environment. Adversarial and human environments don't cooperate.
In cybersecurity, the adversary actively degrades your measurement capacity. They operate in your blind spots by design. The most dangerous intrusions are precisely the ones your telemetry isn't capturing yet. Your loss event frequency data is a record of what you've already detected - not what's happening. That gap between observed and actual is where campaigns live for months before anyone notices.
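The size of that gap is easy to put numbers on. Assuming, purely for illustration, telemetry that catches 40 percent of intrusions:

```python
# Hypothetical numbers, for illustration only.
observed_per_year = 4        # loss events your telemetry recorded
detection_rate = 0.40        # assumed fraction of intrusions you catch

# If detection is incomplete, observed frequency understates reality.
implied_actual = observed_per_year / detection_rate
print(implied_actual)  # 10.0 - the model is calibrated to 4
```

Nothing in the model itself warns you that the input is off by a factor of two and a half.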
In caregiving, the environment is human and biological, which means it's nonlinear and fast. My mother's aphasia made it impossible to get reliable self-reported symptoms. Her partial paralysis changed her fall risk weekly as she progressed through therapy. The medication interactions the hospitalist listed as low probability materialized twice in four months. The data I had was always lagging, partial, and anchored to a prior state that no longer existed.
Both environments share the same structural problem: the stakes are highest exactly when the measurement is least reliable.
The Real Work Is Judgment Under Incomplete Information
Here's what I learned from managing my mother's care through two strokes, a rehabilitation facility, a transition home, and eighteen months of daily uncertainty: good decisions under incomplete information are not the same as guesses. They're built from a different set of inputs.
Instead of quantified probability distributions, you're working with directional signals. Is her speech better or worse than yesterday? Is the pattern of small declines accelerating or stable? Does the doctor's body language match the words?
Instead of a ranked risk register, you're building a mental model of the system you're managing - what it responds to, where it's fragile, what changes propagate and how fast.
Instead of waiting for the model to mature, you're acting on the best available interpretation and updating fast when new information arrives.
That's not a failure of risk management. That's what risk management actually looks like in live environments.
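If you tried to operationalize a directional signal at all, it would look less like a probability distribution and more like a trend check on a noisy series. A hypothetical sketch, with the function name, window size, and acceleration test of my own invention:

```python
def trend_direction(values, window=3):
    """Directional read on a noisy series. Window size, labels, and
    the acceleration test are assumptions, not a clinical standard."""
    # Smooth with a simple moving average to damp day-to-day noise.
    smoothed = [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]
    # First differences: direction of change between smoothed points.
    deltas = [b - a for a, b in zip(smoothed, smoothed[1:])]
    direction = "improving" if deltas[-1] > 0 else "declining"
    # Acceleration: is the latest change larger than the earliest one?
    accelerating = abs(deltas[-1]) > abs(deltas[0])
    return direction, accelerating

# Hypothetical daily assessment scores, higher is better:
print(trend_direction([7, 7, 6, 6, 5, 4, 2]))  # ('declining', True)
```

Crude, directional, and fast to update - which is the point.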
Back in the enterprise context, I see the same dynamic. The agencies I support don't always have the luxury of a mature measurement program before the threat arrives. A Level 4 incident doesn't pause while you finish your asset inventory. A ransomware pre-positioning campaign doesn't wait for you to complete your FAIR analysis. The decisions that matter most get made before the data is clean.
The CISO who waits for measurement confidence before acting isn't being rigorous. They're being slow. And in genuinely adversarial environments, slow is a vulnerability.
What You Should Do Differently
This isn't an argument against quantification. Measure everything you can. Build your FAIR models. Calibrate your estimates. Improve your data collection. All of that work creates real value, and I still do it.
But separate the question "how do I measure risk well?" from the question "how do I decide well when measurement is incomplete?" Those are different skills. Most security programs invest heavily in the first and almost nothing in the second.
Here's how to build the second capability:
- Develop pre-mortems, not just risk registers. Before a decision, ask: if this goes wrong in six months, what was the failure? That question surfaces the fragile assumptions your model is hiding.
- Track your decision quality separately from your outcome quality. A good decision can produce a bad outcome. A bad decision can get lucky. If you only review outcomes, you're learning from noise. One way to structure this is sketched after the list.
- Build your judgment on directional signals when you can't have precise ones. Threat actors moving laterally in your environment give you behavioral signals before they give you forensic evidence. Train your analysts to act on the signal, not just document it for the report.
- Accept that some risk will be managed through leadership, not models. The decisions that define your program aren't the ones where the data was clean and the answer was obvious. They're the ones where someone had to call it with incomplete information, own the call, and adapt when reality pushed back.
My mother is doing well today. She recovered more than the early prognosis suggested she would. That outcome wasn't the result of a risk model. It was the result of people making fast, directional, human decisions under pressure and staying close enough to the situation to adjust when something changed.
That's the skill no framework teaches you directly. But it's the one that matters most when the environment stops cooperating.