
When You Can't Measure First: What Caregiving Taught Me About Risk Leadership

Risk frameworks assume you have time to measure before you act. Caregiving — and real adversarial environments — prove that assumption wrong.

risk management · leadership · cybersecurity

Jason Walker

State CISO, Florida

My mother had her second stroke on a Tuesday. By Wednesday morning, I was in a hospital conference room with a neurologist who handed me a discharge plan built on probabilities I couldn't verify, timelines that kept shifting, and a definition of "stable" that changed every six hours. I had to make decisions about her housing, her medication schedule, her speech therapy cadence, and her daily care — all of it consequential, none of it fully measurable, and none of it waiting for me to build a better model.

I was, at the same time, managing cybersecurity operations across 35 Florida state agencies and writing doctoral research on FAIR risk quantification. I knew the frameworks. I had read the definitions. And I kept running into the same quiet problem: every formal model of risk I had studied assumed, at some level, that you get to measure before you act.

That assumption is more dangerous than most practitioners realize.

The Measurement Trap

The canonical definitions of risk — from ISO, from NIST, from the actuarial tradition — converge on a formula: likelihood times impact. Measure the probability. Measure the consequence. Prioritize. Act.
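The likelihood-times-impact idea is easy to sketch in code. The snippet below is a minimal, illustrative Monte Carlo estimate of expected annual loss; the uniform ranges and input values are hypothetical, and a real FAIR analysis would use calibrated distributions and a fuller taxonomy of loss factors.

```python
import random

def simulate_annual_loss(freq_min, freq_max, impact_min, impact_max,
                         trials=100_000, seed=42):
    """Monte Carlo sketch of likelihood-times-impact risk estimation.

    Draws an event frequency and a per-event impact from uniform ranges
    (illustrative only; calibrated analyses use distributions like
    PERT or lognormal) and returns the mean simulated annual loss.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        events = rng.uniform(freq_min, freq_max)      # expected events per year
        impact = rng.uniform(impact_min, impact_max)  # loss per event, dollars
        total += events * impact
    return total / trials

# Hypothetical inputs: 0.1-0.5 incidents/year, $50k-$500k per incident.
expected = simulate_annual_loss(0.1, 0.5, 50_000, 500_000)
```

Everything in that sketch presumes what the next paragraphs question: that the input ranges are knowable and stable enough to sample in the first place.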

It's a good model. For the right environment.

The problem is that the model smuggles in a hidden premise: that your data is adequate, that your threat environment is stable enough to sample, and that time is available for analysis before commitment. In a well-instrumented enterprise with mature telemetry, those conditions hold often enough that the model earns its credibility.

But two environments consistently violate those conditions. The first is genuine adversarial pressure, the kind where a threat actor is actively changing behavior faster than your detection pipeline can characterize it. The second is human crisis, where the variables are biological, emotional, and social, and where the feedback loop between action and outcome is slow, noisy, and irreversible.

Both environments punish you for waiting to measure.

What Caregiving Surfaces

After my mother's first stroke, I thought I could manage her care the way I manage risk at the enterprise level. I would gather data, identify the highest-probability failure modes, and allocate resources accordingly. I would build a model.

Her aphasia made communication unreliable. Her fatigue masked symptoms that turned out to be significant. Her doctors gave me ranges, not numbers. The home health aide's observations were subjective. The data was incomplete, lagging, and sometimes contradictory — and it stayed that way. It didn't improve with time. It changed character.

What I learned is that irreducible uncertainty is not the same as unmeasured uncertainty. These are different problems, and conflating them leads to the wrong response.

Unmeasured uncertainty means you don't have the data yet. The solution is better instrumentation, faster pipelines, more rigorous methodology. Given enough time and resources, you can reduce the uncertainty and then decide.

Irreducible uncertainty means the system itself resists stable characterization. The solution is not better measurement. The solution is a decision-making posture that can operate without complete data — and a leader who can hold that posture without flinching.

The Adversarial Environment Problem

Bring this back to cybersecurity, and the parallel is exact.

A threat actor conducting a multi-stage intrusion is not going to pause while your SOC completes a formal risk assessment. The attack surface across 35 agencies, 107,000 employees, and 200,000 devices does not hold still. The threat landscape shifts faster than quarterly risk reviews can capture. When we respond to an active incident, we are not operating in measurement-first mode. We are making sequential decisions with incomplete information, each one constraining the next, all of them consequential.

The FAIR model I research is genuinely useful. Quantified risk enables better board communication, better budget justification, better prioritization of controls. I am not arguing against measurement. I am arguing against measurement as a prerequisite for action in every context.

The practitioners I have seen struggle most in crisis are the ones who were trained exclusively in measurement-first thinking. When the model breaks down, they freeze. They go looking for more data. They wait for the uncertainty to resolve. And the adversary, or the stroke, or the cascading failure, moves faster than they do.

What Leadership Actually Requires

The frameworks get the mechanics right, but they underspecify the human requirement. Managing risk well, at the enterprise level and at the bedside, requires three things that no quantitative model provides.

First: a clear-eyed sense of what is actually irreducible versus what is just unmeasured. You have to know the difference, because the response is different. If you can get better data in time to matter, go get it. If you cannot, stop waiting and decide.

Second: calibrated confidence under incomplete information. This is a trainable skill, not a personality trait. It means being able to say "given what I know right now, this is the right call" without pretending you know more than you do and without being paralyzed by what you don't know.

Third: the willingness to be wrong and correct fast. In adversarial and human environments, no single decision is final. The advantage goes to whoever can iterate faster. My mother's care plan changed dozens of times across two years. Our incident response playbooks change after every major exercise. The ability to update without ego is not a soft skill. It is an operational capability.

The Practitioner Takeaway

If you manage risk in a high-stakes environment, build your measurement capability. Learn FAIR. Build your control frameworks. Instrument your environment. All of that matters.

And then build the other capability: the ability to lead without a complete model.

Identify the decisions in your environment that will not wait for better data. Prepare your team to make those calls with intellectual honesty about what they know and what they don't. Create after-action processes that treat fast iteration as a win, not an admission of failure.

Risk frameworks are tools. The leadership posture that knows when to use the tool and when to set it down and act anyway — that is the harder thing, and the more important one.

My mother recovered enough to live independently again. Not because I built the right model. Because when the model ran out, I kept making calls.