Preflight Doesn't Ask If It Will Rain
Consequence-first risk thinking isn't a FAIR innovation. Any pilot or Marine could tell you: you plan for the failure before it happens.
Jason Walker
State CISO, Florida
Here is the thing about a preflight checklist. It does not ask whether the weather will turn bad. It asks what you do when it does.
I learned to fly before I learned to run a security program. The discipline that stuck was not aerodynamics or airspace. It was this: a professional does not spend the preflight wondering if the engine will fail. A professional confirms what happens to the aircraft, and to the people in it, if the engine fails anyway. You locate the nearest suitable landing area before you need it. You brief your passengers before the emergency. You build the decision tree before your hands are shaking.
That is consequence-first thinking. And it is not a new idea. FAIR practitioners did not invent it. The military did not invent it. Aviation did not invent it. But somewhere between the cockpit and the enterprise security operations center, we collectively forgot it.
Most security programs I have encountered are obsessed with likelihood. How probable is this attack vector? What is the threat actor's capability rating? Where does this risk land on a five-by-five matrix? The matrix is seductive because it feels rigorous. It has colors. It has numbers. It implies precision. But a likelihood-first framework answers the wrong question. It asks whether something will happen instead of asking what it costs when it does.
I spent years in military aviation environments where "assume the adversary is already inside" was not a thought experiment. It was the operating doctrine. When you are planning around a capable, motivated adversary in a contested environment, you do not spend your planning cycle debating intrusion probability. You spend it engineering your response to the breach you have already assumed. What does the enemy know if they have our communications? What do they control? What do we lose? How fast can we restore? Those questions drive action. Likelihood estimates drive PowerPoint.
The shift I am describing is not subtle. It is a complete reorientation of the risk management effort. You stop asking "will this vendor get compromised?" and start asking "what breaks inside our walls if they do?" You stop building vendor scorecards and start mapping trust relationships. The vendor's security posture matters less than the access pathway the vendor owns into your environment and how far a failure can travel across that boundary before you can cut it.
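To make trust-relationship mapping concrete, here is a minimal sketch of a blast-radius walk over a trust graph. The node names and edges are hypothetical, invented purely for illustration; the point is that once you have mapped the access pathways, the question "what can a compromised vendor reach?" is answerable with a simple traversal.

```python
from collections import deque

# Hypothetical trust map. An edge means "a compromise of the source can
# reach the target through an existing access pathway."
TRUST_EDGES = {
    "payroll_vendor": ["hr_app"],
    "hr_app": ["identity_store"],
    "identity_store": ["benefits_portal", "agency_email"],
    "msp_remote_tools": ["agency_workstations"],
    "agency_workstations": ["file_shares"],
}

def blast_radius(compromised: str) -> set[str]:
    """Breadth-first walk of the trust graph: everything reachable
    from a single compromised node."""
    reached: set[str] = set()
    queue = deque([compromised])
    while queue:
        node = queue.popleft()
        for neighbor in TRUST_EDGES.get(node, []):
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

print(sorted(blast_radius("payroll_vendor")))
# ['agency_email', 'benefits_portal', 'hr_app', 'identity_store']
```

That output, everything downstream of one vendor's pathway, is the list you want in hand before the incident, not during it.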
In state government, I operate across dozens of agencies and hundreds of thousands of devices. Some of those agencies depend on third-party vendors for mission-critical functions: benefits distribution, licensing, revenue processing. The vendor community ranges from mature enterprises to small shops running decade-old software on a contract renewal cycle nobody reviewed carefully. I cannot assess my way to safety. I cannot scorecard every vendor into compliance and call the risk managed.
What I can do is ask the consequence question for each critical dependency. If this vendor goes dark tomorrow, what stops working? How long can the agency operate in degraded mode? What is the manual backup process, and when did anyone last test it? Does the contract include liability language tied to actual impact, or is it a checkbox that says the vendor will notify us within 72 hours of a breach? The answers to those questions tell me far more than any security rating service output.
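One way to make those questions operational is to record the answers as a structured entry per critical dependency, so the uncomfortable answers stay visible instead of buried in a spreadsheet. A minimal sketch, with illustrative field names and an invented vendor, not any schema in actual use:

```python
from dataclasses import dataclass

@dataclass
class CriticalDependency:
    """One entry in a consequence-first dependency register.
    Field names are illustrative, not a standard schema."""
    vendor: str
    function_lost: str              # what stops working if the vendor goes dark
    max_degraded_hours: int         # how long the agency can run in degraded mode
    manual_fallback: str            # the documented backup process
    fallback_last_tested: str       # when anyone last exercised it
    liability_tied_to_impact: bool  # contract terms beyond "notify within 72 hours"

# An invented example entry; the answers, not the scorecard, tell the story.
benefits_feed = CriticalDependency(
    vendor="example_benefits_processor",
    function_lost="monthly benefits distribution",
    max_degraded_hours=72,
    manual_fallback="paper voucher process through county offices",
    fallback_last_tested="2019-11",   # the uncomfortable answer
    liability_tied_to_impact=False,
)
```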
The reason most organizations stay trapped in likelihood-obsessed frameworks is not stupidity. It is that their leaders have never personally operated in environments where the failure was the starting assumption. If you have only ever worked in offices and boardrooms, the failure feels like a hypothetical. You optimize to prevent it because prevention feels controllable. You pour money into detection tools that catch attackers before they reach the crown jewels, and you measure success by the incidents you stopped.
But if you have sat in a cockpit running engine-out procedures in a simulator until they are reflexive, or planned a mission knowing your communications could be jammed at the worst moment, the failure is not hypothetical. It is the ground state. Prevention is still worth doing, but you never mistake prevention for the whole job. Resilience is the job.
Resilience means designing systems so that failure is containable. It means segmenting OT networks not because you believe the perimeter will hold, but because when it does not hold, you want the blast radius to be a room and not a building. It means running tabletop exercises where the scenario does not start with "the attacker is attempting to breach the perimeter" but with "the attacker is already inside and has been for 60 days." It means knowing, before you need to know, which functions can absorb a vendor failure and which ones will collapse.
The practical tool that closes the gap between likelihood-thinking and consequence-thinking is financial quantification. When I can express a risk in terms of real loss exposure rather than a red/yellow/green rating, the conversation changes entirely. A red rating on a vendor scorecard produces a finding. A finding that reads "this vendor relationship carries an expected loss exposure of $4 million per incident given current access pathways and segmentation gaps" produces a budget conversation, a contract renegotiation, and an architecture review. Those are outcomes. A color-coded chart is a document.
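For readers who want to see what sits behind a number like that, here is a minimal Monte Carlo sketch in the FAIR spirit: sample how often a loss event occurs through a vendor pathway and how much it costs when it does, then report annualized exposure. Every parameter below is invented for illustration; this is not the analysis behind the $4 million figure.

```python
import random
import statistics

random.seed(7)  # fixed seed so the illustration is reproducible

# Invented parameters for a single vendor access pathway:
P_LOSS_EVENT = 0.25               # chance of a loss event in a given year
LOSS_MU, LOSS_SIGMA = 15.0, 0.6   # lognormal magnitude; median ~$3.3M

def simulate_year() -> float:
    """One simulated year: does a loss event occur, and what does it cost?"""
    if random.random() < P_LOSS_EVENT:
        return random.lognormvariate(LOSS_MU, LOSS_SIGMA)
    return 0.0

years = sorted(simulate_year() for _ in range(100_000))
print(f"Annualized loss exposure: ${statistics.mean(years):,.0f}")
print(f"Bad year (95th pct):      ${years[94_999]:,.0f}")
```

The mean answers the budget question; the tail answers the resilience question. Both come out in dollars, which is the whole point.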
Quantification also forces honesty about dependency. You cannot assign a dollar figure to vendor failure without first admitting that you depend on the vendor in ways you cannot immediately replace. That admission is uncomfortable. Organizations that build their third-party risk management (TPRM) programs around scorecards can avoid the admission because the scorecard measures the vendor, not the dependency. Consequence-first thinking forces you to look at your own architecture and ask what you built that can survive the failure of the things you rely on.
I run a state security program with real constraints, real agencies, and real operational dependencies I did not choose. I am not eliminating those dependencies. Neither is anyone else in a complex enterprise. The goal is not a world without vendor risk. The goal is a world where vendors fail gracefully, where the failure path is designed and tested before the incident, and where the people making architectural decisions understand that the preflight question was never about the weather.
The weather is coming. You plan accordingly.