FAIR Isn't a Risk Model. It's a Translation Layer.
Why the most important thing FAIR does isn't quantify risk — it's translate technical risk language into the financial terms that move budgets.
Jason Walker
State CISO, Florida
There's a conversation that plays out in government budget hearings across the country with remarkable consistency. A CISO stands before a legislative committee and says something like: "We have a HIGH risk of ransomware attack across our agency network." A legislator looks up and asks: "And what does that mean for our budget?" The CISO, trained in technical risk assessment, explains the threat actor profile, the attack surface, the control gaps. The legislator nods politely and asks again: "But how much money are we talking about?"
This is not a communication failure. It's an architectural one. The language of technical risk assessment was not designed to answer budget questions.
FAIR — Factor Analysis of Information Risk — is my dissertation research focus, and the more I work with it, the more I believe its primary value is not what it's usually advertised to be. It's not primarily a better risk model. It's a translation layer.
What Gets Lost in the Standard Translation
Standard risk frameworks — NIST, ISO 27001, CIS Controls — produce risk ratings. HIGH, MODERATE, LOW. Critical, High, Medium, Low. These ratings are useful for practitioners managing security programs. They communicate relative severity and help sequence remediation priorities.
They are nearly useless for budget conversations.
A legislative committee member making appropriations decisions has no mental model for what HIGH risk means in dollar terms. They know what a $4M budget ask means. They know what an 80% probability of loss means. They know what an annualized cost-benefit ratio means. These are the units in which budget decisions get made, and standard risk frameworks don't produce them.
The result is a structural translation gap. Security leaders end up translating on the fly — "HIGH risk means if we get hit, it could cost millions" — without the rigor or credibility that quantitative analysis would provide. The legislator is being asked to make a financial decision based on non-financial inputs. The ask rarely lands the way it should.
FAIR Forces the Translation
FAIR works by decomposing risk into its component parts — threat event frequency, vulnerability, loss magnitude — and expressing each as a probability distribution. Threat event frequency and vulnerability combine into loss event frequency; multiplied against loss magnitude, that yields annualized loss. The output is not a rating. It's a range of annualized loss expectancy in dollar terms.
"We have a HIGH ransomware risk" becomes "our annualized loss expectancy from ransomware ranges from $1.8M to $6.4M, with a most likely outcome around $3.1M." This is a fundamentally different statement. It anchors the risk in financial terms that map directly to budget logic. A $600K investment in endpoint detection that reduces that loss expectancy by 70% is now a clear ROI calculation, not a judgment call about relative risk levels.
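A range like that is typically produced by Monte Carlo simulation over the FAIR factors. Here is a minimal sketch of that arithmetic — the triangular distribution parameters are invented for illustration, not calibrated estimates, and a real FAIR analysis would fit them to incident data or expert elicitation:

```python
import random
import statistics

random.seed(7)
N = 100_000

def simulate_ale():
    """One Monte Carlo trial of annualized loss expectancy (ALE)."""
    # random.triangular(low, high, mode); all parameters below are
    # illustrative placeholders, not real calibrated values.
    tef = random.triangular(1, 12, 4)            # threat events attempted per year
    vuln = random.triangular(0.05, 0.40, 0.15)   # P(attempt becomes a loss event)
    loss = random.triangular(200_000, 3_000_000, 900_000)  # $ per loss event
    return tef * vuln * loss

samples = sorted(simulate_ale() for _ in range(N))

def pct(p):
    """Empirical percentile of the simulated ALE distribution."""
    return samples[int(p * N)]

print(f"ALE 10th percentile: ${pct(0.10):,.0f}")
print(f"ALE median:          ${pct(0.50):,.0f}")
print(f"ALE 90th percentile: ${pct(0.90):,.0f}")

# The ROI framing from the text: a control that cuts loss expectancy
# by 70%, weighed against a $600K annual cost.
reduction = 0.70 * statistics.mean(samples)
print(f"Expected annual reduction: ${reduction:,.0f} vs. $600K control cost")
```

The point of the simulation is the shape of the output: a dollar range with percentiles, which is exactly the unit a budget conversation runs on.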
The translation isn't just cosmetic. It changes the quality of the decision. A budget committee choosing between a $600K security investment and a $600K road repair project can now make that comparison on comparable terms — expected financial impact — rather than making a trust-based bet on which department head's risk language to believe.
The Challenge in Public Sector Environments
My dissertation applies FAIR specifically to state government cybersecurity, and the application surfaces a challenge that private sector practitioners rarely face: sparse loss data.
FAIR works best when organizations have historical data on incident frequency and financial impact. Private sector firms — particularly in financial services and healthcare — have increasingly rich incident databases to draw on. State agencies, with smaller attack surfaces and weaker incentives to report incidents publicly, have thinner records.
This isn't a fatal problem, but it does change the methodology. Public sector FAIR analysis has to rely more heavily on threat intelligence data (CISA, MS-ISAC, sector-specific ISACs), expert elicitation with practitioners who have institutional knowledge of attack patterns, and reference data from comparable organizations in adjacent sectors.
The resulting distributions have wider confidence intervals than private sector analyses. But wider intervals are still enormously more useful than categorical labels. A range of $500K to $8M loss expectancy, even with significant uncertainty, still produces better budget conversations than "HIGH risk" — because it anchors the conversation in the right units.
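One common way to turn an expert elicitation into a usable distribution is to treat the expert's calibrated 90% interval as the 5th and 95th percentiles of a lognormal. A sketch of that technique, using the $500K–$8M range above as a hypothetical elicitation (not real Florida data):

```python
import math
import random

random.seed(11)

def lognormal_from_90ci(lo, hi):
    """Fit a lognormal so that [lo, hi] is its 90% interval.

    ln(lo) and ln(hi) become the 5th/95th percentiles, which sit
    1.645 standard deviations either side of the mean in log space.
    """
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)
    return mu, sigma

# Hypothetical elicitation: "90% confident annual loss from this
# scenario falls between $500K and $8M."
mu, sigma = lognormal_from_90ci(500_000, 8_000_000)

samples = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))
p5, median, p95 = samples[5_000], samples[50_000], samples[95_000]
print(f"median ${median:,.0f}, 90% interval ${p5:,.0f} to ${p95:,.0f}")
```

The interval is wide, and the lognormal's long right tail honestly reflects that uncertainty — but the output is still in dollars, which is what the budget conversation needs.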
What Changes When You Lead With Numbers
I've been using FAIR-informed framing in Florida budget conversations for the past year, and the difference in stakeholder engagement is observable. The questions change. Instead of "is this really that serious?" — a question that invites a debate about risk severity — the question becomes "what's driving the variance in that loss range?" — a question that invites a conversation about which controls would reduce exposure most efficiently.
That shift matters enormously for public sector leaders. We operate in an environment where cybersecurity competes with highly visible services — road maintenance, social services, education — for limited appropriations. The competition is fundamentally financial. Framing our asks in financial terms is not a stylistic choice. It's a strategic necessity.
The CAC recommendations analysis I presented last year identified 81 tracked recommendations for improving Florida's statewide cybersecurity posture. Twenty-six of those recommendations were blocked by a single constraint: dedicated funding. Not by technical barriers, not by policy obstacles, but by the simple absence of allocated dollars. FAIR doesn't magically produce those dollars. But it changes the conversation that produces them.
When you can walk into a budget hearing and say "this specific control investment is expected to reduce annualized loss exposure by $X, with 80% confidence, over a 3-year horizon" — and back that up with documented methodology — you are no longer asking a legislator to trust your technical judgment. You are giving them a financial instrument to evaluate. That's the conversation that moves budgets.
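The "80% confidence" in a statement like that has a concrete meaning: it is a percentile of a simulated savings distribution — the dollar figure the reduction exceeds in 80% of trials. A toy sketch, assuming purely for illustration a control that halves vulnerability (the distributions and the effect size are invented, not measured):

```python
import random

random.seed(3)
N = 100_000

savings = []
for _ in range(N):
    # One paired trial: same threat activity and loss magnitude,
    # with and without the control. Parameters are illustrative.
    tef = random.triangular(1, 12, 4)             # attempts per year
    vuln = random.triangular(0.05, 0.40, 0.15)    # P(attempt -> loss)
    loss = random.triangular(200_000, 3_000_000, 900_000)  # $ per event
    before = tef * vuln * loss
    after = tef * (vuln * 0.5) * loss  # assumed effect: vulnerability halved
    savings.append(before - after)

savings.sort()
floor_80 = savings[int(0.20 * N)]  # exceeded in 80% of trials
print(f"With 80% confidence, annual savings exceed ${floor_80:,.0f}")
```

Pairing each trial isolates the control's effect from the background variance, so the confidence bound describes the investment, not the noise.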
The Broader Implication
FAIR's value proposition is usually framed around precision: quantitative risk analysis is more precise than qualitative ratings. That's true but secondary. The more important value is translation: quantitative analysis speaks the language of financial decision-making in a way that qualitative ratings do not.
Every organization has a translation gap between the people who understand the risk and the people who control the budget. FAIR closes that gap by forcing both sides to work in the same units. The security practitioner has to be rigorous about loss estimates. The budget decision-maker has to engage with probability and expected value.
That's not just a better risk management practice. It's a better governance practice. And in public sector environments where security investment has historically been underfunded despite genuine risk — that difference in governance quality has real consequences for the people agencies serve.