Your Attack Surface Is a FAIR Input, Not a To-Do List
How State CISOs can feed continuous ASM telemetry into FAIR's Loss Event Frequency component to quantify risk across 35 agencies in real time.
Jason Walker
State CISO, Florida
Picture this: Agency A just spun up a public-facing S3 bucket misconfigured to allow anonymous reads. Agency B's dev team deployed a containerized app last Tuesday with an exposed admin panel, no auth, port 8443 open to the world. Agency C hasn't patched a known exploitable vulnerability in their VPN concentrator for 61 days. You have visibility into all three because your ASM tooling flagged them this morning.
Now the question: what do you do with that information?
Most security programs answer that question the same way. They open a ticket, assign a severity, set a due date, and call it a remediation workflow. The attack surface finding becomes a hygiene item. A checkbox. A metric in a dashboard that shows green when the count goes down.
That framing is wrong, and it costs you something important: the ability to answer the question your governor, your CFO, and your agency heads actually care about. Not "how many open findings do you have?" but "what is the organization's exposure to loss right now, and how is it changing?"
FAIR gives you the vocabulary to answer that question. But most people who implement FAIR treat it as a modeling exercise, something you do quarterly with a spreadsheet, a few subject matter experts in a room, and a Monte Carlo simulation you run once. That approach produces a snapshot, not a signal. And in a multi-agency environment like Florida's, where 35 agencies span the full spectrum from sophisticated cloud-native shops to teams running on-premises infrastructure older than some of my analysts, a quarterly snapshot is already stale before you finish presenting it.
Here is the core idea: your continuously discovered external attack surface data is a real-time proxy for Threat Event Frequency.
Let me explain what I mean. In FAIR's ontology, Loss Event Frequency is a function of two components: Threat Event Frequency (how often a threat agent acts against an asset) and Vulnerability (the probability that the action results in a loss). Most FAIR practitioners treat Threat Event Frequency as a judgment call informed by threat intelligence feeds, industry data, and expert elicitation. That works for a static enterprise. It breaks down when you're governing 35 agencies with wildly inconsistent cloud maturity and a continuous deployment cadence that changes the exposure landscape daily.
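The arithmetic itself is simple; what is hard is keeping the inputs current. Here is a minimal Monte Carlo sketch of the decomposition in Python. Every range in it is an illustrative placeholder, not a calibrated estimate, and the triangular distributions are just a stand-in for whatever distributions your FAIR tooling uses.

```python
import random

def simulate_lef(tef_range, vuln_range, trials=10_000):
    """Minimal FAIR sketch: Loss Event Frequency = TEF x Vulnerability.

    tef_range and vuln_range are (min, mode, max) tuples. TEF is threat
    events per year; Vulnerability is the probability that a threat event
    becomes a loss event. Triangular distributions are a placeholder.
    """
    samples = []
    for _ in range(trials):
        tef = random.triangular(tef_range[0], tef_range[2], tef_range[1])
        vuln = random.triangular(vuln_range[0], vuln_range[2], vuln_range[1])
        samples.append(tef * vuln)
    samples.sort()
    return {
        "p10": samples[int(0.10 * trials)],
        "median": samples[trials // 2],
        "p90": samples[int(0.90 * trials)],
    }

# Illustrative ranges only: an exposed VPN concentrator at a small agency.
print(simulate_lef(tef_range=(2, 6, 20), vuln_range=(0.05, 0.25, 0.60)))
```

The hand-set ranges in that example are exactly the inputs that go stale between quarterly workshops.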
But here is what ASM tooling actually measures: the external-facing surface area that a threat agent could act against. Every exposed port, every unauthenticated service, every misconfigured storage bucket, every certificate with a weak cipher suite is a surface where a threat action becomes probabilistically possible. When you aggregate that data at the asset-class level across agencies, you are not looking at a to-do list. You are looking at a continuously updating estimate of the conditions under which threat events are likely to occur.
That is Threat Event Frequency data. You just haven't been plugging it into the right model.
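To show the shape of that aggregation, here is a minimal sketch. The finding tuples and category names are hypothetical; the point is that every raw ASM finding gets normalized to an agency, an asset class, and an exposure category before it ever touches the risk model.

```python
from collections import Counter

# Hypothetical normalized ASM findings: (agency, fair_asset_class, exposure_category)
findings = [
    ("Agency A", "cloud-storage",      "anonymous-read"),
    ("Agency B", "container-workload", "unauthenticated-admin-interface"),
    ("Agency C", "network-appliance",  "unpatched-known-exploited"),
    ("Agency C", "network-appliance",  "weak-tls-configuration"),
]

# Aggregate surface area per agency and asset class. These counts, tracked
# over time, are the continuously updating signal behind the TEF estimate.
surface = Counter((agency, asset_class) for agency, asset_class, _ in findings)
for (agency, asset_class), exposed in sorted(surface.items()):
    print(f"{agency:10s} {asset_class:20s} exposed findings: {exposed}")
```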
The operationalization works like this. You establish baseline exposure profiles by agency tier. Tier one agencies, those with mature cloud governance, automated remediation pipelines, and dedicated security staff, maintain a predictable surface area. Deviation from baseline is your signal. A spike in exposed management interfaces or a new external-facing service in a tier-one agency triggers a FAIR re-run for that asset class because the underlying frequency assumption has changed.
Tier three agencies, the ones with the oldest infrastructure and the smallest IT teams, have a higher baseline surface area. That is not a failure state you're trying to eliminate overnight. It's a risk condition you need to quantify honestly so you can make rational resource allocation decisions. When a tier-three agency's exposed attack surface expands by 30 percent over 45 days because they migrated a workload to cloud without coordinating with central security, that expansion carries a real probabilistic weight on Loss Event Frequency. Model it.
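One minimal way to encode that baseline-and-deviation logic is sketched below. The tier baselines and thresholds are invented numbers; the mechanism is what matters: deviation beyond the threshold invalidates the current frequency assumption and triggers a FAIR re-run for that asset class.

```python
# Hypothetical baselines: expected externally exposed services per agency tier,
# plus the relative deviation that invalidates the current TEF assumption.
BASELINES = {
    "tier-1": {"expected_exposed": 40,  "rerun_threshold": 0.10},
    "tier-2": {"expected_exposed": 90,  "rerun_threshold": 0.20},
    "tier-3": {"expected_exposed": 180, "rerun_threshold": 0.30},
}

def needs_fair_rerun(tier: str, observed_exposed: int) -> bool:
    """Flag a FAIR re-run when observed surface area deviates from the
    tier baseline by more than the configured threshold."""
    baseline = BASELINES[tier]
    deviation = (observed_exposed - baseline["expected_exposed"]) / baseline["expected_exposed"]
    return deviation > baseline["rerun_threshold"]

# A tier-three agency whose exposed surface grew roughly 30 percent after an
# uncoordinated cloud migration crosses its threshold.
print(needs_fair_rerun("tier-3", observed_exposed=235))  # True
```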
The governance implication is significant. When you can show agency leadership that their failure to remediate three critical ASM findings over 90 days has shifted their modeled annualized loss exposure by X dollars, you have a different conversation than "you have three overdue tickets." One conversation gets you a priority ranking on a remediation list. The other gets you a budget request approved or a risk acceptance decision documented at the agency head level, where it belongs.
I've been developing this approach as part of my doctoral research on FAIR operationalization in public sector environments, and the friction point is always the same: organizations treat FAIR as a communication tool rather than a feedback loop. They model risk to explain decisions already made. The more powerful use is to close the loop between continuous discovery data and the probabilistic inputs that drive your risk model, so that changes in the environment surface immediately as changes in quantified risk.
The practical starting point is not a full FAIR implementation. It is three steps:
- Map your ASM data taxonomy to FAIR asset classes. Every finding your ASM tool generates should be classifiable by asset type, exposure category, and agency. This is the data normalization step most teams skip, and skipping it means your findings live in a different namespace than your risk model.
- Establish agency-level exposure baselines and assign Threat Event Frequency ranges to deviation thresholds. You do not need precision. You need defensible ranges that move when the surface area moves. A 20 percent expansion in external exposure on a payment-adjacent system should shift your TEF estimate upward. Model that relationship explicitly (see the sketch after this list).
- Build a reporting cadence that surfaces the FAIR delta, not just finding counts. When you brief your CAC or your agency CISOs, the question should not be "how many findings closed this quarter?" It should be "how did quantified risk exposure change, and why?"
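Here is a sketch of steps two and three under the same caveat as before: the linear scaling and the dollar figures are placeholders you would replace with calibrated, per-asset-class estimates.

```python
def shifted_tef_range(base_tef, deviation):
    """Shift a (min, mode, max) TEF range upward as external exposure grows
    past baseline. Linear scaling is a placeholder; calibrate the relationship
    per asset class with your own incident data and expert estimates."""
    scale = 1.0 + max(deviation, 0.0)
    return tuple(round(value * scale, 2) for value in base_tef)

def fair_delta_line(prior_ale, current_ale):
    """Brief the change in annualized loss exposure, not the finding count."""
    change = current_ale - prior_ale
    direction = "increased" if change >= 0 else "decreased"
    return f"Quantified loss exposure {direction} by ${abs(change):,.0f} this period."

# A 20 percent exposure expansion on a payment-adjacent system widens TEF...
print(shifted_tef_range(base_tef=(2, 6, 20), deviation=0.20))   # (2.4, 7.2, 24.0)
# ...and the brief surfaces the resulting FAIR delta, not a ticket count.
print(fair_delta_line(prior_ale=1_400_000, current_ale=1_750_000))
```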
The broader point is this: attack surface management was sold as a visibility tool. Visibility is table stakes. The return on investment comes when you treat that visibility as a continuous feed into your risk quantification infrastructure. In a heterogeneous multi-agency portfolio, that connection between discovery data and probabilistic risk modeling is the only way to make rational, defensible decisions about where to apply finite resources.
Your ASM tool is generating FAIR inputs every hour. Start using them that way.