CVSS Is Not a Risk Score (And Building Remediation Queues Like It Is Will Get You Burned)
A State CISO and FAIR researcher explains why CVSS measures threat capability, not risk, and what that means for prioritizing vulnerabilities across 35 agencies.
Jason Walker
State CISO, Florida
Picture this scenario: your vulnerability management platform surfaces 47,000 open findings across 35 state agencies. Your team sorts by CVSS score, descending. Critical findings go to the top. The remediation queue gets pushed to agency security officers. Everyone nods. Process complete.
You just made a significant prioritization error, and the math will not forgive you later.
What Most People Get Wrong
CVSS is treated like a risk score. It is not. It was never designed to be. CVSS is a threat capability descriptor. It answers a narrow question: how exploitable is this vulnerability in isolation, given certain attacker conditions? It scores attack vector, attack complexity, privileges required, user interaction, and scope. All of that describes the threat side of the equation. It tells you almost nothing about what you stand to lose if exploitation succeeds.
The distinction matters enormously. Risk, in any serious quantitative model, is a function of two things: the probability of a loss event and the magnitude of that loss. CVSS gives you a partial proxy for the first variable and ignores the second entirely. When you sort your remediation queue by CVSS score, you are sorting by attacker convenience. You are not sorting by organizational impact.
For a single agency managing homogeneous assets, that gap might be tolerable. You could argue that a high CVSS score on any system is worth attention. But when you are the State CISO managing 35 agencies with radically different missions, asset profiles, and downstream consequences, treating CVSS as the prioritization signal is not just imprecise. It is backwards.
The Core Insight: Asset Value Heterogeneity Inverts the Queue
Run FAIR analysis against a realistic state government environment and you hit a problem immediately: asset value is not evenly distributed across agencies, and loss magnitude varies by orders of magnitude depending on which system gets compromised.
A CVSS 9.8 remote code execution vulnerability on a public-facing web server hosting a tourism brochure is not the same risk as a CVSS 7.2 finding on an authentication system sitting upstream of the Department of Children and Families case management database. CVSS tells you the first is more dangerous. FAIR tells you the second is more expensive.
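That pair of findings makes the arithmetic easy to sketch. In the toy comparison below, every probability and dollar figure is invented for illustration; only the CVSS scores come from the example above:

```python
# Toy expected-loss comparison of the two findings described above.
# Probabilities and loss magnitudes are invented for illustration;
# real FAIR estimates would use calibrated ranges, not point values.

findings = [
    # (description, cvss, annual loss-event probability, loss magnitude $)
    ("CVSS 9.8 RCE, tourism web server",    9.8, 0.25,     75_000),
    ("CVSS 7.2 flaw, upstream auth system", 7.2, 0.125, 12_000_000),
]

expected_loss = {desc: p * mag for desc, _cvss, p, mag in findings}

for desc, loss in expected_loss.items():
    print(f"{desc}: expected annual loss ${loss:,.0f}")

# Sorting by CVSS puts the tourism server first; sorting by expected
# loss puts the authentication finding first, by almost two orders of
# magnitude (18,750 vs 1,500,000 under these invented numbers).
```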
In FAIR terms, the relevant variables are Loss Event Frequency (how often a threat agent will act against this asset, at what capability level) and Loss Magnitude (what does the loss actually cost, across primary and secondary loss categories). When you work through that model honestly, two things happen. First, the loss magnitude calculation forces you to think about asset value in a way that CVSS never prompts. Second, you start identifying concentrations of risk that CVSS scoring completely obscures.
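That decomposition can be sketched in a few lines. This is a simplified point-estimate version, and the attribute names are my shorthand for the FAIR factors, not an official implementation; real FAIR analyses work with calibrated ranges and Monte Carlo simulation rather than single numbers:

```python
from dataclasses import dataclass

@dataclass
class FairEstimate:
    """Point-estimate sketch of the FAIR factors for one asset/finding."""
    threat_event_frequency: float  # expected attack attempts per year
    vulnerability: float           # probability an attempt succeeds, 0-1
    primary_loss: float            # direct cost per loss event, dollars
    secondary_loss: float          # fines, clawbacks, reputational harm

    @property
    def loss_event_frequency(self) -> float:
        # LEF: how often attempts turn into actual loss events
        return self.threat_event_frequency * self.vulnerability

    @property
    def loss_magnitude(self) -> float:
        # LM: primary plus secondary loss per event
        return self.primary_loss + self.secondary_loss

    @property
    def annualized_loss_exposure(self) -> float:
        # The quantity a risk-weighted remediation queue sorts on
        return self.loss_event_frequency * self.loss_magnitude
```

Notice that CVSS-like information only ever enters through `vulnerability` (and perhaps `threat_event_frequency`); the loss side comes entirely from knowing the asset.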
Across Florida's 35 agencies, the asset value landscape is not uniform. A system at the Agency for Health Care Administration that touches Medicaid payment data carries a loss magnitude profile that includes regulatory exposure, federal audit risk, beneficiary harm, and remediation costs that compound quickly. A similar technical vulnerability on a less sensitive system does not. CVSS scores both the same if the technical characteristics match. FAIR does not.
What I See in Practice
We track roughly 202,000 devices across state government. The agencies behind that inventory are not interchangeable. Some process protected health information. Some run infrastructure that supports law enforcement. Some handle financial transactions that, if disrupted, stop payments to vulnerable populations. A few are primarily administrative with limited sensitive data exposure.
When I look at our vulnerability data and apply loss magnitude weighting rather than raw CVSS scoring, the remediation queue changes substantially. Findings that sit at CVSS 6 or 7 on systems with concentrated asset value and high secondary loss potential (regulatory fines, federal funding clawbacks, reputational harm that affects public trust in state services) climb past critical CVSS findings on isolated, low-value infrastructure. That inversion is not an edge case. It is a regular feature of any environment with real asset heterogeneity.
The agencies most at risk are not always the ones shouting the loudest about their critical finding counts. Sometimes they are the ones with moderate CVSS distributions sitting on systems where the loss magnitude is catastrophic if exploitation succeeds.
The Measurement Problem
The pushback I hear most often is that FAIR-based modeling requires more data than most organizations have. That is partially true and mostly irrelevant. You do not need actuarial precision to make better decisions than CVSS-sorted queues produce. You need three things.
First, a rough asset classification that captures sensitivity tier and mission criticality for your most important systems. You do not need this for every device. You need it for the systems where getting the prioritization wrong is expensive.
Second, a basic loss magnitude model. In state government, that means identifying which systems carry regulatory exposure (federal requirements like HIPAA, FedRAMP dependencies, state statute obligations), which systems have downstream blast radius that crosses agency boundaries, and which systems, if compromised, produce secondary losses that are hard to bound.
Third, a willingness to separate threat capability scoring (CVSS) from risk prioritization (FAIR-based weighting). Use CVSS as an input to the threat frequency estimate. Do not use it as the final answer.
What You Should Do Differently
Stop building remediation queues from CVSS scores alone. CVSS belongs in your workflow as one data point, sitting inside a broader model. Here is a practical starting point for a multi-agency environment.
Tier your assets by loss magnitude potential before you look at vulnerability scores. High-magnitude systems get a multiplier applied to their remediation priority. The multiplier reflects the real cost of a loss event, not the theoretical elegance of the attack vector.
When you brief leadership on vulnerability posture, frame findings by risk exposure, not finding count. Telling a legislative oversight body that you have 12,000 critical CVSS findings communicates less than telling them that three agencies have high-magnitude systems carrying unacceptable loss exposure given current control gaps.
Run a quarterly sanity check: pull the top 20 findings by CVSS and the top 20 by FAIR-weighted risk score. If those lists look identical, your asset classification is probably not granular enough. If they look meaningfully different, you are doing it right.
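The sanity check itself is a few lines of set arithmetic. The field names (`id`, `cvss`, `fair_risk`) are hypothetical; substitute whatever your vulnerability platform exports:

```python
def top_n_ids(findings, key, n=20):
    """IDs of the top-n findings under the given sort key."""
    return {f["id"] for f in sorted(findings, key=key, reverse=True)[:n]}

def queue_overlap(findings, n=20):
    """Fraction of the top-n shared between CVSS and FAIR-weighted sorts.

    1.0 means the two lists are identical (classification may be too
    coarse); values well below 1.0 mean loss magnitude is actually
    reordering the queue.
    """
    by_cvss = top_n_ids(findings, key=lambda f: f["cvss"], n=n)
    by_risk = top_n_ids(findings, key=lambda f: f["fair_risk"], n=n)
    return len(by_cvss & by_risk) / n
```

Tracking that overlap number quarter over quarter also gives leadership a simple indicator of whether the risk model is earning its keep.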
CVSS is a useful tool. It is not a risk quantification framework. Treating it as one is a category error, and in a multi-agency environment, category errors in prioritization have consequences that compound across an entire government's attack surface. Build the model. Weight the assets. Fix the queue.