Flying Blind in a Vulnerability Storm: What Instrument Pilots Know That CISOs Need to Learn
When AI floods your vulnerability pipeline with noise, the leadership failure isn't moving too slowly. It's moving without a decision framework.
Jason Walker
State CISO, Florida
Picture a pilot who has been flying in visual conditions for years. Clear skies, horizon visible, gut instinct reliable. Then one day the clouds close in, the horizon disappears, and every sensation in the seat of his pants tells him the plane is banking left. He banks right to compensate. The plane was actually level. Now it isn't. Six minutes later, investigators find the wreckage.
This is spatial disorientation. And right now, a version of it is killing security programs across the country.
The AI vulnerability moment we are living through is not primarily a patching problem. It is a decision framework problem. The tools that find vulnerabilities have outpaced the tools that help leaders decide which ones matter. When the signal-to-noise ratio collapses, experienced professionals start trusting the wrong inputs, the ones that feel urgent rather than the ones that quantify risk. That instinct gets people killed, in aviation and in enterprise security.
Let me explain what changed.
For years, vulnerability management ran on a model where human researchers found real bugs at a pace that, while never comfortable, was at least legible. You had a backlog. You had a burn rate. You could calculate rough coverage. The math was bad, but it was followable math.
AI-assisted vulnerability discovery broke that model. Not gradually. Fast. The research community watched it happen over the past two years, but the moment it became everyone's problem was when mainstream coverage started. Now board members, audit committees, and legal counsel are asking questions that security teams don't have clean answers to, because the underlying reality is genuinely new and the instrumentation hasn't caught up.
Here is what genuinely new looks like in practice. A state enterprise managing hundreds of thousands of devices and dozens of agencies' worth of software does not face a bigger version of last year's vulnerability backlog. It faces a different kind of problem. The number of findings is increasing. The quality of findings is also increasing. The researchers who used to burn four days to chain two bugs into a working exploit can now do it in hours. That compression of time matters because the window between a finding being discovered and it being weaponized is shrinking, and the window between a finding entering your pipeline and your team being able to contextualize it is not shrinking at the same rate.
That gap is where programs fail.
The Marines call it the fog of war. Carl von Clausewitz called it friction. Pilots call it IMC, instrument meteorological conditions. Different domains, same problem: the information you need to act correctly is degraded, delayed, or contradicted by noise, and you have to act anyway. The discipline that survives these conditions is not heroism. It is methodology.
A pilot in IMC does not fly by feel. She trusts calibrated instruments. She has a framework for which instruments to scan in which order, how to interpret conflicting readings, and when to declare an emergency versus when to hold heading and work the problem. She trains that framework so deeply that it fires under stress without deliberation. The worst thing she can do is improvise.
Security leaders managing an AI-accelerated vulnerability backlog need the equivalent of instrument training. And most of us do not have it, because we built our instincts in an environment that no longer exists.
What does a vulnerability instrument panel actually look like? Three things.
First, asset criticality quantified before findings arrive. Not after. If you do not know which systems are mission-critical, which are perimeter-facing, which process sensitive data, you cannot triage under pressure. You will triage by whoever is loudest in the meeting. That is the security equivalent of flying by feel.
Second, a decision rule for the signal-to-noise problem. This is the one most programs lack. When AI-generated findings flood your pipeline, some are brilliant and real. Some are hallucinated garbage. You need a documented protocol for how your team evaluates them, what constitutes sufficient validation, and who has the authority to escalate versus park a finding. Without that rule, every new finding becomes a fresh argument, and arguments burn time you don't have.
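A documented triage rule like this can be small enough to write down in a page. As a minimal sketch only: every field name, threshold, and disposition below is illustrative, not a standard or the author's actual protocol. The point it demonstrates is that the rule exists before findings arrive, so triage is not re-argued per finding.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Criticality attributes scored before ingestion, per the
    # "instruments first" argument above. All names are hypothetical.
    asset_mission_critical: bool
    perimeter_facing: bool
    handles_sensitive_data: bool
    validated: bool  # has the team reproduced the finding?

def triage(f: Finding) -> str:
    """Return a disposition: 'escalate', 'queue', or 'park'."""
    if not f.validated:
        # Unvalidated AI-generated findings wait for validation;
        # they do not get to start an argument in the meeting.
        return "park"
    score = sum([f.asset_mission_critical,
                 f.perimeter_facing,
                 f.handles_sensitive_data])
    return "escalate" if score >= 2 else "queue"

print(triage(Finding(True, True, False, True)))   # escalate
print(triage(Finding(True, True, False, False)))  # park
```

The specific thresholds matter less than the fact that they are written down and that someone has the authority to change them deliberately rather than in the heat of a finding.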
Third, a pace that matches your burn rate to your ingestion rate. This sounds obvious. It isn't practiced. A program that ingests three hundred new findings a week but can only meaningfully act on fifty is not a slower version of a functioning program. It is a program actively accumulating strategic debt. The pile that isn't being touched is a liability, not a backlog. The honest leadership response to that math is to change either the numerator or the denominator. Ignore it and you're flying by feel again.
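The back-of-the-envelope math here is worth doing explicitly. Using the figures from the example above (the twelve-week horizon is an assumption for illustration):

```python
ingest_per_week = 300  # findings entering the pipeline
act_per_week = 50      # findings the team can meaningfully address

# Untouched findings accumulate linearly. This is strategic debt,
# not a backlog that will somehow clear itself.
weeks = 12  # one quarter, chosen for illustration
debt = (ingest_per_week - act_per_week) * weeks
print(debt)  # 3000 findings nobody has contextualized after a quarter
```

Three thousand uncontextualized findings in a quarter is the quantified version of "flying by feel." Either the ingestion rate comes down or the remediation rate goes up; the third option is pretending the pile isn't there.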
Here is the part that cuts against the current conversation I keep seeing. The answer to AI-accelerated vulnerability discovery is not panic-buying more tools. It is not doubling the number of findings you ingest. The board pressure to "do something fast" in response to this moment is real, and I understand it, but fast without a framework is how you end up patching the wrong things at the wrong time while the actual exposure sits untouched.
Steady hands matter more than fast hands right now.
The programs that will navigate this well are the ones that resist the impulse to treat every new capability announcement as an emergency requiring heroic improvisation. They will instead ask: do our instruments work? Do we know what is critical? Do we have a decision rule for triage? Is our remediation rate honest?
If those four answers are yes, the AI vulnerability storm is a harder version of a problem you already have a framework for. If those answers are no, the storm is not your primary problem. The spatial disorientation is.
Trust your instruments. Not because the instruments are perfect. Because flying by feel in IMC is how programs, and planes, go down.