What Change Management Looks Like When Nobody Can Follow You
Kotter's eight steps assume you have a coalition to build. Here's what leading change teaches you when you're completely alone.
Insights
Writing about enterprise cybersecurity, quantitative risk, AI operations, and leadership at scale.
A State CISO can't use a single risk arrow on a board slide. Here's how to present federated agency risk without lying to the people who need the truth.
How individually reasonable security leadership posts can aggregate into a reconnaissance package for anyone targeting your environment.
Risk frameworks assume you have time to measure before you act. Military operations and real adversarial environments prove that assumption wrong.
How State CISOs can feed continuous ASM telemetry into FAIR's Loss Event Frequency component to quantify risk across a multi-agency environment in real time.
Risk frameworks assume you have time to measure before you act. Operational experience taught me that assumption breaks exactly when leadership matters most.
A State CISO breaks down why logging is a risk quantification problem, not a storage problem, and what FAIR analysis reveals about detection latency costs.
A State CISO and FAIR researcher explains why CVSS measures threat capability, not risk, and what that means for prioritizing vulnerabilities across a multi-agency environment.
How to use FAIR's TEF and Vulnerability components to build a defensible ROI case for accelerating MFA rollout across a large state enterprise.
A State CISO applies FAIR risk quantification to relationship-building, proving that connection capital with agency heads is a measurable risk-reduction lever.
Most knowledge systems optimize for capture and retrieval. If you want to build a public profile, you need a distribution layer that treats every insight as a draft publication.
Why the most important thing FAIR does is not quantifying risk but translating technical risk language into the financial terms that move budgets.
Why organizing files around how you think, not how tools sync, is the difference between a system you use and one you fight.
Why funding gap percentage is a more actionable security metric than risk level ratings, and how it changes the budget conversation.
The trigger for shared module extraction is not duplicate code. It is when multiple consumers of the same data produce different answers to the same question.
When AI agents clean up your systems, they delete what they can't contextualize. The result looks like a gap that needs filling, not damage that needs undoing.
Why outcome-driven execution beats task completion. An operating principle from building systems that actually work.
Why starting solo and escalating to teams beats launching full swarms. A framework from building AI-augmented operations at enterprise scale.
Reducing AI instruction sets by 55% without losing capability. Lessons from engineering a personal operating system.
When AI tools accumulate access faster than you can audit it, you have a security problem. Here is how to fix it.
In complex AI agent systems, the most dangerous failures are not in what breaks. They are in what was never wired.