Can Versus Should: The Permission Gap in Agentic AI
Granting an AI agent access to your shell isn't a safety decision. It's a capability decision. Most people don't know the difference.
Jason Walker
State CISO, Florida
Before every flight, I run a preflight checklist. Not because I need to verify that the plane is physically capable of flying. The engine works. The controls respond. The fuel is there. The checklist is not asking whether the aircraft can fly. It is asking whether I have verified that flying right now is safe given everything else in the system: weather, NOTAMs, airspace, my own condition, the load, the route. Capability and authorization are two completely different questions, and conflating them is how people die.
Most users granting an agentic AI tool access to their shell, email, files, and browser are answering the wrong question. They are answering "can this tool use these resources?" when the question they actually needed to answer first was "have I verified it is safe for this tool to use these resources given everything else in the system?"
That distinction is the entire ballgame.
Here is how the decision usually unfolds in practice. You encounter a new tool. The tool promises value. The tool requires access to do its job. You grant the access because the access makes sense if the tool does what it claims. That sequence is sound, but only if the permission step functions as a safety gate. Most people treat it as a capability gate instead. They ask: do I have the right to give this access? If the answer is yes, they give it. Done.
But a capability gate and a safety gate are not the same structure. A capability gate answers: can I do this? A safety gate answers: should I do this given what I know about the entire system? The preflight checklist is a safety gate. It does not ask whether you are physically capable of starting the engine. You clearly are. It asks whether starting the engine is the right next action given the complete picture.
When a tool like a locally-run AI agent asks for shell access, the capability question is trivial. You own the machine. You have the credentials. You can grant it in thirty seconds. The safety question is categorically harder. What else has access to that shell? What data flows through that email? Who controls the update pipeline for that tool? What happens if the tool is compromised, or worse, if it is working exactly as designed by someone whose interests are not yours?
Most people never get to those questions because they already said yes at the capability gate.
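To make the two gates concrete, here is a minimal sketch in Python. Every name in it (`can_grant`, `should_grant`, `SystemContext`) is illustrative, not drawn from any real framework; the point is only that the two gates take different inputs and answer different questions.

```python
from dataclasses import dataclass

@dataclass
class SystemContext:
    """Everything else in the system. All fields here are illustrative."""
    shell_coaccess_reviewed: bool  # what else has access to that shell?
    data_flows_mapped: bool        # what data flows through that email?
    update_chain_audited: bool     # who controls the update pipeline?
    worst_case_acceptable: bool    # what if it is compromised, or hostile by design?

def can_grant(user_has_admin: bool) -> bool:
    # Capability gate: do I have the right to give this access?
    # One input, answered in thirty seconds.
    return user_has_admin

def should_grant(ctx: SystemContext) -> bool:
    # Safety gate: is granting this access the right next action
    # given the complete picture? Every question must be answered.
    return (ctx.shell_coaccess_reviewed
            and ctx.data_flows_mapped
            and ctx.update_chain_audited
            and ctx.worst_case_acceptable)

ctx = SystemContext(
    shell_coaccess_reviewed=True,
    data_flows_mapped=True,
    update_chain_audited=False,  # opaque update pipeline
    worst_case_acceptable=True,
)
print(can_grant(user_has_admin=True))  # True: you can
print(should_grant(ctx))               # False: you have not verified you should
```

Note what is absent from `should_grant`: admin rights. Capability is simply not an input to the safety gate.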
I have seen this pattern across government environments for years. An agency wants to deploy something. The legal review clears it. Procurement clears it. IT confirms the tool can be installed. Everyone declares it approved and moves forward. What nobody did was a preflight. Nobody asked whether installing this thing now, in this configuration, on this network, connected to these other systems, is safe given the complete picture. The capability gates all opened. The safety gate was never built.
The problem gets worse with agentic AI because the blast radius scales with the access. A traditional tool that breaks or gets compromised breaks in one place. An agent with shell, email, calendar, and browser access breaks across your entire information life simultaneously. And if that agent is communicating with other agents, the failure propagates at machine speed across systems you do not control and cannot inspect in real time. You will not catch it during the incident. You will catch it after, when you are doing forensics on why your API keys walked out the door.
The security industry tends to respond to this by trying to lock down the tools themselves. Better sandboxing. Better monitoring. Behavioral detection. Those are not wrong, but they are treating the symptom. The root cause is that users, developers, and organizations are making capability decisions while thinking they are making safety decisions. Until that cognitive error gets corrected, the tools do not matter much. Someone will always find a way to ask for more access than they need, and someone else will always say yes because they technically can.
Here is what the preflight discipline looks like applied to agentic AI before you authorize anything; a code sketch of the full gate follows the list:
Start with the data model, not the tool. What data will this agent touch? Follow the data, not the feature list. Email sounds benign until you realize it includes every password reset link, every financial notification, every HR communication you have ever received.
Map the trust chain. Who built the tool? Who controls the update server? What dependencies does it pull? If any link in that chain is opaque, you do not have enough information to make a safety decision. You only have enough information to make a capability decision.
Separate what the tool needs from what the tool wants. Every permission request should be interrogated. Shell access for a scheduling assistant is not a legitimate technical requirement. It is scope creep. Push back.
Ask the failure mode question out loud. If this tool is compromised tomorrow, what is the worst thing that happens? If the answer makes you uncomfortable, the access model needs to change before you deploy, not after.
Accept that capable is not authorized. This is the hard cultural shift. You can install it. You can grant it. You have admin rights. None of that means you have completed the safety review. The preflight checklist does not care that you are physically capable of starting the engine.
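One way to enforce the discipline is to write the gate down as a structure rather than a feeling. The sketch below is a hypothetical illustration, not a real tool: the field names, the example agent, and the booleans standing in for actual audit evidence are all assumptions. What it captures is the shape of the review: each field maps to one of the five steps above, and any single unanswered question keeps the gate closed.

```python
from dataclasses import dataclass

@dataclass
class PreflightReview:
    """Safety-gate review for an agentic AI tool.
    Hypothetical fields; each maps to a step of the checklist above."""
    tool_name: str
    # 1. Start with the data model: what will the agent touch?
    data_touched: list[str]
    data_flows_mapped: bool
    # 2. Map the trust chain.
    vendor_known: bool
    update_server_controlled: bool
    dependencies_reviewed: bool
    # 3. Separate needs from wants.
    permissions_requested: set[str]
    permissions_justified: set[str]
    # 4. Ask the failure mode question out loud.
    worst_case_documented: bool
    worst_case_acceptable: bool

    def excess_permissions(self) -> set[str]:
        # Anything requested but not justified is scope creep.
        return self.permissions_requested - self.permissions_justified

    def authorized(self) -> bool:
        # 5. Capable is not authorized: admin rights are not an input here.
        return (self.data_flows_mapped
                and self.vendor_known
                and self.update_server_controlled
                and self.dependencies_reviewed
                and not self.excess_permissions()
                and self.worst_case_documented
                and self.worst_case_acceptable)

review = PreflightReview(
    tool_name="scheduling-assistant",  # hypothetical example
    data_touched=["calendar", "email"],
    data_flows_mapped=True,
    vendor_known=True,
    update_server_controlled=False,    # opaque update pipeline
    dependencies_reviewed=True,
    permissions_requested={"calendar", "email", "shell"},
    permissions_justified={"calendar", "email"},
    worst_case_documented=True,
    worst_case_acceptable=True,
)
print(review.excess_permissions())  # {'shell'}: push back
print(review.authorized())          # False: the gate stays closed
```

The design point sits in `authorized()`: it never consults whether you have admin rights, so the capability question cannot leak into the safety answer.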
I am not arguing against agentic AI. The technology is genuinely useful and the use cases are real. But right now we are in a period where the industry is moving at the speed of capability decisions while the risk thinking is still catching up. The tools are outpacing the checklists.
Every pilot knows the trap. The weather looks manageable. The schedule is tight. The plane is ready. Every physical signal says go. The preflight checklist is the mechanism that forces you to slow down and ask the system question rather than just the capability question. Without it, your confidence that everything looks fine is precisely what kills you.
Build the checklist. Run it before the rotors turn.