AI Permission Sprawl as Security Debt
161 permissions reduced to 65. When AI tools accumulate access faster than you can audit it, you have a security problem.
Jason Walker
State CISO, Florida
On February 5th, 2026, I did a permissions audit on my AI development environment. The result was humbling.
I had accumulated 161 discrete permission entries across cloud services, local infrastructure, and third-party integrations. Two API tokens were hardcoded in tracked files (gitignored, but discoverable in git history). Credentials were scattered across five different storage mechanisms with no consistent retrieval protocol. I had SSH access to servers I no longer used and API keys to services I'd deprecated but never revoked.
This is the security debt that AI development creates. Unlike traditional software engineering, where permissions are typically granted through careful IAM policies and reviewed quarterly, AI tooling accumulates permissions iteratively. You request access to a new integration, you get it via API key or OAuth token, you paste it into a config file, you move on to the next problem. Six months later, you have 161 permission entries and no way to justify 60% of them.
From my position as State CISO for Florida's 35 enterprise agencies, this is unacceptable. If I expect this discipline from my organizations' cloud architectures, I need to enforce it in my own tooling first.
The Inventory
The audit revealed four categories of sprawl:
API keys and tokens. I had 73 live credentials spanning more than a dozen services, including OpenAI (3 keys for different projects), Anthropic (2 keys), Hugging Face (2), Semantic Scholar (1), OpenAlex (1), Elicit (1), Gmail (2), Google Calendar (1), Todoist (1), GitHub (4), DigitalOcean (2), and AWS (3). Most were created "just in case" and never revoked.
Cloud access. SSH access to 4 servers, including two staging droplets I decommissioned 18 months ago. AWS IAM roles with overly broad permissions (example: one role had s3:* on all buckets). GitHub repository access on 7 organizations, some with admin rights on legacy projects I'm no longer involved with.
Local credentials. Environment variables scattered across .env files, credentials hard-coded in git history (though gitignored at HEAD), database connection strings in shell scripts, and one catastrophic mistake: an API token embedded in a command preserved in my Bash history.
Service integrations. 44 third-party OAuth applications with access to Gmail, Calendar, Todoist, GitHub, and Google Drive. Many were authorization requests I'd approved during prototyping and never revoked.
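Much of the local-credential category can be surfaced mechanically. A minimal sketch of the kind of scan that catches tokens hiding in git history or shell history files; the regexes here are illustrative token shapes, not an exhaustive rule set (dedicated scanners like gitleaks carry far larger pattern libraries):

```python
import re

# Rough token shapes; these patterns are illustrative, not exhaustive.
TOKEN_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_secret": re.compile(
        r"(?i)\b(?:api[_-]?key|secret|token)\s*[=:]\s*['\"]?[A-Za-z0-9_\-]{20,}"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a blob of text.

    Feed it the output of `git log -p` or the contents of a shell history
    file to catch credentials that are gitignored at HEAD but still live
    in history.
    """
    hits = []
    for name, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Piping `git log -p` through a scan like this is what turns "gitignored, but discoverable in git history" from an assumption into a finding.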
The inventory itself was the problem. I couldn't answer basic questions: Which services do I actually use? Which credentials have I rotated in the last 90 days? Which API keys are tied to paid accounts versus free? Which integrations have write access?
The Analogy to Cloud IAM Sprawl
This mirrors a pattern I see constantly in enterprise environments. Organizations accumulate permissions through ad-hoc requests, development environments that become production, contractors who are never offboarded, legacy applications that nobody dares touch. The IAM policy becomes a dense tangle that nobody fully understands, creating both operational friction (nobody can quickly verify they have the access they need) and security risk (unnecessary permissions become avenues for lateral movement if accounts are compromised).
The solution in cloud environments is clear: least privilege, regular review, automated enforcement. Every service should have exactly the permissions it needs, no more. Every permission should be reviewable and tied to a business justification. Every 90 days, you should be able to name every active permission and explain why it exists.
AI development hadn't progressed to that maturity. The mindset was still "request what I might need, worry about cleanup later." Except "later" never came. Later became 161 permissions and growing.
The Cleanup
I rebuilt the credential management system from first principles:
Step 1: Inventory and classify. Pulled every API key, service connection, and permission entry. Created a spreadsheet: service name, credential type, creation date, last rotation date, last use date (where available), scope/permissions, account owner, tier (critical/high/medium/low).
Step 2: Immediate revocation. Killed everything I could identify as unused, including two deprecated AWS access keys, the staging server SSH keys, four abandoned OAuth connections, and 23 API keys for experimental services that never made it to production.
Result: 161 entries down to 89.
Step 3: Consolidation. Multiple API keys for the same service got consolidated to a standard pattern: one personal key, one for the trading bot (separate owner), one for scheduled tasks (automation account). One key per logical function.
Result: 89 down to 65.
Step 4: Centralization and rotation. All credentials moved into a single .env/ directory, gitignored entirely, with a retrieval script that enforces rotation every 90 days. Credentials tied to paid accounts get a second layer of protection (encrypted secret storage). The retrieval script logs every access, creating an audit trail.
Step 5: Least privilege enforcement. Reviewed remaining 65 entries for scope creep. AWS roles trimmed from s3:* to specific bucket + read-only. GitHub permissions downscaled to repository-level where possible. API keys scoped to specific endpoints instead of full account access.
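Scope creep of the s3:* variety can also be caught mechanically. A sketch that walks an IAM-style policy document and flags full-wildcard actions or resources; the policy shape follows the standard AWS JSON layout, the bucket name is hypothetical, and this is an illustration, not a full policy linter:

```python
def find_wildcards(policy: dict) -> list[str]:
    """Flag IAM-style statements whose Action or Resource is a full wildcard."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        for field in ("Action", "Resource"):
            values = stmt.get(field, [])
            if isinstance(values, str):
                values = [values]
            for value in values:
                # Full wildcards only; prefix forms like s3:Get* need a real linter.
                if value == "*" or value.endswith(":*"):
                    findings.append(f"statement {i}: {field} = {value}")
    return findings

# The pre-cleanup role gets flagged; the trimmed, bucket-scoped role does not.
broad = {"Statement": [{"Action": "s3:*", "Resource": "*"}]}
scoped = {"Statement": [{"Action": ["s3:GetObject", "s3:ListBucket"],
                         "Resource": "arn:aws:s3:::audit-reports/*"}]}
```

Wiring a check like this into the 90-day review keeps the trimmed roles from quietly drifting back toward wildcards.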
Final count: 65 permissions, all justifiable, all with last-rotation dates, all tied to specific functions.
The Cost
The cleanup took approximately 12 hours: 3 hours of inventory and classification, 4 hours of revocation and testing, 3 hours of scripting and documentation, 2 hours of follow-up (testing rotated credentials, updating deployment scripts that relied on old keys).
One deployment script broke when I rotated the GitHub key. Two cron jobs needed credential updates. A local development environment lost access to one API when I removed an overly permissioned key.
All fixable, but the point is real: security discipline has operational friction. When you're in development mode and just want to move fast, the friction is annoying. When you're operating at the State CISO level, the friction is justified.
Why This Matters for AI Governance
The AI industry is moving toward regulated environments. Autonomous AI systems in critical infrastructure (power, water, healthcare) will be subject to compliance requirements similar to those we now enforce on software systems. NIST is drafting AI Risk Management Framework guidance. The EU's AI Act is already in effect.
Those frameworks will include access control standards. AI systems will be required to operate under least-privilege principles. Service integrations will need to be audited. Credentials will need rotation schedules. You won't be able to say "I granted it access to AWS 18 months ago and never checked again."
The organizations that establish this discipline now—even if it seems overcomplicated for a single developer—will have a significant advantage when regulations hit. They'll have the audit trails, the rotation procedures, the permission inventory, the incident response playbooks.
The organizations that delay? They'll face crisis mode compliance work when an audit finds 300 unnecessary API keys and 47 OAuth applications with write access to mission-critical systems.
The Closing Principle
Permission sprawl is a form of technical debt. Like all debt, it compounds: every new credential makes the system harder to audit, easier to compromise, and more expensive to remediate.
Unlike code debt, you can't refactor away permissions. You can only reduce them. The discipline is: don't grant permissions you might use someday, grant permissions you use today, and retire permissions when they're no longer needed.
For individual developers and small teams, this is a hygiene practice. For large organizations and critical systems, it's a requirement. For AI systems that may eventually operate autonomously in regulated environments, it's foundational.
My system now has 65 permissions instead of 161. I understand what every credential does. I rotate them on schedule. I can prove which system uses what access. If a credential is compromised, the blast radius is bounded and traceable.
That's the standard I expect from the agencies I serve. I should expect nothing less from my own infrastructure.
If you're building with AI and you haven't audited your permissions in the last 90 days, you almost certainly have sprawl. Clean it now while you can. The technical sophistication isn't in the cleanup—it's in the discipline to maintain it.