Agentic Trust Framework
Zero Trust Governance for Autonomous AI Agents
"Never trust, always verify" applied to AI agents that act on your behalf.
What Is the Agentic Trust Framework?
The Agentic Trust Framework (ATF) is an open specification for governing AI agents using Zero Trust principles. It provides organizations with a practical, implementable approach to deploying autonomous AI systems that can pass security audits while delivering business value.
Unlike traditional security frameworks designed for human users and static systems, ATF addresses the unique challenges of AI agents:
- Agents act autonomously, making decisions without human approval
- Agents learn and adapt, so their behavior changes over time
- Agents access sensitive systems, needing credentials, data, and permissions
- Agents interact with each other, creating complex trust relationships
ATF answers the fundamental question every security team asks: How do we let AI agents do their jobs without creating unacceptable risk?
The Five Core Elements
Zero Trust principles applied to AI agents across five dimensions
Identity
“Who are you?”
Authentication, authorization, and session management for AI agents.
What It Means for Agents
Verify the agent's credentials, permissions, and authorization chain
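Agent identity can be made continuously verifiable with short-lived, signed credentials that are re-checked on every call. The sketch below is a minimal illustration of that pattern, not part of the ATF specification; the function names, credential fields, and signing key are all hypothetical.

```python
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"  # illustrative only; use a real KMS in practice

def mint_credential(agent_id: str, scopes: list[str], ttl_s: int = 300) -> dict:
    """Issue a short-lived credential bound to an agent identity and its scopes."""
    expires = int(time.time()) + ttl_s
    payload = f"{agent_id}|{','.join(sorted(scopes))}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"agent_id": agent_id, "scopes": scopes, "expires": expires, "sig": sig}

def verify(cred: dict, required_scope: str) -> bool:
    """Never trust: re-check signature, expiry, and scope on every action."""
    payload = f"{cred['agent_id']}|{','.join(sorted(cred['scopes']))}|{cred['expires']}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False  # tampered credential
    if time.time() >= cred["expires"]:
        return False  # expired: agents must re-authenticate frequently
    return required_scope in cred["scopes"]
```

Because the scope list is signed into the credential, an agent cannot quietly grant itself new permissions mid-session.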
Behavior
“What are you doing?”
Observability, anomaly detection, and intent analysis of agent actions.
What It Means for Agents
Monitor actions in real-time, detect anomalies, ensure intent alignment
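One simple form of behavioral monitoring is comparing an agent's action rate against its own rolling baseline. This is a hedged sketch of that idea; the class name, window size, and threshold multiplier are illustrative choices, not ATF requirements.

```python
from collections import deque

class ActionMonitor:
    """Flag an agent whose action rate deviates sharply from its recent baseline."""

    def __init__(self, window: int = 50, max_rate_multiplier: float = 3.0):
        self.recent = deque(maxlen=window)       # rolling baseline of normal rates
        self.max_rate_multiplier = max_rate_multiplier

    def observe(self, actions_this_minute: int) -> bool:
        """Return True if this observation is anomalous versus the rolling mean."""
        anomalous = False
        if len(self.recent) >= 10:               # need a baseline before judging
            baseline = sum(self.recent) / len(self.recent)
            anomalous = actions_this_minute > baseline * self.max_rate_multiplier
        if not anomalous:
            self.recent.append(actions_this_minute)  # keep anomalies out of the baseline
        return anomalous
```

Production systems would track richer signals (tools called, resources touched, intent drift), but the shape is the same: learn normal, flag abnormal, escalate.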
Data Governance
“What are you eating? What are you serving?”
Input validation, PII protection, and output governance controls.
What It Means for Agents
Validate inputs, protect sensitive data, govern outputs, prevent poisoning
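Output and input governance can start as simple as pattern-based PII redaction at the agent boundary. The sketch below is a minimal example with two illustrative patterns; a real deployment would use a proper PII detection service.

```python
import re

# Illustrative PII patterns only; not a complete detector.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def govern(text: str) -> tuple[str, list[str]]:
    """Redact known PII patterns; return the cleaned text and what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, found
```

Running the same gate on inbound data ("what are you eating?") and outbound data ("what are you serving?") covers both halves of the data governance question.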
Segmentation
“Where can you go?”
Access control, resource boundaries, and policy enforcement.
What It Means for Agents
Enforce boundaries, limit blast radius, control resource access
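Segmentation for agents reduces, at its core, to deny-by-default policy: each agent identity gets an explicit allowlist of resources and actions, and everything else is refused. A minimal sketch, with hypothetical agent names and action strings:

```python
# Deny-by-default policy table: agent identity -> allowed actions.
# Contents are hypothetical examples.
POLICY = {
    "log-triage-agent": {"logs:read", "tickets:create"},
    "scaling-agent":    {"metrics:read", "cluster:scale"},
}

def is_allowed(agent_id: str, action: str) -> bool:
    """Unknown agents and unlisted actions are refused by default."""
    return action in POLICY.get(agent_id, set())
```

Keeping the policy table small and explicit is what limits the blast radius: a compromised log-triage agent cannot scale clusters, because that path simply does not exist.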
Incident Response
“What if you go rogue?”
Circuit breakers, kill switches, and containment procedures.
What It Means for Agents
Trip circuit breakers and kill switches, contain the blast radius, and recover safely
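A circuit breaker for agents can be as simple as: after N consecutive failures, block all further actions until a human explicitly resets it. The following is an illustrative sketch under that assumption; the class name and threshold are invented for the example.

```python
class CircuitBreaker:
    """Open after `threshold` consecutive failures; stay open until reset."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def record(self, success: bool) -> None:
        if self.open:
            return
        self.failures = 0 if success else self.failures + 1
        if self.failures >= self.threshold:
            self.open = True  # kill switch tripped: contain the agent

    def allow_action(self) -> bool:
        return not self.open

    def reset(self) -> None:
        """Explicit human reset, after containment and root-cause review."""
        self.failures = 0
        self.open = False
```

The important design choice is that the breaker fails closed: once open, no agent action proceeds until a person with authority decides otherwise.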
Agent Maturity Model
A progressive approach to granting autonomy based on demonstrated trustworthiness
Intern
Observe + Report
- Autonomy: Fully supervised, no autonomous actions
- Actions: Read-only
- Oversight: Continuous
- Risk: Lowest
- AWS Scoping Matrix alignment: Scope 1
- Evaluation period: 2 weeks
- Example use case: An agent that monitors security logs and flags suspicious patterns for analyst review.
Junior
Recommend + Human Approves
- Autonomy: Limited, with frequent check-ins
- Actions: Suggestions only
- Oversight: Approval required for all actions
- Risk: Low
- AWS Scoping Matrix alignment: Scope 2
- Evaluation period: 4 weeks
- Example use case: An agent that drafts customer service responses for human review before sending.
Senior
Act + Notify
- Autonomy: Broad, with exception-based oversight
- Actions: Executes within guardrails
- Oversight: Post-action notification
- Risk: Moderate
- AWS Scoping Matrix alignment: Scope 3
- Evaluation period: 8 weeks
- Example use case: An agent that automatically scales cloud infrastructure based on load, notifying the ops team of changes.
Principal
Autonomous Within Bounds
- Autonomy: Full, within defined boundaries
- Actions: Self-directed within the approved domain
- Oversight: Strategic, with edge-case escalation
- Governance requirements: Highest
- AWS Scoping Matrix alignment: Scope 4
- Evaluation period: Ongoing
- Example use case: An agent that autonomously triages, contains, and remediates security incidents within defined playbooks, escalating novel threats to the human SOC team.
Promotion Criteria
Agents must demonstrate trustworthiness to earn higher autonomy
Performance
- Sustained accuracy over a defined evaluation period
- Reliability metrics meet or exceed thresholds
- Consistent quality of outputs and decisions
Security Validation
- Passes security audit at current level
- Penetration testing shows no exploitable vulnerabilities
- Adversarial testing confirms robustness
Business Value
- Measurable ROI or outcome improvement
- Documented business impact
- Stakeholder satisfaction metrics
Incident Record
- No major errors or incidents at prior level
- Minor incidents properly handled and documented
- Root cause analysis completed for any issues
Governance Sign-off
- Explicit approval from designated authority
- Documented risk acceptance
- Updated policies and procedures in place
Important Note
Agents can also be DEMOTED if they fail to maintain these criteria or if incidents occur at their current level. This is a key differentiator that ensures continuous trust verification.
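Taken together, the promotion criteria and the demotion rule can be sketched as a simple gate. This is a hypothetical illustration: the level names come from the maturity model above, but the check names and the demotion triggers (failed security validation or a bad incident record) are invented for the example.

```python
LEVELS = ["Intern", "Junior", "Senior", "Principal"]

def next_level(current: str, checks: dict[str, bool]) -> str:
    """Promote one level only if every criterion passes; demote one level
    if security validation or the incident record fails; otherwise hold."""
    idx = LEVELS.index(current)
    if all(checks.values()):
        return LEVELS[min(idx + 1, len(LEVELS) - 1)]
    if not checks.get("security", True) or not checks.get("incidents", True):
        return LEVELS[max(idx - 1, 0)]
    return current
```

The asymmetry is deliberate: promotion requires every criterion, while a single serious failure is enough to move an agent back down, which is how continuous trust verification stays continuous.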
How ATF Relates to Other Frameworks
MAESTRO
MAESTRO models threats across 7 layers; ATF provides the governance controls to address them.
OWASP Top 10 for Agentic Applications
Identifies threats; ATF provides controls to mitigate them.
NIST 800-207
Defines Zero Trust principles; ATF applies them specifically to AI agents.
AWS Agentic AI Security Scoping Matrix
ATF maturity levels align to AWS Scopes 1-4.
Why ATF Matters Now
The Enterprise AI Dilemma
Organizations face competing pressures:
Board mandate
"We need to adopt AI to stay competitive"
Security mandate
"We can't deploy systems we can't govern"
Compliance mandate
"We must demonstrate control and auditability"
Traditional approaches force a choice: move fast with AI (and accept risk) or move carefully (and fall behind). ATF resolves this tension by providing a structured path to AI adoption with security built in from the start.
The Zero Trust Bonus
Here's what many organizations don't realize: implementing ATF for AI agents accelerates your broader Zero Trust journey.
The same principles, patterns, and infrastructure that govern AI agents apply to human access:
- Continuous verification
- Least privilege access
- Microsegmentation
- Assume breach mentality
Organizations that implement ATF for their AI agents often find they've built 60-70% of the infrastructure needed for comprehensive Zero Trust architecture.
Compliance Alignment
ATF maps directly to existing compliance frameworks:
| Framework | ATF Alignment |
|---|---|
| CSA AICM | IAM, DSP, LOG, IVS, SEF, MDS, AIS, GRC, STA, A&A domains |
| NIST AI RMF | GOVERN, MAP, MEASURE, MANAGE functions |
| SOC 2 | CC6.1-CC6.7 (access), CC7.2-CC7.4 (monitoring/incident) |
| ISO/IEC 42001 | A.2-A.10 (AI policy, lifecycle, data, oversight, continuous improvement) |
| ISO/IEC 27001 | A.5.15-A.8.32 (access, logging, data, segmentation, incident) |
| EU AI Act | Articles 9, 10, 12, 16, 20, 26, 62 |
Getting Started
Read the Specification
The complete ATF specification including detailed requirements and implementation guidance.
View on GitHub →

Read the CSA Overview
A comprehensive overview published on the Cloud Security Alliance blog.
Read the Blog Post →

Frequently Asked Questions
Common questions about the Agentic Trust Framework, its relationship to other standards, and how to get started.
Origins
ATF builds on concepts introduced in Agentic AI + Zero Trust: A Guide for Business Leaders (September 2025), featuring a foreword by John Kindervag, creator of Zero Trust.
Author: Josh Woodruff
Organization: MassiveScale.AI
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Published as an open specification under Creative Commons licensing. The canonical specification is maintained on GitHub at github.com/massivescale-ai/agentic-trust-framework.