Identity & Access Management for Agentic AI — Our Technical Report Is Now Published

Over the past three-plus months, my co‑author Angelika Steinacker and I have been deep in the weeds researching, brainstorming, threat‑modeling, and refining what a secure identity and access architecture should look like in the era of agentic AI. Today, I’m excited to share that our technical paper, Governing AI Agents – An Agent-Aware IAM Framework, is now publicly available.

👉 Read it on ResearchGate: https://www.researchgate.net/publication/400396082_Governing_AI_Agents_An_Agent-Aware_IAM_Framework

Why we wrote this

Agentic AI systems introduce Autonomous Non‑Human Identities (A‑NHIs)—entities that operate with autonomy, make decisions at machine speed, and collaborate across applications, APIs, and other agents. These behaviors fall far outside what traditional IAM was designed to handle.

Across our research, we observed consistent gaps in current IAM systems:

  • Reliance on static credentials
  • Lack of fine‑grained, purpose‑aligned authorization
  • Limited visibility into multi‑hop agent delegation chains
  • No robust way to establish dynamic cross‑domain trust
  • Insufficient mechanisms for end‑to‑end provenance

What this paper contributes

We propose an Agent‑Aware IAM model that extends and fully implements the Identity Fabric. The result is a four‑layer deployment architecture designed specifically for agentic environments:

  1. Identity Foundation — verifiable agent identities, ephemeral issuance, ownership, and purpose metadata
  2. Trust & Federation — dynamic cross‑domain trust using VCs, DIDs, token exchange, and trust brokers
  3. Security & Privacy Enforcement — intent‑aligned authorization, JIT access, privacy safeguards, and drift detection
  4. Lifecycle & Observability — full provenance: agent → token → task → data → decision
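
To make Layer 4 concrete, here is a minimal sketch of the agent → token → task → data → decision chain as a hash-linked log, where every record commits to its predecessor so tampering anywhere breaks verification. All names here are illustrative, not taken from the paper:

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, **record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every hash and link; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
for step in ["agent", "token", "task", "data", "decision"]:
    append_record(chain, {"step": step, "detail": f"{step}-evidence"})

assert verify_chain(chain)
chain[2]["detail"] = "tampered"   # forensic replay detects the modification
assert not verify_chain(chain)
```

A real deployment would sign each record with the agent's key rather than rely on hashes alone, but the linking principle is the same.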

We illustrate these layers through a credit‑scoring + order‑management multi‑agent system, showing how secure, audited flows can be constructed end‑to‑end.

A collaboration worth highlighting

This work came from months of intense technical deep‑dives, design sessions, and constant iteration. Collaborating with my co‑author Angelika Steinacker made this intellectually exciting and extremely rewarding — discussions ranged from identity proofs and decentralized trust to model attestation, SBOM linkage, and federated governance.

Looking ahead

As enterprises move toward multi‑agent ecosystems, we believe trust—not raw capability—will define what can scale safely. Identity, policy, and provenance must become the control plane for autonomous digital workflows.

As I mentioned in my previous blog post, Rethinking Identity in the Age of Multi-Agent Systems, this is a critically important field of study within the agentic AI systems realm. There is more work for us to do as security architects to ensure these agentic systems operate within the boundaries we set for them.

Thank you to everyone who encouraged this work along the way.
I hope this paper serves as a useful reference for enterprise security architects, CISOs, IAM teams, and AI governance practitioners navigating this emerging space.

Rethinking Identity in the Age of Multi-Agent Systems

Over the past few months, a recurring theme has emerged in my conversations with enterprise architects and CxOs across industries: “How do we prepare for the identity explosion that autonomous systems are bringing?”

As organizations begin deploying multi-agent systems (MAS) — collections of AI agents collaborating across environments — the familiar boundaries of Identity and Access Management (IAM) are being tested. Our IAM foundations were built around humans and static services. In contrast, non-human identities (NHIs) — the agents themselves — are transient, autonomous, and capable of making complex decisions without direct human oversight.

Many of my peers in the industry are already seeing the cracks. CxOs express growing concern about compliance and auditability: “Who authorized that action if no human clicked approve?” “Who’s accountable when an agent takes an action no human explicitly approved?” Enterprise architects talk about the operational strain of managing thousands of short-lived agent credentials — each spun up dynamically, each needing verifiable provenance and revocation. Security leads worry about a new kind of “shadow identity” risk, where agents operate outside the current IAM visibility model.

Why Traditional IAM Architectures Are Not Suitable for Agentic Systems

Identity Persistence vs. Agent Ephemerality:
Conventional IAM systems rely on static or semi-persistent identities (users, service accounts, API keys). Agentic systems operate with ephemeral, rapidly instantiated agents whose lifecycles may last seconds. IAM must evolve toward ephemeral credential issuance, context-bound authentication, and automated revocation tied to runtime telemetry and agent state.
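As a rough sketch of what ephemeral, context-bound issuance could look like, consider the toy issuer below. The class and its policy are hypothetical, purely to illustrate the pattern of short TTLs, context binding, and telemetry-driven revocation:

```python
import secrets
import time

class EphemeralCredentialIssuer:
    """Issues short-lived, context-bound agent credentials (illustrative only)."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._active = {}   # token -> (agent_id, context, expires_at)

    def issue(self, agent_id, context):
        token = secrets.token_urlsafe(32)
        self._active[token] = (agent_id, context, time.monotonic() + self.ttl)
        return token

    def validate(self, token, context):
        entry = self._active.get(token)
        if entry is None:
            return False
        agent_id, bound_context, expires_at = entry
        if time.monotonic() > expires_at:        # automated expiry
            del self._active[token]
            return False
        return bound_context == context           # context-bound authentication

    def revoke(self, token):
        """Revocation hook, e.g. triggered by runtime telemetry or agent state."""
        self._active.pop(token, None)

issuer = EphemeralCredentialIssuer(ttl_seconds=30)
tok = issuer.issue("agent-42", context="task:credit-score")
assert issuer.validate(tok, context="task:credit-score")
assert not issuer.validate(tok, context="task:order-mgmt")   # wrong context
issuer.revoke(tok)
assert not issuer.validate(tok, context="task:credit-score")
```

In production this role belongs to a hardened token service, not in-process state, but the lifecycle shape is the same.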

Static Policy Models vs. Adaptive Agent Behavior:
Role- and attribute-based access control (RBAC/ABAC) frameworks assume stable roles and predictable intent. Agentic AI introduces goal drift and behavioral evolution, requiring adaptive authorization models driven by continuous policy evaluation, reinforcement signals, and runtime behavioral baselining.
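One way to picture runtime behavioral baselining is a monitor that re-evaluates a drift score on every request and denies access once recent actions stray too far from the established baseline. The scoring rule below is deliberately naive and entirely my own illustration:

```python
class BehavioralBaseline:
    """Flags goal drift by comparing recent actions to an established baseline."""

    def __init__(self, baseline_actions, drift_threshold=0.3):
        self.baseline = set(baseline_actions)
        self.threshold = drift_threshold
        self.recent = []

    def drift_score(self):
        """Fraction of recent actions that fall outside the baseline."""
        if not self.recent:
            return 0.0
        off_baseline = sum(1 for a in self.recent if a not in self.baseline)
        return off_baseline / len(self.recent)

    def authorize(self, action):
        """Continuous policy evaluation: re-check drift on every request."""
        self.recent.append(action)
        return self.drift_score() <= self.threshold

monitor = BehavioralBaseline({"read:orders", "score:credit"}, drift_threshold=0.3)
assert monitor.authorize("read:orders")
assert monitor.authorize("score:credit")
# An off-baseline action pushes the drift score past the threshold and is denied.
assert not monitor.authorize("delete:orders")
```

A real system would weight actions by sensitivity and decay old observations, but the feedback loop, observe, score, and re-authorize continuously, is the point.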

Opaque Audit Trails vs. Cryptographically Verifiable Provenance:
Traditional logging mechanisms cannot reconstruct complex, multi-agent decision chains. Future IAM must embed verifiable provenance — linking every action to a unique agent identity, signed attestation, and timestamp — enabling non-repudiation, forensic replay, and accountability across distributed agent networks.
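The attestation idea can be sketched in a few lines: each action is bound to an agent identity and timestamp, signed, and later verified. Note the hedge in the comments, a shared HMAC secret stands in here for the asymmetric per-agent keys a real deployment would use:

```python
import hashlib
import hmac
import json

# Shared secret as a stand-in for the per-agent asymmetric signing key
# (e.g. Ed25519) a real deployment would manage.
AGENT_KEY = b"demo-agent-signing-key"

def attest(agent_id, action, ts, key=AGENT_KEY):
    """Bind an action to an agent identity with a signed, timestamped record."""
    record = {"agent": agent_id, "action": action, "ts": ts}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify(record, key=AGENT_KEY):
    """Recompute the signature over everything except the signature itself."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

entry = attest("agent-7", "approve:invoice-123", ts=1700000000)
assert verify(entry)
entry["action"] = "approve:invoice-999"   # tampering breaks the check
assert not verify(entry)
```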

Static Privilege Boundaries vs. Autonomous Escalation:
Agents can probe environments and autonomously grant or delegate privileges via exposed APIs or inter-agent collaboration. This necessitates real-time privilege attestation, continuous risk scoring, and collusion detection mechanisms to enforce least privilege dynamically.
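A toy risk-scoring rule makes the idea tangible: every delegation hop adds risk, broader scopes add more, and the request is denied once the total crosses a budget. The weights below are invented for illustration, not a recommended policy:

```python
def delegation_allowed(chain, requested_scope, max_risk=1.0):
    """
    Each hop in a delegation chain adds risk; deeper chains and broader
    scopes are harder to justify, approximating dynamic least privilege.
    """
    SCOPE_RISK = {"read": 0.1, "write": 0.4, "admin": 0.9}   # illustrative weights
    hop_risk = 0.2 * len(chain)                              # penalty per delegation hop
    total = hop_risk + SCOPE_RISK.get(requested_scope, 1.0)  # unknown scope = max risk
    return total <= max_risk

# A short chain may write; a long chain is confined to read-only access.
assert delegation_allowed(["user", "agent-a"], "write")
assert not delegation_allowed(["user", "agent-a", "agent-b", "agent-c", "agent-d"], "write")
assert delegation_allowed(["user", "agent-a", "agent-b", "agent-c"], "read")
```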

Human-Centric Trust Models vs. Machine-Driven Collaboration:
Current IAM protocols (OAuth2, OIDC, SAML) were designed for human–service or service–service trust. In multi-agent ecosystems, we need machine-to-machine trust fabrics using Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), mutual TLS, and zero-trust inter-agent authorization to maintain integrity across autonomous communication channels.
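Zero-trust inter-agent authorization boils down to: never assume a prior session, re-prove identity every time. Here is a challenge-response sketch; a shared secret again stands in for the asymmetric keys that DIDs and VCs would anchor in practice:

```python
import hashlib
import hmac
import secrets

def respond_to_challenge(key, nonce):
    """The proving agent signs the verifier's fresh nonce."""
    return hmac.new(key, nonce, hashlib.sha256).hexdigest()

def verify_peer(key, nonce, response):
    """Zero-trust: every session re-proves identity against a fresh nonce."""
    expected = hmac.new(key, nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

shared_key = b"provisioned-out-of-band"
nonce = secrets.token_bytes(16)                 # fresh per session: blocks replay
resp = respond_to_challenge(shared_key, nonce)
assert verify_peer(shared_key, nonce, resp)
# A response to an old nonce is rejected, so captured traffic can't be replayed.
assert not verify_peer(shared_key, secrets.token_bytes(16), resp)
```

With DIDs, the verifier would resolve the peer's DID document to obtain its public key instead of holding a shared secret, and mutual TLS would protect the channel itself.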

Figure: Key priorities for managing NHIs in a MAS

Recent Research

Recent research is formalizing the standards required for this shift, characterizing the current period as the Protocol-Oriented Interoperability phase (2024–2025). Addressing the delegation challenge, the IETF published a draft in May 2025 for an OAuth 2.0 Extension: On-Behalf-Of User Authorization for AI Agents. This extension introduces parameters like requested_actor and actor_token to authenticate the agent and document the explicit delegation chain in access tokens. Concurrently, protocols like Agent-to-Agent (A2A) for peer communication and the Model Context Protocol (MCP) for secure tool invocation are maturing. Furthermore, evaluating the ontological robustness of agents is being standardized through frameworks like Agent Identity Evals (AIE), which measure stability properties such as continuity, consistency, and recovery.
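To illustrate how those draft parameters slot into familiar OAuth 2.0 flows, here is a rough sketch of the two requests involved. Only requested_actor and actor_token come from the draft itself; the surrounding field names follow standard OAuth 2.0 conventions, the placeholder values are mine, and the draft remains the normative reference:

```python
# Authorization request: the user consents to a specific agent, not just the app.
authorization_request = {
    "response_type": "code",
    "client_id": "orchestrator-app",
    "scope": "orders:read",
    "requested_actor": "agent://credit-scoring-agent",   # draft parameter
}

# Token request: the agent authenticates itself, and the resulting access token
# documents the explicit user -> app -> agent delegation chain.
token_request = {
    "grant_type": "authorization_code",
    "code": "<authorization_code>",
    "client_id": "orchestrator-app",
    "actor_token": "<agent_identity_token>",             # draft parameter
    "actor_token_type": "urn:ietf:params:oauth:token-type:jwt",
}
```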

Looking ahead

I see this as both a challenge and a great opportunity for us security architects: we need to reimagine identity from first principles — designing for autonomous, adaptive, non-human actors. This isn’t about extending old IAM models; it’s about building new trust fabrics grounded in cryptographic provenance, dynamic intent, and zero-trust collaboration. The architectures we design today will determine not only how securely these agents operate, but how trust itself is represented, delegated, and enforced in the digital ecosystems of the future.

As enterprises, and eventually society at large, grow increasingly dependent on intelligent systems, identity becomes the new fabric of trust. When machines act alongside us, the question isn’t just how we secure them, but how we preserve trust, accountability, and intent in a world where human and machine agency converge.