Over the past few months, a recurring theme has emerged in my conversations with enterprise architects and CxOs across industries: “How do we prepare for the identity explosion that autonomous systems are bringing?”
As organizations begin deploying multi-agent systems (MAS) — collections of AI agents collaborating across environments — the familiar boundaries of Identity and Access Management (IAM) are being tested. Our IAM foundations were built around humans and static services. In contrast, non-human identities (NHIs) — the agents themselves — are transient, autonomous, and capable of making complex decisions without direct human oversight.
Many of my peers in the industry are already seeing the cracks. CxOs express growing concern about compliance and auditability: “Who authorized that action if no human clicked approve?” and “Who’s accountable when an agent takes an action no human explicitly approved?” Enterprise architects talk about the operational strain of managing thousands of short-lived agent credentials — each spun up dynamically, each needing verifiable provenance and revocation. Security leads worry about a new kind of “shadow identity” risk, where agents operate outside the current IAM visibility model.
Why Traditional IAM Architectures Are Not Suitable for Agentic Systems
Identity Persistence vs. Agent Ephemerality:
Conventional IAM systems rely on static or semi-persistent identities (users, service accounts, API keys). Agentic systems operate with ephemeral, rapidly instantiated agents whose lifecycles may last seconds. IAM must evolve toward ephemeral credential issuance, context-bound authentication, and automated revocation tied to runtime telemetry and agent state.
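To make the lifecycle mismatch concrete, here is a minimal sketch of ephemeral credential issuance with automated, TTL-based revocation. All names and the in-memory design are illustrative assumptions, not a reference to any specific product:

```python
import secrets
import time

class EphemeralCredentialIssuer:
    """Illustrative sketch: short-lived tokens whose validity window
    tracks an agent's lifecycle instead of persisting indefinitely."""

    def __init__(self, default_ttl_seconds=30):
        self.default_ttl = default_ttl_seconds
        self._active = {}  # token -> (agent_id, expiry timestamp)

    def issue(self, agent_id, ttl_seconds=None):
        """Mint a token that expires automatically after its TTL."""
        token = secrets.token_urlsafe(32)
        expiry = time.time() + (ttl_seconds if ttl_seconds is not None else self.default_ttl)
        self._active[token] = (agent_id, expiry)
        return token

    def validate(self, token):
        """A token is valid only while its lifecycle window is open."""
        entry = self._active.get(token)
        if entry is None:
            return False
        _, expiry = entry
        if time.time() >= expiry:
            del self._active[token]  # automated revocation on expiry
            return False
        return True

    def revoke(self, token):
        """Explicit revocation, e.g. triggered by runtime telemetry."""
        self._active.pop(token, None)
```

In a real deployment the validity check would also be context-bound (workload identity, attested runtime state) rather than a bare TTL, but the shape of the problem is the same: issuance, validation, and revocation all become continuous operations.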
Static Policy Models vs. Adaptive Agent Behavior:
Role- and attribute-based access control (RBAC/ABAC) frameworks assume stable roles and predictable intent. Agentic AI introduces goal drift and behavioral evolution, requiring adaptive authorization models driven by continuous policy evaluation, reinforcement signals, and runtime behavioral baselining.
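The shift from static roles to continuous evaluation can be sketched as a per-request risk score against a learned behavioral baseline. The features, weights, and threshold below are illustrative assumptions only:

```python
# Hypothetical continuous policy evaluation: every request is scored
# against the agent's behavioral baseline rather than a static role table.

def behavioral_risk(request, baseline):
    """Deviation of a request from the agent's baseline, in [0, 1]."""
    risk = 0.0
    if request["resource"] not in baseline["typical_resources"]:
        risk += 0.5  # touching a resource the agent never used before
    if request["rate_per_min"] > baseline["typical_rate_per_min"] * 2:
        risk += 0.3  # sudden burst in call rate
    if request["hour"] not in baseline["active_hours"]:
        risk += 0.2  # activity outside the usual window
    return min(risk, 1.0)

def authorize(request, baseline, risk_threshold=0.5):
    """Re-evaluated on every call, so the decision adapts as behavior drifts."""
    return behavioral_risk(request, baseline) < risk_threshold
```

The key design point is that the decision is recomputed per request, so goal drift shows up as rising risk rather than as a silently stale role assignment.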
Opaque Audit Trails vs. Cryptographically Verifiable Provenance:
Traditional logging mechanisms cannot reconstruct complex, multi-agent decision chains. Future IAM must embed verifiable provenance — linking every action to a unique agent identity, signed attestation, and timestamp — enabling non-repudiation, forensic replay, and accountability across distributed agent networks.
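A minimal sketch of such a provenance log, assuming a hash-chained record where each entry carries a per-agent attestation (HMAC stands in here for per-agent asymmetric signatures, purely to keep the example stdlib-only):

```python
import hashlib
import hmac
import json
import time

def record_action(log, agent_id, action, key, ts=None):
    """Append a signed, hash-chained entry linking action, agent, and time."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "agent_id": agent_id,
        "action": action,
        "timestamp": ts if ts is not None else time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["attestation"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log, agent_keys):
    """Replay the chain: check every attestation and every hash link."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k not in ("attestation", "entry_hash")}
        if body["prev_hash"] != prev_hash:
            return False  # chain link broken or reordered
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(agent_keys[entry["agent_id"]], payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["attestation"]):
            return False  # entry tampered or signed by the wrong agent
        if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True
```

Because each entry commits to its predecessor, a forensic replay can reconstruct who acted, in what order, and detect any after-the-fact edit to the decision chain.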
Static Privilege Boundaries vs. Autonomous Escalation:
Agents can probe environments and autonomously grant or delegate privileges via exposed APIs or inter-agent collaboration. This necessitates real-time privilege attestation, continuous risk scoring, and collusion detection mechanisms to enforce least privilege dynamically.
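One building block for such enforcement is a delegation-chain check: before honoring a delegated privilege, verify that every hop only narrowed, and never widened, the originally granted scope. The chain representation below is an illustrative assumption:

```python
# Hypothetical least-privilege check on inter-agent delegation.
# chain: list of (agent_id, scopes) from the original grant to the requester.

def chain_is_least_privilege(chain):
    """Return True only if each delegation hop is a subset of the previous one."""
    allowed = set(chain[0][1])
    for _, scopes in chain[1:]:
        scopes = set(scopes)
        if not scopes <= allowed:
            return False  # a hop attempted to escalate beyond its grant
        allowed = scopes  # each hop can only narrow further
    return True
```

Run at request time against attested chains, a check like this turns autonomous escalation attempts into detectable policy violations rather than silent privilege creep.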
Human-Centric Trust Models vs. Machine-Driven Collaboration:
Current IAM protocols (OAuth2, OIDC, SAML) were designed for human–service or service–service trust. In multi-agent ecosystems, we need machine-to-machine trust fabrics using Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), mutual TLS, and zero-trust inter-agent authorization to maintain integrity across autonomous communication channels.
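A toy sketch of the VC-style pattern: a trusted issuer signs claims about an agent, and a peer verifies the credential against a registry of issuer keys before opening a channel, with no human in the loop. HMAC again stands in for real digital signatures, and all identifiers are illustrative:

```python
import hashlib
import hmac
import json

def issue_credential(issuer_id, subject_did, claims, issuer_key):
    """Issuer signs a set of claims about an agent (VC-style, simplified)."""
    body = {"issuer": issuer_id, "subject": subject_did, "claims": claims}
    proof = hmac.new(issuer_key, json.dumps(body, sort_keys=True).encode(),
                     hashlib.sha256).hexdigest()
    return {**body, "proof": proof}

def verify_credential(credential, trusted_issuer_keys):
    """Peer-side check before trusting the subject agent."""
    key = trusted_issuer_keys.get(credential["issuer"])
    if key is None:
        return False  # unknown issuer: zero-trust default is deny
    body = {k: v for k, v in credential.items() if k != "proof"}
    expected = hmac.new(key, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])
```

In a production trust fabric the issuer key would be resolved from the DID document and the exchange would ride over mutual TLS, but the verification flow — resolve issuer, check proof, deny by default — is the same.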

Figure: Key priorities for managing NHIs in a MAS
Recent Research
Recent research is formalizing the standards required for this shift, characterizing the current period as the Protocol-Oriented Interoperability phase (2024–2025). Addressing the delegation challenge, the IETF published a draft in May 2025 for an OAuth 2.0 Extension: On-Behalf-Of User Authorization for AI Agents. This extension introduces parameters like requested_actor and actor_token to authenticate the agent and document the explicit delegation chain in access tokens. Concurrently, protocols like Agent-to-Agent (A2A) for peer communication and the Model Context Protocol (MCP) for secure tool invocation are maturing. Furthermore, evaluating the ontological robustness of agents is being standardized through frameworks like Agent Identity Evals (AIE), which measure stability properties such as continuity, consistency, and recovery.
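To illustrate where the draft's two named parameters sit in the flow, here is a rough sketch of the request-building side. `requested_actor` and `actor_token` come from the draft; the endpoint, client values, and helper shapes are my illustrative assumptions — consult the IETF draft for the normative flow:

```python
from urllib.parse import urlencode

def authorization_request(client_id, agent_id, scope):
    """The user authorizes a *specific* agent via `requested_actor`,
    so delegation is explicit rather than implied by client identity."""
    return "https://auth.example.com/authorize?" + urlencode({
        "response_type": "code",
        "client_id": client_id,
        "scope": scope,
        "requested_actor": agent_id,
    })

def token_request(code, client_id, actor_token):
    """The agent proves its own identity with `actor_token`; the access
    token issued can then document the full delegation chain."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "actor_token": actor_token,
    })
```

The practical payoff is auditability: a resource server inspecting the resulting token can answer both “which user delegated this?” and “which agent is acting?” — exactly the question CxOs are asking.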
Looking ahead
I see this as a challenge but also a great opportunity for us security architects: we need to reimagine identity from first principles — designing for autonomous, adaptive, non-human actors. This isn’t about extending old IAM models; it’s about building new trust fabrics grounded in cryptographic provenance, dynamic intent, and zero-trust collaboration. The architectures we design today will determine not only how securely these agents operate, but how trust itself is represented, delegated, and enforced in the digital ecosystems of the future.
As enterprises — and eventually society at large — grow increasingly dependent on intelligent systems, identity becomes the new fabric of trust. When machines act alongside us, the question isn’t just how we secure them, but how we preserve trust, accountability, and intent in a world where human and machine agency converge.