Rethinking Identity in the Age of Multi-Agent Systems

Over the past few months, a recurring theme has emerged in my conversations with enterprise architects and CxOs across industries: “How do we prepare for the identity explosion that autonomous systems are bringing?”

As organizations begin deploying multi-agent systems (MAS) — collections of AI agents collaborating across environments — the familiar boundaries of Identity and Access Management (IAM) are being tested. Our IAM foundations were built around humans and static services. In contrast, non-human identities (NHIs) — the agents themselves — are transient, autonomous, and capable of making complex decisions without direct human oversight.

Many of my peers in the industry are already seeing the cracks. CxOs express growing concern about compliance and auditability: “Who authorized that action if no human clicked approve?” “Who’s accountable when an agent takes an action no human explicitly approved?” Enterprise architects talk about the operational strain of managing thousands of short-lived agent credentials — each spun up dynamically, each needing verifiable provenance and revocation. Security leads worry about a new kind of “shadow identity” risk, where agents operate outside the current IAM visibility model.

Why Traditional IAM Architectures Are Not Suitable for Agentic Systems

Identity Persistence vs. Agent Ephemerality:
Conventional IAM systems rely on static or semi-persistent identities (users, service accounts, API keys). Agentic systems operate with ephemeral, rapidly instantiated agents whose lifecycles may last seconds. IAM must evolve toward ephemeral credential issuance, context-bound authentication, and automated revocation tied to runtime telemetry and agent state.
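
To make this concrete, here is a minimal sketch of what short-lived, context-bound credential issuance could look like, using R’s jose and openssl packages. The issuer, agent name, audience, and sixty-second lifetime are all invented for illustration; a real deployment would tie issuance and revocation to the agent orchestration platform.

```r
# Minimal sketch: mint a short-lived, context-bound credential for an
# ephemeral agent. All names and lifetimes below are illustrative.
library(openssl)
library(jose)

issuer_key <- rsa_keygen(2048)          # the IAM issuer's signing key

claim <- jwt_claim(
  iss = "iam.example.com",              # hypothetical issuer
  sub = "agent-42",                     # the ephemeral agent's identity
  aud = "orders-service",               # context binding: one target service
  exp = unclass(Sys.time()) + 60        # credential expires with the agent
)

token <- jwt_encode_sig(claim, key = issuer_key)

# A relying service verifies the signature; the exp claim lets it
# reject the token once the agent's short lifetime has elapsed.
jwt_decode_sig(token, pubkey = issuer_key$pubkey)
```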

Static Policy Models vs. Adaptive Agent Behavior:
Role- and attribute-based access control (RBAC/ABAC) frameworks assume stable roles and predictable intent. Agentic AI introduces goal drift and behavioral evolution, requiring adaptive authorization models driven by continuous policy evaluation, reinforcement signals, and runtime behavioral baselining.

Opaque Audit Trails vs. Cryptographically Verifiable Provenance:
Traditional logging mechanisms cannot reconstruct complex, multi-agent decision chains. Future IAM must embed verifiable provenance — linking every action to a unique agent identity, signed attestation, and timestamp — enabling non-repudiation, forensic replay, and accountability across distributed agent networks.
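
As an illustration only, here is a toy version of such a signed audit record in R, using the openssl package. The agent ID, action, and record format are invented; a real system would anchor keys in a registry and likely rely on hardware-backed attestation.

```r
# Toy illustration: sign an agent action record so it can be verified
# later (non-repudiation and forensic replay). Fields are invented.
library(openssl)
library(jsonlite)

agent_key <- rsa_keygen(2048)           # per-agent signing key

record <- as.character(toJSON(list(
  agent_id  = "agent-7f3a",             # unique agent identity
  action    = "read:customer_ledger",   # the action being attested
  timestamp = format(Sys.time(), "%Y-%m-%dT%H:%M:%SZ", tz = "UTC")
), auto_unbox = TRUE))

sig <- signature_create(charToRaw(record), sha256, key = agent_key)

# Any auditor holding the agent's public key can verify the record:
signature_verify(charToRaw(record), sig, sha256, pubkey = agent_key$pubkey)
```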

Static Privilege Boundaries vs. Autonomous Escalation:
Agents can probe environments and autonomously grant or delegate privileges via exposed APIs or inter-agent collaboration. This necessitates real-time privilege attestation, continuous risk scoring, and collusion detection mechanisms to enforce least privilege dynamically.

Human-Centric Trust Models vs. Machine-Driven Collaboration:
Current IAM protocols (OAuth2, OIDC, SAML) were designed for human–service or service–service trust. In multi-agent ecosystems, we need machine-to-machine trust fabrics using Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), mutual TLS, and zero-trust inter-agent authorization to maintain integrity across autonomous communication channels.

Figure: Key priorities for managing NHIs in a MAS

Recent Research

Recent research is formalizing the standards required for this shift, characterizing the current period as the Protocol-Oriented Interoperability phase (2024–2025). Addressing the delegation challenge, the IETF published a draft in May 2025 for an OAuth 2.0 Extension: On-Behalf-Of User Authorization for AI Agents. This extension introduces parameters like requested_actor and actor_token to authenticate the agent and document the explicit delegation chain in access tokens. Concurrently, protocols like Agent-to-Agent (A2A) for peer communication and the Model Context Protocol (MCP) for secure tool invocation are maturing. Furthermore, evaluating the ontological robustness of agents is being standardized through frameworks like Agent Identity Evals (AIE), which measure stability properties such as continuity, consistency, and recovery.
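
As a rough sketch only, the token-endpoint leg of that delegation flow might look like the following, expressed here with R’s httr2 package. The endpoint, client credentials, code, and token values are invented placeholders, and the parameter placement reflects my reading of the draft, so treat the IETF text as normative.

```r
# Rough sketch of the draft's token request. As I read the draft,
# requested_actor is sent earlier, in the authorization request, so the
# user approves delegation to a specific agent; the agent then presents
# actor_token here to authenticate itself. All values are placeholders.
library(httr2)

request("https://auth.example.com/oauth2/token") |>
  req_auth_basic("orchestrator-client", "client-secret") |>
  req_body_form(
    grant_type  = "authorization_code",
    code        = "SplxlOBeZQQYbYS6WxSbIA",  # code bound to the approved requested_actor
    actor_token = "eyJhbGciOiJSUzI1NiJ9..."  # authenticates the agent itself
  ) |>
  req_dry_run()                              # print the request; nothing is sent
```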

Looking ahead

I see this as a challenge, but also a great opportunity, for us security architects: we need to reimagine identity from first principles, designing for autonomous, adaptive, non-human actors. This isn’t about extending old IAM models; it’s about building new trust fabrics grounded in cryptographic provenance, dynamic intent, and zero-trust collaboration. The architectures we design today will determine not only how securely these agents operate, but how trust itself is represented, delegated, and enforced in the digital ecosystems of the future.

As enterprises, and eventually society at large, grow increasingly dependent on intelligent systems, identity becomes the new fabric of trust. When machines act alongside us, the question isn’t just how we secure them, but how we preserve trust, accountability and intent in a world where human and machine agency converge.

Securely store API keys in R scripts with the “secret” package

When we use an API key to access a secure service through R, or when we need to authenticate to access a protected database, we need to store this sensitive information somewhere in our R code. The typical practice is to include those keys as strings in the R code itself — but, as you might have guessed, that isn’t secure. By doing so, we are storing our private keys and passwords in plain text on our hard drive. And since most of us use GitHub to collaborate on our code, we may also end up unknowingly publishing those keys in a public repo.

Now there is a solution to this: the “secret” package, developed by Gábor Csárdi and Andrie de Vries for R. The package uses public-key cryptography and works with your existing SSH keys, providing R functions that let us create a vault of keys on our local hard drive, define trusted users who can access those keys, and include encrypted keys in R scripts or packages that can only be decrypted by the person who wrote the code, or by people he or she trusts.
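
Here is a minimal sketch of the workflow; the vault path, email address, and key locations are placeholders for your own, while the function names come from the package’s documented API:

```r
# Minimal sketch of the "secret" workflow; paths and names are placeholders.
library(secret)

vault <- file.path(tempdir(), "vault")    # use a real project path in practice
create_vault(vault)

# Register a trusted user, identified by their public SSH key
add_user("alice@example.com", "~/.ssh/id_rsa.pub", vault = vault)

# Store an API key, encrypted so only the listed users can read it
add_secret("payment_api_key", "sk-not-a-real-key",
           users = "alice@example.com", vault = vault)

# Later, inside a script: decrypt with the matching private key
get_secret("payment_api_key", key = local_key(), vault = vault)
```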

Here is the presentation by Andrie de Vries at useR!2017, where they demoed this package, and here is the package itself.

 

The importance of security in IoT

Wikipedia’s definition of IoT is:

The Internet of Things (IoT) is the network of physical objects or “things” embedded with electronics, software, sensors and connectivity to enable it to achieve greater value and service by exchanging data with the manufacturer, operator and/or other connected devices. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure.

To put it even more simply, IoT describes a world where objects communicate seamlessly with each other, and with humans too.

IoT is such an important area of focus today that there is even a search engine for IoT, found here, which provides a geographical index of where things are, who owns them, and how and why they are used.

The graph below (courtesy Verizon DBIR 2015) shows the forecast scale of growth of IoT devices over the following five years.

B2B Internet of Things connections, 2011 to 2020 (forecast)

There was a humorous definition of “Big Data” trending on Twitter recently, and I found it to be quite true. Big Data has become one of the most popular terms used by IT professionals, businesses, product companies, and individuals who have anything to do with data or information. But only a few actually understand the concept and use the relevant tools in the right places. Product companies have been using “Big Data” as a key marketing buzzword.

Similarly, “IoT” is becoming one of the most widely used terms across tech and non-tech industries alike. There are conferences held on IoT, marketing initiatives running in full swing in this domain, and every company is in a rush to introduce products in this category.

The following infographic captures the already prevalent impact of IoT on our lives (image source: http://cdn.xenlife.com.au):

Impact of IoT on our daily lives

But very few people, companies and institutions are actually spending time and effort to understand the big picture: studying and discussing the larger implications of IoT for industry, our daily lives, and society as a whole, and building products and solutions around them.

The International Journal of Computer Trends and Technology is one venue publishing research in this area. The paper An Algorithmic Framework Security Model for Internet of Things is a definite read; it describes one approach to understanding and implementing IoT technologies without compromising the security, privacy and integrity of information.

These lines from the paper set the context for the whole situation:

The biggest role researchers are obliged to undertake is to find and advance the best algorithms for enhancing secure use of Internet of Things especially cutting across different application environments.

The basis of coming up with a security model for Internet of things (IoT) is on the understanding of the source of concern from the functionality modalities of Internet of Things. The functional modalities hereby refer to the different application environments where IoT are applicable, such as health, agriculture, retail, transport and communication, the environments both virtual and physical as well as many other potential areas of application depending on classifications employed at the point of discussions at hand.

Given also the possibilities that IoT have, to extend beyond present applications, especially enabled by emerging technologies in mobile and wireless computing, the scope of concerns from such a web of connectivity, should not be focused in defined areas but should have a broader scope.

The paper handles this issue in the following order:
  1. A world with IoT in place
  2. Problems with the situation
  3. Where should security start – the modalities involved – Lampson’s Access Matrix (see the sketch after this list)
  4. Augmented Approach Model for IoT Security – theoretical design
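
To make item 3 concrete, here is a toy rendering of Lampson’s access matrix in R. The subjects, objects, and rights are invented; the point is simply that a reference monitor consults subject-object cells before allowing any action.

```r
# Toy illustration of Lampson's access matrix: rows are subjects,
# columns are objects, cells hold the granted rights. All invented.
subjects <- c("patient_app", "farm_sensor", "retail_gateway")
objects  <- c("health_record", "soil_data", "sales_feed")

acm <- matrix("", nrow = length(subjects), ncol = length(objects),
              dimnames = list(subjects, objects))
acm["patient_app",    "health_record"] <- "read"
acm["farm_sensor",    "soil_data"]     <- "read,write"
acm["retail_gateway", "sales_feed"]    <- "read"

# A reference monitor consults the matrix before allowing an action:
can <- function(subject, object, right) {
  grepl(right, acm[subject, object], fixed = TRUE)
}
can("farm_sensor", "soil_data", "write")  # TRUE: granted
can("patient_app", "soil_data", "read")   # FALSE: not granted
```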

The Augmented Approach Model (AAM) is a good place to start; however, one area that will require further research is how interactions between the augmented IoT applications can be controlled, because code from numerous and possibly untrusted users and applications will be placed in the same security domain, which raises security and integrity concerns.

IoT security is a vast topic, and this is just the tip of the iceberg, with many nuances still unknown to us. I shall be writing more about this topic. There is no doubt about the potential of IoT in our lives; it is going to be one of humanity’s biggest creations of this century. For us to realise its true potential, we must learn from our mistakes of the last two decades of developing software without treating security as a design principle; the numerous cyber security breaches and incident reports of recent times are indicators of the impact of that omission. The repercussions of security compromises in IoT technologies can be far-reaching, as IoT touches various levels of our social, economic and political lives.

Here is a picture showing one such scenario (Image source: http://spectrum.ieee.org/)

IoT: We can’t hide

IoT is the future of technology beyond 2020, and it is one of the key tools to realise the United Nations Millennium Development Goals; building security principles into IoT technologies is going to be instrumental to its value to humanity.


Title Image courtesy: http://www.cmswire.com

Microsoft to end Patch Tuesday fixes

Microsoft recently showed, during its Ignite 2015 conference, some of the new security mechanisms embedded in Windows 10, which also bring a change in the software update cycle, reports @iainthomson of The Register.

Terry Myerson, head of the Windows Operating System division, took a shot at Google’s approach (or lack thereof) in his keynote last week:

Google takes no responsibility to update customer devices, and refuses to take responsibility to update their devices, leaving end users and businesses increasingly exposed every day they use an Android device.

Google ships a big pile of [pause for effect] code, with no commitment to update your device.

The article reports:

Myerson promised that with the new version of Windows, Microsoft will release security updates to PCs, tablets and phones 24/7, as well as pushing other software “innovations,” effectively putting an end to the need for a Patch Tuesday once a month.

And,

On the data protection side, Brad Anderson, veep of enterprise client and mobility, showed off a new feature in preview builds today: Microsoft’s Advanced Threat Analytics (ATA). This tries to sense the presence of malware in a network, and locks down apps to prevent sensitive data being copied within a device…

Using Azure, administrators can choose to embed metadata in files so that managers can see who read what document, when, and where from. If a particular user is trying to access files they shouldn’t, an alert system will let the IT manager know.

Well, controls like these have been around for some time, mostly implemented through third-party products, but it’s interesting to see Microsoft building these capabilities into the operating system itself.

Microsoft’s decision to release patches whenever they are ready, rather than on a fixed schedule, is definitely a move in the right direction, and it is in line with what Apple has been doing with Mac OS for quite some time.

Title Image Courtesy: blog.kaspersky.com