Securing Agentic AI: Just-in-Time Access for A2A Workflows
April 2025 / 7 min. read

Google recently launched its Agent2Agent (A2A) protocol, which is shaping up to be the future of how AI agents, services, workloads, containers, and more communicate with one another.
A2A enables workloads to authenticate using short-lived, verified identities. “Agent cards” describe and advertise what each service or agent can do. Tokens, keys, OAuth, and similar mechanisms are used for authentication and basic authorization.
An Agent Card is like a digital profile or resume for an AI agent. It describes what the agent can do, what data or services it needs access to, what kinds of tasks or requests it accepts, and how it communicates (supported protocols and input/output modalities such as text, video, and images).
Think of it like API documentation, but for an intelligent agent rather than just a service. Other agents can use this card to discover, trust, and collaborate with the agent in a structured, secure way.
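To make that concrete, here is a minimal, illustrative Agent Card sketched as a Python dict. The field names loosely follow the published A2A Agent Card schema, but treat this as a sketch rather than a spec-complete document; the endpoint and skill are invented for the example.

```python
# A minimal, illustrative Agent Card. Field names loosely follow the A2A
# Agent Card schema; the endpoint, skill, and values are hypothetical.
agent_card = {
    "name": "remediation-agent",
    "description": "Fixes misconfigured cloud storage buckets on request.",
    "url": "https://agents.example.com/remediation",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False, "pushNotifications": True},
    "defaultInputModes": ["text/plain", "application/json"],
    "defaultOutputModes": ["application/json"],
    "skills": [
        {
            "id": "lock-down-bucket",
            "name": "Lock down storage bucket",
            "description": "Revokes public access on a flagged bucket.",
        }
    ],
}
```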
In the A2A world, agents function like users or services with permissions. Some might access sensitive data, trigger downstream systems, execute code, or change configurations.
That shifts the privileged identity management (PIM) conversation from “How do we manage human admins and service accounts?” to “How do we manage autonomous decision-makers?” That is an eye-opening shift, and it opens an entirely new threat landscape.
Today, most machine access is granted through non-human identities, which are often static credentials with long lifespans and broad access. These are easy to misuse, difficult to audit, and typically over-provisioned. For example, when two backend microservices need to talk to each other, the connection is usually secured with a pre-shared key or long-lived access token. If that credential is hijacked, every privilege attached to it becomes available to the attacker.
Shifting Access Management: From Users to Autonomous AI Agents
Just like with human users and non-human identities, giving standing privileges to AI agents is risky. A2A is well suited for just-in-time access provisioning, where permissions are granted only while the agent is actively working. You can also scope access to specific tasks and have it expire automatically afterward. This aligns perfectly with modern privileged access management (PAM) best practices: zero standing privileges through just-in-time access.
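As a sketch of what that looks like in practice, here is a hypothetical just-in-time grant request. The `pam` client and its `request_grant` method are stand-ins for whatever PAM or authorization service you run; the point is that the grant is task-scoped, justified, and self-expiring.

```python
import datetime

# Hypothetical JIT grant request: scoped to one task, carries a
# justification, and expires on its own rather than standing forever.
def request_jit_access(pam, agent_id: str, finding_id: str):
    return pam.request_grant(
        principal=agent_id,
        scope=["storage.buckets.update"],            # task-scoped, not role-wide
        justification=f"remediate finding {finding_id}",
        ttl=datetime.timedelta(minutes=15),          # auto-expires afterward
    )
```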
Another key point is auditability. With AI agents making decisions and accessing resources, PAM concerns now extend to why access was requested, who made the request (and on whose behalf), and what context led to the decision.
For example: Agent A delegates a task to Agent B. Why? What were the inputs and what policy was triggered?
That means PAM tools need to evolve. It’s no longer just about tracing role-based access. It’s about understanding inter-agent reasoning and communication paths. This changes how we think about logs, alerting, and forensics.
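A log entry in this world might look less like a flat access record and more like the sketch below. Every field name here is illustrative, but note what it captures: the delegation chain, the inputs that drove the decision, and the policy that fired.

```python
import datetime
import json

# An illustrative A2A-era audit record: not just "who accessed what," but
# which agent acted on whose behalf and why. All field names are invented.
audit_event = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "actor": "agent-b",
    "on_behalf_of": "agent-a",              # delegation chain, not just the caller
    "delegation_inputs": {"finding_id": "f-1234"},
    "policy_triggered": "public-bucket-violation",
    "action": "storage.buckets.update",
}
print(json.dumps(audit_event, indent=2))
```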
Traditional PAM defines “who can do what.” With A2A, we must now also define:
- Who can talk to whom
- What topics agents are allowed to discuss
- What actions they’re allowed to suggest or approve (see the policy sketch below)
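Here is one way such a policy could be expressed, using a hypothetical enforcement layer. The agent names, topics, and actions are all invented for illustration.

```python
# A sketch of an inter-agent policy: who can talk to whom, about which
# topics, and which actions they may suggest. Everything here is illustrative.
A2A_POLICY = {
    ("monitoring-agent", "policy-agent"): {
        "topics": {"config-findings"},
        "may_suggest": {"open-violation"},
    },
    ("policy-agent", "remediation-agent"): {
        "topics": {"violations"},
        "may_suggest": {"lock-down-bucket"},  # suggest, not unilaterally approve
    },
}

def is_allowed(sender: str, receiver: str, topic: str) -> bool:
    rule = A2A_POLICY.get((sender, receiver))
    return rule is not None and topic in rule["topics"]

print(is_allowed("policy-agent", "remediation-agent", "violations"))      # True
print(is_allowed("remediation-agent", "monitoring-agent", "violations"))  # False
```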
Another new challenge? Privilege chaining.
Say Agent A has read access, and Agent B has write access. Separately, they seem harmless. But working together, they might unintentionally exfiltrate or alter data.
That’s privilege escalation through cooperation, a completely new dimension for PAM to address.
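A simple sketch of what detecting that might look like: scan pairs of agents and flag combinations whose merged grants form a read-plus-write path that neither agent has alone. The permission names are illustrative.

```python
from itertools import combinations

# Flag agent pairs whose combined grants form an exfiltration/tamper path
# that neither agent has on its own. Permission names are illustrative.
grants = {
    "agent-a": {"storage.objects.read"},
    "agent-b": {"storage.objects.write"},
}

EXFIL_PATH = {"storage.objects.read", "storage.objects.write"}

def chained_risks(grants: dict[str, set[str]]) -> list[tuple[str, str]]:
    risky = []
    for a, b in combinations(grants, 2):
        combined = grants[a] | grants[b]
        # Neither agent alone can both read and write, but the pair can.
        if EXFIL_PATH <= combined and not (
            EXFIL_PATH <= grants[a] or EXFIL_PATH <= grants[b]
        ):
            risky.append((a, b))
    return risky

print(chained_risks(grants))  # [('agent-a', 'agent-b')]
```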
Extending PAM to the Age of A2A Communication
A2A is the next-gen approach to managing machine identity, and it needs to be secured with the same rigor we apply to other non-human identities.
Let’s say an agent needs to perform a task. We should grant temporary permission, let it act, and then revoke that access, just as we do with humans, to enable a just-in-time, zero-standing-privilege model. That also brings governance, visibility, and control into the picture, extending PAM to these non-human machines and services. If we don’t want humans holding privileged access 24x7 when it isn’t needed, why allow machines to have it?
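That lifecycle maps naturally onto a context manager, sketched below with a hypothetical `pam` client. The property that matters is that revocation runs even when the task fails, so no access is left standing.

```python
from contextlib import contextmanager

# Grant → act → revoke, mirroring how we treat human admins. The `pam`
# client is a hypothetical stand-in; revocation runs even on failure.
@contextmanager
def jit_session(pam, principal: str, scope: list[str], ttl_minutes: int = 15):
    grant = pam.grant(principal=principal, scope=scope, ttl_minutes=ttl_minutes)
    try:
        yield grant
    finally:
        pam.revoke(grant)  # zero standing privileges once the task ends

# Usage (hypothetical): the agent holds write access only inside the block.
# with jit_session(pam, "remediation-agent", ["storage.buckets.update"]) as grant:
#     fix_bucket("pii-exports", credentials=grant)
```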
Let’s walk through an agent-to-agent use case that auto-remediates a misconfigured cloud storage bucket (a runnable sketch follows the list).
- Agent A (Monitoring Agent) detects that a cloud storage bucket has been accidentally set to public.
- Agent B (Policy Agent) confirms this violates company security policy.
- Agent C (Remediation Agent) requests just-in-time permission to fix it.
- Once approved, Agent C revokes public access and locks down the bucket.
- Agent D (Audit Agent) logs the entire interaction: what happened, who did what, and why.
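Here is a runnable sketch of that flow with stubbed agents and a stubbed PAM client. Everything in it is illustrative; the shape to notice is that write access exists only inside the just-in-time window, and every step lands in the audit trail.

```python
from contextlib import contextmanager

# Illustrative four-agent remediation flow with stubbed components.
audit_trail: list[dict] = []

class StubPAM:
    def grant(self, principal: str, scope: list[str]) -> dict:
        audit_trail.append({"event": "grant", "principal": principal, "scope": scope})
        return {"principal": principal, "scope": scope}

    def revoke(self, grant: dict) -> None:
        audit_trail.append({"event": "revoke", **grant})

@contextmanager
def jit_session(pam: StubPAM, principal: str, scope: list[str]):
    grant = pam.grant(principal, scope)  # same grant → act → revoke pattern as above
    try:
        yield grant
    finally:
        pam.revoke(grant)

def remediate_public_bucket(bucket: str, pam: StubPAM) -> None:
    finding = {"bucket": bucket, "issue": "public-access"}   # Agent A detects
    if finding["issue"] != "public-access":                  # Agent B confirms the violation
        return
    with jit_session(pam, "remediation-agent",               # Agent C gets JIT permission
                     ["storage.buckets.update"]) as grant:
        audit_trail.append({"event": "lock-down", "bucket": bucket,
                            "actor": grant["principal"]})    # public access revoked here
    audit_trail.append({"event": "summary",                  # Agent D records the why
                        "who": "remediation-agent",
                        "what": "revoked public access",
                        "why": "public bucket violated policy"})

remediate_public_bucket("pii-exports", StubPAM())
for record in audit_trail:
    print(record)
```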
Agentic AI acting on behalf of users is no longer hypothetical. With Google’s A2A, we’re watching machine identity evolve in real time, from something static and fragile into something dynamic, intelligent, and even context-aware.
But with this power comes risk. The same way we learned to lock down human admin access, we now have to apply that same discipline to machines. If agents can think, decide, and act, then they also need to be governed, observed, and limited, just like any privileged user.
A2A doesn’t just introduce a new communication protocol; it introduces a new security perimeter, one that’s fast-moving, non-human, and policy-driven.
Organizations that adapt early will gain a massive advantage in control, agility, and trust. Those that ignore it will face a new category of shadow access, one they won’t even see coming.
The shift is happening. Identity is no longer just about human users, and PAM needs to keep up.