Ping Identity has published new research, commissioned from KuppingerCole Analysts, warning that AI agents are moving into production faster than many enterprise identity systems can govern them.
The core issue is simple but serious. Traditional identity and access management was built around human users, static access decisions, and predictable application flows. AI agents behave differently. They can act continuously, call tools, chain tasks, spawn sub-agents, and combine permissions in ways that may be technically allowed but operationally unsafe.
Ping's announcement describes a critical failure mode for enterprise buyers: an agent may use individually legitimate permissions in unintended combinations, creating actions that bypass established controls or become difficult to trace. In agentic systems, access alone does not equal control.
Why this matters now
The market conversation around AI agents has moved quickly from capability to deployment. Enterprises are no longer only asking whether an agent can complete a workflow. They are asking whether it can be trusted with real systems, real data, and real consequences.
Ping says the shift is from managing identity to controlling how identities act across systems, data, and workflows. That distinction matters because an agent is usually acting on behalf of a person, team, customer, or business process. The security question is not just, "is this agent authenticated?" It is, "should this agent, representing this user, be allowed to take this action, in this context, right now?"
The company's research highlights specific risks including delegation opacity, sub-agent spawning, context leakage across systems, permission inheritance, and the limits of OAuth and OIDC models that assume a human decision-maker is present at key moments.
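To make the delegation and sub-agent risks concrete: OAuth 2.0 Token Exchange (RFC 8693) already defines a nested `act` (actor) claim that can record who is acting on whose behalf, with each sub-agent hop adding one more level of nesting. The sketch below shows what such a claims structure looks like and how a verifier might recover the full chain. All identifiers and scopes are hypothetical examples, not Ping's schema:

```python
# Claims for a delegated access token, following the nested "act"
# (actor) claim pattern from OAuth 2.0 Token Exchange (RFC 8693).
# All identifiers below are hypothetical examples.
claims = {
    "sub": "alice@example.com",          # the user the work is done for
    "scope": "crm.read tickets.write",
    "act": {                             # the agent acting for Alice
        "sub": "agent:support-assistant",
        "act": {                         # a spawned sub-agent, one hop deeper
            "sub": "agent:record-fetcher",
        },
    },
}

def delegation_chain(claims: dict) -> list[str]:
    """Walk nested 'act' claims to recover the full chain of actors."""
    chain = [claims["sub"]]
    actor = claims.get("act")
    while actor:
        chain.append(actor["sub"])
        actor = actor.get("act")
    return chain
```

The point of the nested structure is exactly the research's concern: if any hop in the chain is issued a token without the `act` claim, the delegation becomes opaque and the action can no longer be traced back to a human decision-maker.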
For enterprise AI programmes, these are not theoretical edge cases. They go directly to procurement, compliance, insurance, audit, legal responsibility, and operational resilience.
Runtime authorisation is becoming a buying criterion
Ping's proposed answer is runtime identity: treating agents as first-class identities, tying their actions to the humans or organisations they represent, and checking fine-grained policy at the moment each action is attempted.
That is different from granting an agent a broad credential and trusting it to behave. Runtime authorisation requires continuous evaluation. The represented user, the acting agent, the downstream tool, the requested action, the resource, the business policy, and the current context all matter.
This is especially important as agent architectures become more distributed. A simple chatbot may only answer questions. A production agent may retrieve customer records, update CRM fields, trigger payments, generate contracts, create support tickets, or hand work to another agent. Each of those tool calls needs a policy decision, not just a token.
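The per-call check described above can be sketched as a small policy decision point that evaluates every attempted action against the represented user, the acting agent, the tool, the resource, and the runtime context. Everything here, including the class, the rule shapes, and the field names, is an illustrative assumption, not Ping's implementation or any real policy engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionRequest:
    """One attempted tool call, evaluated at the moment it happens."""
    user: str        # the human or process the agent represents
    agent: str       # the acting agent's own identity
    tool: str        # the downstream tool being invoked
    action: str      # e.g. "read", "update", "trigger_payment"
    resource: str    # the target record or system
    context: dict    # runtime signals: time, risk score, session state

def decide(request: ActionRequest) -> str:
    """Return 'allow', 'deny', or 'escalate' for a single call.

    Illustrative rules only: a real policy engine would evaluate
    organisation-specific policy, not hard-coded checks.
    """
    # High-risk actions always require a human in the loop.
    if request.action in {"trigger_payment", "generate_contract"}:
        return "escalate"
    # Deny if the runtime risk signal is elevated, even if the
    # agent's static permissions would allow the call.
    if request.context.get("risk_score", 0) > 0.8:
        return "deny"
    # Otherwise allow; every decision should still be logged.
    return "allow"
```

The design point is that the decision happens per call and consumes runtime context, so the same agent with the same token can be allowed to read a record in one moment and stopped from triggering a payment the next.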
Ping's separate Google Cloud Agent Gateway integration blog makes the same point at the infrastructure level. It describes agent and tool traffic as a managed path where authentication, context-based authorisation, MCP tool policies, inspection, logging, and trace IDs can be applied before requests reach downstream agents, MCP servers, or tools.
What enterprise buyers should ask suppliers
For buyers, the practical lesson is to include agent identity and authorisation in early vendor evaluation. If a supplier's agent can act across systems, the procurement conversation needs to cover how those actions are constrained, observed, and reviewed.
Useful questions include:
- Agent identity: is each agent registered as a distinct identity, or is it using shared service credentials?
- Delegation: how does the platform prove which user, team, customer, or process the agent is acting on behalf of?
- Policy checks: are tool calls evaluated at runtime, or only controlled by permissions set during setup?
- High-risk actions: when does the agent need human approval before taking action?
- Traceability: can the organisation review prompts, context, tool calls, approvals, denials, and outcomes after the event?
- Sub-agent behaviour: can delegated or chained agent activity be traced end to end?
The answers will matter as much as model quality. An agent that performs well in a demo can still be a poor enterprise fit if it cannot be governed in production.
What this means for suppliers
For AI agent vendors, this is a clear signal that enterprise-grade security is moving beyond standard login and role-based access. Buyers will increasingly expect evidence of identity design, permission scoping, runtime controls, auditability, human escalation, and integration with existing IAM and security architecture.
That creates opportunity for agent platforms, infrastructure providers, security vendors, and implementation partners. The companies that help enterprises deploy agents safely will be central to the next phase of adoption.
It also raises the commercial bar. Suppliers will need to explain not only what their agents can do, but how they are prevented from doing the wrong thing.
The Agentic Expo angle
Ping Identity's research reinforces a pattern now visible across the agentic AI market: production adoption depends on governance, infrastructure, and trust as much as raw capability.
Agentic Expo is being built for that full buying conversation. Enterprise leaders need to compare market-ready agents, but they also need to meet the identity, security, orchestration, governance, and implementation specialists that make those agents usable at scale.
The next phase of agentic AI will not be won by the most impressive demo alone. It will be won by systems that can act, be controlled, be audited, and be trusted.