A group of national cyber security agencies has published new guidance on the careful adoption of agentic AI services, setting out the risks enterprises need to manage before autonomous agents are connected to business systems, sensitive data or critical infrastructure.
The guidance was co-authored by the Australian Signals Directorate's Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency, the US National Security Agency, the Canadian Centre for Cyber Security, New Zealand's National Cyber Security Centre and the UK's National Cyber Security Centre.
That list matters. This is not a vendor blog or a speculative market prediction. It is a coordinated signal from major cyber authorities that agentic AI is becoming operational enough, and risky enough, to require formal security planning.
What the guidance says
The agencies define agentic AI systems as those that can interpret context, reason, plan, use tools and take actions to achieve goals. Compared with traditional generative AI tools, agentic systems are more likely to connect to external data, memory, software interfaces and enterprise services.
That capability is what makes agents useful. It is also what changes the risk profile.
The guidance urges organisations to manage agentic AI risks within their existing security model and risk posture, to adopt agents with security in mind, and to avoid granting them broad or unrestricted access, especially to sensitive data or critical systems. It also recommends starting with low-risk and non-sensitive tasks while security practices, evaluations and standards mature.
For enterprise teams, the message is practical rather than anti-AI. Agents can automate repetitive and well-defined work, but they should not be dropped into production workflows as if they were ordinary software features.
Why agents create a different security challenge
AI agents inherit the risks of large language models, including prompt injection and manipulated inputs. But they also introduce wider operational risks because they can call tools, access systems, use external data, maintain memory and chain multi-step actions together.
The agencies point to several areas that enterprise buyers should treat seriously:
- Privilege risk: agents with excessive permissions can turn a small compromise or bad instruction into a wider business incident.
- Expanded attack surface: tools, data sources, APIs, memory and external integrations all create new paths for misuse or compromise.
- Accountability gaps: autonomous actions can become difficult to trace if ownership, logging and delegated authority are not designed clearly.
- Cascading behaviour: multi-step agent workflows can create failures that propagate through connected systems.
- Immature standards: governance and evaluation methods for agentic systems are still developing.
The procurement lesson is clear: an agent's technical capability is only part of the buying decision. Security architecture, identity, permissions, auditability, monitoring and reversibility now belong in the same conversation as workflow automation and productivity.
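To make the privilege risk concrete, here is a minimal sketch of what least-privilege scoping for an agent can look like in practice. The task names, tool names and data domains below are illustrative assumptions, not taken from the guidance; the point is the deny-by-default shape.

```python
from dataclasses import dataclass

# Hypothetical sketch: an agent's permissions scoped to one task.
# Anything not explicitly granted is denied by default.

@dataclass
class AgentScope:
    task: str
    allowed_tools: set[str]   # least privilege: only the tools this task needs
    allowed_data: set[str]    # only the data domains this task may touch

    def authorise(self, tool: str, data_domain: str) -> bool:
        """Permit a call only if both the tool and the data domain are in scope."""
        return tool in self.allowed_tools and data_domain in self.allowed_data

# An agent that summarises expenses gets no reach into HR or production systems.
scope = AgentScope(
    task="summarise-expenses",
    allowed_tools={"read_expense_reports", "draft_summary"},
    allowed_data={"finance"},
)

print(scope.authorise("read_expense_reports", "finance"))  # in scope
print(scope.authorise("read_expense_reports", "hr"))       # out-of-scope data: denied
print(scope.authorise("delete_records", "finance"))        # ungranted tool: denied
```

Under a design like this, a compromised prompt can only misuse what the task was granted, which is exactly why the agencies flag excessive permissions as the risk that turns a small compromise into a wider incident.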
What enterprise buyers should ask suppliers
For buyers evaluating AI agents, the guidance points towards a more disciplined set of questions. These should appear early in procurement, not after a pilot has already been built around broad access.
- What systems, data and tools can the agent access?
- Are permissions scoped to the task, user, role and risk level?
- Can high-risk actions require human approval?
- How are prompts, tool calls, decisions, approvals and outcomes logged?
- Can the agent be isolated, paused, rolled back or shut down quickly?
- How is agent behaviour tested before deployment and monitored after launch?
- Who is accountable when an agent acts on behalf of a user, team or process?
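Two of the questions above, human approval for high-risk actions and logging of tool calls and outcomes, can be sketched together in a few lines. The risk tiers, tool names and approval callback here are illustrative assumptions, not part of the guidance.

```python
import time

# Hypothetical sketch: every tool call is logged, and tools tagged high risk
# are blocked unless a human approver signs off.

HIGH_RISK = {"transfer_funds", "delete_records", "send_external_email"}
audit_log: list[dict] = []

def run_tool(tool: str, args: dict, approver=None) -> str:
    """Log the call; require human sign-off for high-risk tools before executing."""
    entry = {"ts": time.time(), "tool": tool, "args": args, "approved": None}
    if tool in HIGH_RISK:
        approved = bool(approver and approver(tool, args))
        entry["approved"] = approved
        if not approved:
            entry["outcome"] = "blocked"
            audit_log.append(entry)
            return "blocked: awaiting human approval"
    entry["outcome"] = "executed"
    audit_log.append(entry)
    return f"executed {tool}"

# A low-risk call goes straight through; a high-risk call is blocked by default.
print(run_tool("draft_summary", {"doc": "q3-report"}))
print(run_tool("transfer_funds", {"amount": 5000}))
```

The audit log is what closes the accountability gap the agencies describe: when every prompt-driven action leaves a timestamped record of who approved what, autonomous behaviour stays traceable after the fact.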
These are not just CISO questions. They affect legal, procurement, operations, finance and business unit owners because agentic AI sits between software automation and delegated decision-making.
What this means for suppliers
For AI agent vendors, the guidance raises the commercial bar in a useful way. Buyers will still care about speed, usability and measurable outcomes, but they will increasingly expect evidence that agents can be deployed safely inside existing enterprise controls.
Suppliers that can show least-privilege design, strong identity controls, audit trails, human-in-the-loop escalation, test evidence, secure integrations and clear incident response processes will have an advantage. Suppliers that rely only on impressive demos may struggle once procurement and security teams get involved.
The wider opportunity is also clear. Agent security, governance, observability, identity, runtime controls, evaluation and implementation support are becoming essential parts of the agentic AI ecosystem. The market is not just for agents. It is for the infrastructure that lets enterprises trust them.
The Agentic Expo angle
This guidance reinforces why Agentic Expo is being built as a B2B event, not a general AI showcase. Enterprise adoption will depend on buyers meeting suppliers that can answer the full deployment question: what the agent does, how it connects, how it is governed, how risk is contained and how value is proven.
The next phase of agentic AI will reward credible, production-ready companies. Security-first adoption is not a brake on the market. It is what will allow serious buyers to move from pilots into real operational use.