Google's latest research into prompt injection on the public web is a useful reality check for any organisation planning to connect AI agents to browsers, inboxes, documents, CRMs, finance systems, procurement workflows, or customer data.
The finding is simple: malicious indirect prompt injection attempts are already appearing in the wild, and Google says they are increasing. According to reporting on Google's analysis, researchers scanned Common Crawl snapshots for prompt injection patterns, then used Gemini and human review to reduce false positives. They found prank instructions, anti-crawling prompts, SEO manipulation attempts, and malicious prompts designed for exfiltration or destruction.
Google's researchers said they did not observe large volumes of advanced attacks at this stage, but the trend matters. SecurityWeek reports that malicious prompt injection attempts rose by 32% between November 2025 and February 2026, with Google warning that both scale and sophistication are expected to increase.
For enterprise buyers, this shifts prompt injection from a theoretical AI safety issue into an operational security question. If an agent can browse the web, read emails, ingest files, call APIs, update systems, or trigger workflows, external content becomes part of the attack surface.
Why indirect prompt injection matters
Direct prompt injection is familiar: a user tries to persuade a model to ignore its rules. Indirect prompt injection is more subtle. The malicious instruction is planted inside third-party content that the AI system later reads, such as a web page, email, document, support ticket, code repository, or knowledge-base article.
That matters because agentic systems are designed to act on information from the outside world. A traditional chatbot might summarise a web page. An AI agent might read the same page, decide what to do next, write to a system of record, email a colleague, create a task, or pass data into another tool. The more useful the agent becomes, the more important its boundaries become.
The practical risk is not that every hidden instruction will succeed. Most will not. The risk is that enterprises deploy agents into real workflows before they have clear controls over what information an agent may trust, what tools it may use, what data it may expose, and when it must stop for human review.
The buyer lesson: do not buy autonomy without controls
Enterprise adoption of AI agents is moving from pilots towards production. That creates pressure to evaluate tools quickly, but prompt injection research highlights why capability demos are not enough.
Buyers should ask suppliers how their agents handle:
- Untrusted input: whether web pages, emails, documents, and third-party data are treated differently from approved internal context
- Tool permissions: which actions an agent can take, which systems it can touch, and whether permissions are scoped by role, task, and risk level
- Data loss prevention: how sensitive information is detected and blocked before it is sent to an external destination
- Audit trails: how prompts, retrieved content, tool calls, approvals, and outcomes are logged for investigation
- Human escalation: when the agent is required to ask for approval rather than continue autonomously
- Red teaming: whether the supplier tests against indirect prompt injection, data exfiltration, and tool misuse scenarios
These are not just security-team questions. They are procurement, legal, compliance, finance, and operational questions. A business that cannot explain what its agents can and cannot do will struggle to approve them for higher-value workflows.
The supplier opportunity
For AI agent suppliers, Google's research is not bad news. It is a market signal.
The companies that win enterprise trust will not be the ones that simply show the most autonomous demo. They will be the ones that can show safe autonomy: granular permissions, policy enforcement, observability, secure connectors, sandboxed execution, approval workflows, retrieval controls, and clear reporting for security and governance teams.
This is especially important for vendors selling into regulated or operationally complex sectors such as finance, healthcare, legal, manufacturing, logistics, telecoms, government, and enterprise IT. In those environments, an agent's ability to act is valuable only if the organisation can govern that action.
What enterprises should do now
The immediate response is not to stop using agents. It is to separate low-risk assistance from higher-risk autonomy.
Agents that summarise internal documents require one level of control. Agents that browse the web and update CRM records require another. Agents that can move money, change entitlements, delete files, contact customers, or approve transactions need stronger approval gates, logging, and containment.
Enterprise teams should map agent use cases by action risk, data sensitivity, and external exposure. From there, they can decide which workflows are ready for automation, which need human-in-the-loop approvals, and which should remain off limits until the security model matures.
The Agentic Expo angle
Agentic Expo exists because the enterprise AI agent market is moving beyond abstract claims. Buyers need to see working products, but they also need to interrogate the security, governance, infrastructure, and integration layers behind them.
Google's prompt injection research is a reminder that agent adoption will not be decided by model capability alone. It will be decided by trust: can the agent be monitored, constrained, audited, and deployed safely into real business processes?
That is where the next phase of the market will be won.
Sources: Google Online Security Blog; SecurityWeek; AI News.