
Five Eyes intelligence alliance warns agentic AI poses growing cybersecurity threat

Security agencies from five nations urge organisations to treat autonomous AI systems as a significant expansion of their attack surface

by Defused News Writer

Intelligence agencies from all five Five Eyes nations have jointly warned that the rapid deployment of agentic artificial intelligence systems poses serious cybersecurity risks, urging organisations to slow their rollout and treat autonomous AI as a fundamental expansion of their attack surface.

The guidance, published simultaneously by signals intelligence and cybersecurity agencies in Australia, the United States, Canada, New Zealand and the United Kingdom, represents one of the most significant coordinated interventions by western security establishments on AI risk to date.

Agentic AI refers to systems that can act autonomously, making decisions, executing tasks and interacting with other software without continuous human direction.

The technology is increasingly present in critical infrastructure and defence roles, the agencies said, creating what they described as an interconnected attack surface that malicious actors can exploit.

The document catalogues 23 distinct risks and sets out more than 100 best practices, reflecting the breadth of concern across the alliance.

To illustrate how things can go wrong, the guide reproduces scenarios in which agents granted overly broad permissions carry out unintended actions across connected systems.

In one example, an agent instructed to apply a security patch and clean up firewall logs cascades through linked infrastructure in ways its operators did not anticipate, a failure mode that emerges when other tools come to implicitly trust an agent's outputs.

The Five Eyes alliance, comprising Australia, Canada, New Zealand, the United Kingdom and the United States, has traditionally focused its public guidance on state-sponsored hacking, ransomware and espionage.

The decision to issue joint guidance specifically on agentic AI signals that the alliance views autonomous systems as a strategic-level concern rather than a niche technical issue.

The agencies called on vendors to make their products fail-safe by default, meaning systems should shut down or limit their actions when encountering unexpected conditions rather than pressing ahead autonomously.
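In practice, fail-safe behaviour of this kind amounts to defaulting to a halt whenever an action falls outside what was anticipated. A minimal sketch, assuming a hypothetical allow-list of agent actions (the names and structure here are illustrative, not drawn from the guidance itself):

```python
# Illustrative sketch of "fail-safe by default": the wrapper halts on
# anything unexpected rather than pressing ahead autonomously.
# ALLOWED_ACTIONS and run_action are hypothetical names for this example.

ALLOWED_ACTIONS = {"read_log", "apply_patch"}

def run_action(action: str, execute) -> str:
    """Run an agent action, halting instead of proceeding on anything unexpected."""
    if action not in ALLOWED_ACTIONS:
        # Fail safe: refuse actions that were never explicitly authorised.
        return f"halted: '{action}' is not on the allow-list"
    try:
        return execute()
    except Exception as exc:
        # Unexpected condition: stop and report rather than continue acting.
        return f"halted: {exc}"

print(run_action("delete_firewall_logs", lambda: "done"))   # refused: not allow-listed
print(run_action("apply_patch", lambda: "patch applied"))   # permitted action runs
```

The design choice is the default: anything not explicitly permitted, or anything that errors, stops the agent instead of letting it improvise.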

Security teams were urged to develop threat intelligence specifically tailored to agentic AI, a discipline that barely exists in most organisations today.

The guidance recommends deploying agents incrementally, starting with low-risk tasks and continuously assessing evolving threats before expanding their responsibilities.

The document's concluding message was blunt: organisations must assume that agentic AI systems may behave unexpectedly.

Strong governance, explicit accountability, rigorous monitoring and human oversight are essential prerequisites rather than optional safeguards, the agencies said.

The warning arrives as businesses across multiple sectors are racing to deploy autonomous AI agents for tasks ranging from customer service to software development, supply chain management and cybersecurity itself, often with limited understanding of the cascading risks those systems can introduce.

The recap

  • Five Eyes publish joint guidance on agentic AI risks and adoption
  • Guide lists 23 risks and more than 100 best practices
  • Recommends incremental deployment starting with clearly defined low‑risk tasks
