
AI agent vulnerabilities seen as greater threat than human misuse, study finds

Keyfactor survey reveals widespread concern over identity and governance gaps in autonomous systems

by Defused News Writer

Nearly seven in ten cybersecurity professionals believe vulnerabilities in artificial intelligence agents and autonomous systems pose a greater threat than malicious human actors, according to a study by digital identity security company Keyfactor.

The Digital Trust Digest: AI Identity Edition, based on a survey of 450 cybersecurity professionals in North America and Europe, found that 69% of respondents ranked AI-agent vulnerabilities as the more serious risk.

A further 86% said AI agents cannot be trusted without unique, dynamic digital identities, while 85% predicted that such identities would become as widespread as those for humans and machines within five years.

Jordan Rackie, chief executive of Keyfactor, said, “As businesses race to deploy autonomous AI systems, the security infrastructure to protect them is falling dangerously behind.”

Despite the concern, only half of respondents said their security teams had governance frameworks in place for agentic AI systems. Just 28% believed they could prevent a rogue AI agent from causing damage, and 55% of security leaders said their executive teams were not treating agentic AI risks with sufficient seriousness.

On software supply chain risk, 68% of organisations admitted lacking full visibility or governance over AI-generated code contributions.

Ellen Boehm, senior vice-president of Internet of Things and AI Identity Innovation at Keyfactor, said, “Vibe coding offers tremendous benefits for DevSecOps teams, but also significant risk if not secured appropriately.”

The survey was conducted by research firm Wakefield Research and targeted companies with at least 1,000 employees. Keyfactor has published the full findings on its website.

The Recap

  • Keyfactor finds 69% of respondents view AI agent vulnerabilities as the greater security risk.
  • 86% say unique, dynamic digital identities are required for AI agents.
  • Findings draw on a survey of 450 cybersecurity professionals in North America and Europe.