Amazon Web Services isn't just selling AI agent tools. It's using them to replace some of the work done by employees it recently let go.
According to reporting by The Information, AWS has deployed AI agents inside its sales and business development operations, targeting functions previously handled by staff who were laid off. The agents are being used specifically in the company's channel partner programme, which covers relationships with consultants and systems integrators, including Accenture, Deloitte, and Capgemini. Tasks being automated include answering deep technical questions from customers, work that was previously handled by AWS specialists.
AWS says it is not using AI to replace employees. Current and former employees, per The Information's reporting, suggest otherwise.
The higher-in-the-stack problem
The standard line from tech companies when they automate functions is that displaced workers can move to "higher-level" work. There is some truth in this, particularly for genuinely low-value busy work: responding to routine sales leads, triaging inbound queries, and generating boilerplate technical documentation. AI is well-suited to all of that.
But the question being asked with increasing urgency is how much stack is left to move up to. AWS's use of agents to handle what were specialist technical roles suggests the automation is moving up faster than the jobs are.
Amazon has a track record here. Few large companies are better at finding and extracting efficiencies. That this is happening inside one of the world's biggest cloud businesses, at the same time AWS is actively selling AI agent tools to enterprise customers, is also a significant applied AI story. AWS is its own best case study.
Agents at the RSA conference
The AWS story lands alongside a broader industry conversation about what AI agents actually are and how to keep them under control. At the RSA security conference in San Francisco this week, Cisco president and chief product officer Jeetu Patel used his keynote to address the security risks that come with deploying agents at scale.
Patel framed the challenge around three concerns: protecting agents from external threats, protecting systems and data from agents going rogue, and putting in place guardrails robust enough to catch problems before they become irreversible. He described the ideal security operation as an "agentic security operation centre," a standing team of defensive agents running countermeasures against attacker agents.
The concern isn't theoretical. Defenders are already planning for adversarial use of AI agents against critical infrastructure, including hospitals, power grids, and water systems. The difference now is that attacks and defences alike will operate at machine speed, well beyond what human security teams can match manually.
The multi-agent approach
Separately, a pattern is emerging among developers working with AI coding agents: rather than treating an agent as a single capable programmer, they are assigning multiple agents to play different roles simultaneously: one acting as a product manager, another as a spec writer, a third as a reviewer.
The approach mirrors how software development actually works in teams, with the idea being that a "contrarian" agent pushing back on the others produces better output than any single agent working alone. Models including xAI's Grok 4.2 have been observed running this kind of multi-agent setup natively.
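The wiring behind this pattern is simple enough to sketch. The toy below, which assumes nothing about any particular model or API, runs the same underlying model under three role prompts in sequence, feeding each agent's output to the next so the "contrarian" reviewer sees the spec writer's draft. The model call is a stub; in practice it would be an LLM API call.

```python
def run_team(task, model, rounds=1):
    # Role prompts stand in for separate "agents"; order matters, since
    # each role sees the previous role's output as context.
    roles = {
        "product_manager": f"Define requirements for: {task}",
        "spec_writer": "Turn the requirements into a spec.",
        "reviewer": "Act as a contrarian: find flaws in the spec.",
    }
    transcript = []
    draft = ""
    for _ in range(rounds):
        for role, prompt in roles.items():
            draft = model(role, prompt, context=draft)
            transcript.append((role, draft))
    return draft, transcript


def stub_model(role, prompt, context=""):
    # Placeholder for a real model call; just tags output with the role.
    return f"[{role}] response to: {prompt[:30]}"


final, log = run_team("build a CLI tool", stub_model)
```

The design choice worth noting is that the "team" is scaffolding around one model, not three different systems; the role prompts and the pass-the-draft loop are doing all the work, which is exactly why researchers expect future models to internalise this without being told.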
Whether this represents a genuine architectural advance or a workaround for current model limitations is contested. Researchers expect future models to handle task decomposition and role assignment without needing to be explicitly instructed to pretend to be different people. For now, the scaffolding is visible, and humans are still doing a lot of the directing.