
China's agentic AI regulations are the most detailed in the world, and the West has nothing comparable

Beijing's draft framework for autonomous software agents tackles questions about decision-making boundaries, human override rights and sector-specific standards that neither the EU's AI Act nor any US legislation has addressed at this level of specificity

by Defused News Writer
Photo by Victor He / Unsplash

While Washington debates whether to regulate AI at all and Brussels refines enforcement of a framework designed primarily for large language models, Beijing has quietly published what may be the most comprehensive regulatory blueprint for the next phase of artificial intelligence: autonomous software agents that can take actions in the real world without being told to do so step by step.

The draft regulations, jointly released on 8 May by the Cyberspace Administration of China, the National Development and Reform Commission and the Ministry of Industry and Information Technology, mark the first time any major government has treated agentic AI as a distinct category requiring its own governance framework rather than an extension of existing rules for chatbots and generative models.

The distinction matters because agents are fundamentally different from the AI systems that current regulations were designed to govern. A chatbot generates text in response to a prompt.

An agent can browse the web, write and execute code, send emails, book appointments, make purchases and interact with other software systems autonomously.

The risks are correspondingly different: a chatbot that hallucinates produces a wrong answer; an agent that hallucinates can take a wrong action, one that may be difficult or impossible to reverse.

Beijing's draft addresses this directly by requiring developers to establish explicit decision-making boundaries for their agents, distinguishing between three tiers of autonomy: decisions that must be made by the user, decisions that require user authorisation before the agent proceeds, and decisions the agent is permitted to take autonomously.

For the third category, the draft mandates that users retain both the right to know what decisions the agent has made and the final decision-making power to override them.
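The three tiers described in the draft amount to a policy table sitting in the agent's control layer: every action is routed by tier before it runs. A minimal sketch in Python of how such gating might look, where the action names, tier assignments, and return messages are purely illustrative assumptions, not anything the draft specifies:

```python
from enum import Enum

class Autonomy(Enum):
    USER_ONLY = 1        # decision must be made by the user
    NEEDS_APPROVAL = 2   # agent must obtain user authorisation first
    AUTONOMOUS = 3       # agent may act alone; action is logged and
                         # the user retains the power to override it

# Hypothetical mapping of agent actions to autonomy tiers.
POLICY = {
    "draft_email": Autonomy.AUTONOMOUS,
    "send_email": Autonomy.NEEDS_APPROVAL,
    "make_purchase": Autonomy.USER_ONLY,
}

def dispatch(action: str, approved: bool = False) -> str:
    """Route an agent action according to its autonomy tier."""
    # Unknown actions default to the strictest tier.
    tier = POLICY.get(action, Autonomy.USER_ONLY)
    if tier is Autonomy.USER_ONLY:
        return "blocked: user must perform this action"
    if tier is Autonomy.NEEDS_APPROVAL and not approved:
        return "pending: user authorisation required"
    return "executed and logged for user review"
```

The point of the sketch is the shape of the framework, not the specifics: autonomy is a property assigned per decision, the default is the most restrictive tier, and even fully autonomous actions leave a record the user can inspect and reverse.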

The framework imposes a tiered regulatory structure based on risk. Agents operating in sensitive sectors, including healthcare, transportation, media, public safety, financial risk assessment and government procurement, will face mandatory filing requirements, testing, certification and the possibility of product recalls.

Lower-risk applications, such as office productivity tools and entertainment, will be governed through lighter-touch mechanisms including self-assessment, platform governance and industry self-regulation.

The scope of applications Beijing envisages for agents is striking in its specificity. The draft lists tasks including marking homework, analysing medical images, evaluating employee performance and recommending promotions, aiding disaster relief operations, and managing the entire bidding and tendering process for government contracts.

Each of these carries obvious risks if an autonomous system makes errors or operates beyond its intended scope, and the draft's insistence on defined boundaries and human override reflects a recognition that the technology's capabilities are advancing faster than the institutional capacity to supervise it.

The regulations also address multi-agent systems, scenarios in which multiple AI agents interact with each other to complete complex tasks, a rapidly growing area of development that raises questions about accountability when no single agent is responsible for an outcome.

China's approach stands in contrast to the regulatory landscape in the West. The EU's AI Act, whose provisions take effect in stages through 2025 and 2026, classifies AI systems by risk level but was drafted before agentic AI emerged as a mainstream capability and does not specifically address autonomous decision-making by software agents.

The United States has no federal AI legislation of any kind, and the executive orders issued by both the Biden and Trump administrations focused on model safety testing and export controls rather than the behaviour of deployed agents.

The draft is open for public comment and is expected to take effect later this year. Its emergence alongside China's aggressive investment in domestic AI capabilities, including DeepSeek's $7 billion fundraise and Baidu's decision to spin out its chip unit Kunlunxin for independent listing, suggests Beijing is pursuing a coordinated strategy: build the technology, fund the companies, and write the rules simultaneously.

For Western policymakers, the Chinese draft presents an uncomfortable question.

The country that is most often criticised for its approach to technology governance has produced the most detailed and forward-looking regulatory framework for the AI capability that is most likely to affect how businesses and governments operate over the next decade.

Whether the rules will be enforced as written, and whether China's model of state-directed innovation can coexist with meaningful user protections, remains to be seen. But the framework itself is more advanced than anything the EU or the US has proposed.

The recap

  • China's Cyberspace Administration published draft rules on AI agents.
  • Thailand approved TikTok datacenter project worth ฿842 billion.
  • The draft urges participation in international fora to develop standards.