
Meta researchers unveil AI framework that rewrites its own problem-solving code

"Hyperagents" combine task execution with self-modification, raising both performance and safety questions

by Defused News Writer

Researchers at Meta, working with collaborators at several universities, have introduced a framework for artificial intelligence systems that can rewrite and optimise their own problem-solving logic, a concept they call "hyperagents".

A paper published by the team describes how hyperagents fuse task execution and meta-level modification into a single editable program capable of invoking large language models (LLMs), external tools or learned components.

The work addresses what the researchers identify as a fundamental bottleneck in current AI self-improvement architectures, which depend on fixed, human-designed meta-agents that stall when the pace of required maintenance exceeds human capacity.

"The core limitation of handcrafted meta-agents is that they can only improve as fast as humans can design and maintain them," Jenny Zhang, a co-author of the paper, told VentureBeat.

The paper draws a distinction between the hyperagent approach and existing systems such as Sakana AI's Darwin Gödel Machine, which can improve within coding domains but struggle to transfer those gains to subjective, non-coding tasks.

Meta's researchers extended the Darwin Gödel Machine into what they call DGM-Hyperagents (DGM-H), combining open-ended evolutionary search with metacognitive self-modification so the system can alter its own improvement mechanism rather than just its task-level outputs.
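In rough terms, the architecture the paper describes can be pictured as a single program that carries both its task-solving logic and the logic that rewrites it, evolved inside an open-ended search loop. The sketch below is our illustration of that idea; the class names, the toy archive-based loop, and the scoring interface are assumptions for exposition, not Meta's code.

```python
import random

# Illustrative sketch only: all names and the toy search loop are our
# assumptions about the idea described in the paper, not Meta's code.

class Hyperagent:
    """A single editable program holding both levels of logic."""

    def __init__(self, solver, improver):
        self.solver = solver      # task-level logic (could call an LLM or tool)
        self.improver = improver  # meta-level logic that rewrites the agent

    def solve(self, task):
        return self.solver(task)

    def self_modify(self):
        # The improver returns a whole new agent, so it can replace the
        # improver itself -- the improvement mechanism -- not just the solver.
        return self.improver(self)

def evolutionary_search(archive, tasks, iterations, score):
    """Open-ended search: mutate agents sampled from a growing archive."""
    for _ in range(iterations):
        parent = random.choice(archive)   # any ancestor may spawn a variant
        archive.append(parent.self_modify())
    return max(archive, key=lambda agent: score(agent, tasks))
```

The key difference from a fixed meta-agent is in `self_modify`: because the improver produces the entire next agent, improver included, the system's improvement mechanism is itself subject to change.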

"Hyperagents are not just learning how to solve the given tasks better, but also learning how to improve," Zhang said.

In experiments, hyperagents matched the original Darwin Gödel Machine on the Polyglot coding benchmark and outperformed open-source baselines in paper review and robotics tasks.

On an unseen Olympiad-level mathematics grading task, the system achieved an improvement metric of 0.630 after 50 iterations, compared with a flat 0.0 for classic Darwin Gödel Machine baselines.

The paper notes that during training the hyperagents autonomously built their own memory tools, performance trackers and compute-aware planning systems without human instruction.

The researchers have released the code under a non-commercial licence.

The authors acknowledge significant safety trade-offs accompanying systems that modify their own code.

Zhang said the guiding principle is to separate experimentation from deployment, allowing the agent to explore and improve within a controlled sandbox while enforcing resource limits and restricted external access as practical safeguards.
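The paper does not publish its sandbox code, but the pattern Zhang describes is a standard one: execute candidate code in an isolated child process under hard resource caps, and promote it only if it completes cleanly. A minimal sketch of that pattern, with placeholder limits of our choosing, might look like this:

```python
import resource
import subprocess
import sys

# Illustrative only: the specific limits below are placeholders, not the
# paper's actual sandbox configuration.

def limit_resources():
    # Applied in the child process just before it executes the candidate.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                 # 5 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (2 * 2**30, 2 * 2**30))  # 2 GB mem

def run_candidate(script_path):
    """Return True only if the candidate script finishes within its limits."""
    result = subprocess.run(
        [sys.executable, script_path],
        preexec_fn=limit_resources,  # POSIX-only hook run in the child
        capture_output=True,
        timeout=10,                  # wall-clock backstop
    )
    return result.returncode == 0
```

A real deployment would add network isolation and filesystem restrictions, which process-level resource limits alone do not provide.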

The paper also warns of "evaluation gaming", where self-modifying systems optimise for benchmark scores rather than genuine capability, and calls for diverse evaluation protocols and human oversight before any modified code is promoted to production environments.

The research sits within a growing body of work across major AI laboratories exploring how far autonomous self-improvement can be pushed before the risks of unchecked recursive modification outweigh the performance gains.

The recap

  • Meta researchers publish hyperagent framework for self-improving AI.
  • Hyperagent math grading score reached 0.630 in 50 iterations.
  • Code is available under a non-commercial licence for researchers.