
OpenAI gives its developer API a built-in computer to run complex, multi-step AI tasks

The upgrade lets AI models propose and execute commands inside isolated virtual environments, handling files, databases and long-running jobs automatically

by Defused News Writer
Photo by Mariia Shalabaieva / Unsplash

OpenAI has upgraded its Responses API, the programming interface developers use to build AI-powered applications, with a set of tools that give artificial intelligence models the ability to operate like a computer user rather than a simple question-answering system.

The update reflects a broader shift in how AI is being deployed: away from models that answer single questions and toward agents, systems that plan and execute sequences of actions to complete longer, more complex tasks.

At the centre of the upgrade is a shell tool, which allows a model to propose typed commands of the kind a programmer would enter into a terminal window. The platform then executes each command inside a secure, isolated container: a self-contained virtual computing environment that cannot affect anything outside it.
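A minimal sketch of what enabling such a tool might look like in Python, the language the OpenAI SDK typically targets. The payload shape, the `"shell"` tool type, and the model name are illustrative assumptions for this example, not confirmed field names from OpenAI's documentation:

```python
# Hypothetical request payload enabling a shell tool in the Responses API.
# Field names here ("tools", "type": "shell") are assumptions made for
# illustration, not a documented schema.
request = {
    "model": "gpt-5.2",
    "tools": [{"type": "shell"}],
    "input": "List the CSV files in the working directory and count their rows.",
}

print(request["tools"][0]["type"])  # shell
```

In a real application this dictionary would be passed to the API client rather than printed; the point is simply that the shell capability is switched on per request, like any other tool.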

The model receives the output of each command and uses it to decide what to do next, effectively letting it work through a problem step by step without human input at each stage.
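The resulting propose-execute-observe loop can be sketched as follows. The model and the sandbox here are local stand-ins written for this example, not OpenAI's actual implementation; the real loop runs server-side in the hosted container:

```python
import subprocess

def run_in_sandbox(cmd):
    # Stand-in for the platform's isolated container: here we simply run
    # the command locally and capture its output.
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout

def toy_model(history):
    # Stand-in for the model: proposes the next command given prior
    # (command, output) pairs, or None when it considers the task done.
    if not history:
        return "echo hello"
    return None

history = []
while True:
    cmd = toy_model(history)
    if cmd is None:
        break
    output = run_in_sandbox(cmd)
    history.append((cmd, output))  # output feeds the next decision

print(history[0][1].strip())  # hello
```

Each iteration mirrors one turn of the agent: the model sees everything that has happened so far and decides the next command, with no human in the loop between steps.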

Each container comes equipped with a filesystem for storing and retrieving files; optional structured storage using SQLite, a widely used lightweight database; and controlled internet access managed through an allowlist that restricts which external services the model can reach.
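SQLite's appeal in a sandbox like this is that it needs no server process: a single file (or an in-memory database) holds tables that persist between commands. A short Python sketch of the kind of structured storage a model might use inside the container:

```python
import sqlite3

# An in-memory SQLite database; inside a container this would typically
# be a file on the container's filesystem instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO jobs (status) VALUES (?)", ("queued",))
conn.commit()

status = conn.execute("SELECT status FROM jobs WHERE id = 1").fetchone()[0]
print(status)  # queued
```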

To handle sessions that run for extended periods, OpenAI has added a compaction feature that compresses earlier parts of a conversation into a compact, encrypted summary, keeping the model's working memory efficient without losing important context.
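A toy illustration of the idea behind compaction. OpenAI's real mechanism produces an encrypted summary server-side; this sketch only shows the shape of the transformation, with older turns collapsed into one summary entry while recent turns stay verbatim:

```python
def compact(history, keep_last=2):
    # Toy compaction: collapse all but the most recent turns into a
    # single summary placeholder. The real feature summarises content
    # and encrypts the result; this only demonstrates the structure.
    if len(history) <= keep_last:
        return history
    older, recent = history[:-keep_last], history[-keep_last:]
    summary = f"[summary of {len(older)} earlier turns]"
    return [summary] + recent

turns = ["turn 1", "turn 2", "turn 3", "turn 4", "turn 5"]
print(compact(turns))  # ['[summary of 3 earlier turns]', 'turn 4', 'turn 5']
```

The working context stays short no matter how long the session runs, because everything before the tail is carried as a single condensed entry.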

Developers can also bundle reusable sets of instructions, called skills, that the system loads into the container and makes available to the model for repeated use across different tasks.
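A hypothetical sketch of how such a skill bundle might be structured. The field names below are assumptions made for illustration, not a documented schema:

```python
# Illustrative shape of a reusable "skill": a named bundle of
# instructions plus supporting files the platform loads into the
# container. Every field name here is an assumption for this example.
skill = {
    "name": "csv-report",
    "instructions": "Load the CSV with pandas, summarise each column, write report.md.",
    "files": ["templates/report.md"],
}

print(skill["name"])  # csv-report
```

The benefit is reuse: the same bundle can be attached to many different tasks rather than restating the instructions in every request.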

OpenAI said models from GPT-5.2 onwards have been specifically trained to work within this environment, proposing shell commands and managing the compaction process as part of their standard operation.

The recap

  • OpenAI equips Responses API with shell tool and hosted containers
  • GPT‑5.2 and later models are trained to propose shell commands
  • Developers can consult the developer blog post and cookbook