Perplexity AI has released what it is calling Perplexity Computer, a browser-based autonomous agent that requires no downloads, no setup, and, for now, no payment. The company says the tool runs 19 AI models simultaneously inside a single tab, parcelling out tasks among them depending on what is needed: Claude handles reasoning, Gemini handles research, and a further 17 models manage everything else.
The launch has generated significant attention online, with early testers describing results that range from impressive to startling. Whether those results hold up under broader scrutiny is a question the product will spend the coming weeks answering.
What Perplexity Computer actually does
The core proposition is an AI agent that can take a plain-language prompt and execute a multi-step task without requiring the user to stay at their computer. The system runs in the cloud, which means a job assigned in the morning can finish on its own while the user is offline.
In one widely circulated test, a user asked the tool to build a live stock analysis terminal for Nvidia. After 11 minutes, the system produced what was described as a Bloomberg-style dashboard with real-time data charts and earnings breakdowns.
That comparison is pointed. A Bloomberg Terminal, the financial data platform used by traders, analysts, and journalists across the industry, costs roughly $27,000 per year per user. The suggestion that a free browser tool can approximate its output, even partially, carries real weight in industries where that subscription is a significant operating cost.
Perplexity Computer also connects to Google Workspace, Slack, and GitHub, and the company says it can retain context across sessions, meaning it remembers ongoing projects rather than starting fresh each time.
Reading the hype carefully
The framing around this launch has been extravagant. Claims circulating on social media have called it the most dangerous AI release of 2026, a characterisation that tells you more about how AI products are currently being marketed than about the product itself.
Much of the early coverage originates from short-form video creators sharing demos rather than from independent technical evaluation. A single successful test of a stock dashboard, however polished, is not a rigorous assessment of a tool's reliability, accuracy, or limitations at scale.
That is not a reason to dismiss Perplexity Computer. It is a reason to separate what has been demonstrated from what has been asserted. What has been demonstrated: the tool can produce a visually coherent financial dashboard in under 15 minutes. What remains asserted: that this constitutes a general replacement for professional-grade data infrastructure.
Why the multi-model approach matters
The more technically interesting claim is the architecture itself. Running multiple frontier models in parallel, each assigned to tasks suited to its strengths, is a meaningful design decision. Using Claude for reasoning and Gemini for research reflects genuine differences in how these models perform, and routing tasks accordingly rather than relying on a single model is an approach that enterprise AI builders have been experimenting with for some time.
If Perplexity has built a reliable orchestration layer that handles this routing well, that is a substantive engineering achievement, not just a marketing talking point. The question is whether the coordination between 19 models produces coherent, accurate outputs consistently, or whether the failures are simply harder to spot than in a single-model system.
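To make the idea concrete, the routing described above can be sketched as a small dispatch layer: classify each prompt into a task category, then hand it to the model assigned to that category. This is a minimal illustration only; the category names, keyword heuristics, and model labels are assumptions for the sketch, not Perplexity's actual implementation, which would involve far more sophisticated classification than keyword matching.

```python
# Minimal sketch of a model-routing orchestration layer.
# Categories, keywords, and model names are illustrative assumptions,
# not a description of Perplexity Computer's internals.

ROUTING_TABLE = {
    "reasoning": "claude",       # multi-step logic and planning
    "research": "gemini",        # search and summarisation
    "default": "general-model",  # everything else
}

KEYWORD_HINTS = {
    "reasoning": ("prove", "plan", "derive", "analyse"),
    "research": ("find", "search", "summarise", "latest"),
}


def classify(prompt: str) -> str:
    """Pick a task category from crude keyword hints."""
    lowered = prompt.lower()
    for category, hints in KEYWORD_HINTS.items():
        if any(hint in lowered for hint in hints):
            return category
    return "default"


def route(prompt: str) -> str:
    """Return the model assigned to this prompt's category."""
    return ROUTING_TABLE[classify(prompt)]


print(route("Plan a three-step migration"))     # claude
print(route("Find the latest Nvidia earnings")) # gemini
```

The hard part in a real system is not the dispatch table but what this sketch glosses over: detecting when the chosen model fails, reconciling outputs when a task spans several models, and keeping shared context consistent, which is exactly where a 19-model system's errors could become harder to spot than a single model's.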
Free, for now
Perplexity is offering the tool at no cost during its current phase, which is a competitive move worth noting in a market where comparable agentic tools from other providers carry meaningful price tags.
Free access during launch is a standard growth strategy for AI products, and pricing almost always changes once adoption reaches a sufficient scale. Early users evaluating whether to build workflows around Perplexity Computer should factor that in.
What the company has released is genuinely interesting. What it has not yet done is prove that interesting scales into reliable, at the level of the professional tools it is being compared to. That proof takes time, testing, and a great deal more than a Bloomberg dashboard built in one clean demo.