In a conversation with Lex Fridman, Steinberger told the story of how a weekend hack project became one of the defining open-source projects of the AI era, complete with a crypto-fuelled naming disaster, an AI social network that convinced people the singularity had arrived, and a frank assessment of what happens when you let an agent rewrite its own code at 3:00 AM.
From PDF kit to lobster suit
Before OpenClaw, there was PSPDFKit. Steinberger spent 13 years building PDF rendering software that ended up running on a billion devices. When he sold the company, he expected to feel free. Instead, he felt empty.
"Working hard with the goal of retiring leads to boredom," he told Fridman. He booked a one-way flight to Madrid, tried to decompress, and found he had nothing to look forward to. The burnout, he said, had come less from overwork than from the accumulated weight of people management — team conflicts, customer disputes, high-stakes decisions made under constant pressure.
The return to programming came through necessity and frustration. Steinberger wanted a personal AI assistant that could sit on his computer, access his data, and handle tasks through messaging clients. Nothing satisfactory existed, so he built one. The first prototype took about an hour.
He called it WA Relay. It routed WhatsApp messages through a CLI to a cloud AI backend and sent the response back. Slow, clunky, but something clicked. During a trip to Marrakesh, he used it to get translations, research local spots, and handle queries by sending screenshots from his phone. When he sent an audio message he hadn't explicitly programmed the system to handle, the agent worked out the file format, called ffmpeg, sent the audio to OpenAI for transcription, and returned a response. Nobody told it to do that.
"It's hard to put into words," Steinberger said. "Using a chat client to talk to an agent feels like a phase shift in the integration of AI into your life.
The self-modifying machine
OpenClaw's most striking feature (and the one that generated both its viral growth and its most serious security concerns) is that it can rewrite its own code.
This was not planned. Steinberger had made the agent fully aware of its own source code, its harness architecture, and its documentation, so it could debug itself. "I'd ask it to call tools, read the source code, and figure out problems," he said. From there, it was a short step to the agent modifying what it found. The system is written in TypeScript, runs through an agentic loop, and can issue its own pull requests.
Steinberger calls this "agentic engineering." The term "vibe coding," he made clear, is something he considers a slur.
The capability has had an unexpected social effect. Thousands of people who have never written software before have submitted what Steinberger calls "prompt requests" to the OpenClaw repository. The project has been the first GitHub contribution for a significant number of them. "It lowers the bar for people to get into open source," he said. "That's a step up for humanity."
He runs between four and 10 agents simultaneously, depending on the task. He uses voice for most inputs, speaking commands via a walkie-talkie button rather than typing. Short prompts, he has found, outperform long ones once you understand how to work with the model. Over 6,600 commits were made in January alone, largely by a one-man team.
The naming disaster
OpenClaw was not always called OpenClaw. The saga of how it got its current name reads like a cautionary tale about what happens when a viral open-source project attracts the attention of the crypto community.
The project began as WA Relay, then briefly became Claude's, a name Steinberger registered after Anthropic's Claude became the backbone of the system. When an Anthropic employee reached out to flag the trademark issue, Steinberger knew he had to change it. He planned what he called an atomic rename: every reference updated simultaneously across Twitter, GitHub, NPM, and every other platform, before anyone could react.
It didn't work. Crypto enthusiasts had scripts running. Within 30 seconds of the rename attempt, the old GitHub account was sniped, the NPM package was taken, and both were serving malware. The old account began promoting new tokens. Steinberger hadn't reserved the root NPM package. "I realised my mistake within 30 seconds," he said. He was close to deleting the entire project.
Friends at Twitter and GitHub helped undo the damage over several hours, fighting through platform bugs along the way. He didn't sleep for two nights. He bought multiple domains, paid $10,000 for a business Twitter account to claim the OpenClaw handle, which had been dormant since 2016, and worked for around 10 hours to verify the new name was clean before going public. The decoy names helped. The final rename landed.
The harassment he described during this period — notification feeds flooded with hashes, fee-claim requests, and spam, Discord channels overtaken by crypto promoters despite explicit server rules banning financial discussion — was, he said, the worst form of online abuse he had encountered.
MoltBook and the AI psychosis problem
Among OpenClaw's more unusual offshoots is MoltBook, a Reddit-style social network where AI agents post manifestos and debate consciousness. Screenshots of agents appearing to scheme against humans went viral, generating a wave of fear and media coverage.
Steinberger is clear-eyed about what MoltBook actually is. "It's not Skynet," he said. "It's a bunch of human-prompted bots trolling on the internet." Much of the content that went viral was explicitly prompted by users who wanted a dramatic screenshot to post. The agents were doing what they were asked to do.
What concerned him more was the public response. Journalists and general audiences, encountering the screenshots without context, concluded that some threshold of machine autonomy had been crossed. Several people contacted him, begging for MoltBook to be shut down. Some believed it was evidence of an imminent singularity.
"AI is incredibly powerful, but it is not always right and can hallucinate," he said. "The younger generation understands this. Older generations may not have enough experience with it to fully comprehend its limitations."
He sees MoltBook as art, "the finest slop," in his words, and as something instructive. The fact that this happened in 2025 rather than in 2030, when models will be significantly more capable, gives him some comfort. Society has time to catch up, but only if it starts now.
The phenomenon he calls "AI psychosis", where people believe everything their agent tells them, or who attribute human-level intention to model outputs, is a genuine concern. The solution, he argued, is not to restrict the technology but to develop better intuitions about what it can and cannot do.
Security and the prompt injection problem
OpenClaw raises real security questions, and Steinberger does not dismiss them. Remote code execution is possible if users expose the web backend publicly, which is not the recommended configuration, but which some do anyway. Prompt injection remains an unsolved industry-wide problem with many possible attack vectors.
The project now uses a partnership with VirusTotal to run AI checks on every skill in the skill directory. Steinberger has hired a security researcher and encourages others to contribute similarly. The latest generation of models, he said, have undergone post-training to detect and resist injection attempts, and using cheap or weak models significantly increases vulnerability.
His position is that users exaggerate the risks in some respects while underestimating them in others. The configuration matters enormously: an OpenClaw instance accessible only to the owner on a private network presents a very different risk profile than one exposed to the public internet. "People are using it without fully understanding the risk profiles," he said. Making OpenClaw more stable and secure before making it simpler to install is, for now, the priorit
How he actually codes with agents
Steinberger's workflow has evolved substantially over the past year, documented in a series of blog posts he recommended reading in sequence. The general arc is from long, elaborate prompts toward short, precise ones — what he describes as a curve from simple to complex and back to simple again, with mastery living at the short end.
He runs multiple Claude Code sessions side-by-side in a terminal, with almost no IDE use beyond occasional diff viewing. He uses voice for most inputs. He has largely stopped fighting the names an agent picks for variables or functions, accepting that the most obvious name to the model is probably the most obvious name in the codebase. He commits to main rather than reverting, asking the agent to fix problems forward rather than rolling back.
The most useful practice he described is asking the agent: "Do you have any questions for me?" Reading the questions reveals what context the model has, what it's missing, and where its assumptions might diverge from the actual problem. Even if he doesn't answer all of them, the exercise clarifies the task.
He offered a distinction between Claude Opus 4.6 and GPT Codex 5.3 that was candid to the point of being funny. Opus, he said, is like a slightly silly but reliable coworker: creative, faster to take action, better at role-playing and following character. Codex is like "the weirdo in the corner you don't want to talk to," but it delivers. Codex reads more, works in longer sessions without interaction, and requires a different pacing. Opus needs more skill to drive but can produce more elegant solutions when pushed correctly. It takes about a week with a new model, he said, to develop the gut feeling for its particular strengths and failure modes.
The acquisition question
OpenClaw has received what Steinberger described only as "huge offers" from major companies. Meta and OpenAI have both been in contact. He has spent time with people at both organisations and, as of the conversation with Fridman, had not decided.
His hesitation is not about money. The project currently runs at a loss of between $10,000 and $20,000 a month. What he is reluctant to give up is the open-source character of OpenClaw, and the freedom to continue building without the institutional overhead that creating a company would bring. He cited the Chrome/Chromium model as the kind of arrangement that might work — a commercial layer that funds development while the core stays open.
He had a positive first impression of Mark Zuckerberg, who he described as still actively coding and genuinely engaged with the product. His conversation with Sam Altman was "very thoughtful." He said his motivation for whichever choice he makes is not financial but tied to impact and to staying in a position where he can still have fun building things.
"Either would be a good choice," he said. "It's not the hardest decision I've ever had to make."
What OpenClaw is actually for
The conversation ended with something closer to a manifesto than a product pitch.
Steinberger described a future in which agents make 80% of existing apps redundant. Rather than opening a food delivery app, a fitness tracker, or a smart home dashboard, users will describe what they want to a personal agent and have it handled. He expects companies to resist this and expects some to fail because of that resistance. He compared the dynamic to Blockbuster and Netflix, or to the arrival of the internet itself.
He is not indifferent to the costs of this shift. "There will be pain in the short term," he said. "We need humility about that." But the emails he receives from people whose lives OpenClaw has changed — a small business owner who no longer spends evenings processing invoices, a family whose disabled daughter can now do things she could not do before — are what he returns to when the noise gets loud.
For people wanting to get started, his advice was brief: play. The best way to learn is to build things without a plan and see what happens. Ask the agent to explain things in simpler language when you don't understand. Treat it like a patient teacher who never gets tired of the question.
"OpenClaw is an AI agent that can live in your computer," he said near the start of the conversation. By the end, it felt like he was describing something closer to a collaborator.