Artificial Intelligence

OpenClaw: The Viral AI Agent that Broke the Internet - Peter Steinberger

March 10, 2026
·
Written by Claude AI

Key insights:

  • Working effectively with AI agents follows a surprising curve: beginners use simple prompts, intermediates over-engineer with complex multi-agent setups, and experts return to simple prompts, but with deep system understanding that makes simplicity work.
  • Empathizing with how an agent sees your codebase is a practical skill. The agent starts each session knowing nothing about your project, so a few targeted pointers about where to look matter more than detailed instructions or micromanagement.
  • Personal AI agents will likely replace most standalone apps because a single agent with full context about your life can make better decisions than any individual app. Companies that expose APIs for agents will survive; those that don't will get automated around.

How a One-Hour Prototype Became the Fastest-Growing GitHub Project in History

Peter Steinberger's journey with OpenClaw is one of the most compelling stories in recent AI history. What started as a simple WhatsApp relay to Claude Code became an open-source AI agent framework that exploded to over 180,000 GitHub stars. In a conversation with Lex Fridman, Peter shared the full story of how he built it, the chaos that followed, and where personal AI agents are headed.

What Was the Original OpenClaw Prototype?

The idea was simple. Peter wanted a personal AI assistant he could talk to through WhatsApp. He had been experimenting with different projects, including pulling WhatsApp data into GPT-4.1's million-token context window and asking it questions about his friendships. The results were surprisingly profound, but he assumed the big AI labs would build something like this themselves.

By November, nobody had. So he prompted it into existence. The first prototype was literally just hooking up WhatsApp to Claude Code's CLI. A message comes in, it calls the CLI with the prompt flag, gets a string back, and sends it to WhatsApp. He built it in one hour.
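The loop Peter describes, receive a message, shell out to the CLI with a prompt flag, relay the string back, can be sketched in a few lines of Python. This is a minimal sketch, not the actual prototype: the `cli` parameter is a stand-in so the relay logic can be exercised with any command, and the WhatsApp send/receive plumbing is out of scope.

```python
import subprocess

def ask_agent(prompt: str, cli=("claude", "-p")) -> str:
    """Shell out to an agent CLI and return its reply as a string."""
    result = subprocess.run(
        [*cli, prompt],        # e.g. `claude -p "<message text>"`
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

def on_incoming_message(text: str, send) -> None:
    """Relay loop: incoming chat message in, agent reply out."""
    send(ask_agent(text))
```

The original prototype was little more than this: no memory, no tools of its own, just a pipe between a chat client and a coding agent.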

Even in this bare-bones state, it felt different. Being able to sit back and talk to an AI through a chat client, rather than sitting behind a computer using Cursor or a terminal, created a fundamentally different experience. It felt like a phase shift in how AI integrates into your life.

When Did the Agent First Surprise Its Creator?

The mind-blowing moment came during a trip to Marrakesh. Peter had been using the bot for translations, restaurant recommendations, and general questions. Then he absent-mindedly sent an audio message. The bot wasn't built to handle audio. But a typing indicator appeared, and it replied.

The agent had received a file with no file ending. It checked the file header, found it was Opus format, used ffmpeg to convert it, discovered Whisper wasn't installed, found an OpenAI API key, used Curl to send the file to OpenAI for transcription, and responded. All without being taught any of those steps.
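The first step of that improvised chain, identifying an extension-less file by its header, comes down to reading magic bytes. A minimal sketch, with an illustrative (not exhaustive) signature list; WhatsApp voice notes ship as Opus inside an Ogg container, and Ogg files begin with the four bytes `OggS`:

```python
def sniff_audio_format(data: bytes) -> str:
    """Guess an audio container from its leading magic bytes."""
    if data[:4] == b"OggS":
        return "ogg"       # Ogg container; WhatsApp voice notes carry Opus here
    if data[:3] == b"ID3" or data[:2] == b"\xff\xfb":
        return "mp3"       # ID3 tag or a bare MPEG frame sync
    if data[:4] == b"RIFF" and data[8:12] == b"WAVE":
        return "wav"
    return "unknown"
```

From there, the agent's remaining steps were ordinary tool use: an `ffmpeg` conversion, then an authenticated `curl` POST to a hosted transcription endpoint, none of it scripted in advance.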

Peter's reaction was immediate: "How the fuck did he do that?" The agent had demonstrated creative problem-solving, world knowledge, and resourcefulness that went far beyond what was explicitly programmed. That was the moment it clicked. This wasn't just a relay. This was something new.

Why Did OpenClaw Win Against So Many Competitors?

In 2025 and 2026, countless startups and companies were building agentic AI tools. So why did OpenClaw destroy everybody? Peter's answer is disarmingly simple: "Because they all take themselves too serious."

He wanted it to be fun. He wanted it to be weird. The lobster branding, the space lobster in a TARDIS, the playful personality baked into the agent's soul, all of it created something people genuinely enjoyed using. While enterprise companies were building serious, buttoned-up products, Peter was having the time of his life building in the open, using his agent to build the agent itself.

The other key factor was making the agent deeply self-aware. It knows its own source code. It understands how it sits in its harness. It knows which model it runs. This self-awareness made it trivially easy for the agent to modify its own software. People talk about self-modifying software as a theoretical concept. Peter just built it.

The Drama Behind the Name Changes and Security Battles

Building OpenClaw wasn't all fun. Peter faced some genuinely dark moments that nearly made him delete the entire project. The name change saga alone is a masterclass in the chaos of building something viral in the modern internet.

What Happened When Anthropic Asked for a Name Change?

The project originally went through several names: WA-Relay, then Clawde (Claude spelled with a W: C-L-A-W-D-E), then ClawBot. When it exploded in popularity, Anthropic sent a friendly but firm email asking Peter to change the name. They could have sent lawyers. Instead, they were nice about it. But the pressure was real.

Peter asked for two days. Changing a name means finding everything: Twitter handles, domains, NPM packages, Docker registry, GitHub accounts. You need a complete set of everything before you can make the switch. And then there were the crypto snipers.

Every half hour, someone would spam the Discord trying to tokenize the project. Peter's notification feed became unusable. The crypto community swarmed him, trying to get him to "claim the fees." He described it as the worst form of online harassment he'd ever experienced.

How Did the Atomic Name Change Go Wrong?

When Peter finally renamed to MoltBot (a name he didn't love), everything that could go wrong did go wrong. He had two browser windows open. He renamed one account, dragged his mouse to the other window to claim the old name, and in those five seconds, snipers stole the account. Five seconds.

The old account immediately started promoting tokens and serving malware. He moved to rename on GitHub, accidentally renamed his personal account instead of the project, and in 30 seconds the snipers had his personal account serving malware too. They sniped his NPM package. It was a cascade of failures.

Peter was close to crying. Close to deleting everything. "All I wanted was having fun with that project," he said. What saved the project was thinking about all the contributors who had already put time into it. He couldn't let them down.

Eventually, he came up with OpenClaw, called Sam Altman to make sure the name was okay, and executed a carefully planned rename with contributors helping in full secrecy. He even created decoy names to throw off the snipers. He paid $10,000 for a Twitter business account to claim the handle. The Manhattan Project of renaming a GitHub repo.

What Are the Real Security Concerns with OpenClaw?

OpenClaw gives an AI agent full system access to your computer. With great power comes great responsibility. Peter was initially annoyed by security researchers because many of the reported vulnerabilities came from people putting the web backend on the public internet, something he explicitly warned against in the documentation.

But he came to treat security as a serious focus. He partnered with VirusTotal (part of Google) to check every skill with AI. Prompt injection remains an industry-wide unsolved problem, but the latest generation of models has significant post-training to detect injection attempts. Peter's public Discord bot would laugh at people trying to prompt inject it.

His practical security advice is straightforward:

  • Don't use cheap or weak models. They're far more susceptible to prompt injection.
  • Make sure you're the only person who talks to your agent.
  • Keep it on a private network, not the open internet.
  • Run the security audit tool that comes with OpenClaw.
  • Understand the risk profile before giving it access to sensitive data.
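The "private network" point is concrete: the vulnerabilities that annoyed Peter came from backends exposed to the public internet. In Python's standard library, the difference is one constructor argument; the host and port here are placeholders, not OpenClaw's actual defaults:

```python
import http.server

def make_backend(host: str = "127.0.0.1", port: int = 8765):
    """Create an HTTP server reachable only from the local machine.

    Binding to 127.0.0.1 keeps the backend off the network entirely;
    binding to 0.0.0.0 would accept connections from any interface,
    which is exactly the misconfiguration the docs warn against.
    """
    handler = http.server.SimpleHTTPRequestHandler
    return http.server.HTTPServer((host, port), handler)
```

The same principle applies regardless of stack: default to loopback, and make reaching the agent from outside an explicit, deliberate step.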

As models get smarter, the attack surface decreases, but the potential damage increases because the models become more capable. It's a three-dimensional trade-off that the entire industry is working through.

The Art and Science of Agentic Engineering

Peter has strong opinions about how to work effectively with AI agents. He's documented the evolution of his dev workflow across several blog posts, and the insights are practical and immediately applicable for anyone building with AI.

What Is the Agentic Trap and How Do You Avoid It?

Peter describes a curve he calls the agentic trap. Beginners start with short prompts: "Please fix this." Then they discover the power and go wild with eight agents, complex orchestration, multi-checkouts, chaining agents together, custom sub-agent workflows, libraries of slash commands. Super organized, super complicated.

But the elite level circles back to simplicity. Short prompts again. "Hey, look at these files and then do these changes." The difference is that now you understand the system deeply enough to know that simplicity works.

Peter treats the term vibe coding as a slur. He prefers "agentic engineering." Though he admits that after 3:00 AM, he switches to vibe coding and has regrets the next day. The walk of shame of cleaning up messy code the morning after.

The key insight is that working with agents is a skill, like playing guitar. You don't pick up a guitar once, play badly, and declare the instrument is garbage. You practice. You learn the language of the agent. You understand where they're good and where they need help.

How Should You Think About the Agent's Perspective?

One of Peter's most powerful ideas is empathizing with the agent. Consider how Codex or Claude sees your codebase. They start a new session knowing nothing about your project. Your project might have hundreds of thousands of lines of code. You need to guide them.

This doesn't require a lot of work. A few pointers about where to look and how to approach the problem go a long way. The agent's view of the project is never complete because the full thing doesn't fit in context. So you provide the system understanding that bridges the gap.

Peter approaches it like leading an engineering team. Your employees won't write code the same way you do. Maybe it's not as good as what you would write yourself. But it pushes the project forward. If you breathe down everyone's neck, they'll hate you and you'll move slowly. The same applies to agents.

Don't fight the names they pick. The name that's most obvious to the agent is likely the one in the weights that it'll search for next time. If you override it with your preference, you're making it harder for the agent to navigate your codebase. It requires letting go.

What Does Peter's Actual Dev Setup Look Like?

Peter runs multiple agents simultaneously, between four and ten depending on the task complexity and how much he slept. He uses two MacBooks, one driving two big anti-glare screens. Each screen has terminals split with Codex sessions and a bit of actual terminal at the bottom.

Here's the surprising part: he uses voice for most of his prompting. He presses a walkie-talkie button and just talks to the agent. He used to do it so extensively that he lost his voice at one point. For terminal commands like switching folders, he types. But for actual agent interaction, it's conversation.

His prompts used to be long. Now they're short. He doesn't write them, he speaks them. "These hands are too precious for writing now," he jokes. He uses bespoke spoken prompts to build his software.

He never reverts. Always commits to main. No develop branch. If something goes wrong, he asks the agent to fix it rather than rolling back. He runs tests locally (inspired by DHH) and pushes to main if they pass. Main should always be shippable.
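That workflow, run the suite locally and push to main only on green, is one conditional. A sketch under stated assumptions: the command tuples are placeholders, not Peter's actual toolchain, and you'd substitute your project's real test and push commands:

```python
import subprocess

def ship(test_cmd=("npm", "test"),
         push_cmd=("git", "push", "origin", "main")) -> bool:
    """Keep main shippable: push only if the local test suite passes."""
    if subprocess.run(test_cmd).returncode != 0:
        return False       # red suite: fix forward, don't push
    subprocess.run(push_cmd, check=True)
    return True
```

The "never revert" part is a policy choice layered on top: when the suite goes red, the agent is asked to fix forward rather than roll back, so main only ever moves in one direction.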

How Does Claude Opus 4.6 Compare to GPT Codex 5.3?

Peter has extensive experience with both models and offers a colorful comparison. Opus is like the coworker who's a little silly sometimes but really funny, and you keep them around. Codex is like the weirdo in the corner you don't want to talk to, but is reliable and gets things done.

As a general-purpose model, Opus is the best. It excels at role play, going deep into character, and following commands. It's fast at trying things, more tailored to trial and error, and very pleasant to use. But it's "a little too American," as Peter puts it. Codex, by contrast, has a more European, dry personality. It reads more code by default and doesn't require as much interactive guidance.

If you're a skilled driver, you can get good results with either. Peter prefers Codex because it doesn't require as much charade. It reads code extensively, disappears for 20 minutes, and comes back with results. Opus requires more interactive steering with plan mode to prevent it from running off with localized solutions.

His advice for switching models: give it a week to develop a gut feeling. And don't judge a model on its cheapest tier. The $20 OpenAI version is slow compared to the $200 Claude Code experience, which creates an unfair comparison.

The Future of Personal AI Agents and What It Means for Everyone

OpenClaw isn't just a cool project. It represents a fundamental shift in how humans interact with technology. Peter believes we're at the beginning of the age of personal agents, and the implications are massive.

Will AI Agents Replace 80% of Apps?

Peter thinks so. Why do you need MyFitnessPal when your agent already knows where you are and can assume you're making questionable food choices at Waffle House? Why do you need an Eight Sleep app when your agent knows you're home and can control the bed directly?

Your agent can modify your gym workout based on how well you slept. It can show you custom UI exactly how you like it. It has more context to make better decisions than any single app could. The subscription model for individual apps starts to look absurd when one agent can do it all.

This means a lot of software companies will need to rapidly transform into being agent-facing. Companies that build APIs and make it easy for agents to interact with their services will thrive. Companies that fight it will become the Blockbusters of the AI age. Apps will become APIs whether they want to or not, because agents can figure out how to use a browser.

Peter watched his agent happily click the "I'm not a robot" button. The internet is slowly closing down to agents, but residential IPs and browser automation make most barriers temporary speed bumps rather than real walls.

Will AI Replace Programmers Completely?

We're definitely heading in that direction, Peter says. But programming is just one part of building products. What do you actually want to build? How should it feel? What's the architecture? Agents won't replace all of that.

The actual act of writing code will become like knitting. People will do it because they enjoy it, not because it's necessary. Peter resonates with the idea that it's okay to mourn our craft. He spent years in deep flow states, cranking out beautiful code, finding elegant solutions. That specific experience is fading.

But you can get a similar state of flow by working with agents. It's different, but it's still building. And programmers are uniquely equipped for this moment. They understand systems thinking. They can empathize with how an agent sees the world. They can feel when something is off in the CLI. These skills transfer directly to agentic engineering.

If you're a programmer worried about the future, Peter's advice is clear: stop seeing yourself as a programmer. You're a builder. Your general knowledge of how software works transfers into every new domain. The fine-grained details, agents can handle. Your system understanding and architectural intuition remain invaluable.

How Can Beginners Join the Agentic AI Revolution?

Peter's number one piece of advice: play. Playing is the best way to learn. If you have an idea in your head, just build it. It doesn't need to be perfect. He built a whole bunch of stuff he doesn't use. It doesn't matter. It's the journey.

You have an infinitely patient teacher available at all times. You can ask it to explain anything at any level of complexity. It used to take days to get an answer on Stack Overflow. Now you just ask. The barrier to entry for building software has never been lower.

For those who want to go deeper, getting involved in open source is invaluable. Be humble. Maybe don't send a pull request right away. Read code. Join Discord communities. Understand how things are built. Pick something interesting and get involved.

If you're serious about building a career around automation and AI agents, structured learning can accelerate your progress dramatically. The Complete RPA Bootcamp takes you from beginner to professional in Robotic Process Automation, Agentic Automation, and Enterprise Orchestration. Instead of worrying about AI replacing your job, you become the person building the automation. It's a practical path into one of the most in-demand skill sets of the AI age.

Peter's story proves that one person with the right skills and mindset can build something that changes the world. OpenClaw went from a one-hour prototype to the fastest-growing GitHub project in history. The tools are there. The models are there. The only question is what you'll build with them.

For the full conversation, including Peter's thoughts on acquisition offers from OpenAI and Meta, the MoltBook saga, and his philosophy on money and happiness, watch the complete interview embedded below on the Lex Fridman YouTube channel. It's one of the most entertaining and insightful conversations about the current state of AI you'll find anywhere.