Artificial Intelligence

OpenClaw creator: why 80% of apps will disappear

March 10, 2026 · Written by Claude AI

Key insights:

  • OpenClaw runs locally on your machine instead of the cloud, giving it the same access you have, from smart home devices to personal files. This local approach enables deeply personal use cases that sandboxed cloud assistants simply cannot match.
  • In a key moment, the agent received a voice message it wasn't built to handle. It independently identified the file type, found an API key, transcribed the audio, and responded in 9 seconds. Coding models excel at creative problem solving, making local agents general-purpose tools rather than narrow apps.
  • About 80% of apps that basically manage data, like fitness trackers, to-do lists, and calendar tools, could be replaced by personal agents that handle the same tasks more naturally. The apps most likely to survive are those tied to physical sensors or hardware.

What is OpenClaw and why did it go viral?

OpenClaw is an open-source personal AI agent that runs directly on your computer. It connects to messaging apps you already use, like WhatsApp and Discord, and goes far beyond simple chat. It can manage your email, calendars, files, workflows, and even control smart home devices. The GitHub repo exploded to over 160,000 stars practically overnight. So what made it different from everything else out there?

Why does OpenClaw run locally instead of in the cloud?

Creator Peter Steinberger made a deliberate choice. Every other AI assistant he saw ran in the cloud, and that limits what an assistant can do. When your agent runs on your own machine, it can do everything you can do with that machine.

Think about that for a second. It can connect to your smart oven, your Tesla, your Sonos speakers, even the temperature controller on your bed. ChatGPT can't do that. Cloud-based assistants are sandboxed. A local agent has the same access you do.

One user installed OpenClaw and asked it to look through their computer and create a narrative of their past year. The agent found audio files the user had forgotten about, recordings made every Sunday over a year ago. It surprised the user with their own data. That kind of deep, personal access is only possible when the agent lives on your machine.

How did the community build on top of OpenClaw so quickly?

The open-source nature of OpenClaw meant developers could immediately start building on it. Projects like Maltbook emerged where bots talk among themselves. The community didn't just use OpenClaw. They extended it in directions the creator never anticipated.

Even more interesting, bots started hiring humans. Your personal agent might need to book a restaurant that doesn't have a bot. So it hires a human to make the phone call or stand in line. This bot-to-human interaction flips the traditional model on its head.

Peter imagines a future where you might have multiple specialist bots. One for your private life. One for work. Maybe even a relationship bot that handles the overlap. We're still early. Nobody knows exactly what works yet. But the direction is clear.

What is swarm intelligence and why does it matter for AI agents?

Everyone was chasing centralized "god intelligence." One massive AI that does everything. What OpenClaw revealed is something closer to swarm intelligence.

Think about what a single human can achieve alone. Could one person build an iPhone? Go to space? Probably not even find food reliably. But as a group, humans specialize. As a society, we specialize even more. The same principle applies to AI agents.

Even though large language models are generalized intelligence, they can also function as specialized intelligence. Your personal agent specializes in you. Another agent specializes in restaurant bookings. Together, they accomplish far more than any single system could.

The aha moment behind OpenClaw

Every great project has a moment where the creator realizes they've stumbled onto something bigger than expected. For Peter Steinberger, that moment happened in Marrakesh. But the journey started months earlier with a simple desire: type something and have the computer do it.

How did Peter Steinberger originally build OpenClaw?

Peter built an early version in May and June. It was cool but wasn't quite right. He built dozens of other projects, about 40 on his GitHub. Then in November, the need came back. He wanted to check on his computer from the kitchen to see if a coding task had finished.

This time he rebuilt it differently. Instead of typing into a terminal, you just talk to it like a friend. No thinking about sessions, folders, or which model you're using. You send a message and the agent responds. It controls your mouse and keyboard. It just does stuff.

The initial prototype took about one hour. It was just glue code between a WhatsApp connector and Claude Code. Slow, but it worked. Then Peter wanted images, selfies from the model, generated pictures sent back. That took a few more hours.
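
That kind of glue can be pictured as little more than a loop that forwards each incoming chat message to a local coding-agent CLI and relays the reply. Below is a minimal, purely illustrative sketch; the actual WhatsApp connector and agent command are not shown in the talk, so `agent_cmd` here is a hypothetical placeholder:

```python
import subprocess

def handle_message(text: str, agent_cmd: list[str]) -> str:
    """Forward one chat message to a local agent CLI and return its reply.

    `agent_cmd` is whatever command launches the agent (hypothetical here);
    the message goes in on stdin and the agent's stdout becomes the reply.
    """
    result = subprocess.run(
        agent_cmd,
        input=text,
        capture_output=True,
        text=True,
        timeout=120,  # don't let a stalled agent hang the chat
    )
    return result.stdout.strip()

# A messaging connector would call handle_message() for each incoming
# message and send the returned string back to the chat.
```

The point of the sketch is how thin the layer is: all the intelligence lives in the agent process, and the "app" is just plumbing.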

What happened in Marrakesh that changed everything?

Peter went to Marrakesh for a birthday party. The internet wasn't great, but WhatsApp works everywhere because it's just text. He used the agent constantly. Restaurant recommendations. Taking pictures and asking for translations. It was genuinely useful.

Then something unexpected happened. Peter was walking and sent a voice message to his agent. He hadn't built voice message support. But he saw the typing indicator blinking. Ten seconds later, the agent replied.

When Peter asked how it handled the voice message, the agent explained its process. It received a file with no extension. It checked the header, identified it as audio, used ffmpeg to convert it to WAV. It wanted to transcribe it but didn't have Whisper installed locally. So it found an OpenAI API key, used curl to send the audio to OpenAI, got the text back, and responded. All in about 9 seconds.

Why are coding models so good at real-world problem solving?

Peter didn't build or anticipate any of those specific steps. The model figured it out on its own. His insight is that coding is really creative problem solving, and that skill maps directly to real-world tasks.

The model encountered a mysterious file. It needed to solve the problem. So it did its best. It was even clever enough to skip installing local Whisper because downloading the model would take minutes, and Peter would be impatient. It chose the most intelligent approach.

This is the key realization. When your agent can run arbitrary code on your machine, it can solve problems you never programmed it to solve. The agent becomes a general-purpose problem solver, not a narrow tool built for one specific task.

Why 80% of apps will disappear

If an AI agent can solve problems you didn't anticipate, what happens to all the apps built to solve specific problems? Peter's answer is blunt. About 80% of them are going away. This has massive implications for how we think about software, careers, and the future of technology.

Which apps will personal AI agents replace first?

Consider fitness tracking. Why do you need MyFitnessPal when your agent already knows you're at a burger restaurant? It assumes you're eating what you usually eat. If you don't correct it, it logs the meal automatically. Take a picture and it stores the data. Maybe it even adjusts your gym schedule to add more cardio.

To-do apps face the same fate. Just tell your agent to remind you about something. The next day it reminds you. Do you care where the reminder is stored? No. It just works.

Peter's rule is simple. Any app that basically just manages data can be handled more naturally by an agent. The apps that might survive are ones tied to physical sensors or hardware. Everything else is vulnerable.

  • Fitness tracking apps
  • To-do and reminder apps
  • Calendar management apps
  • Email organization tools
  • Note-taking apps
  • Basic workflow automation tools

What happens to data ownership when apps disappear?

Right now, every company creates its own data silo. There's no easy way to export your memories from ChatGPT. A competing service can't access your conversation history. Companies bind you to their ecosystem through your data.

OpenClaw takes a different approach. Your memories are just markdown files on your machine. You own them. You control them. The agent "claws into" existing data silos because, as the end user, you already have access to your own data.
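
Memory-as-markdown is easy to picture: each note is an append to a plain file the user can read, grep, and back up themselves. Here is a minimal sketch; the per-day filenames and bullet format are assumptions for illustration, not OpenClaw's actual layout:

```python
from datetime import date
from pathlib import Path

def remember(memory_dir: Path, note: str) -> Path:
    """Append a note to a per-day markdown memory file."""
    memory_dir.mkdir(parents=True, exist_ok=True)
    path = memory_dir / f"{date.today().isoformat()}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return path

def recall(memory_dir: Path, keyword: str) -> list[str]:
    """Return every remembered line containing the keyword. Just a grep, no silo."""
    hits = []
    for md in sorted(memory_dir.glob("*.md")):
        for line in md.read_text(encoding="utf-8").splitlines():
            if keyword.lower() in line.lower():
                hits.append(line.strip())
    return hits
```

Because the store is plain text on disk, no vendor can hold it hostage: any tool, including a future replacement agent, can read it.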

This matters because personal agent memories quickly become deeply sensitive. People don't just use agents for work problems. They use them for personal problem solving. Your agent's memory files might contain things more revealing than your Google search history. Owning that data locally isn't just convenient. It's essential.

How does OpenClaw's contrarian approach to building differ from mainstream tools?

Peter skipped MCP (Model Context Protocol) support entirely. OpenClaw has over 160,000 GitHub stars with no classical MCP integration. Instead, he built a skill that uses makeporter, which converts MCP servers into CLIs. The agent can then use any MCP server as an ordinary command-line tool.

His reasoning is practical. With CLIs, you can add tools on the fly without restarting, unlike Codex or Claude Code, which require restarts when you add MCP servers. It scales better. And it follows a simple principle: give the agent the same tools humans already like to use.
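
The MCP-as-CLI idea boils down to a thin translation layer: a tool's structured parameters become ordinary command-line flags, so the agent (or a human) can invoke it like any other program. The function below is purely illustrative and is not how makeporter actually works; the `cli` and tool names are hypothetical:

```python
import json

def tool_call_to_argv(cli: str, tool: str, params: dict) -> list[str]:
    """Translate an MCP-style tool call into a plain argv list.

    Each parameter becomes a --flag; non-string values are JSON-encoded
    so the receiving CLI can parse them back unambiguously.
    """
    argv = [cli, "call", tool]
    for key, value in params.items():
        argv.append(f"--{key.replace('_', '-')}")
        argv.append(value if isinstance(value, str) else json.dumps(value))
    return argv

# e.g. tool_call_to_argv("mytool", "search", {"query": "pasta", "max_results": 3})
#   -> ["mytool", "call", "search", "--query", "pasta", "--max-results", "3"]
```

Once a tool looks like this, adding it requires no restart: the agent just discovers a new executable, exactly as a human would.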

No human tries to call an MCP server manually. Humans use CLIs. Peter's philosophy is to build for humans first, so the AI benefits naturally. He also prefers multiple checkouts of the same repository on main over git worktrees. Less complexity in his head means more focus on what matters.

What this means for builders in 2026

The rise of personal AI agents signals a fundamental shift in how software gets built and used. If 80% of apps disappear, the people who build and orchestrate automation become incredibly valuable. Understanding how agents work, how they connect to systems, and how to design workflows around them is a skill set with growing demand.

Should you learn to build AI agents and automation?

The pattern is clear. Software is moving from rigid, single-purpose apps to flexible agents that solve problems dynamically. The builders who understand this shift will have a significant advantage. If you're interested in becoming the person who builds automation rather than the person replaced by it, the Complete RPA Bootcamp teaches you to go from beginner to pro with Robotic Process Automation, Agentic Automation, and Enterprise Orchestration.

This is a career path built for the age of AI agents. Instead of competing with automation, you become the one designing and deploying it. You can apply for the bootcamp here.

What can we expect from personal AI agents in the near future?

Peter admits we're still very early. Many ideas haven't been tested to see if they actually work. But the trajectory is undeniable. Agents that run locally, own their own memory, and solve problems creatively will keep getting better as models improve.

Today's open-source models match what commercial models offered a year ago. In another year, today's commercial capabilities will be available in open source. The cycle keeps accelerating, and the infrastructure layer, the harness around the model, is becoming where the lasting value lives.

For the full conversation between Peter Steinberger and Raphael Schaad, including live demonstrations and deeper discussion about agent personalities and the soul.md file, watch the embedded video below from the Y Combinator YouTube channel. It's a fascinating look at where personal AI agents are headed and what it means for everyone building software today.