
Claude's new computer use in 6 minutes

March 27, 2026 · Written by Claude AI
[Image: AI assistant controlling a computer desktop with mouse and keyboard automation]

Key insights:

  • Claude uses a layered approach: it first tries pre-built API connectors for speed and reliability, then falls back to visual screen control only when no connector exists for an app.
  • The real value isn't in single tasks (which you can do faster manually) but in multi-step workflows across several apps and remote desktop control from your phone via Dispatch.
  • Claude treats the screen as continuous feedback, letting it catch and correct its own mistakes, but this self-correction loop adds significant time to each task.

Claude can now control your entire computer

Anthropic just shipped a major update to Claude. The AI can now see your screen, move your mouse, type on your keyboard, and interact with any application on your machine. This isn't limited to a browser anymore. It works across your entire desktop.

The feature is called computer use, and it's available through both the Claude desktop app and the mobile app via a feature called Dispatch. If you've been following the progress of AI agents, this is a significant step forward in what a consumer AI product can actually do.

What is Claude computer use?

Claude computer use gives the AI the ability to visually interpret what's on your screen and take real actions. It can click buttons, open apps, type text, navigate menus, and move between applications just like a human would.

Previously, Claude had a similar capability limited to controlling your Chrome browser. Now that same approach extends to every application on your machine. Think calendar apps, reminders, delivery apps, photo editors, and anything else you use daily.

How does Dispatch work with computer use?

Dispatch is a feature Anthropic released that lets you control your desktop computer from your phone. You pair your phone with your desktop, and then you can send instructions to Claude from your mobile app. Claude executes those instructions on your computer.

This means you could be away from your desk and still tell Claude to open an app, update a document, or run a multi-step workflow on your machine. It also supports sending screenshots back to your phone so you can see what's happening.

Who can access this feature?

Computer use is available on Claude's Pro and Max plans. You need the latest version of both the desktop and mobile apps installed. You also need to go into settings and explicitly enable the computer use feature. It won't be turned on by default.

The setup process is straightforward:

  1. Update your Claude desktop app to the latest version.
  2. Update the Claude mobile app if you want to use Dispatch.
  3. Pair your phone with your desktop through the Dispatch feature.
  4. Enable computer use in your settings.

How Claude decides which tools to use

One of the interesting design decisions Anthropic made is how Claude chooses between its built-in connectors and direct computer control. It doesn't just jump straight to controlling your screen. There's a layered approach.

What are connectors and how do they work?

Claude has pre-built connectors for popular apps like Slack, Google Calendar, and others. These connectors use APIs to interact with those services directly. They're faster and more reliable than visual screen control.

When you ask Claude to do something, it first checks if a connector exists for that app. If it does, Claude uses the connector. If no connector is available, Claude falls back to computer use and interacts with the app visually on your screen.

Why does Claude ask for permission before controlling apps?

When Claude needs to fall back to direct computer control, it asks for your permission first. This is a safety measure. You get to approve which applications Claude can interact with during a session.

For example, if you ask Claude to update your reminders app and no connector exists for it, Claude will prompt you with something like "Claude wants to use Reminders." You approve it, and then it proceeds. This keeps you in control of what the AI can touch on your machine.
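Put together, the routing and permission flow described above can be sketched in a few lines. Everything here is a hypothetical illustration: the connector list, function names, and prompt text are assumptions for the sketch, not Anthropic's actual implementation.

```python
# Illustrative sketch of connector-first routing with a permission gate.
# CONNECTORS, route_task, and the prompt wording are hypothetical.

CONNECTORS = {"slack", "google calendar"}  # apps with pre-built API connectors

def route_task(app: str, approve) -> str:
    """Prefer an API connector; fall back to visual control only with consent."""
    name = app.lower()
    if name in CONNECTORS:
        return f"connector:{name}"             # fast, reliable API path
    if approve(f"Claude wants to use {app}"):  # explicit user approval
        return f"computer_use:{name}"          # visual screen-control fallback
    return "denied"

# Usage: auto-approve (or deny) for demonstration
print(route_task("Slack", lambda msg: True))       # connector path
print(route_task("Reminders", lambda msg: True))   # no connector -> computer use
print(route_task("Reminders", lambda msg: False))  # user declined
```

The key property the sketch captures: the slower, less predictable visual path is only ever reached when no connector exists and the user has said yes.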

Is this approach similar to RPA?

If you're familiar with Robotic Process Automation (RPA), this approach will look familiar. RPA tools have been doing screen-based automation in enterprises for years. The difference here is that Claude uses visual AI to interpret the screen dynamically rather than relying on pre-mapped selectors or coordinates.

This makes it more flexible but also less predictable. Traditional RPA bots follow deterministic paths. Claude has to figure out the interface as it goes, which means it can adapt to changes but also makes mistakes along the way.

If this kind of automation interests you, the Complete RPA Bootcamp teaches you how to build professional automation solutions using RPA, agentic automation, and enterprise orchestration. It's designed to take you from beginner to pro, and it's a strong career move as AI and automation reshape the job market.

Real demos of Claude computer use

The best way to understand this feature is to see it in action. The demos shown cover both simple one-off tasks and more complex multi-step workflows.

Can Claude add events to your calendar?

In the first demo, Claude was asked to add an event to the desktop calendar app between 6 PM and 7 PM with the text "be home for groceries." Claude opened the calendar app, navigated to the correct date and time, and added the event.

While it was working, the screen displayed a pulsating orange border around the edges to indicate that Claude was actively controlling the computer. Once the task was done, the small agent window expanded back to full view. The whole process required minimal instruction.

How does Claude handle multi-app workflows?

The second demo was more ambitious. The prompt asked Claude to:

  • Open the Reminders app and add pancakes, syrup, and butter to a grocery list.
  • Navigate to Uber Eats.
  • Find the nearest Target store.
  • Search for those items and add them to the cart.

Claude worked through each step sequentially. It opened Reminders, added the items (though it made a typo on "butter"), then moved to Uber Eats, searched for Target, and started adding grocery items to the cart. It even picked a specific product, Kodiak Power Cakes flapjack waffle mix.

This kind of multi-app workflow is where computer use gets genuinely useful. Stitching together actions across different applications without any pre-built integration is powerful.

What happens when Claude makes mistakes?

Claude isn't perfect. In the demo, it misspelled "butter" as "utter" when adding it to the Reminders app. However, because it can see the screen, it was able to recognize the error and go back to fix it.

This self-correction ability is important. The AI treats the screen as continuous feedback. If something doesn't look right, it can attempt to course correct. That said, this process takes time and adds to the overall duration of the task.
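The observe, act, verify cycle behind that self-correction can be sketched as a toy loop. Everything below (the simulated screen, the flaky keyboard that drops a keystroke, the function names) is an illustration of the pattern, not Claude's internals.

```python
# Toy observe-act-verify loop: type a word, check the "screen", retry on error.
# The flaky keyboard drops the first keystroke once, mimicking the "utter" typo.

class FlakyScreen:
    def __init__(self):
        self.text = ""
        self._dropped = False

    def type(self, word: str):
        if not self._dropped:      # first attempt loses the leading character
            self._dropped = True
            self.text = word[1:]
        else:
            self.text = word

    def read(self) -> str:         # a "screenshot" of the current state
        return self.text

def type_with_verification(screen: FlakyScreen, word: str, max_retries: int = 3) -> bool:
    for _ in range(max_retries):
        screen.type(word)          # act
        if screen.read() == word:  # observe and verify against intent
            return True
        screen.text = ""           # clear the field and try again
    return False

screen = FlakyScreen()
print(type_with_verification(screen, "butter"))  # first try yields "utter", loop fixes it
```

The retry cap matters: each verification pass means another screenshot and another round of inference, which is exactly where the extra task time comes from.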

Speed, reliability, and honest expectations

It's easy to get excited about this feature, but it's worth setting realistic expectations about where things stand today.

Is Claude computer use faster than doing tasks yourself?

Honestly, no. Not yet. For most simple tasks, you'll be faster doing them manually. Claude has to run inference on every screen it sees, decide what action to take, execute it, then check the result. Each step adds latency.

When errors happen, the correction loop adds even more time. For quick tasks like adding a calendar event, you could do it in seconds. Claude might take a minute or more.
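Some back-of-the-envelope arithmetic makes the latency point concrete. All per-step timings below are illustrative guesses, not measurements of Claude.

```python
# Rough latency model: each agent step = screenshot + inference + action + verify.
# Every number here is an assumed, illustrative cost, not a measured one.

SCREENSHOT_S = 0.5  # capture and encode the screen
INFERENCE_S  = 4.0  # model decides the next action
ACTION_S     = 0.5  # execute the click or keystrokes
VERIFY_S     = 1.0  # re-check the result

def task_duration(steps: int, corrections: int = 0) -> float:
    per_step = SCREENSHOT_S + INFERENCE_S + ACTION_S + VERIFY_S
    return (steps + corrections) * per_step

# A simple calendar event might take ~8 agent steps, plus 2 correction steps.
print(f"{task_duration(8):.0f}s without errors")       # 48s
print(f"{task_duration(8, 2):.0f}s with corrections")  # 60s
```

Under these assumed costs, even an error-free run of a trivial task lands near a minute, which matches the gap between "you could do it in seconds" and what the demos show.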

When does computer use actually make sense?

The real value shows up in two scenarios. First, multi-step workflows that span several applications. If you need to pull data from one app, process it in another, and update a third, Claude can handle the tedious navigation while you focus on something else.

Second, remote control via Dispatch. If you're away from your computer and need something done on your desktop, being able to instruct Claude from your phone is genuinely useful.

For anything that requires speed and precision on a single app, you're still better off doing it yourself.

How does this compare to tools like Open Interpreter?

This is a question many people are asking. Tools like Open Interpreter also let AI control your computer, but they typically work through code execution rather than visual screen interaction. Each approach has trade-offs.

Claude's visual approach is more flexible since it works with any app that has a GUI. Code-based approaches are faster and more reliable for tasks that can be scripted. Some users will likely combine both, using Claude's computer use for GUI-heavy tasks and code-based tools for everything else.

What does this mean for the future of automation?

This is clearly early days. The infrastructure works. The concept is proven. But the speed and reliability need to improve before this becomes a daily driver for most people. Anthropic will likely iterate quickly on this, and as models get faster and more accurate at visual understanding, the experience will improve significantly.

For a detailed walkthrough of Claude's computer use in action, including the full calendar and grocery shopping demos, watch the video embedded below from the Developers Digest YouTube channel. Seeing the pulsating screen border and the real-time error correction gives you a much better sense of how this feature actually feels to use.