You've spent years building trust with your computer. Now Anthropic wants you to trust a piece of software that can point, click, and scroll across your screen without you watching.
Claude Code's new "auto mode" represents a fundamental shift in how humans and AI tools interact. Starting today, Pro and Max subscribers on macOS can enable a setting where Claude Code no longer pauses after every action to ask for permission. Instead, it reads your screen, decides what to click, and executes tasks autonomously. When you return, the work is done—or partially done, because Anthropic openly warns this feature "won't always work perfectly."
This is not a small UX tweak. It's a philosophical boundary crossing. For two years, the AI assistant paradigm held steady: the AI suggests, you approve, the AI acts. Claude Code's auto mode breaks that contract. The AI now operates on your machine like a remote employee with a shared login—opening files, navigating browsers, running development tools—while you focus elsewhere.
Anthropic frames this as a productivity unlock. "The system will explore as needed," the company explains in its documentation. "When a Connector isn't available, Claude Code can ask permission to scroll, click to open, and explore." The Dispatch feature extends this further, letting you initiate remote tasks on a powered-on Mac from anywhere. Picture telling Claude to set up your development environment before you arrive at the office. It opens a terminal, clones the repo, runs the build script, and flags what broke.
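To make that concrete, here is a rough sketch of the kind of setup task being described — not Anthropic's actual implementation or API, just an illustrative shell script with placeholder values (`REPO_URL`, `BUILD_CMD`, the `DRY_RUN` guard) standing in for whatever Claude Code would actually execute:

```shell
#!/bin/sh
# Hypothetical sketch of a Dispatch-style setup task.
# Repo URL, working directory, and build command are placeholders,
# not part of any real Claude Code interface.
set -eu

REPO_URL="${REPO_URL:-git@example.com:team/app.git}"  # placeholder
WORKDIR="${WORKDIR:-$HOME/dev/app}"                   # placeholder
BUILD_CMD="${BUILD_CMD:-make build}"                  # placeholder

# Default to previewing actions rather than running them, loosely
# mirroring the "ask once, then act within a scope" model.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run git clone "$REPO_URL" "$WORKDIR"
run sh -c "cd '$WORKDIR' && $BUILD_CMD"
```

With `DRY_RUN=1` the script only prints the commands it would execute; flipping it to `0` runs them for real — which is, in miniature, the trust boundary the rest of this piece is about.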
But the critical word in Anthropic's own description is "permission." Even in auto mode, Claude Code asks once—then acts continuously within that session. That's meaningfully different from fully autonomous agents like OpenClaw, which reportedly pushed boundaries so aggressively that Anthropic accelerated this release as a competitive response. The guardrails aren't gone; they're just moved upstream. You grant a scope, the AI executes within it.
The real question isn't whether this works. For simple, linear tasks—renaming files, updating a config, running a test suite—it already does. The question is whether users understand what they've authorized. Auto mode creates a situation where a single approval cascades into dozens of actions you never previewed. That's powerful if you trust the system. It's unsettling if you've ever watched an AI confidently do the wrong thing at scale.
Competitively, this lands Claude Code squarely against Cursor, Copilot Workspace, and a wave of startups building "vibe coding" tools that automate software development end-to-end. The difference is Anthropic's safety posture: the research preview label, the explicit acknowledgment that computer use is "more error-prone" than Connector-based access, the Mac-only constraint. This reads less like a finished product and more like Anthropic stress-testing where the line between assistant and agent should sit.
Pricing hasn't changed. Pro subscribers get access today at the existing $20/month tier. Max subscribers get higher rate limits. That's the same price as before, now with the option to hand your keyboard to a large language model.
Whether that's a fair trade depends entirely on how much you trust Claude's judgment—and how recently you've backed up your files.