The email arrived at 9:47 AM on a Tuesday. Julia Angwin, the investigative journalist, didn't request it. Nobody asked her permission. But there it was: an AI-cloned version of her writing style, offered up as a "writing expert" inside Grammarly's Expert Review feature—complete with her name, her credentials, her professional voice. Except it wasn't her at all.
This is what happens when AI stops pretending to be a tool and starts pretending to be a person.
Across the tech industry this week, two events crystallized the same uncomfortable truth. While Superhuman, the company formerly known as Grammarly, was quietly discontinuing its Expert Review feature after Angwin filed a class-action lawsuit, Anthropic was shipping something else entirely: Claude Computer Use, a system that gives its AI direct control over your desktop. One story made headlines. The other changed what AI can do. Together, they reveal a technology industry that has decided the next breakthrough isn't just in what AI can think; it's in what AI can impersonate.
The mechanics differ, but the trajectory is identical. Expert Review cloned the writing styles of real journalists, academics, and professionals without consent. The AI studied their published work, absorbed their syntax patterns, then offered their "expertise" to paying customers as if the experts themselves had signed on as consultants. When pressed by The Verge's Nilay Patel in a tense interview, Superhuman CEO Shishir Mehrotra called it "a feature," then killed it three weeks later, after the backlash grew too loud to ignore.
Claude Computer Use works differently but pursues the same goal: making AI feel less like software and more like a person sitting at your desk. The system doesn't just answer questions. It moves your mouse, opens your applications, reads your spreadsheets, types into your documents. When Verge reporter Jess Weatherbed tested it, she watched Claude autonomously navigate a travel-booking website, compare prices across tabs, and complete a transaction, all without pausing to narrate what it was doing. It just did it.
The difference between these two products is the difference between forgery and automation. Expert Review faked human expertise. Computer Use automates human actions. But both assume the same future: one where AI doesn't just assist you—it acts as you.
This isn't theoretical. In security operations centers, companies like Reco are already deploying Claude through Amazon Bedrock to respond to incidents at machine speed. The AI reads alerts, determines impact, and recommends responses faster than any human analyst could. The human is still in the loop—but increasingly as an approver, not an actor. The writing was on the wall when chatbots learned to apologize; now they're learning to book your flights.
What changed this week wasn't capability. AI systems have been able to manipulate text and follow instructions for years. What changed was the industry's willingness to cross a line it had previously respected: the one between "AI helps humans" and "AI replaces human presence."
Bernie Sanders learned this the hard way when he released a video attempting to "catch" Claude admitting that the AI industry exploits workers. He wanted a gotcha moment. Instead, he got a chatbot: agreeable, non-committal, compliant. The internet made memes. The real story got buried: the senator was trying to hold a mirror to a technology that has already learned to smile politely and say exactly what it was told to say.
The question isn't whether AI can pretend to be human. It can. The question is what we do about it.
Superhuman chose to kill its feature after a lawsuit. Anthropic is shipping its feature into a market hungry for agents. Somewhere between those two responses sits the answer most companies will actually choose: ship first, apologize later, iterate until the lawyers catch up. The impersonation isn't a bug in the system—it's increasingly the product.