Grammarly is facing serious legal trouble over its AI-powered "Expert Review" feature. The writing-assistant company has been hit with both a class action lawsuit and an individual suit from journalist and privacy researcher Julia Angwin, who alleges her identity was stolen to train an AI clone of her editing style.
What Happened
The feature, launched in late 2025, let Grammarly users receive writing feedback supposedly from human experts. In practice, the company used AI to replicate those experts' writing styles and expertise without obtaining meaningful consent. According to lawsuits filed in recent weeks, Grammarly essentially created digital clones of real people, using their names, credentials, and stylistic quirks without permission.
Julia Angwin, a well-known journalist and former ProPublica editor who has written extensively about AI ethics, was one of the "experts" whose identity was appropriated. She is now suing the company directly, alleging that Grammarly used her name and created an AI imitation of her editorial approach without her knowledge or consent.
Grammarly's Response
In response to mounting criticism, Grammarly announced on March 11, 2026, that it would disable the AI Expert Review feature entirely. The company acknowledged that it should have obtained clearer permission from the human experts whose identities were being replicated.
"We made a mistake," Grammarly said in a statement. "The feature did not meet our standards for consent and transparency."
However, the company has not yet addressed what will happen to the data it collected from users or how it plans to compensate the experts whose identities were used.
Why This Matters
The Grammarly case highlights growing concerns about AI companies using real people's identities, voices, and creative work without consent. As AI systems become more capable of mimicking individuals, legal frameworks are struggling to keep up.
Privacy advocates argue that this represents a new form of identity theft, one that exploits not just personal information but the intangible qualities that make each person's work unique. The experts whose identities were cloned did not merely have their names used; their entire editorial philosophies were replicated and monetized.
This lawsuit could set an important precedent for how AI companies handle human likenesses and expertise in their products. If the plaintiffs succeed, it could force the entire industry to rethink how it obtains consent for AI-generated impersonations.
What's Next
Both the class action lawsuit and Angwin's individual case are expected to proceed. Legal experts say the key questions will center on whether AI-generated imitations of real people constitute a form of identity theft, and whether existing privacy laws adequately protect against this new use case.
For now, Grammarly users who relied on the Expert Review feature will need to find alternative editing support. The company's decision to shut down the feature entirely suggests it may be difficult to fix in a way that satisfies both users and the experts whose identities were appropriated.
The incident serves as a reminder that the AI industry still has significant work to do in establishing ethical boundaries around consent, transparency, and the use of human identity in machine learning products.