On Wednesday, The Verge published something that should make any content ops lead at a 25–50 person agency stop and read the whole thing. Grammarly's "Expert Review" feature — quietly rolled out in August 2025 — had been surfacing AI-generated writing feedback in Google Docs under the names of real, living journalists. Not vague "inspired by style" notes buried in an info panel. Comments that appeared to be from actual named people, formatted to look just like a real collaborator's suggestion.
The list of people whose identities Grammarly used: Nilay Patel (editor-in-chief, The Verge), David Pierce (editor-at-large, The Verge), Sean Hollister (The Verge), Tom Warren (The Verge), Lauren Goode (Wired), Mark Gurman (Bloomberg), Kashmir Hill (The New York Times), Jason Schreier (Bloomberg), and dozens more. None were asked. None consented.
Superhuman, the parent company Grammarly became after its October 2025 rebrand, responded by saying the experts "appear because their published works are publicly available and widely cited." That's the company's complete defense. Public availability as blanket license to impersonate.
Consent Theater at Scale
The feature's mechanics deserve scrutiny. When a user clicks "Expert Review" in the Grammarly sidebar, the AI analyzes their document and surfaces suggestions "inspired by" relevant experts. In Google Docs, those suggestions appear as what look like user comments — the kind your actual collaborators leave. The only signal that it's AI-generated is a small label that's easy to miss on a busy document.
The Verge tested this and found something worse than the identity issue: the suggestions weren't even accurate to the real people they claimed to represent. A comment attributed to Verge senior editor Sean Hollister recommended adding a parenthetical with context that was already present elsewhere in the document, exactly the kind of redundancy the real Hollister, who edits for concision, would cut. The AI wasn't reflecting Hollister's actual editing style. It was generating generic suggestions and attaching his name to them.
The source links attached to suggestions were similarly unreliable. Multiple links went to spammy archive copies of legitimate sites, to dead pages, or, in some cases, to articles by people other than the named expert. The sourcing infrastructure meant to validate the "inspiration" didn't hold up to a single click.
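That failure mode is easy to spot-check in your own documents. Below is a minimal sketch of a link sweep, assuming the suggestion URLs have already been copied out into a list; the URLs and the "suspect domain" fragments are illustrative placeholders, not anything Grammarly actually uses, and the only dependency is the requests library.

```python
import requests

# Placeholder: suggestion URLs copied out of a document's
# Expert Review comments. Replace with your own list.
urls = [
    "https://example.com/original-article",
    "https://example-mirror.net/scraped-copy",
]

# Assumption: domains containing these fragments tend to be scraped
# or archived copies rather than the original publisher.
SUSPECT_FRAGMENTS = ("archive", "mirror", "cache")

for url in urls:
    try:
        # GET rather than HEAD: many sites reject HEAD requests.
        resp = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException as exc:
        print(f"DEAD     {url} ({exc.__class__.__name__})")
        continue

    if resp.status_code >= 400:
        print(f"BROKEN   {url} -> HTTP {resp.status_code}")
    elif any(frag in resp.url for frag in SUSPECT_FRAGMENTS):
        # Followed redirects landed somewhere other than the cited source.
        print(f"SUSPECT  {url} -> {resp.url}")
    else:
        print(f"OK       {url}")
```

A pass like this won't judge whether a source actually supports the suggestion, but it catches the dead pages and mirror-site redirects The Verge found on the first click.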
The Dead-Writer Problem Is the Smaller Issue
Coverage focused heavily on Grammarly naming deceased academics as reviewers, Carl Sagan and recently deceased professors among them, which is genuinely unsettling. But for business operators, the more immediate exposure is the living-person problem.
When a business's document runs through Grammarly and receives a comment labeled with a real journalist's name, a few things have happened that carry actual liability weight:
- A real person's professional identity was used commercially without authorization to generate output that may influence business decisions.
- The output was inaccurate to that person's actual views or practices.
- The business employee reading that comment had no reliable signal that it wasn't real.
Superhuman's terms of service don't resolve this. The company's position — that public work equals implied license — is a legal theory, not settled law, particularly under right-of-publicity statutes that several of the affected journalists are now presumably in a position to test.
Inside Your Documents, It Runs Automatically
The practical exposure for agency content teams isn't theoretical. Grammarly Business accounts push the sidebar into Google Docs, Word, and email clients across the team. If Expert Review was enabled — and it's on by default — it was running on client proposals, strategy documents, press releases, and internal reports.
A content ops lead at a 35-person agency managing 6–10 client accounts has potentially had AI-generated feedback, carrying the names of real editors they may actually know professionally, flowing into documents that informed client deliverables. That's not a "future concern to monitor." That's a question worth answering in the next team standup.
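Answering it doesn't require guesswork: Google Drive's standard v3 API exposes document comments, including each comment's author display name, so a short script can sweep a folder of client Docs for comments that look machine-injected. A minimal sketch, assuming a service account with read access to the folder; the folder ID, credentials path, and matching markers below are placeholders, since the exact author name and label Grammarly attaches to injected comments may differ:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
FOLDER_ID = "YOUR_FOLDER_ID"  # placeholder

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # placeholder path
)
drive = build("drive", "v3", credentials=creds)

# Assumption: injected comments are identifiable by author name or
# an "Expert Review" label. Adjust after inspecting one known example.
SUSPECT_MARKERS = ("grammarly", "expert review")

# Every Google Doc in the target folder.
docs = drive.files().list(
    q=f"'{FOLDER_ID}' in parents and "
      "mimeType='application/vnd.google-apps.document'",
    fields="files(id, name)",
).execute().get("files", [])

for doc in docs:
    # Drive v3 requires an explicit fields mask for comments.list.
    comments = drive.comments().list(
        fileId=doc["id"],
        pageSize=100,
        fields="comments(author(displayName), content, createdTime)",
    ).execute().get("comments", [])

    for c in comments:
        haystack = (c["author"]["displayName"] + " " + c["content"]).lower()
        if any(marker in haystack for marker in SUSPECT_MARKERS):
            print(f"{doc['name']}: {c['author']['displayName']} "
                  f"({c['createdTime']}): {c['content'][:80]}")
```

Extending SUSPECT_MARKERS with the journalist names listed above makes the sweep stricter, at the cost of flagging any genuine colleague who shares a name with one of them.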
The fix is simple: disable Expert Review in the Grammarly Business admin console under AI features. It's a one-toggle change. The harder question is whether you trust a vendor that built this feature to be the default in your document environment.
The Exposure Your Legal Team Will Ask About
Grammarly holds an unusual position in the business software stack: it's an ambient tool, meaning most people don't consciously invoke it. It runs in the background, inserting suggestions into whatever you're writing. That ambient presence is what made it valuable at scale. It's also what makes this feature's rollout structurally different from, say, an AI review tool you actively launch.
Superhuman's rebrand was supposed to signal a shift toward a full AI productivity platform. The Expert Review feature was part of that positioning: a demonstration that Grammarly could go beyond spell-check into high-stakes editorial judgment. What it actually demonstrated is that the company is willing to attach real professionals' identities to AI outputs and defend the practice with a single sentence about public availability.
There's a version of this feature that could work: synthetic personas, fictional editorial archetypes, or actual opt-in partnerships with named experts. Grammarly built none of those. It lifted names, attributed fabricated advice to them, surfaced that advice in documents where it looked like peer review, and waited for someone to notice.
Someone noticed. The question now is what your team does with that information before the next document goes out.
