
Grammarly will keep using authors’ identities without permission unless they opt out
Last week, my colleagues discovered that Superhuman's Grammarly had turned me into an AI editor, using my real name, without ever asking my permission. They did the same to my boss Nilay Patel, my colleagues David Pierce and Tom Warren, and, as Wired initially reported last Wednesday, many authors far more famous than us. Grammarly's new "Expert Review" feature uses our names to give its AI suggestions credibility they don't deserve. Now, Grammarly has finally addressed the backlash, but not by apologizing, and not by walking the feature back. For now, it will graciously give us the chance to opt out of something we didn't know it … Read the full story at The Verge.
# Grammarly Is Using Your Name to Sell AI Without Permission—Here's What You Need to Know
Grammarly has quietly turned thousands of writers, journalists, and content creators into unwilling AI endorsers, and the company is betting most of you won't notice—or won't care enough to opt out. This isn't a minor privacy hiccup; it's a fundamental shift in how tech companies are willing to use your identity, and it reveals the murky landscape of artificial intelligence development in 2026, where your reputation can become a product without your consent. Understanding what Grammarly did, why it matters, and what you should do about it is essential if you care about controlling your own image in an AI-saturated world.
Last week, journalists at *The Verge* confirmed what Wired first reported: Grammarly's new "Expert Review" feature was using the real names and likenesses of acclaimed writers, including *The Verge*'s own Editor-in-Chief Nilay Patel, reporters David Pierce and Tom Warren, and numerous other high-profile authors, to lend credibility to its artificial intelligence writing suggestions. The kicker? Nobody gave permission. Grammarly didn't ask. It simply activated the feature and began attributing AI-generated editorial advice to real people without their knowledge or consent. When confronted with the backlash, Grammarly didn't apologize or remove the feature. Instead, the company offered affected individuals the chance to opt out, a response that privacy advocates say inverts the ethical framework entirely.
## The Scope of the Problem: Who's Affected and Why
The implications of this practice extend far beyond a handful of tech journalists. Grammarly, which boasts over 30 million users globally, appears to have leveraged a broad database of writer identities—likely scraped from published articles, social media, or other public sources—to populate its "Expert Review" feature. The company essentially created fake expert personas without consent, then monetized those identities through premium subscription features.
What makes this particularly insidious is the mechanism: when Grammarly users encounter a writing suggestion from the new feature, they see it attributed to a named expert, which psychologically increases the likelihood they'll trust and implement the advice. That credibility transfer—from real human expertise to algorithmic output—is precisely what Grammarly was banking on. For writers and journalists, this means their professional reputation becomes a marketing tool without compensation or control.
The 2026 technology landscape is filled with similar boundary-pushing incidents, but few have been executed so brazenly. Grammarly will keep using authors' identities unless they take action, which means the burden of protecting your own image falls entirely on the individual.
## What Grammarly Says vs. What Actually Happened
In its official response, Grammarly acknowledged the controversy but framed the feature as an attempt to provide "better writing suggestions." The company claims it selected names based on writing samples and expertise, essentially arguing the feature was a form of flattery. This explanation crumbles under scrutiny: if Grammarly truly valued these writers' contributions, it would have asked permission first.
The opt-out model Grammarly implemented is particularly telling. Privacy experts note that ethical AI development typically operates on an opt-in basis: you must affirmatively consent before your identity is used. Grammarly inverted this, deploying first and asking for removal later. It's a pattern we're seeing across the tech industry in 2026, where companies push forward aggressively and apologize only when forced. The best advice for users? Check your account settings immediately.
## What You Should Do Right Now
If you have a Grammarly account—whether free or premium—log in and navigate to your account settings. Look for the "Expert Review" feature and related privacy controls. If you don't want your name associated with AI-generated writing suggestions, you'll need to manually opt out. Grammarly will keep using your identity otherwise, so silence equals consent under their current framework.
Beyond Grammarly, this incident should prompt broader reflection about your digital footprint. Review privacy settings on any writing platforms you use, including LinkedIn, Medium, Substack, and Twitter. Consider whether you're comfortable with your published work being used to train AI systems or populate features like this one.
For journalists and professional writers, document everything. Screenshot your opt-out requests. If Grammarly continues using your identity after you've requested removal, you may have grounds for legal action. Several attorneys are already exploring class-action potential related to the opt-out process the company has been forced to establish.
## Bottom Line
Grammarly's decision to use writers' identities without permission represents a critical moment in the 2026 technology landscape—one where companies will exploit your reputation unless you actively stop them. You have leverage here: log into your account today, find the opt-out option, and protect your name before Grammarly sells more credibility it didn't earn.
Source: theverge.com