ThisDayInAI

Grammarly Hit With Class-Action Lawsuit for Using Journalists' Identities in AI Feature Without Consent

Journalist Julia Angwin has filed a class-action lawsuit against Superhuman Platform, Inc., the parent company of Grammarly, alleging the company violated privacy and publicity rights by using the names and identities of hundreds of real writers in its AI-powered "Expert Review" feature — all without their knowledge or consent.

The Feature That Sparked the Firestorm

For months, Grammarly had been offering a premium feature called "Expert Review," which presented AI-generated editing suggestions as if they came from real-world authorities. Users would see advice attributed to prominent figures like Stephen King, Neil deGrasse Tyson, and other well-known writers and academics — none of whom had agreed to participate.

The deception was uncovered when tech journalist Casey Newton flagged the feature in his Platformer newsletter, prompting others — including staff at The Verge — to test the tool themselves. What they found was striking: the feature surfaced AI-generated suggestions paired with the names, photos, and professional credentials of real people, including The Verge's own editor-in-chief Nilay Patel.

"I found out my identity was being used by way of Casey Newton," Angwin stated in the complaint, which was filed as a class action on behalf of all individuals whose identities were used without consent.

The Lawsuit's Core Claims

The class-action complaint alleges that Superhuman violated laws prohibiting the use of a person's identity for commercial purposes without consent. It invokes both privacy rights and the right of publicity — the legal principle that individuals control how their name and likeness are used commercially.

This is particularly significant in the AI era, as companies face growing scrutiny for using creators' work and identities to train and market AI products. The lawsuit could set an important precedent for how AI companies are allowed to leverage real people's reputations.

The Backlash and Grammarly's Response

The public reaction was swift and fierce. Prominent journalists including Casey Newton and Kara Swisher publicly condemned the feature. The outcry forced Superhuman into damage control mode.

Initially, the company launched an email inbox where writers and academics could request to opt out — an approach that many criticized as putting the burden on victims rather than the company. Within days, Superhuman CEO Shishir Mehrotra announced the feature would be disabled entirely.

"The agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward," Mehrotra wrote on LinkedIn.

Broader Implications for the AI Industry

The Grammarly lawsuit arrives at a moment when the AI industry is grappling with fundamental questions about consent, attribution, and intellectual property. While much of the legal battle has focused on training data — whether AI companies can use copyrighted material to train their models — this case opens a new front: the unauthorized use of real people's identities in AI-generated outputs.

The distinction matters. Even if an AI company trains its models legally, using real people's names and likenesses to market or present AI-generated content is precisely the conduct that existing privacy and publicity laws were designed to prevent.

For AI companies building features that reference or invoke real people, the message from this lawsuit is clear: opt-in, not opt-out. Using someone's identity to sell a product has always required consent, and wrapping it in an AI feature doesn't change that fundamental principle.

The case also highlights the growing tension between AI companies' desire to add credibility to their outputs and the rights of the real humans whose reputations they're borrowing. As AI tools become more sophisticated, the temptation to blur the line between genuine human expertise and AI-generated content will only grow — making cases like Angwin's all the more important in drawing clear legal boundaries.
