Taylor Swift recently filed a series of trademark applications designed to protect the star from AI-enabled impersonations. Swift already holds a wide array of trademarks, but these latest filings, at least one intellectual property firm suggests, serve a new purpose: protecting the timbre and character of her voice itself through what is known as a "sound mark."
In two recent filings, posted April 24 by Swift's company, the celebrity applied to trademark two recordings. In one, she says, "Hey, it's Taylor," and in the other, "Hey, it's Taylor Swift." The recordings themselves are not particularly novel, but that is likely beside the point.
"The concept of protecting sound as a trademark is not new, though it remains relatively rare," wrote Josh Gerben, the Gerben IP attorney who spotted the trademarks on the law firm's website. "Historically, singers relied on copyright law to protect their recorded music. But AI technologies now allow users to generate entirely new content that mimics an artist's voice without copying an existing recording, creating a gap that trademarks may help fill."
Gerben added that, in theory, if an AI-generated imitation of Swift's voice became the subject of litigation, she could argue that uses resembling her registered vocal trademarks infringe on her intellectual property rights.
Gerben surmises that the goal is to protect the sound of Taylor Swiftās voice much like NBC protects its signature chimes. The strategy, which Matthew McConaughey has also pursued, reflects a novel approach for the AI age, though it remains untested in court.
Celebrities are among those most vulnerable to AI-enabled impersonations and broader unauthorized uses of their likenesses. While top artists and actors already face an enduring, whack-a-mole-style battle against fakes, the latest generation of AI models has made producing these imitations unnervingly easy and scalable.
For similar reasons, celebrities, particularly women, are frequently targeted by deepfake operations that use their faces and bodies in nonconsensual pornographic imagery. Swift herself has been subjected to such campaigns, including in early 2024, when illicit AI-generated images of her spread widely on platforms like 4chan.
In response, and for better or for worse, celebrities are racing to install guardrails for the AI age, or at least trying to figure out how to build them.
Swift's attempt to protect herself from AI via sound marks is only the latest example. In 2024, OpenAI paused the rollout of a ChatGPT voice that closely resembled Scarlett Johansson's (and, in an especially recursive twist, her performance as the chatbot in Her) after Johansson publicly criticized the company for allegedly imitating her voice. (OpenAI has said it used a different actor for the feature.)
In another example, the family of Martin Luther King Jr. pressured OpenAI to remove likenesses of the civil rights leader from its video generation platform, Sora, before it was shut down.
And, no doubt under pressure from talent agencies, YouTube recently said that it would expand its deepfake detection service to Hollywood, giving celebrities the option to request that certain videos featuring AI-generated versions of them be removed.
"With support from leading talent agencies and management companies, including CAA, UTA, WME, and Untitled Management, we've worked to refine how likeness detection can best serve talent," the platform said in a statement. "We're excited that celebrities and entertainers are now eligible to access this tool, regardless of whether they have a YouTube channel."
In a market where appearance and likeness are everything, AI presents, at minimum, a new annoyance for artists seeking control, including financial control, over how their face and voice are used. That tension will likely continue to frustrate celebrities. Last year, more than 400 Hollywood leaders wrote to OpenAI and Google opposing the use of copyrighted work to train models without permission.
It's notable that celebrities are pushing for protections against some of AI's most noxious abuses. What remains unclear is whether those protections will extend to the rest of us, who also face the growing risk of digital impersonation, or simply allow the Hollywood elite to opt out of a new internet increasingly stuffed with endless uncanny mimicry.