Using AI to clone a real artist's voice without permission is both illegal under right of publicity (personality rights) laws and banned by major streaming platforms. Tennessee's ELVIS Act, effective July 1, 2024, explicitly protects voices from AI cloning. Similar laws exist in California, New York, and other states. Beyond the legal risk, Spotify, YouTube, TikTok, and Apple Music will remove unauthorized voice clones.
The core rule: you cannot use AI to replicate someone else's voice without documented authorization.
## What Is the Legal Framework for AI Voice Cloning?

### Right of Publicity
Voice is a protected element of personal identity under right of publicity laws:
- Your voice is part of your commercial identity
- Using someone's voice without consent can constitute misappropriation
- This applies whether the voice is real or AI-generated
- The test is whether the voice is "readily identifiable" as a specific person
### The ELVIS Act (Tennessee)
Tennessee enacted the first law explicitly addressing AI voice cloning:
**What it protects:** An individual's "voice," defined as "a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation."

**Who's covered:** Both living and deceased individuals (for up to 10 years after death)

**Penalties:** Civil liability plus criminal penalties up to a Class A misdemeanor (up to 11 months, 29 days of incarceration and a $2,500 fine)

**Key provision:** Creates liability for distributing technology whose "primary purpose" is creating unauthorized voice facsimiles
### Other State Laws
Following Tennessee, several states strengthened voice protections:
| State | Protection | Status |
|---|---|---|
| California | Strong right of publicity, AI amendments | Active |
| New York | Digital Replica Contracts Act (2025) | Effective |
| Illinois | Biometric protections applicable | Active |
| Texas | Right of publicity reforms | Introduced |
No federal law exists yet, though bipartisan proposals such as the NO FAKES Act are circulating in Congress.
## What Is NOT Allowed

**Cloning real artists' voices:**
- Using AI to replicate Drake's voice without permission
- Creating "The Weeknd AI" covers
- Generating vocals that sound like specific recognizable artists
- Using someone's voice samples to train custom AI models
**The "Fake Drake" example:** In 2023, the AI song "Heart On My Sleeve," featuring AI-generated vocals imitating Drake and The Weeknd, was removed from platforms after going viral. This track triggered industry-wide policy responses.
**Commercial exploitation:**
- Selling AI voice covers
- Distributing voice clones on streaming platforms
- Using recognizable voices in ads or content
> **Warning:** The legal standard is whether the voice is "readily identifiable" as a specific person. Even imperfect clones that listeners recognize can trigger liability.
## What IS Allowed

**Original AI voices:**
- AI-generated voices not based on real people
- Synthetic voices from licensed voice libraries
- Your own voice enhanced or modified by AI
**Authorized voice use:**
- Artists who have licensed their voices (Grimes via Elf.Tech)
- Voices from platforms with explicit creator consent programs
- Collaborations with documented permission
**Production uses:**
- AI voice effects on your own recordings
- Pitch correction and vocal processing
- Demo vocals meant for replacement
## What Are the Platform Policies on AI Voice Cloning?
All major platforms prohibit unauthorized voice cloning:
| Platform | Policy |
|---|---|
| Spotify | Removes "music that impersonates another artist's voice" without permission |
| YouTube | Allows takedown requests for AI-generated content that simulates an identifiable person's voice without their consent |
| TikTok | Prohibits AI vocals creating "false impression" of real person |
| Apple Music | Voice Passport technology scans for unauthorized clones |
Platform removal is often faster than legal action. Rights holders can request takedowns through standard abuse reporting channels.
## Which Artists Allow AI Voice Use?
Some artists have explicitly authorized AI use of their voices:
**Grimes:** Launched Elf.Tech, allowing anyone to use her AI voice in exchange for a 50% royalty split on master recordings

**Holly Herndon:** Created the "Holly+" voice model for community use

**Others emerging:** Various independent artists are exploring voice licensing programs
Using these authorized voices is legal and policy-compliant. Document that you have permission through their official programs.
## How to Protect Yourself
- Never clone recognizable artists without explicit written permission
- Use original AI voices or licensed voice libraries
- Check authorization programs if you want to use a specific artist's voice
- Disclose AI voice use where platforms require it
- Keep documentation of any voice permissions obtained
## What Is the "Sound-Alike" Gray Area?
What about voices that sound similar but aren't direct clones?
- **Explicit imitation:** Likely violates policies even if not identical
- **Genre-typical vocals:** Less likely to trigger issues if not targeting a specific artist
- **Coincidental similarity:** Harder to enforce, but still risky if recognizable
The safer approach: don't deliberately create content that sounds like specific artists.
## What Are the Consequences of Violation?

**Platform-level:**
- Immediate content removal
- Account warnings or suspension
- Loss of monetization
- Reputation damage
**Legal-level:**
- Civil lawsuits from artists or estates
- Statutory damages under state laws
- Criminal prosecution in Tennessee (ELVIS Act)
- Injunctions preventing further distribution
## What Is the Bottom Line?

AI voice cloning technology creates exciting creative possibilities, but it operates under serious legal constraints. The rule is simple:
- **Your voice:** Use freely
- **Original AI voices:** Use freely
- **Authorized artist voices:** Use with documentation
- **Unauthorized real voices:** Never
The combination of state laws, platform policies, and industry enforcement makes unauthorized voice cloning one of the highest-risk activities in AI music. The creative benefits don't outweigh the legal exposure for most creators.
