Dynamoi News

Deezer Exposes 70% Of AI Music Streams As Fraudulent

Music platform's detection tool reveals massive bot-driven scheme siphoning royalties from legitimate artists through AI-generated tracks.

Trevor Loucks


Founder & Lead Developer, Dynamoi


Deezer dropped a bombshell this week: 70% of streams from AI-generated music on its platform are fraudulent, driven by bots rather than real listeners.

The French streaming service's proprietary detection tool has become the industry's first line of defense against what executives call "musical money laundering."

Why it matters:

20,000 AI tracks flood Deezer daily—and most exist solely to game royalty systems.

The discovery exposes a systematic attack on streaming economics that diverts millions from legitimate artists. AI-generated content currently represents just 0.5% of total streams, but fraudsters are using bot armies to artificially inflate play counts.

"The only thing that we didn't really find is some kind of emergence of organic, consensual consumption of this content," Manuel Moussallam, Deezer's head of research, told NPR.

By the numbers:

  • Daily AI uploads: 20,000 tracks per day, up from 10,000 in January
  • Growth rate: 18% of all new uploads are fully AI-generated, nearly double the 10% recorded three months earlier
  • Revenue dilution: with 70% of AI streams fraudulent, genuine human listens to AI content account for well under 1% of all streams
  • Detection capability: 100% detection rate for major AI models such as Suno and Udio

The system:

Deezer's tool identifies AI signatures across multiple generators without requiring specific training datasets. The company filed two patents in December 2024 for unique detection methods that can adapt to new AI models.

Once detected, AI tracks are excluded from algorithmic recommendations and fraudulent streams are removed from royalty calculations.
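Deezer has not published implementation details, but the policy it describes—flag AI tracks, drop them from recommendations, and strip bot-driven plays from payouts—can be sketched roughly as follows. All names, fields, and numbers here are illustrative, not Deezer's:

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    is_ai_generated: bool  # output of an AI-music detector
    streams: int           # total recorded plays
    bot_streams: int       # plays a fraud detector attributes to bots

def recommendable(tracks):
    """AI-flagged tracks are excluded from algorithmic recommendations."""
    return [t for t in tracks if not t.is_ai_generated]

def royalty_streams(track):
    """Fraudulent (bot) streams are removed before royalties are calculated."""
    return max(track.streams - track.bot_streams, 0)

catalog = [
    Track("human song", is_ai_generated=False, streams=1000, bot_streams=0),
    Track("ai song", is_ai_generated=True, streams=1000, bot_streams=700),
]

print([t.title for t in recommendable(catalog)])  # only the human track
print(royalty_streams(catalog[1]))                # 300 of 1000 plays count
```

The key design point mirrored here is that the two penalties are independent: an AI flag alone removes a track from recommendations, while only streams identified as fraudulent are withheld from royalty calculations.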

Detection challenges:

While Deezer leads the pack, its tool focuses primarily on waveform-based generators and recognizes only output from certain tools, meaning detection can be bypassed by other models.

Other platforms remain vulnerable. Spotify is not currently taking steps to label AI-generated content, despite CEO Daniel Ek's public support for AI in music creation.

Industry impact:

The fraud scheme mirrors traditional streaming manipulation but with a twist: AI enables rapid content creation at scale.

A North Carolina man was charged with stealing $10 million in royalties using AI-generated songs and bot networks, underscoring the financial stakes involved.

Platform responses:

Spotify: Takes a hands-off approach, stating "Spotify doesn't police the tools artists use in their creative process"

YouTube Music: Focuses on deepfake voice detection through Content ID

SoundCloud: Prohibits monetization of exclusively AI-generated content

What's next:

Deezer's transparency push puts pressure on competitors to implement similar systems. CEO Alexis Lanternier called it "an industry-wide issue" requiring a coordinated response.

The company plans to develop "a remuneration model that distinguishes between different types of music creation" while maintaining its artist-centric payment approach.

Regulatory implications:

Experts suggest government intervention may be necessary. UC Berkeley's Hany Farid compared the situation to food labeling requirements: "We're simply informing you" about content origins.

The bottom line:

Deezer's findings reveal AI music fraud as a revenue threat, not just a creative concern. With detection tools proving their worth, streaming platforms face pressure to implement transparency measures or risk becoming unwitting accomplices in musical money laundering.