Note: Judge ROI by cost per save and follow-through behavior, not CPM alone. A cheaper impression channel is not the winner if those listeners do not save, return, or convert into repeat streams.
This page gives planning benchmarks for label marketers, managers, and performance teams that need realistic targets before launch. Use these as starting bands, then replace them with your own account-level baselines after 2-3 campaign cycles.
Note: Source dataset: streaming royalties RPM dataset. License and attribution terms: Data License.
Benchmark bands by funnel stage
| Funnel stage | Primary metric | Conservative working range | What "good" usually means |
|---|---|---|---|
| Awareness | CPM | $5-$20 | Stable reach without frequency burn |
| Consideration | CPC / CPV | $0.30-$1.50 click, $0.02-$0.10 view | Creative keeps attention long enough to qualify traffic |
| Conversion | Cost per save | $0.30-$1.20 | Saves that sustain beyond release week |
| Retention | 7-day repeat rate | 15%-35% | New listeners returning without heavy paid support |
Platform-level planning ranges
These are broad planning bands. Actual results shift by territory, genre, creative quality, and campaign objective.
| Platform | Typical CPM band | Typical click/view cost band | Best use case |
|---|---|---|---|
| Meta | $8-$25 | $0.40-$1.50 click | Retargeting and conversion intent |
| TikTok | $4-$15 | $0.30-$1.20 click | Top-of-funnel discovery and creator momentum |
| YouTube | $4-$18 | $0.02-$0.10 view | Story depth plus scalable video reach |
| Spotify ads | $8-$25 | Depends on CPCV/CPC pricing model | In-platform listener attention |
Warning: A low CPM with weak conversion is often a hidden loss. Always read top-of-funnel cost together with cost per save and repeat listening trend.
Cost-per-save reality checks
| Signal | Interpretation | Recommended action |
|---|---|---|
| Under $0.50 | Usually strong campaign economics | Scale carefully and protect creative quality |
| $0.50-$1.00 | Often acceptable in competitive markets | Improve landing flow and CTA clarity before scaling |
| $1.00-$1.50 | Borderline efficiency | Narrow audiences and refresh hooks |
| Over $1.50 | Usually inefficient unless LTV is unusually high | Rebuild creative/targeting before adding spend |
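The reality-check table above is easy to encode as a simple triage rule. The sketch below is illustrative: the thresholds mirror the table's bands, but the function name and action strings are hypothetical, and in practice you would replace the thresholds with your own account-level baselines.

```python
def save_cost_band(cost_per_save: float) -> str:
    """Map a cost-per-save figure to the action bands in the table.

    Thresholds are the article's planning bands, not universal truths;
    tune them after a few campaign cycles.
    """
    if cost_per_save < 0.50:
        return "scale carefully"           # usually strong economics
    if cost_per_save <= 1.00:
        return "fix landing flow before scaling"  # acceptable in competitive markets
    if cost_per_save <= 1.50:
        return "narrow audiences, refresh hooks"  # borderline efficiency
    return "rebuild creative/targeting"    # usually inefficient unless LTV is high

print(save_cost_band(0.42))  # scale carefully
print(save_cost_band(1.65))  # rebuild creative/targeting
```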
For cross-platform sequencing that improves these numbers, see the cross-platform campaign strategy.
Scenario-based benchmark usage
Scenario 1: strong views, weak saves
You likely have awareness creative that does not hand off to conversion. Fix the bridge first, especially link flow and save-first CTA language. The TikTok to Spotify conversion guide usually offers the fastest fix here.
Scenario 2: decent saves, weak repeat listeners
You are buying first actions but not building habit. Improve post-save experience, artist profile context, and post-release content sequencing.
Scenario 3: good conversion in one territory only
Do not flatten budget globally. Split winning markets into dedicated campaigns and scale where economics are already proven.
Practical ROI model for planning meetings
| Question | Baseline assumption to start with |
|---|---|
| How much budget should we test before scaling? | 20%-30% of total campaign budget |
| How long before we trust signal quality? | 3-7 days, depending on volume |
| What is a fair early success marker? | Stable cost per save plus improving repeat-rate trend |
| When should we cut a creative angle? | If cost per save is materially worse than campaign median after enough spend |
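The "cut a creative angle" rule in the table can be sketched as a median comparison. This is one possible reading under stated assumptions: the `min_spend` threshold and the 30% tolerance over the campaign median are illustrative defaults I chose, not benchmarks from this page.

```python
from statistics import median

def creatives_to_cut(spend_and_saves, min_spend=100.0, tolerance=1.3):
    """Flag creatives whose cost per save is materially worse than the
    campaign median, once they have enough spend to judge.

    spend_and_saves: dict of creative name -> (spend, saves).
    min_spend and tolerance are assumptions for illustration only.
    """
    # Cost per save for every creative that has produced saves
    costs = {name: spend / saves
             for name, (spend, saves) in spend_and_saves.items()
             if saves > 0}
    campaign_median = median(costs.values())
    # "Materially worse" here means more than `tolerance`x the median
    return sorted(name for name, cps in costs.items()
                  if spend_and_saves[name][0] >= min_spend
                  and cps > tolerance * campaign_median)

data = {
    "hook_a": (150.0, 300),   # $0.50 per save
    "hook_b": (150.0, 250),   # $0.60 per save
    "hook_c": (150.0, 120),   # $1.25 per save
}
print(creatives_to_cut(data))  # ['hook_c']
```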
Map this into your release planning so benchmarks drive timing decisions: music release marketing timeline.
Reporting template that stakeholders can use
| KPI | Target band | Current | Direction | Decision |
|---|---|---|---|---|
| CPM | $5-$20 | | | |
| CPC / CPV | Channel-dependent | | | |
| Cost per save | $0.30-$1.20 | | | |
| Follow rate | Campaign-specific | | | |
| 7-day repeat rate | 15%-35% | | | |
The "Decision" column is the important one. Reporting that does not trigger action is noise.
Dynamoi campaign ROI: ad spend vs. royalty recovery
The ultimate ROI question for music advertising is whether the royalties generated by acquired listeners offset the ad spend. Dynamoi's first-party data provides the royalty side of the equation.
Platform RPM (Dynamoi first-party streaming data)
| Platform | RPM (per 1,000 streams) | Relative value |
|---|---|---|
| Amazon Music | $9.02 | 3x Spotify |
| YouTube Art Tracks | $5.28 | Country-dependent: DK $8.56, US $7.10, AU $7.53, UK $5.96 |
| Spotify | $3.02 | Largest volume; algorithmic amplification drives LTV |
| YouTube Content ID | $1.57 | Passive UGC monetization |
Worked example: Spotify save campaign ROI
| Variable | Value |
|---|---|
| Ad spend | $500 |
| Cost per save | $0.75 |
| Saves acquired | 667 |
| Avg streams per saver (12 months) | 150 |
| Total streams generated | 100,050 |
| Spotify RPM | $3.02 |
| Royalty revenue (12 months) | ~$302 |
| Net cost after royalty offset | ~$198 |
In this scenario the campaign does not pay for itself through Spotify royalties alone, but the effective cost drops to $0.30 per save once royalties are factored in. Higher-RPM platforms shift the math further: YouTube campaigns targeting US viewers at $7.10 AdSense RPM can approach break-even or profitability on ad spend alone, particularly for catalog content that generates views for months after the campaign ends.
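The worked example above can be generalized into a small planning function. A minimal sketch, using the numbers from the table; the function name is my own, and `streams_per_saver` remains an assumption you should replace with your catalog's actual 12-month behavior.

```python
def campaign_roi(ad_spend, cost_per_save, streams_per_saver, rpm):
    """Royalty offset and effective cost per save.

    rpm is revenue per 1,000 streams, as in the platform RPM table.
    """
    saves = round(ad_spend / cost_per_save)
    streams = saves * streams_per_saver
    royalties = streams / 1000 * rpm
    net_cost = ad_spend - royalties
    return {
        "saves": saves,
        "streams": streams,
        "royalties": round(royalties, 2),
        "net_cost": round(net_cost, 2),
        "effective_cost_per_save": round(net_cost / saves, 2),
    }

# Reproduces the Spotify save-campaign example:
# 667 saves, 100,050 streams, ~$302 royalties, ~$198 net, $0.30/save
print(campaign_roi(500, 0.75, 150, 3.02))
```

Swapping in a higher RPM (say, $7.10 for US-targeted YouTube views) shows how quickly the royalty offset changes the math, which is the point of planning against platform-level RPM rather than a single blended figure.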
For full platform royalty data, see the streaming royalties RPM dataset. For YouTube RPM broken down by 41 countries, see the YouTube RPM by country page.
Common benchmark mistakes
- Treating every campaign objective as directly comparable
- Scaling off one-day performance spikes
- Optimizing to click cost while ignoring save quality
- Using global averages where market-specific baselines are needed
Bottom line
Use benchmarks to set guardrails, not to declare victory. The most reliable operating pattern is simple: test with discipline, scale only where conversion quality holds, and update your benchmark sheet after every release cycle.
