Ian Shepherd
@ianshepherd.bsky.social
Audio, video but mostly music. Mastering engineer, http://LoudnessPenalty.com, Perception AB & Dynameter plugins. Podcast http://themasteringshow.com
Yeah, people complain about edge cases in music too, but I'm not sure a significant improvement is possible without (e.g.) genre detection
November 16, 2025 at 11:00 AM
I definitely think it’s better than nothing, but yes - I’ve been disappointed by the lack of impact the normalisation has had - although I do think we’ve pulled back from the worst of it
November 16, 2025 at 12:59 AM
Fair comments. I don’t find momentary helpful either - short-term is the most useful for music for me
November 16, 2025 at 12:57 AM
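For context on the post above: momentary and short-term loudness differ only in window length under ITU-R BS.1770 / EBU R 128 - 400 ms vs 3 s. A minimal sketch of a windowed measurement follows, with K-weighting and gating omitted for brevity; `window_loudness` is an illustrative name, not a real API:

```python
import math

def window_loudness(samples, fs, window_s):
    """Loudness (LUFS) of the most recent window, using the BS.1770
    formula L = -0.691 + 10*log10(mean square).
    K-weighting and gating are omitted -- this is only a sketch."""
    n = int(fs * window_s)
    tail = samples[-n:]
    ms = sum(x * x for x in tail) / len(tail)
    return -0.691 + 10 * math.log10(ms)

# A steady full-scale sine: mean square ~0.5, so both windows agree.
fs = 48000
sine = [math.sin(2 * math.pi * 997 * t / fs) for t in range(fs * 4)]
momentary = window_loudness(sine, fs, 0.4)   # 400 ms window
short_term = window_loudness(sine, fs, 3.0)  # 3 s window
```

On steady material the two readings converge; on real music the 3 s window smooths out transients, which is why short-term tends to be the more useful number for mastering decisions.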
I get that, but it’s not a suggested standard, just their compromise level between playback volume and flexibility for normalisation.

What I find is that everyone still maxes everything out rather than taking the opportunity to master things at whatever level works best for the music 😕
November 15, 2025 at 10:21 PM
Agreed! But it's enabled by default and most music fans don't change that. Even fewer on Spotify, and nobody at all on YouTube.

That’s the ironic thing, most people in music production are listening to something different than the majority of music fans
November 15, 2025 at 8:17 PM
Fair enough - I don't advocate aiming at -14 (or any other integrated number), but I also don't find high LUFS a good solution. It's the increase in density that causes the change in sound, and I think there are more effective strategies to achieve that
November 15, 2025 at 7:53 PM
Well, I haven’t dug in that deeply, but I’m pretty sure they used a shelf because most signals just don’t have content up there.

Either way I totally get that it isn’t perfect, but it’s pretty good given the fairly crude model.
November 15, 2025 at 7:49 PM
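The "shelf" mentioned above is the first stage of the BS.1770 K-weighting pre-filter: roughly a +4 dB high shelf modelling the acoustic effect of the head. A sketch evaluating its magnitude response, using the biquad coefficients the standard publishes for 48 kHz (`gain_db` is an illustrative helper, not part of any library):

```python
import cmath, math

# Stage-1 "head" shelf of the BS.1770 K-weighting pre-filter,
# coefficients as published for fs = 48 kHz.
b = [1.53512485958697, -2.69169618940638, 1.19839281085285]
a = [1.0, -1.69065929318241, 0.73248077421585]

def gain_db(f_hz, fs=48000):
    """Magnitude response of the biquad at f_hz, in dB."""
    z = cmath.exp(-2j * math.pi * f_hz / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

low = gain_db(100)      # ~0 dB: the shelf leaves lows alone
high = gain_db(10000)   # ~+4 dB: the high-frequency plateau
```

Which illustrates the point: the weighting only boosts the top end by about 4 dB, so for typical music - with little energy up there - the crude model still tracks perceived loudness reasonably well.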
Agreed, but the standard actually works pretty well for music too, and is widely used. But there’s a lot of confusion about how it works and the best way to implement it
November 15, 2025 at 2:39 PM
Ugh 😕

They’re taking down sides and rear in a surround stream? Or do you just mean in a stereo downmix?
November 15, 2025 at 2:22 PM
I didn’t understand this to begin with, but you’re normalising to -6 dBTP, is that right? Do you not have to hit an integrated LUFS value these days?
November 15, 2025 at 2:20 PM
(Spotify use a limiter to lift songs below -11 LUFS if a subscriber chooses the “Loud” option in the preferences, but that’s a really small proportion of users afaict)
November 15, 2025 at 2:16 PM
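A sketch of that normalisation logic, assuming the commonly documented Spotify targets of -14 LUFS (default) and -11 LUFS ("Loud"); `normalization_gain` is a hypothetical helper, not Spotify's actual code:

```python
def normalization_gain(track_lufs, target_lufs=-14.0):
    """Static gain (dB) a player applies so the track hits the target.
    Negative = simply turned down; positive gain is where a limiter
    (or extra headroom) would be needed to avoid clipping."""
    return target_lufs - track_lufs

# Default target (-14): a -9 LUFS master is just turned down 5 dB.
quiet_down = normalization_gain(-9.0)                    # -> -5.0

# "Loud" target (-11): a -13 LUFS master would need +2 dB, which is
# where (per the post above) a limiter is used to create the headroom.
needs_limiting = normalization_gain(-13.0, -11.0) > 0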
I can imagine - I didn’t realise they would change the final mix of a show for broadcast 😕 What’s the point of standards in that case? Or are you talking about auto-DRM like Dialnorm?

For music streaming there is no dynamics processing, thankfully - just normalisation.
November 15, 2025 at 2:14 PM
…and try to answer that question with integrated LUFS values.

I think that’s misguided (although I do think short-term loudness can be a useful metric), but there are lots of people saying “master to -8” or
-14 etc., and I was curious to hear what people here think
November 15, 2025 at 1:43 PM
Yeah, dialogue intelligibility is interesting. When I’ve looked into it, it’s rarely been about LUFS and more to do with EQ, relative levels with FX & music, and often the actor’s diction…
November 15, 2025 at 12:54 PM
Interesting point
November 15, 2025 at 12:51 PM
Well maybe, but that’s an edge case for most music. In fact a peaking sine of any frequency is pretty damn loud, and deserves to be measured as such, doesn’t it?
November 15, 2025 at 12:50 PM
Interesting, I haven’t found that particularly
November 14, 2025 at 7:54 PM
This is the way 👍
October 23, 2025 at 7:31 PM
…provided you’re at -14 or higher.

Having said that, the process of achieving a higher LUFS sometimes influences the sound in a way that lower levels don’t.

Not enough words - hopefully that blog post will help!
October 23, 2025 at 7:26 PM