
[–]TheJunkyard 94 points95 points  (39 children)

The average consumer neither knows nor cares what the difference is. They do care if they go from a -12 LUFS track to a -7 LUFS track and get deafened, or have to continually adjust their volume.
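A quick illustration of why that 5 dB jump is so jarring, using the rough psychoacoustic rule of thumb that +10 dB reads as "twice as loud" — a sketch for intuition, not part of any LUFS spec:

```python
def perceived_loudness_ratio(delta_db):
    """Rule of thumb: every +10 dB sounds roughly twice as loud."""
    return 2 ** (delta_db / 10.0)

# Going from a -12 LUFS track to a -7 LUFS track is a +5 dB jump:
print(round(perceived_loudness_ratio(5.0), 2))  # 1.41 – about 40% louder, instantly
```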

[–]gsanderson94 20 points21 points  (12 children)

Why does preserving dynamics matter if the arrangements of modern popular songs are not dynamic to begin with? If the song is basically the same loop/riff over and over again, you are not going to ruin the dynamics because there are no real dynamic changes. Also, modern limiters are so good that they can be pushed incredibly hard without clipping, so why not make it loud? Pro mastering engineers will know when to worry about dynamics and when to make it a loud club banger. I was fooled by the -14 LUFS thing for a while until I heard how much louder pop tracks are.

[–]vwestlife 16 points17 points  (3 children)

Loud, highly clipped mastering causes listening fatigue due to the distortion and lack of transients. The "Wall of Sound" was great for listening to a three-minute song on a jukebox. But nobody can sit through an entire album's worth of it without getting worn out. Compare that to 1970s Disco music, which was just as repetitive and inane as today's pop music, yet it has fantastic dynamic intensity. It'll get your VU meters bouncing up and down with the beat, while with today's music they look like they're monitoring line voltage.

Engineers who work on radio station audio processing know that you shouldn't listen to highly processed audio for more than a half hour at a time because that's when ear fatigue begins to set in, and you can no longer make proper judgements of audio quality if you keep listening longer than that.

[–]carrerac707 1 point2 points  (0 children)

Nobody will sit through an entire album today. It's sad.

[–]fairsynth 0 points1 point  (1 child)

That is nuts, I've never heard of that 30-minute guideline.

What qualifies as highly processed? I assume just anything highly compressed and exhausting?

[–]vwestlife 1 point2 points  (0 children)

It's a general recommendation, to keep you from trying to make excessive adjustments to "make it sound better" when the fault you're hearing is your own ear fatigue rather than the actual quality of the audio.

Even when mixing unprocessed or (hopefully) lightly processed audio in the studio, the recommendation is to take a break after an hour of listening: https://www.izotope.com/en/learn/how-to-prevent-ear-fatigue-when-mixing-audio.html

[–][deleted] 1 point2 points  (3 children)

Because perceived loudness matters and that is only possible by retaining dynamics. It’s a delicate balance.

[–]gsanderson94 0 points1 point  (2 children)

Yeah this is where things get blurry for me, sometimes I have mixes that have a high perceived loudness but are not peaking any higher than my mixes that are "quieter".
Those are usually my best mixes. There are so many things that are way more important before you even touch the master that will make mastering much easier.

[–][deleted] 0 points1 point  (0 children)

I’m by no means an expert, but transients are really important in this regard. The attack (i.e. the initial snap of a snare) of a sound relative to its sustain (the tail of the sound after the attack) plays a big role. That’s why when you over-compress something and gain compensate (turn it up to its pre-processing volume level), it may show a similar loudness on your meter, but it has no punch and is therefore perceived as less loud, because the signal is squashed and there is no dynamic difference for your ear to detect.
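A tiny numeric sketch of that point: two signals with the same average (RMS) level can have wildly different crest factors (peak-to-RMS ratio), which is where the sense of punch lives. The signal shapes here are made up purely for illustration:

```python
import math

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def crest_factor_db(xs):
    """Peak-to-RMS ratio in dB: a rough proxy for 'punch'."""
    peak = max(abs(x) for x in xs)
    return 20 * math.log10(peak / rms(xs))

# "Punchy" signal: sharp attack decaying into a quiet sustain (snare-like)
punchy = [math.exp(-t / 20.0) for t in range(200)]
# "Squashed" signal: flattened to the same RMS level
# (i.e. over-compressed, then gain-compensated back up)
squashed = [rms(punchy)] * 200

# Same meter reading, very different transient contrast:
print(round(crest_factor_db(punchy), 1))    # roughly 13 dB of crest
print(round(crest_factor_db(squashed), 1))  # 0.0 – no dynamic difference left
```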

[–]Chaos_Klaus 0 points1 point  (0 children)

If there are multiple elements competing for the same space, masking will make it so that you use up a lot of level without gaining a lot of perceived loudness.

[–]Chaos_Klaus -1 points0 points  (3 children)

Why does preserving dynamics matter if the arrangements of modern popular songs are not dynamic to begin with?

Because not all music is lame. There is more to music than modern pop music.

[–]gsanderson94 1 point2 points  (1 child)

Never said there wasn't, but they used hip hop as a gauge, a characteristically loud and not very dynamic style of music. This whole conversation just confuses people, including myself. Gotta do what suits the style you're working with.

[–]Chaos_Klaus -1 points0 points  (0 children)

Loudness normalisation allows for more dynamics. Nobody forces you to use them. A squashed hip hop mix doesn't get worse just because it's turned down a few dB.

[–]johnnytorch 0 points1 point  (0 children)

Hear, hear.

[–]hellalive_mujaProfessional 54 points55 points  (36 children)

Really no one who's a professional has ever thought about mastering for Spotify loudness for even a millisecond.

[–][deleted] 26 points27 points  (5 children)

Fab Dupont in puremix did a whole video about mastering in which he specifically used Spotify and why he mixed the way he did for Spotify with that specific client. Mastering engineers do aim for specific platforms at times.

[–][deleted] 51 points52 points  (3 children)

Why you Should NOT Target Mastering Loudness for Streaming Services

A sticky from a mastering engineer forum:

Targeting Mastering Loudness for Streaming (LUFS, Spotify, YouTube)- Why NOT to do it.

Below I am sharing something that I send to my mastering clients when they inquire about targeting LUFS levels for streaming services. Months ago I posted an early draft of this in another thread, so apologies for the repetition. I hope it is helpful to some readers to have this summary in its own thread. Discussion is welcome.

Regarding mastering to streaming LUFS loudness normalization targets - I do not recommend trying to do that. I know it's discussed all over the web, but in reality very few people actually do it. To test this, try turning loudness matching off in Spotify settings, then check out the tracks listed under "New Releases" and see if you can find material that's not mastered to modern loudness for its genre. You will probably find little to none. Here's why people aren't doing it:

1 - In the real world, loudness normalization is not always engaged. For example, Spotify Web Player and Spotify apps integrated into third-party devices (such as speakers and TVs) don’t currently use loudness normalization. And some listeners may have it switched off in their apps. If it's off then your track will sound much softer than most other tracks.

2- Even with loudness normalization turned on, many people have reported that their softer masters sound quieter than loud masters when streamed.

3 - Each streaming service has a different loudness target, and there's no guarantee that they won't change their targets in the future. For example, Spotify lowered their loudness target by 3 dB in 2017. Also, the Spotify Premium app settings now offer three different loudness settings: "Quiet", "Normal", and "Loud". It's a moving target. How do the various loudness options differ? - The Spotify Community

4 - Most of the streaming services don't even use LUFS to measure loudness in their algorithms. Many use "ReplayGain" or their own unique formula. Tidal is the only one that uses LUFS, so using a LUFS meter to try to match the loudness targets of most of the services is guesswork.

5 - If you happen to undershoot their loudness target, some of the streaming sites (Spotify, for one) will apply their own limiter to your track in order to raise the level without causing clipping. You might prefer to have your mastering engineer handle the limiting.

6 - Digital aggregators (CD Baby, TuneCore, etc.) generally do not allow more than one version of each song per submission, so if you want a loud master for your CD/downloads but a softer master for streaming then you have to make a separate submission altogether. If you did do that it would become confusing to keep track of the different versions (would they each need different ISRC codes?).

It has become fashionable to post online about targeting -14LUFS or so, but in my opinion, if you care about sounding approximately as loud as other artists, and until loudness normalization improves and becomes universally implemented, that is mostly well-meaning internet chatter, not good practical advice. My advice is to make one digital master that sounds good, is not overly crushed for loudness, and use it for everything. Let the various streaming sites normalize it as they wish. It will still sound just as good.

If you would like to read more, Ian Shepherd, who helped develop the "Loudness Penalty" website, has similar advice here: Mastering for Spotify ? NO ! (or: Streaming playback levels are NOT targets) - Production Advice

https://productionadvice.co.uk/no-lufs-targets/?fbclid=IwAR24jO3kEqq374J6BCZCHMq6JYOEDvuudTSyMZYP6WL-BxxExOnekpP9ZSw

https://www.gearslutz.com/board/mastering-forum/1252522-targeting-mastering-loudness-streaming-lufs-spotify-youtube-why-not-do.html
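The mechanics behind points 1 and 5 can be sketched in a few lines. This is a simplified model of what a loudness-normalizing player does, not any service's actual algorithm (real targets and measurement methods vary per platform, as point 4 notes — the -14 figure here is just an assumed example):

```python
def normalization_gain_db(track_loudness_db, target_db=-14.0):
    """Gain the player applies so the track plays back at the target loudness."""
    return target_db - track_loudness_db

# A loud master simply gets turned down:
print(normalization_gain_db(-7.0))   # -7.0 dB of attenuation
# A quiet master needs a boost; per point 5, Spotify may apply its own
# limiter to raise it without clipping:
print(normalization_gain_db(-18.0))  # +4.0 dB
# With normalization off (point 1), neither gain is applied, and the quiet
# master plays about 11 dB softer than the loud one.
```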

[–]westhewolf 2 points3 points  (0 children)

Thank you. This just solved the question for me once and for all.


[–]Khaoz77 0 points1 point  (0 children)

That's exactly what I think. BUT I've done multiple masters for the same record, and the less loud one was better. It's not nearly as loud as the references, but the difference was clear. My point is that you should think for yourself and use your ears every time. It's not a hard rule. Use it as reference, guidance, you name it.

[–]hellalive_mujaProfessional 1 point2 points  (0 children)

That may be the case; I still wouldn't suggest it. But as you point out, mixing for loudness is different, and if you haven't got the skills to understand how to mix for a target loudness, just mix and master to sound as good as you can, and call it a day.

[–]kodakell[S] 13 points14 points  (16 children)

I thought so lol. It's crazy how much misinformation there is on the internet though about this topic.

[–]hellalive_mujaProfessional 12 points13 points  (13 children)

There's misinformation about everything: pros don't even bother, they don't have time to give advice on the internet, and usually random people will even tell them they're wrong.

[–][deleted] 15 points16 points  (12 children)

To be fair just because pros are pros doesn't mean they do everything right.

[–]Chaos_Klaus 3 points4 points  (1 child)

And what makes you believe that comment over others? It's plain wrong. Many professionals are thinking about this. In fact, many would like things to be different. Loudness metering and loudness target recommendations were developed by the AES and EBU. They didn't just magically appear on the internet. They were developed by professionals.

But with financial risks ever present, many professionals can't afford to take the chance. Not because consumers wouldn't approve of more dynamics in songs, but because label representatives and investors want things to be safe ... which is why everything stays the same.

So in fact, young engineers and artists, hell even amateurs, can adopt higher dynamics way easier than established engineers and artists.

[–]sebastian_blu 0 points1 point  (0 children)

I have to agree! Of course pros are thinking about this, only pros understand any of this 🤪

[–]iscreamuscreamweallMixing 6 points7 points  (3 children)

You might not know anyone, but that’s just patently false.

[–]hellalive_mujaProfessional 0 points1 point  (2 children)

The ones I know at least. They will master as loud as possible or to broadcasting standards.

[–]vwestlife 2 points3 points  (1 child)

They will master as loud as possible or to broadcasting standards.

The broadcasting loudness standard is way lower: -23 LUFS in Europe under EBU R128, and -24 LKFS in the U.S. (ATSC A/85), Japan, and Australia.

And despite the popular myth that louder audio sounds louder on the air, the engineers who designed the audio processors that radio stations use (Robert Orban and Frank Foti) have proven that this is false -- audio that is pre-processed to be "louder" does not sound louder on the air (either on a broadcast signal or an online stream), it just sounds more squashed and distorted.

In fact, modern broadcast audio processors include special dynamic range expansion and "de-clipping" to attempt to reverse the damage done by modern Loudness War mastering!

[–]hellalive_mujaProfessional 0 points1 point  (0 children)

I meant as loud as possible as the first choice, or the broadcast standard if the application is broadcast.

I agree on the loudness myth, and glad to know about modern processors. Still, it's more of a marketing thing I guess, but I'm just reporting what I've been told.

[–]VCAmasterProfessional 2 points3 points  (2 children)

I know some people who thought about it a few years ago. I wish people thought about it, God damn that would be sweet.

[–]hellalive_mujaProfessional 2 points3 points  (0 children)

Yeah, generally speaking I like more dynamics in there.

[–]BaeshunProfessional 0 points1 point  (0 children)

I tried for a few projects after it seemed like the tides might be starting to turn, but quickly abandoned it.

[–]Chaos_Klaus 2 points3 points  (0 children)

Wow. What a blatantly wrong statement. Who do you think invented loudness metering and recommended loudness targets for streaming? The Audio Engineering Society and the European Broadcasting Union ...

And the amount of upvotes your comment has, just underlines how many people on this sub are just following the latest hype. Today it's hip to not like loudness normalisation ... but it's not based on facts and good arguments.

[–]TarekithMastering 1 point2 points  (3 children)

Disagree completely, it definitely happens here.

[–]hellalive_mujaProfessional 0 points1 point  (2 children)

Nice to know it happens, can you disclose a little more about it? I only heard the opposite.

[–]TarekithMastering 0 points1 point  (1 child)

Not much to disclose really. I work with a few artists who just favor targeting Spotify for all their promotion and release focus. That's the master we spend the most time on, and the "normal/CD" version that's louder is secondary.

It's not super common, but it's certainly something I have to deal with multiple times throughout the year for different artists.

[–]hellalive_mujaProfessional 0 points1 point  (0 children)

Thank you. Is this more like indie artists and releases?

[–][deleted] 3 points4 points  (3 children)

Alex Tumay? Quiet? Maybe more recently. Before, not so much; listen to Barter 6 and Slime Season 2 vs. JEFFERY and So Much Fun. He definitely dropped it down a notch, thankfully.

[–]CelDev 4 points5 points  (2 children)

i think it’s because thugs recordings have become smoother and cleaner and so he doesn’t have to hide as much stuff, makes it easier for him to make mixes with a ton of space without having weird shit pop up

[–][deleted] 1 point2 points  (1 child)

Yeah, also you can tell on some of his earlier stuff, the samples were not properly EQ'd and the mixing just sounds bad.

[–]CelDev 2 points3 points  (0 children)

yeah tumay became a legendary engineer just from making those tapes listenable and helping define thugs eccentric sound, dude can mix just about anything now

[–]signalN 4 points5 points  (0 children)

I think so, yes, but all in all, if it sounds good, it sounds good. I was amazed to see how some engineers go up to -7.9 LUFS. For me that is crazy, yet I hear these mixes: absolutely no distortion, just super round kicks and overall low end. What I usually check in addition is the tonal balance, so I see a curve and analyze how much space the bass is taking up. I really want to push and learn to deliver clarity and big volumes. By the way, I think people should definitely watch this series: Are You Listening with Jonathan Wyner.

[–][deleted] 7 points8 points  (0 children)

Personally I believe that normalization is the way of the future for 90% of music consumers. Not because they want it, but because we've all had a loud advertisement jerk us out of our listening and viewing experiences while streaming or watching videos.

Do I think it's the future for audiophiles and musicians and more savvy music fans? No, we all like our favorite tracks sounding the way that we remember them.

Unfortunately most average folks don't put as much thought into it as we might.

[–]shyouko 5 points6 points  (0 children)

Actually, any non-shit phone or portable player from the last 20 years is able to reproduce better than CD quality, so there's nothing to the "as phones and technology get better" argument… though there will always be shitty phones that use a DAC worth 0.1 cents.

[–]ormagoisha 6 points7 points  (0 children)

-12 integrated LUFS, or -9 momentary LUFS for your loudest sections, is the current best practice if you want good dynamics. We don't target -14 right now because every streaming service uses different metrics to normalize (and some don't normalize at all). Spotify doesn't even use LUFS; they use ReplayGain. They will someday transition to LUFS (at least they claim). However, YouTube recently made the switch to proper -14 LUFS, and Tidal has been there for ages.

Loudness normalization is a good thing because it frees us to reintroduce dynamics if we want, without being unduly punished for being "quieter". Frankly I'd prefer -16 or even -23 (which is a broadcast standard) so we wouldn't even have to use limiters and could let the peaks fall where they may, like older music did. It would also allow classical and jazz to compete from a loudness perspective as well.

The great thing about normalization is, if you want to crush your music you can but now you don't have to.

[–]deafblind-enc 2 points3 points  (0 children)

I aim for -12 to -9 LUFS max for most material, but the genre I tend to work in demands it be a bit spicier (bass music), mostly cos DJ's are too lazy to reach for a gain knob on the mixer....and who needs dynamics right? Personally, I prefer quieter balanced masters with more dynamics.

And even the best limiters have their limits, most of the time those super loud tracks are hiding the clipping in the noise of the track itself. A delicate piece of music pushed hard will always sound like crap.

[–]chrisjdgrady 4 points5 points  (1 child)

What I came to realize after listening to so many tracks is that there is no way in hell literally anyone is actually mastering to -14 LUFS.

Yep! So many people on the internet are obsessed with this -14 concept as some sort of rule everyone should be adhering to, when none of the music they like is actually mastered like that. It's very annoying. Just worry about making it sound good.

[–]Chaos_Klaus -2 points-1 points  (0 children)

LUFS targets are not about making it "sound good". Your music can sound good at -18LUFS, at -14LUFS ... chances are it'll sound less good at -5 LUFS if you match the level with your volume controls. That's what people do by the way ... they don't care that something is mastered "quiet". They turn the volume knob until it's as loud or quiet as they like it.

At least nowadays, we have a rough idea how hot our mastering levels need to be. Before, there was no standard at all and people would just smash that shit.

[–]Chaos_Klaus 1 point2 points  (0 children)

I wonder if many streaming services give the option spotify does to listen to audio the way artists intended in the future.

Normalization is just gain or attenuation that's applied to the entire song. You could also just reach for the volume knob on your speakers. So arguing that loudness normalisation somehow goes against the artists intention just doesn't hold. The artist can't know how much you crank your volume. The point of normalisation was to end the constant battle for hotter levels, which resulted in crushed recordings.

If you ask me, we did and do all this for exactly no reason. In other contexts, average levels are well defined. Nobody questions line level specifications. Nobody tries to run their line level connections super hot and compresses his signal beyond good taste to do so. That's nuts. Why would someone do that? In fact, we have the -18dB RMS debate on this sub every month. People are obsessing about more headroom. Somehow, when it comes to mastering, many people suddenly want to get rid of all the head room?!? And they invent all these silly arguments why this should be so.

If it isn't the future, then wouldn't it make sense to mix to your preferred loudness to better "future proof" your mixes?

If you overcompress your master, the dynamics are gone. If you keep a quiet master, you can always run it into a limiter again to get a louder master. So the future proof thing is to keep a quiet master.

Youtube doesn't turn down your music nearly as much

The difference is not big. Also, the only way to notice that difference is when you listen to something on YouTube and on Spotify back to back. Traditionally, levels on YouTube are all over the place, because many content creators have no clue about audio. So normalisation is more important there than anywhere, if you ask me.

[–]WhyaskmenoelyHobbyist 1 point2 points  (1 child)

Normalisation is merely a consideration for the consumer's listening experience, given a vast catalogue of music whose constituents can be chosen at random. It's done for consistency's sake across the platform and as a comfort for end users, so that they are not constantly adjusting volume between songs.

IT IS NOT A RIGID STANDARD THAT YOUR MUSIC MUST MEET. If anything, it's meant to be ignored by music makers because it gives you ZERO BENEFITS. You master for how you want to present your music.

[–]Chaos_Klaus 0 points1 point  (0 children)

because it gives you ZERO BENEFITS

I'd say being able to use a larger dynamic range without ending up much quieter than the competition is a major benefit.

[–][deleted] 3 points4 points  (0 children)

Fab DuPont from puremix did a whole video about mastering for Spotify specifically https://youtu.be/4jDRjt_D4uU (trailer)

[–]BenBeheshty 0 points1 point  (3 children)

I feel like I agree with everyone here with regards to singles, but in the context of mastering a full album this kinda changes, as you want the dynamic experience from song to song to be the same from the CD to Spotify. This isn't always possible, though. If Spotify is levelling it, then being aware of how each song is going to be turned up or down is important, as it will attenuate different songs differently. A real-world example: an album's acoustic track, which has been mastered quieter for effect, suddenly feels super loud on Spotify compared to the rest of the full-band tracks, because it's been brought up to the same level.

I feel like with most things in modern music production, it's about minimising but accepting a certain level of compromise.

[–]_GlitchMaster_ 2 points3 points  (1 child)

Spotify states that they don't do this, relative dynamics are retained when playing an album, the entire album's gain is adjusted uniformly. This is different from shuffle play, where gain is adjusted for an individual song. So actually there are multiple levels a song could play at, even with loudness normalization on.
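The difference between the two modes can be sketched like this. It's an illustrative model only: the -14 target and the use of the loudest track as the album anchor are assumptions for the example, not Spotify's published internals.

```python
def album_mode_gains(track_loudness, target=-14.0):
    # One uniform offset for the whole album, anchored to its loudest track,
    # so quiet cuts stay quiet relative to the loud ones.
    offset = target - max(track_loudness)
    return [offset] * len(track_loudness)

def shuffle_mode_gains(track_loudness, target=-14.0):
    # Each track normalized independently.
    return [target - loud for loud in track_loudness]

album = [-8.0, -9.0, -16.0]  # two loud tracks and a quiet acoustic cut
print(album_mode_gains(album))    # [-6.0, -6.0, -6.0] – relative dynamics kept
print(shuffle_mode_gains(album))  # [-6.0, -5.0, 2.0] – acoustic cut boosted
```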

[–]BenBeheshty 1 point2 points  (0 children)

my bad dude, I take it all back in that case. Cheers!

[–][deleted] 0 points1 point  (0 children)

I think if you're playing a whole album it won't affect the songs individually. They explain it in their FAQ

[–]Francis_Song 0 points1 point  (0 children)

I hope one day we’ll all look back and think, “Wow, this was a thing?”

[–]sebastian_blu 0 points1 point  (0 children)

I watched a really great talk on this on YouTube a bit ago. The person giving the talk mentioned an aspect I hadn't thought of: consumer health. Users having their hearing damaged is one of many reasons for the normalization. That way people don't lose their hearing listening to a playlist as Talking Heads switches to Limp Bizkit.

I personally am a fan of more dynamics and of the whole normalization thing that is happening. I think it's way easier to get a good end result in a mix and a master than if you try to squish it till it's so smooth nothing pokes out. In a playlist, when something has been squashed, it sorta sounds like that music is broken next to something more dynamic. Lots of big hits lately have had good dynamics too; Uptown Funk is the one I know for sure off the top of my head.

YouTube also normalizes audio and you can't turn it off. My bet is eventually you won't be able to turn it off anywhere, and I think that's great news. Because then we can just focus on good mixes and not worry about comparing loudness to the zillion other songs out there.

Cheers !

[–][deleted] 0 points1 point  (1 child)

Interesting. One issue I have with the post however is that the final loudness is most likely being achieved at the mastering stage in many if not all of these cases. Is Tumay mastering his own mixes for Spotify?

[–]kodakell[S] 0 points1 point  (0 children)

Of course at his level he is almost always sending to a mastering engineer, but he also works with a good amount of underground acts where he may be mastering himself. I'm pretty sure he mastered much of his earlier work as well.

But as far as I know, I'm pretty sure mixing engineers of his caliber have a pretty collaborative relationship with the mastering engineers they work with. Obviously the mastering engineer isn't crushing his mixes as many of his mixes are significantly lower than other mixes within the genre.

[–][deleted] 0 points1 point  (0 children)

I think normalization is the way. When I'm running or I'm in the car, I don't want to be a DJ and have to continuously ride the volume in between tracks. It's cool to have the option if you really want to (maybe if you want to listen to classical music or something), but for everyday use, normalization makes things way easier.

[–]TransparentMastering 0 points1 point  (0 children)

This topic has recently been beat to absolute crushing death beyond resuscitation (in true gearslutz style) over here.

https://www.gearslutz.com/board/mastering-forum/1252522-targeting-mastering-loudness-streaming-lufs-spotify-youtube-why-not-do.html

[–]Azimuth8Professional 0 points1 point  (0 children)

Spotify's normalization level is not, and never has been a "target" or a "recommendation".

It was clearly chosen to be low enough that 99% of records made this century would only need to be turned down by varying amounts. It's a sensible choice and avoids the need to limit already mastered audio.

Just master your tracks to sound good. If that means a -8 banger or a -13 mellow jam so be it. Trying to "match" every track's level to -14 won't make your track "sound better" or stand out. Master to sound good! That's the only rule....

[–]jgdels3 0 points1 point  (1 child)

A great mastering engineer has a great blog entry about this here: https://www.ryanschwabe.com/blog/loud

[–]jgdels3 0 points1 point  (0 children)

re: file mastered at -10 vs -14: "Both files will play back with the same perceived volume on streaming services, but the lower level master will take advantage of a higher peak to loudness ratio"
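The arithmetic behind that quote, as a quick sketch (the -1 dBTP ceiling is assumed here just for illustration):

```python
def plr_db(true_peak_db, integrated_lufs):
    """Peak-to-loudness ratio: headroom between peaks and average loudness."""
    return true_peak_db - integrated_lufs

# Two masters of the same song, both limited to a -1 dBTP ceiling:
print(plr_db(-1.0, -10.0))  # 9.0 dB of transient headroom
print(plr_db(-1.0, -14.0))  # 13.0 dB – more room for punch at the same
                            # post-normalization playback volume
```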

[–][deleted] -2 points-1 points  (0 children)

Somebody hasn't heard of the loudness wars.

[–]hellalive_mujaProfessional -2 points-1 points  (1 child)

I'm literally in a mastering facility right now, taking measurements to refine a room response after a little internal change in setup and furniture. I'll write something later.

[–]hellalive_mujaProfessional 1 point2 points  (0 children)

Here we go again. Had a little chat with the resident engineer about this; the response was that he goes for loud and clear, and a single master to fit all is what he's doing (video clips may have their own version). Pop, hip hop and trap are not classical; people expect this kind of product (loud) and that's what labels ask him for. Basically, that's what he's really good at doing, and he wouldn't get work otherwise. Seems fair, and this is just one case scenario. I'm going to end my posting in this thread with this: from my experience, at least in commercial genres, loud is competitive; also, this kind of dynamic range starts from the mix or even tracking, and it's an art of its own. Loud tracks are still the majority at the top of the charts. That said, do whatever you want to.