Cubase, LUFS, and YouTube

Hello Cubase users
I’m trying to get YouTube mastering right. I understand that they ask for -3 dB and -14 LUFS.
Does this figure of -14 LUFS represent the Integrated Loudness figure?
I’ve applied a brickwall limiter at -3 dB. With that, my track maxes at -10 LUFS, but its Integrated Loudness is only -18.5 LUFS. Should I increase the output and take the limiter up to -1 dB to increase the Integrated Loudness? Or does anyone have any suggestions?
I also note that the generally recommended level for broadcast audio is -23 LUFS.
Any ideas?

Yes, YouTube are now at -14 LUFS (integrated).
Not sure where you got the -3 dB from, though.
FWIW, I master to slightly more than -14 LUFS (integrated), peaking at ca. -10 LUFS with my brickwall limiter set at -1 dB. But it’s all rather genre-related too… for example: if your music is less energetic in style, do you really want it to be turned up as loud as Motorhead?
To find out what the various streaming platforms will do with your tracks, there is the excellent MeterPlugs Loudness Penalty plugin and, from the same company, the free Loudness Penalty website: drag a file onto the page and it will tell you what each service will do with your master.

https://www.loudnesspenalty.com/

I think Mnatseah means -3 dB at the end of the mixing stage, going into mastering, and then a limited version at -14.

YouTube isn’t broadcast, technically, so you don’t have to worry about -23 LUFS or -24 LKFS. Those are different specs.

As for your specific levels: IF the spec says peaks at max -3 dBFS, then the right solution isn’t to go above that in order to increase your integrated value; the solution is to lower the dynamic range of your signal, either by compressing it and/or by remixing it. You could of course just lift the mix and have your limiter squash the top of it to keep it at -3 dBFS, but that’d be an extra 4-5 dB of limiting, which could sound nasty, so it’s generally better to control the lowering of dynamic range yourself during the mix/master.
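
As a quick sanity check, here is the arithmetic from the question worked through in a few lines of Python (the -18.5 LUFS integrated and -3 dBFS ceiling figures are taken from the original post; this is just the numbers, not a real loudness meter):

```python
# Figures from the original question: -18.5 LUFS integrated, limiter/ceiling
# at -3 dBFS, streaming target -14 LUFS integrated.

def gain_to_target(integrated_lufs: float, target_lufs: float) -> float:
    """Gain (in dB) needed to move the integrated loudness to the target."""
    return target_lufs - integrated_lufs

def limiting_required(peak_dbfs: float, gain_db: float, ceiling_dbfs: float) -> float:
    """How many dB the limiter must shave off the peaks after that lift."""
    return max(0.0, (peak_dbfs + gain_db) - ceiling_dbfs)

gain = gain_to_target(-18.5, -14.0)           # 4.5 dB of lift needed
squash = limiting_required(-3.0, gain, -3.0)  # 4.5 dB of limiting at the old ceiling
print(gain, squash)                           # 4.5 4.5
```

That 4.5 dB of extra limiting is exactly the “extra 4-5 dB of limiting which could sound nasty” mentioned above, which is why reducing dynamic range in the mix is the cleaner route.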

Yes, absolutely… just that I don’t think there is a -3 dBFS requirement. I don’t even think there’s a peak target; I chose -1 dBFS for safety (and was advised as such by pro mastering engineers).

If you aim at -3 dBFS then you are possibly setting yourself up to be quieter than you want (I don’t think YouTube turn up as well as down; I know Spotify don’t, but Apple do).
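
A hedged sketch of what that normalization step looks like: the platform compares your track’s integrated loudness to its target and applies a playback gain. The `attenuate_only` flag models the difference described above, since services differ on whether they also turn quiet tracks up (the -14 LUFS default is YouTube’s commonly cited target).

```python
def playback_gain(track_lufs: float, target_lufs: float = -14.0,
                  attenuate_only: bool = True) -> float:
    """Playback gain (dB) a streaming service would apply to normalize a track."""
    gain = target_lufs - track_lufs
    if attenuate_only and gain > 0:
        return 0.0  # quiet tracks are left as-is, so they simply play back quieter
    return gain

print(playback_gain(-10.0))                        # -4.0: loud track turned down
print(playback_gain(-18.0))                        # 0.0: quiet track left alone
print(playback_gain(-18.0, attenuate_only=False))  # 4.0: a service that also turns up
```

This is why mastering too quiet can cost you level on attenuate-only services, while mastering too loud only costs you the dynamics you squashed to get there.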

I doubt that a peak that’s two dB lower results in a significantly lower perceived loudness.

Which is maybe why I said possibly. Genre dependent, for a start.
Just checked on a few of mine and no, very little difference in perceived loudness.

Then tried some NuMetal…big difference.

Thanks to you all for your helpful replies. When @planarchist says

I Master to slightly more than -14LUFS (integrated)

Does “slightly more” mean above or below -14, say, -13 or -15?

@Mnatseah:

Does “slightly more” mean above or below -14, say, -13 or -15?

Sorry, I meant slightly louder, so I tend to aim for -13 to -14 LUFS.

I also check using the Loudness Penalty Analyzer I mentioned. I won a copy in a competition and must admit that before that I’d thought it a clever but not really necessary tool, but I really like it.

Thanks @planarchist. That’s good to know. I’ll try the Loudness Penalty Analyzer.
Thanks for your help, everyone.

You’ve already received good advice. I just want to point out how important what MattiasNYC said is:
“I doubt that a peak that’s two dB lower results in a significantly lower perceived loudness.”

It’s not so much about LUFS; it’s much more about perceived loudness. On the website of a good mastering engineer (he writes for Sound On Sound) I found the advice that LUFS are only a guide, even the -14 LUFS indication. Our ears, and reference tracks, are the ultimate guide.

I mastered a song for YouTube following all the LUFS recommendations, even using the Penalty Analyzer… and my master came out significantly quieter than I wanted. The second attempt was much better because I wasn’t worrying so much about LUFS; instead I pushed the track until the perceived level was similar to some music played on YouTube. After I uploaded the track there were no surprises this time, and it sounded as loud as I wanted.

The best way is to compare your master with reference tracks. Compare your master with the loudness of some YouTube tracks. If the perceived loudness of your master isn’t similar to that of the tracks you want to emulate, then forget the meters and push your master. Try to get the best-sounding master possible at the same perceived level as your reference tracks.

Thanks again. Much appreciated.

Can I ask one more mastering question before we all leave? What does UV22HR actually do? Yes, I know it dithers down, but I sometimes use it and I sometimes forget, and honestly I can’t hear any difference.
Also, when I Export, Cubase offers me a choice of bit depths. The default seems to be 32-bit float, and I usually just go with that. But I hear that YouTube prefer 24-bit. So should I be selecting that on Export? What is 32-bit float anyway? Thanks.

@Mnatseah, dithering is most obvious and most valuable when working with poorly recorded, fixed-bit-depth audio files.
For example: you have a 16-bit file where the content level hovers around -20 dB or lower.
When mastering that file you’ll want to increase the level, and in doing so you also increase the rounding errors which occur at very low levels. Dither will help smooth and mask those artifacts.
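
That rounding-error point can be illustrated in a few lines of Python. This is a toy sketch of plain quantization versus TPDF (triangular) dither, not Cubase’s UV22HR algorithm, which uses its own proprietary noise shaping:

```python
import random

random.seed(1)
LSB = 1.0 / (2 ** 15)  # one 16-bit quantization step (full scale = +/-1.0)

def quantize(x: float) -> float:
    """Plain rounding to the nearest 16-bit step."""
    return round(x / LSB) * LSB

def quantize_tpdf(x: float) -> float:
    """Add triangular (TPDF) dither, +/-1 LSB peak, before rounding."""
    d = (random.random() + random.random() - 1.0) * LSB
    return round((x + d) / LSB) * LSB

signal = [0.4 * LSB] * 1000  # a "tone" sitting below half a 16-bit step
plain = [quantize(s) for s in signal]
dithered = [quantize_tpdf(s) for s in signal]

print(sum(plain) == 0.0)  # True: plain rounding erases the signal entirely
# With dither, the average output level stays close to the input level,
# preserved as low-level noise rather than being rounded to silence:
print(abs(sum(dithered) / len(dithered) - 0.4 * LSB) < 0.1 * LSB)  # True
```

The same mechanism is why raising the gain of a quiet 16-bit recording exposes the rounding artifacts that dither was designed to mask.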

90% of contemporary music is jammed into the top 8 bits if mixed well. So most people simply can’t perceive the effects of dither under those conditions. With all the mathematical summing that occurs inside a DAW it almost doesn’t matter. Cubase uses floating point math internally.

Bottom line:
If you’re recording orchestral, jazz, sound design for film, or any other content that has an extremely wide dynamic range, then consult your mastering engineer about the possibility of delivering 32-bit float. Otherwise record and mix at 24-bit.

I still routinely export 16 bit files if I know they are destined for online streaming distribution.

I worked on many big-name projects in NYC during the years when digital audio and home studios came into existence. Most people would be shocked to know that some of the most iconic music of that era was recorded to 16-bit tape machines, very poorly, at ridiculously low levels. And yet the songs are great!

Content is king.

When delivering you’re probably best off NOT exporting to floating point unless you know for a fact that the recipient can work with that. I’ve had cases in the past where I delivered 32-bit float by accident and it just ended up being noise in the other person’s app. So make sure they can work with whatever it is you’re sending.

PS: A fixed-point file has a number of bits that represent a value. Without getting too technical, the more bits in fixed-point processing, the further down the quantization errors sit (which is the stuff that dither takes care of). You have a range of values from a maximum of 0 dBFS all the way down to that point, which for 24-bit fixed is roughly -144 dBFS. So that’s the range you have with 24-bit fixed.

With floating point, the best way to think about it is as that same fixed-point file but stored within a much wider range. The end result is that when you’re processing audio within Cubase/Nuendo you get a huge dynamic range. You basically can’t distort the signal chain that Cubase provides (until the final output); it’s over 1000 dB if I remember correctly. So it’s great for avoiding clipping your signal, adding a large number of signals together, etc., but you’re not really gaining much by storing your file in that format… because its original converted range was probably 16- or 24-bit fixed.
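
The dynamic-range figures in the two posts above are easy to verify with a quick calculation (the float constants below are the standard IEEE 754 single-precision limits, roughly 3.4e38 maximum and 1.18e-38 smallest normal value):

```python
import math

def fixed_point_range_db(bits: int) -> float:
    """Theoretical dynamic range of an n-bit fixed-point format: ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(fixed_point_range_db(16), 1))  # 96.3 dB
print(round(fixed_point_range_db(24), 1))  # 144.5 dB -- the "roughly -144 dBFS" above

# 32-bit float: ratio of the largest to the smallest normal magnitude
float32_range_db = 20 * math.log10(3.4e38 / 1.18e-38)
print(round(float32_range_db))             # 1529 dB -- "over 1000 dB" indeed
```

So 24-bit fixed already exceeds any converter or listening environment, and 32-bit float’s headroom matters inside the mix engine rather than in the delivered file.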

Summing up:
“32-bit floating-point (and other similar large word-length) formats are intended purely for the internal signal processing of a DAW during mixing, and are designed essentially to maintain the quality of individual sound elements at wildly varying levels while performing complex signal processing. Floating-point formats aren’t intended, appropriate or efficient for use as a delivery medium for consumer applications.”
source:

The point is that different DAWs have different 32-bit float formats…

Export to 32-bit or 64-bit float to store your mix for further work in your own DAW. Otherwise, export at 24-bit for streaming or for mixing/mastering engineers who use a different DAW, or at 16-bit (you can also export at 32-bit integer if you have a soundcard that records at that depth and you kept it throughout your project).

With regard to dithering, I would use it. Even if it’s not noticeable for 90% of final listeners, it costs nothing to put a dither plugin at the end of your mastering chain, and you may want to preserve the best possible quality of your audio files, because there are also people with trained ears and good listening environments who can hear a difference.

Thanks to you all for that really helpful advice.
I’ll use dither and 24 bit.
Happy Christmas and God bless

Hang on. One more question, for my complete satisfaction. If I insert UV22HR (24-bit) when exporting, will the file then be at 24-bit, or do I also need to set the Bit Depth to 24 in the Export Audio Mixdown options? Conversely, if I don’t insert UV22HR but do set the Bit Depth to 24 in Export Audio Mixdown, will it achieve the same result? If so, why are there two ways to the same goal?

I would imagine that the file you export will have a number of bits that correspond to the setting in the Export Audio Mixdown settings.

The goal of a dithering plugin isn’t to reduce the number of bits but to create dither “noise” at the bit depth you have selected. So ‘no’: while the exported files will be the same bit depth, they won’t be identical, since that noise at the lowest level will be different.

Not all dither is the same, that’s the thing. A designer could make dither pure white noise for example, but dither can also be “shaped” so that energy is moved from some parts of the spectrum to another part. So the idea is that a good algorithm for dither literally makes things sound better than a worse algorithm. If they all did the same thing then of course you wouldn’t want a specific dithering plugin or algorithm, you’d just dither. But this has been sort of ‘a thing’ for some time.

I personally don’t really care about it, but if I did high-end acoustic music for a living I would probably care.