Practical difference between 16 bit and 24 bit

Hi! Can someone illustrate in Cubase one difference between 16 bit and 24 bit audio? Yeah, the noise floor is lower in 24 bit audio (the dynamic range is higher), but this is all theory! What does this mean practically?

Practically? You get audio files 50% bigger on your drives… :grin:

1 Like

Ha ha! Thanks for the joke.

This is just more theory… I asked for apples and you’ve given me oranges.

Rude – you should do your own research and read things thoroughly instead of demanding things on a silver platter like some spoiled brat. Goodbye. I will avoid you if I see you here again.

1 Like

More seriously, I remember having read (unable to remember where…) that the 24-bit format allows more editing passes on a given audio file, as each audio process/rendering doesn’t raise the noise floor as much as it does in 16-bit.
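A toy numerical sketch of that claim (my own model, not Cubase’s actual rendering path: each “pass” applies a small gain change, undoes it, and re-quantizes, with no dither modeled): the 16-bit file drifts further from the original sample value than the 24-bit one does.

```python
def quantize(x: float, bits: int) -> float:
    """Round a float sample onto a signed integer grid of the given depth."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

x = 0.3  # an arbitrary sample value

def after_passes(bits: int, passes: int = 20) -> float:
    """Simulate repeated edit/render cycles; return drift from the original."""
    s = x
    for _ in range(passes):
        s = quantize(s * 1.01, bits)  # hypothetical edit: +1% gain, then re-render
        s = quantize(s / 1.01, bits)  # the edit undone, forcing another re-render
    return abs(s - x)

# the shallower grid accumulates more rounding damage per render
assert after_passes(24) < after_passes(16)
```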

This said, and with my ageing ears, I’m completely unable to confirm that. I’m not even able to tell a 16-bit .wav file from its 256 kbps MP3 version, so… :neutral_face:

1 Like

Sorry. I know I should be doing my own research, but all I can find is theoretical information and nothing related to Cubase.

Interesting. So as long as I don’t use the Direct Offline Processing window, I’m good with 16bit?
More to the point, I’m concerned about converting the following mp3 to wav and don’t know whether leaving it at 32 bit would make a difference.

2 Likes

You are obviously over-reacting. No insult from @alin89c, AFAICS, and I think that the real question could be (a hint from my first post… :smile:): is the 24-bit format worth the 50% added storage strain?

I think it is, but the debate is not closed, even in August 2024, and I have seen countless threads on the subject in different places…

1 Like

I think you are, but take my answer with a grain of salt. As far as I remember, it’s the accumulated edit/rendering passes that could audibly affect 16-bit audio files. Come to think of it, there may even be a paragraph in the Cubase operation manual on the subject: let me retrieve it, if so…

1 Like

One thing I’d say is very important is that you can back off the levels without worrying too much about the noise floor becoming a problem at a later stage (and of course less risk of hitting 0dBFS).
A practical example: an artist sent over 16 bit files for mixing, some of them were very quiet and upon boosting them to the required level they became unusable. Luckily he could go back into the mix and fix it, plus export 24 bit files.

If it’s the final delivery stage (a mastered file), I guess you could argue (but don’t, you’ll be flamed!) that you don’t need more than 16 bits to represent the material. Still, I wouldn’t go there…

4 Likes

Yep, I admit I had also forgotten this aspect, which is a real concern…

To answer this we have to realize that there are three different stages in regards to bit resolution:

  1. Recording, ie. converting a sound from analog to digital
  2. Using digital audio inside Cubase
  3. Outputting the sound, ie. converting from digital to analog

For Recording:
The electric, analog parts used to record and convert a sound always generate some signal of their own. The sum of all those signals is called the noise floor. You want your wanted signal to stay as far away from the noise floor as possible.
However, there is also a ceiling for digital audio. We refer to it as 0.0dBFS. Your signal must never touch that ceiling.
So you try to keep the level of your signal at a maximum distance from the noise floor without ever touching the ceiling.
With 16bit this range has a theoretical maximum of 96dB, while with 24bit it is 144dB. In a well-controlled environment, 24bit gives you a bigger range between the “don’t go near that” floor and the “do not touch” ceiling.
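Those 96dB and 144dB figures fall straight out of the 20·log10(2^bits) formula (roughly 6dB per bit); a quick sketch:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an integer format: 20*log10(2^bits)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16)))  # 96
print(round(dynamic_range_db(24)))  # 144
```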

Using digital audio in Cubase:
There is absolutely nothing to worry about and nothing to discuss. Your DAW takes care of everything, good for you.

Outputting digital audio to the analog domain:
In most cases your DAW will deliver the audio signal to the driver in a floating point format, ie. either 32bit float or 64bit float. It has to be converted to an integer format like 24bit or 16bit before it can be handed over to the DAC (digital-to-analog converter). It sounds a bit nicer to use a 24bit format here, as less information is lost during the conversion. However, it is debatable how many people would actually hear a difference between 24bit and 16bit.
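A simplified sketch of what that float-to-integer conversion costs (no dither modeled): rounding a float sample onto the 16-bit grid leaves an error about 256 times bigger than rounding it onto the 24-bit grid.

```python
def quantize(x: float, bits: int) -> float:
    """Round a float sample in [-1, 1) to the nearest step of a signed integer grid."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

x = 0.123456789               # an arbitrary float sample
err16 = abs(quantize(x, 16) - x)
err24 = abs(quantize(x, 24) - x)
assert err24 < err16          # the 24-bit grid is 256x finer, so less is lost
```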
But since you are the only person in the world to ever hear this particular signal, why not treat yourself and work with 24bit?

Special case - Creating digital audio files within your DAW:
This happens if you render a file / bounce a selection / export a mixdown.
I follow a simple rule of thumb here: if the resulting file is not the final product that is supposed to be delivered directly to the consumers, use a floating point format like 32bit float. The practical advantage is that you can change the volume of the samples/of the file at any time without losing any audible information. With floating point formats you have eliminated the 0.0dBFS ceiling (or actually extended it to +767dBFS, good luck ever reaching that), while the noise floor is nonexistent as well (it lies at -768dBFS), since no analog gear was used.
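A toy illustration of that eliminated ceiling (hypothetical numbers, plain Python floats standing in for samples): a peak pushed +12dB over full scale survives in float and comes back intact when you turn it down, while an integer path hard-clips and loses the detail for good.

```python
def int_clip(x: float) -> float:
    """Integer formats hard-clip at full scale (0.0 dBFS)."""
    return max(-1.0, min(1.0, x))

gain = 10 ** (12 / 20)        # +12 dB of gain
hot = 0.9 * gain              # ~3.58: fine as a float, impossible as an int sample
print(hot / gain)             # ~0.9  -> the float path recovers the original sample
print(int_clip(hot) / gain)   # ~0.25 -> the int path clipped; the detail is gone
```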

9 Likes

Then insert a tape simulator to bring back the noise. :upside_down_face:

1 Like

I find this video very informative:

with a short detour to bitrates.

Greetings

Whatever the maths say, in the end 24bit sounds more open, particularly when you sum 20 to 40 audio signals together. I worked with 16bit files in the 90s and early 2000s and there was a marked difference when we moved to 24bit digital audio. Sometimes I get something in that someone has rendered out WAVs in 16bit and I can sense it. The mix closes in in terms of space and everything gets more pinched. Particularly after all the plugin processing on that lower bit depth audio.

I managed to find something quite interesting.

Look at this graph I found in a YouTube video (here). I think one good-to-know difference is that when recording at 16-bit, signals below -96 dB don’t get picked up. That said, if you’re not recording whispers or very soft sounds (which can be compared to “low light” material in photography), higher resolution is not worth the “storage strain” (not to mention that noise generated by the equipment gets picked up at the same level no matter the recording resolution). Or am I wrong?

Proof (project at 16 bit)

So, here I disconnected my condenser mic and left only the cable connected to my Clarett 4Pre USB interface. I lowered the preamp gain to 0 (on my USB interface, that is) and boosted the track signal by 48 dB (in the MixConsole). In the Frequency 2 plug-in, I can see clearly that there’s a signal going in. Ok. Now let’s see what’s being recorded. Nada! Nothing, that is. This is proof that a higher resolution allows you to record quieter sounds/noises (provided that your “analog parts” — as @Johnny_Moneto said — don’t generate too much noise; if they do, what’s the point of recording at a higher bit resolution?).
In conclusion, I think that recording at a higher resolution than 16-bit when one doesn’t have the proper equipment is like trying to overcome the lack of light by increasing the ISO value on a small sensor camera…
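A quick back-of-envelope check of that graph’s claim (hypothetical -100dBFS test tone, plain Python): a sample that far down rounds to zero on a 16-bit grid but still lands on a nonzero 24-bit value.

```python
amp = 10 ** (-100 / 20)      # a sample 100 dB below full scale
print(round(amp * 2 ** 15))  # 0  -> below the 16-bit grid: recorded as silence
print(round(amp * 2 ** 23))  # 84 -> still resolvable on the 24-bit grid
```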


1 Like

Not sure what your goal is here.

If you feel 16 bit recordings are good enough for you, then use that. It’s your choice. I assume your original question has been answered, has it not?

It’s not a theory; 24 bit does in fact have far greater dynamic range. I stopped using 16 bit after switching to 24 bit, then going back to 16 and finding it was much harder to mix multitrack projects comparatively. There’s no reason you need to switch if you don’t want to, though; in fact, if you’re just recording a stereo mix and not doing anything to it afterwards, it makes no difference.

1 Like

Practically speaking, though, that is considering just half of the limits of recording, the other half being the maximum allowed level before clipping.

Having 24-bits worth of dynamic range allows you to lower the level comfortably without worrying about that noise floor just in case the level could get hot enough to clip during recording.

It’s far from uncommon to set level with a musician just rehearsing a section and they’re way softer than they will be when they are “feeling it” during an actual take. That’s why there’s the risk that we set the level too high and they then clip during recording. When we turn it down that’s when we get problems if the system itself generates noise (quantization distortion) due to being only 16 bits deep.

In other words ‘yes’, it’s true that soft sounds close to the noise floor of the space you’re in will always have that problem, but imagine those low levels with low amplification and only 16 bits, turned down simply because you need to stay away from 0dBFS. Those low levels are now at risk of bringing quantization (or dither) noise with them as you boost levels during the mix process. With 24 bits you avoid that.
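That “boost a quiet take in the mix” scenario can be sketched numerically (assumed figures: a take sitting 60dB down, boosted +60dB later; no dither modeled): the residual error after the boost comes out roughly 256 times larger at 16 bits.

```python
import math

def quantize(x: float, bits: int) -> float:
    """Round a float sample onto a signed integer grid of the given depth."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

amp = 10 ** (-60 / 20)   # a quiet take, ~60 dB below full scale
boost = 10 ** (60 / 20)  # make-up gain applied later in the mix
n = 1000                 # samples of a test sine

def rms_error_after_boost(bits: int) -> float:
    """RMS difference between the boosted quantized take and the boosted ideal."""
    total = 0.0
    for i in range(n):
        s = amp * math.sin(2 * math.pi * i / n)
        total += (quantize(s, bits) * boost - s * boost) ** 2
    return math.sqrt(total / n)

e16 = rms_error_after_boost(16)
e24 = rms_error_after_boost(24)
assert e16 > 50 * e24    # the boosted 16-bit error is orders of magnitude larger
```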

3 Likes

Also, storage is cheap.

This is weird, as all files get converted to floating point format as soon as you change any parameter on the channel, like the volume level. If the source file sounded ok, then mixing will be the same whether you use 16bit or 24bit files.
See my comment on the “output to analog domain” above, though.

Just for info:
Green is a 16bit file,
red is a 24bit file