Why use 24bit?

So if CD quality is 16bit, why would we ever want to use 24bit? What benefit is there? And wouldn’t you end up dithering twice? Or how does it work?

It’s best to start with higher quality to begin with, even if you later drop down to 16bit. I’m no expert in this area by any means, but I do recall reading about a lower noise floor as a result of the greater dynamic range. I may be off there though. Another thing I’ve read is that a higher bit depth is favorable when processing audio - effects, EQ, etc.

You wouldn’t dither twice; you never should. You only dither once, for the final master file. I usually do this when maximizing the volume with Voxengo’s Elephant. 24bit is a good format as it’s excellent quality and the file size is only 1.5x that of a 16bit wav. 32bit wav files are twice as large as 16bit wavs. However, 32bit floating point lets you go over 0dB on your mixer channels without clipping, since it has far more headroom.
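If it helps, here’s a rough numpy sketch of that last point. The signal and gain values are made up purely for illustration, and this isn’t how any particular DAW does it internally:

```python
import numpy as np

# A test tone that peaks 6dB over full scale (2.0x), as if a mixer channel ran "into the red".
t = np.linspace(0, 1, 44100, endpoint=False)
hot_signal = 2.0 * np.sin(2 * np.pi * 440 * t)

# Fixed-point path: anything beyond full scale is clipped on the way in and can't be recovered.
clipped = np.clip(hot_signal, -1.0, 1.0)
as_int16 = (clipped * 32767).astype(np.int16)

# Floating-point path: values over 1.0 (0dBFS) are stored as-is, so pulling the level
# down 6dB afterwards gives back an undistorted signal.
as_float32 = hot_signal.astype(np.float32)
restored = as_float32 * 0.5

print(np.max(np.abs(as_int16)) / 32767.0)  # 1.0  -> flat-topped, permanently clipped
print(np.max(np.abs(restored)))            # ~1.0 -> still a clean sine
```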

Again, I’m by no means very knowledgeable on this so maybe someone else can better explain and correct me if I’m wrong here.


Rev.

I researched and found the answer.

If you’re sending your mix to a mastering house, or something like that, then you want to dither down to 24bit so that they have higher quality material to work with. They would then dither again when reducing it to 16bit.

But since I’m doing it all myself I just dither to 16bit.

Rev2010 is correct in saying that you, or the mastering engineer, should dither just one time, at the end of all processing. Whether your project is at 24 or 32bit, leave it at that until the file is ready to be burned to disc.

Dithering introduces a particular kind of low-level noise to ‘smooth’ out the quantization damage left by digital processing: level changes, all sorts of FX, EQ, etc.
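If it helps to see it, here’s a rough numpy sketch of the idea, using TPDF dither, which is one common flavour; the signal is made up purely for illustration:

```python
import numpy as np

def to_16bit(x, dither=True):
    """Reduce a float signal in the -1..1 range to 16bit integers.

    With dither=True, triangular (TPDF) noise of about +/- 1 LSB is added before
    rounding, which turns correlated quantization distortion into a steady,
    benign hiss. With dither=False the signal is simply rounded.
    """
    scale = 32767.0
    if dither:
        lsb = 1.0 / scale
        noise = (np.random.uniform(-0.5, 0.5, x.shape)
                 + np.random.uniform(-0.5, 0.5, x.shape)) * lsb
        x = x + noise
    return np.clip(np.round(x * scale), -32768, 32767).astype(np.int16)

# A very quiet fade-out: the kind of material where plain truncation distortion shows up.
t = np.linspace(0, 1, 44100, endpoint=False)
quiet_fade = 0.0005 * np.sin(2 * np.pi * 440 * t) * (1 - t)

truncated = to_16bit(quiet_fade, dither=False)
dithered = to_16bit(quiet_fade, dither=True)
```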

Mauri.

I’m pretty sure that if you save to 24bit you still need to dither, because of the processing you do.

But that’s an interesting point: if you save to 32-bit float, is there no truncation/rounding? I don’t see how that’s possible mathematically, but maybe I’m not getting something.

Dear Itno, you’re just not getting it: conventional wisdom is that you should only dither one time, after all processing is done.


Mauri.

Dithering is normally used when truncating to 16bit. The side effects of truncation at 24bit are basically inaudible. In theory dithering is always better than truncation; in practice, everybody applies it only once, at the last stage, when going down to 16bit for CD format.
The advantage of using 24bit in a DAW like Cubase or Nuendo also has to do with the fact that even if your files are recorded at 24bit, all the calculations are done at 32bit float. While you work, you keep going from 24bit to 32bit float and back to 24bit (for example, every time you bounce a track or apply a plugin to a clip). The advantage of 24bit is that this truncation is not audible. If you used 16bit files, the DAW would have to apply dithering every time, and you’d end up with cumulative dither noise. If I’m correct, I don’t think Cubase/Nuendo applies any dithering during internal calculations.
This is also why, if you receive 16bit files from someone to work on, you should first convert them to 24bit (or 32bit float if you prefer).
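To put rough numbers on “not audible”, here is a small sketch that measures the error from rounding a high-precision signal down to 16bit versus 24bit. The signal is just a test tone and the figures in the comments are approximate:

```python
import numpy as np

# Stand-in for the DAW's high-precision mix bus: a test tone computed in 64bit float.
t = np.linspace(0, 1, 44100, endpoint=False)
x = 0.8 * np.sin(2 * np.pi * 440 * t)

def rounding_error_db(signal, bits):
    """RMS error, in dB relative to full scale, from rounding to the given bit depth."""
    scale = 2 ** (bits - 1) - 1
    rounded = np.round(signal * scale) / scale
    err = signal - rounded
    return 20 * np.log10(np.sqrt(np.mean(err ** 2)))

print(rounding_error_db(x, 16))  # roughly -100 dB below full scale
print(rounding_error_db(x, 24))  # roughly -150 dB below full scale, far below audibility
```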

The best argument for using 24-bit over 16-bit is the resolution available.
16-bit has 65,536 possible digital equivalents to analogue voltage.
24-bit has 16,777,216 possible values over the same range.
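For what it’s worth, those counts, and the theoretical dynamic range they correspond to (roughly 6dB per bit), work out like this:

```python
import math

for bits in (16, 24):
    values = 2 ** bits
    dynamic_range_db = 20 * math.log10(2 ** bits)  # about 6.02dB per bit
    print(f"{bits}-bit: {values:,} values, ~{dynamic_range_db:.0f}dB dynamic range")

# 16-bit: 65,536 values, ~96dB dynamic range
# 24-bit: 16,777,216 values, ~144dB dynamic range
```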

32-bit floating point is effectively 24-bit precision, a 24-bit mantissa paired with an 8-bit exponent, so dithering at that stage isn’t considered necessary.
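You can see that 24-bit mantissa limit directly, for example with numpy: whole numbers stay exact only up to 2^24, after which the 25th bit no longer fits:

```python
import numpy as np

# float32 = 1 sign bit + 8 exponent bits + 23 stored mantissa bits (24 with the implicit bit).
print(np.float32(2**24))      # 16777216.0 - representable exactly
print(np.float32(2**24 + 1))  # 16777216.0 - rounded, the 25th bit doesn't fit
print(np.float32(2**24 + 2))  # 16777218.0 - the spacing between floats is now 2
```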

A big advantage of recording in 24 bit is that you can use far more relaxed recording levels while knowing you still have enough resolution left to capture the signal accurately.

You’re not risking clipping and you’re not putting (especially cheaper) analogue components under as much strain.

AFAIK
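
To put rough numbers on the “relaxed levels” point above, here’s a back-of-the-envelope sketch that treats each ~6dB of unused headroom as one bit of resolution given up; the -18dBFS peak level is just an example figure:

```python
import math

def effective_bits(bit_depth, peak_dbfs):
    """Rough resolution left when your recorded peaks sit below full scale.

    Every ~6.02dB of headroom you leave unused costs about one bit.
    """
    return bit_depth - abs(peak_dbfs) / (20 * math.log10(2))

# Recording conservatively, with peaks around -18dBFS:
print(round(effective_bits(16, -18), 1))  # ~13.0 bits of signal left
print(round(effective_bits(24, -18), 1))  # ~21.0 bits of signal left
```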