do you dither your 32 bit files before conversion to 24 bit?

That’s what he says, but doesn’t that seem at odds with what so many other people say - that the truncation noise at 16 bits is so noticeable, until dithered away?

I won’t weigh in on dither vs. no dither. We all ‘know’ and/or have experienced that dithering is not a bad idea when reducing the word length from any depth down to 16 bit :laughing:

To get back to the distinct 32 to 16 bit point:
as far as I understand the 32 bit floating point format (and we’re not talking about 32 bit fixed point in Cubase), it actually carries a 24 bit audio stream: 23 stored mantissa bits plus one implicit leading bit. The 8 exponent bits scale those 24 ‘real’ bits to their optimal technical sweet spot. That’s how the 1500+ dB dynamic range of a 32 bit float file becomes possible.
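To make the “24 real bits plus scaling” idea concrete, here’s a small Python sketch (standard library only, using `struct` to reinterpret values as IEEE 754 single precision - an illustration, not anything from Cubase or RME):

```python
import struct

# IEEE 754 single precision: 1 sign bit, 8 exponent bits, 23 stored
# mantissa bits -- 24 bits of precision counting the implicit leading 1.
def float32_fields(x):
    """Return the (sign, exponent, mantissa) fields of x as a float32."""
    (i,) = struct.unpack(">I", struct.pack(">f", x))
    return i >> 31, (i >> 23) & 0xFF, i & 0x7FFFFF

def to_f32(x):
    """Round a Python float (a double) to the nearest float32."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

# 1.0 and 0.5 share the same 24-bit significand; only the exponent
# (the "scaling" bits) differs.
print(float32_fields(1.0))  # (0, 127, 0)
print(float32_fields(0.5))  # (0, 126, 0)

# Precision really is 24 bits: detail finer than 2**-24 relative to the
# sample value is rounded away, while 2**-23 survives.
print(to_f32(1.0 + 2 ** -24) == 1.0)  # True  (rounded away)
print(to_f32(1.0 + 2 ** -23) == 1.0)  # False (representable)
```

So however loud or quiet a float sample is, only 24 bits of detail travel with it - the exponent just slides that window up and down.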

Following that logic, it’s not necessary to dither 32 bit fp down to 24 bit, as the audible part of it is 24 bit anyway.

That’s very simplified of course. Doing some null tests with RME Digicheck as the measuring tool, you can see what residue is left in each case:


File nulling out perfectly against its phase-flipped copy; UFL means ‘underflow’ - nothing left to measure…

File practically nulling out against a 24 bit copy (made by bouncing the file with the project settings set to 24 bit .wav): the left side nulls completely, probably due to DC offset; the right side reads -141.5 dB - nothing you can hear, I guess :sunglasses:
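You can mimic that null test in a few lines of Python (a toy stand-in for Digicheck, assuming a hypothetical -12 dBFS test tone and plain truncation to 24 bit fixed point - not the actual files measured above):

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of samples in dBFS."""
    r = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(r) if r > 0 else float("-inf")

# A -12 dBFS 997 Hz tone at 48 kHz; Python floats are doubles, so this
# stands in for the high-resolution source file.
n = 48000
tone = [0.25 * math.sin(2 * math.pi * 997 * i / n) for i in range(n)]

# Truncate to 24-bit fixed point, like bouncing to a 24 bit .wav
# without dither.
q = 2 ** 23
tone_24 = [math.trunc(s * q) / q for s in tone]

# Null test: subtract the 24 bit copy from the original and measure
# what's left.
residue = [a - b for a, b in zip(tone, tone_24)]
print(f"residue: {rms_dbfs(residue):.1f} dBFS")  # roughly -143 dBFS
```

The residue lands in the same ballpark as the -141.5 dB reading above - far below anything a converter, let alone an ear, can resolve.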

Both sides read -130.9 dB RMS / -126.4 dB peak - that’s the noise UV22 has added, still below the possible dynamic range of my Fireface 800 (specified at 119 dBA - http://www.rme-audio.de/en_products_fireface_800.php)

So in my world, dithering from 32 bit fp to 24 bit is nothing to waste a thought on. Not even in projects of 50+ tracks, where that noise or those quantization errors could add up. My tracks usually contain audio that masks the -126.4 dB of whatever perfectly :smiley:
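To put a number on the “adding up” worry, here’s a back-of-the-envelope sketch (assuming the per-track noise is uncorrelated between tracks, which is the worst case for summing):

```python
import math

# Worst case: the dither noise on every track is uncorrelated, so the
# noise POWERS add when the tracks are summed.
per_track_dbfs = -126.4  # measured UV22 noise per track (from above)
tracks = 50
summed = per_track_dbfs + 10 * math.log10(tracks)
print(f"mix noise floor: {summed:.1f} dBFS")  # about -109.4 dBFS
```

Even in that worst case the floor only rises by about 17 dB, which is still far below any actual program material on those tracks.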

It’s a great video, and I agree with that remark, but only because ‘ruined’ is too strong. It depends very much on the type of music, but you certainly can hear the difference on good equipment, and of course only in the really quiet parts. Think of classical music, or a long sustained acoustic instrument resonating into silence. Will it be a problem in the real world, where most people listen to lousy 128 kbps MP3s on substandard playback equipment in noisy environments? Not really - but why not use the tools we have?

It’s not at odds when you consider this: the quantization distortion created by the truncation is itself very quiet (around -90 dB for 16 bit), so you can only hear it when the music is also very quiet (e.g. at the very end of a fade-out).
So yes, the truncation noise IS very noticeable - provided that the music is very quiet (down in the lowest bits).

At “normal” program levels the artefacts are still present, but they remain at that same very quiet level, so they’re masked by the rest of the music.

In 24 bit files, the quantization noise due to truncation is even quieter (about -144 dB, if I recall correctly).
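For reference, both rough figures follow from the textbook 6.02N + 1.76 dB rule of thumb for an ideal N-bit quantizer driven by a full-scale sine (a standard formula, not something measured in this thread):

```python
import math

# SNR of an ideal N-bit quantizer relative to a full-scale sine:
# 6.02 * N + 1.76 dB, so the noise floor sits that far below 0 dBFS.
def quantization_floor_db(bits):
    return -(6.02 * bits + 1.76)

print(f"16 bit: {quantization_floor_db(16):.1f} dB")  # about -98 dB
print(f"24 bit: {quantization_floor_db(24):.1f} dB")  # about -146 dB
```

Real-world material sits below full scale, which is why figures like -90 dB and -144 dB are commonly quoted as the practical floors.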