Hybrid mixing dither question

But this is only done once, when the audio goes through its first processing; after that the rest of the signal path remains floating point…
As far as I know the audio doesn’t degrade when increasing the bit resolution…

This thread is about leaving the audio engine and coming back in. If I have to do this via my audio interface, I have no choice but to allow the conversion from 32f to 24i.
However, if I bounce audio I certainly do not want it converted from float to integer.
If I record through my interface I receive 24i but tell Cubase to immediately convert it to 32f (Project Settings).
If I import any audio file I will always have Cubase convert it to a new file with 32f bit resolution.
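
(For anyone curious, here is a quick numpy sketch — not Cubase’s actual code — of why the 24i → 32f direction is lossless: every 24-bit integer sample value fits exactly in a float32 significand, which has 24 bits.)

```python
import numpy as np

rng = np.random.default_rng(0)
ints = rng.integers(-2**23, 2**23, size=1_000_000)   # random 24-bit sample values
as_float = (ints / 2**23).astype(np.float32)         # 24i -> normalized 32f, as on import
back = np.rint(as_float.astype(np.float64) * 2**23).astype(np.int64)
print(np.array_equal(ints, back))                    # True: the round trip is exact
```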

Kindly refer to my drawing from earlier.

Here is more food for thought, since you started mentioning Bitter. There are two different kinds of bit resolution. The first is the datum, i.e. how many bits are interpreted as one unit. The other is the audio signal, i.e. how many bits of the datum are actually being used to represent the audio signal.
As far as I can tell, Bitter shows the latter. That is OK to some extent: if I see that 32 bits are being used, I can rule out that the datum uses 24 bits. However, Bitter also shows me bit usage above (well, on its display it is below) 64 bits. How can that be? The magic of floating-point numbers? On the other hand, Bitter shows a 12-bit signal for Bitcrusher. Unless there is a serious bug in Bitcrusher, the datum being used is still 32 bits.
Personally I would prefer to have WaveLab’s Bitmeter as a plugin in Cubase.
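
(I don’t know how Bitter actually measures this, but here is one plausible way to estimate “bits used”, sketched in Python. The `used_bits` helper is hypothetical, purely for illustration: it puts the samples on a fixed-point grid and checks how many low bits are zero across the whole block.)

```python
import numpy as np

def used_bits(x, word=24):
    """Hypothetical estimate: express samples on a `word`-bit grid and count how
    many low bits are zero across the block -- those bits carry no signal."""
    q = np.rint(np.asarray(x, dtype=np.float64) * 2**(word - 1)).astype(np.int64)
    q = q[q != 0]
    if q.size == 0:
        return 0
    shared_trailing_zeros = min(int(v & -v).bit_length() - 1 for v in q)
    return word - shared_trailing_zeros

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 48000).astype(np.float32)
crushed = np.rint(x * 2**11) / 2**11          # a naive 12-bit "bitcrusher"
print(used_bits(x), used_bits(crushed))       # 24 12
```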

Of course, referring back to the OP, I totally understand that there’s no choice but to convert the audio to integer when using external FX.
However, why would one feed the same audio file through external FX hundreds of times in a row, up to the point where the distortion starts to become audible? That’s very unrealistic.

All audio interfaces, except maybe ones from decades ago, are 24-bit, and that gives more than enough resolution to run audio through dozens of hardware FX without ever noticing the distortion. For this reason there is no need to hide it with dithering.

Actually, applying dithering every time would make things worse by piling noise on top of noise, ultimately leaving the audio full of audible white noise.
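
(Here’s a toy numpy simulation — not Cubase’s actual converter path — of 100 float → 24i → float round trips, with and without TPDF dither. The dithered version does accumulate noise on every pass, as you say, yet even after 100 passes it stays far below audibility.)

```python
import numpy as np

rng = np.random.default_rng(2)
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)   # 1 kHz test tone

def round_trip(y, dither):
    """One 32f -> 24i -> 32f conversion, optionally with TPDF dither (+/- 1 LSB)."""
    d = rng.uniform(-.5, .5, y.shape) + rng.uniform(-.5, .5, y.shape) if dither else 0.0
    q = np.clip(np.rint(y * 2**23 + d), -2**23, 2**23 - 1)
    return (q / 2**23).astype(np.float32)

for dither in (False, True):
    y = x.copy()
    for _ in range(100):                   # 100 passes through "external FX"
        y = round_trip(y, dither)
    rms = np.sqrt(np.mean((y - x)**2))
    print(f"dither={dither}: error floor = {20*np.log10(rms):.1f} dBFS")
```

Undithered, re-quantizing an already-quantized signal is a no-op after the first pass; with dither, the noise grows with roughly the square root of the number of passes.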

Nevertheless, I really appreciate your additional information about the datum thing, it’s very interesting! The Bit Meter plugin seems great; perhaps it could find its place in SuperVision, if some devs read this :slightly_smiling_face:

If we’re talking about 32-bit float and 24-bit integer, then how significant can this really be? Quantization errors would be at the LSB, no? That’s down 144 dB from 0 dBFS, theoretically.
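
(For what it’s worth, the 144 dB figure checks out — the full 24-bit peak-to-peak range over one quantization step:)

```python
import math
step = 2 / 2**24                      # one LSB across a signed 24-bit [-1, 1) range
print(20 * math.log10(2 / step))      # ~144.5 -> "down 144 dB", as stated
```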

It is not spooky enough that I would start a crusade for everybody to use 32f instead of 24i.
The thing is: what are the drawbacks of using 32f instead of 24i? An audio file size increase of 33%, which for me is negligible. Any other drawbacks?

LSB? When we run a 64-bit OS on a 64-bit CPU and talk about 24-bit or 32-bit usage? There is no LSB.

If done correctly, these shouldn’t be quantization errors but rounding errors, which you get from floating-point arithmetic all the time throughout the mixing engine anyway.
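
(A quick illustration in plain numpy — not how any particular engine is implemented: summing 64 float32 tracks and comparing against a float64 reference shows that rounding error is indeed ever-present, down around −140 dBFS.)

```python
import numpy as np

rng = np.random.default_rng(3)
tracks = rng.uniform(-0.1, 0.1, size=(64, 48000)).astype(np.float32)  # 64 "tracks"

mix32 = tracks.sum(axis=0, dtype=np.float32)    # a 32-bit float summing bus
mix64 = tracks.sum(axis=0, dtype=np.float64)    # 64-bit reference mix

err = mix32 - mix64
print(f"rounding-error RMS: {20 * np.log10(np.sqrt(np.mean(err**2))):.1f} dBFS")
```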

You can always get a 32 bit audio interface to feed your external effects!

You said there were significant levels of signal degradation when “switching between floating-point and integers”, and that the only way to hear them is out of the speakers, after the signal has been converted from float back to integer.

Do integer numbers not have an LSB? Of course they do.

So where does this error occur? Where in the signal, measured as an amplitude relative to full scale, does this degradation happen?

Potato. Potato.

A quantization error is one where we’re forced to round off to the nearest available value. A rounding error, I suppose, could be rounding off to the wrong value. But the semantics make no difference; in the end it’s the same type of problem from a practical standpoint, in that it happens somewhere in the signal chain and at some amplitude. The latter is what I’m getting at.

I’m just wondering how this is able to show up in any way where it ends up being audible to us in a normal listening environment.

Apparently it does for some people, because they are trying to avoid it at all costs :man_shrugging: :smiling_face_with_tear:

If we push the thinking a bit further: the simple act of adding an EQ and tweaking one single band by ±0.1 dB will alter the audio far more than converting back and forth a thousand times in a row… and do you hear a change of 0.1 dB? I bet some people will say they do, but it’s physiologically almost impossible.
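
(To put rough numbers on that, a back-of-the-envelope calculation in Python:)

```python
import math
eq_tweak = 10 ** (0.1 / 20) - 1      # +0.1 dB as a relative amplitude change
one_lsb  = 2 ** -23                  # one 24-bit step relative to full scale
print(f"{eq_tweak:.4%} vs {one_lsb:.6%}")   # ~1.16% vs ~0.000012%
print(round(eq_tweak / one_lsb))     # ~97000: the EQ tweak dwarfs the LSB
```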

People try hard to avoid conversion and rounding errors but will push their audio through dozens of effects that drastically alter the sound. Where is the logic?

I think the logic is to avoid any “digital” artefacts. There are many different types, and people do not understand what they do to audio quality, so to be safe they avoid everything they know of. From an audio engineering point of view that is of course correct, but it might not always be a good way to spend time and money. Still, the topic is about establishing good practice.