Hybrid mixing dither question

I am starting to get into hybrid mixing.

I’m using a Steinberg UR824 (24 bit) interface and Cubase using an effects loop to go through a Warm Audio 1176 and WA2A (LA2A clone).

I’m working in Cubase at 32 bit float, so I presume I should dither before the output to the effects loop? Is there any way around this problem? I understand that if I set the project bit depth to 24 bit, Cubase is still processing those files internally at 32 bit float - it’s just that the sound files are recorded at 24 bit depth. If that’s the case then presumably I still should be dithering each time I use the external effects loop. Is that correct?
Many thanks

No. The only time you need to dither is when you render audio to a lower bit depth.
The confusion here lies in the difference between the project bit depth vs Cubase’s internal audio engine.
This can be turned into a massive topic, so I’ll just keep it real simple.
The project bit depth (and sample rate) is what your audio interface uses as well as recorded audio, virtual instruments and plugins.
Internally Cubase’s mix engine can handle audio in a much higher bit depth. This provides benefits in mixing and processing audio but is transparent to any other components (such as plugins and your interface). I.e. your audio interface’s AD/DA converters are still converting 24 bit audio regardless of the bit depth of Cubase’s internal mix engine.

Sorry, that is actually not fully correct. The project bit depth affects the audio files that are recorded and generated (in the case of e.g. render in place, freeze, DOP). It does not affect instruments and plugins, those operate either in 32bit float or 64bit float (depending on the mix engine settings and whether they support 64bit processing).

The interesting question for me is whether converting from 32 bit float to 24 bit int (which is what happens when you send data to your audio interface) actually is a reduction in bit depth. I think I once read that 32f and 24i have the same precision, because the mantissa of a 32 bit float is 24 bits in size. That would mean that there is no truncation happening when converting and no dithering would be needed, but I am definitely not knowledgeable enough to really understand the intricacies of DSP programming and whatnot :wink:
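For what it’s worth, the 24-bit-mantissa point is easy to check numerically. Here is a quick Python/NumPy sketch (just an illustration of the number format, not how Cubase or any driver actually does the conversion) that round-trips every 24 bit sample value through 32 bit float:

```python
import numpy as np

# Every possible 24 bit signed sample value
ints = np.arange(-2**23, 2**23, dtype=np.int32)

# Normalise to +/-1.0 as 32 bit float (the usual float audio convention), then convert back
as_float = ints.astype(np.float32) / np.float32(2**23)
back = np.round(as_float * 2**23).astype(np.int32)

# Prints True: float32 has a 24 bit significand, so no 24 bit value is lost
print(np.array_equal(ints, back))
```

So going 32f to 24i can only lose whatever the processing left below the 24 bit grid, which sits down around -144 dBFS.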
The other question is whether anyone actually could hear the difference between dithered and non-dithered 24 bit… but I won’t go there :upside_down_face:

To the OP: in case of doubt, test it with and without dithering and see whether you can hear a difference. My hunch is that any millimeter of knob movement on your hardware will have a bigger impact on the sound.

Dithering is only useful when intentionally lowering the bit depth of an audio file, for example when converting your mastered file from 96 kHz/24 bit to 44.1 kHz/16 bit.
If you do that without dithering, all the samples will abruptly snap to the closest bit value.
That will effectively alter the audio and change the waveform shape, introducing unwanted distortion/artifacts.
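To put numbers on that “snapping”, here is a small Python/NumPy sketch (purely illustrative - plain TPDF dither, not any particular plugin’s dither) that quantises a very quiet sine to 16 bit with and without dither and measures the 3rd harmonic. Truncation alone leaves an error that is correlated with the signal (harmonic distortion); dither turns it into benign, signal-independent noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1 << 16, 1000                                # FFT length and sine bin (exact bin, no leakage)
t = np.arange(n)
x = (1.5 / 32768) * np.sin(2 * np.pi * k * t / n)   # ~ -87 dBFS sine, close to the 16 bit floor

def quantise_16(sig, dither=False):
    s = sig * 32768.0
    if dither:
        # TPDF dither: two uniform sources summed, 2 LSB peak-to-peak
        s = s + rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
    return np.round(s) / 32768.0

def bin_level_db(sig, b):
    spec = np.abs(np.fft.rfft(sig)) / (n / 2)
    return 20 * np.log10(spec[b] + 1e-30)

for name, y in (("no dither", quantise_16(x)), ("TPDF dither", quantise_16(x, True))):
    print(f"{name}: 3rd harmonic at {bin_level_db(y, 3 * k):.1f} dBFS")
```

The undithered version shows a distinct 3rd harmonic well above the dithered version’s noise floor; whether anyone can hear it at these levels is another matter.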

There’s actually no need to dither when going 32 to 24. The resolution is already high enough at 24 and the distortion will be exceedingly low and completely negligible.

However it is highly recommended when going 24 to 16, because 24 bits has 16,777,216 possible values, and 16 bits only has 65,536 possible values, which is 256 times fewer.
It means that when going 24 to 16 without dither, the distortion would be roughly 256 times higher than when going 32 to 24 without dither.
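In plain numbers (a trivial check, using the usual ~6 dB-per-bit rule of thumb):

```python
levels_16 = 2 ** 16          # 65,536 possible values
levels_24 = 2 ** 24          # 16,777,216 possible values

print(levels_24 // levels_16)             # 256: the 16 bit quantisation step is 256x coarser
print(round(6.02 * (24 - 16), 2), "dB")   # ~48 dB difference in theoretical noise floor
```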

Cubase’s (or a plugin’s) internal resolution (32 or 64 bit float) is only used for the internal calculations when the audio is processed - gain faders, panners, effects, etc.
Let’s say your recording is in 16 bit, and Cubase is in 64 bit float.
The 16 bit audio goes through an EQ.
When you tweak the EQ bands, all the internal mathematics will happen on the 64 bit range, and the resulting values will be rounded to the closest 16 bit values.
This causes no distortion at all, and needs no dithering, because it is only related to the internal processing, not to the actual audio resolution.

It is a total myth to think that plugins will increase the actual bit depth on input, then lower it back to the original on output.
Audio Processing and audio Conversion are two completely different things.


To answer the OP’s original question:
No need to dither when sending the audio through external FX.
If your interface is set to 24 bit and your audio has also been recorded in 24 bit, then it will always stay in 24 bit.

Basically, if you convert at 24 bits or above you should be ok.

Many thanks for your reply. That’s very helpful.

Many thanks for your reply. I’m most grateful to you.

Many thanks for your reply. That clearly answers my question and is most reassuring.

Many thanks. Cheers.

“When you tweak the EQ bands, all the internal mathematics will happen on the 64 bit range, and the resulting values will be rounded to the closest 16 bit values.”
“It is a total myth thinking that plugins will increase the actual bit depth on input, then lower it back to original on output.”

No. The math will happen in the 64 bit range, but the result will not be truncated to a lower bit depth. The result will be output in 32 or 64 bit float format, depending on your Cubase setting. This is true for any audio processor, even if it is only a gain knob that is not at the neutral position.
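A tiny sketch of that (Python/NumPy, purely illustrative - obviously not Steinberg’s actual engine code): even a single fader move pushes a 24 bit value off the integer grid, which is exactly why the engine keeps the result in float.

```python
import numpy as np

sample = np.float32(12345) / np.float32(2**23)   # one 24 bit sample value, normalised to +/-1.0
gain   = np.float32(10 ** (-3.2 / 20))           # a -3.2 dB fader move

out = sample * gain                              # the processed sample, still 32 bit float
scaled = float(out) * 2**23                      # map it back onto the 24 bit grid for inspection
print(scaled, scaled == round(scaled))           # not a whole number -> it no longer fits the 24 bit grid
```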

Well, if the plugin processes in 64 bit float but Cubase’s mix engine is set to 32 bit, strictly speaking there will of course be truncation and you will inevitably lose precision (whether that is relevant is of course another question, maybe one for the golden ears :wink: ).
But of course the important part, as you mentioned, is that through the whole mix engine the audio stays in 32 or 64 bit float, and any conversion to 16 or 24bit integer only happens on export, where you should dither, if only because it doesn’t hurt.
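That 64f-to-32f rounding is easy to see in a toy example (Python/NumPy, just for illustration):

```python
import numpy as np

x64 = np.float64(1.0) / np.float64(3.0)   # some 64 bit intermediate result
x32 = np.float32(x64)                     # handed back to a 32 bit float path

print(x64)          # 0.3333333333333333
print(x32)          # 0.33333334 - the extra ~29 mantissa bits are rounded away
print(x64 == x32)   # False
```

Whether losing those bits that deep below the signal is ever audible is, again, one for the golden ears.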

Thank you all for these very helpful replies.

All the best

Reub

I made a little chart to assist in understanding better where and when bit depth changes occur.
Below that I will link a good YT video about dithering.

  • 32f = 32 bit float / 64f = 64 bit float
  • AFAIK plugins will not change the bit depth. They will work with whatever is being fed to them by the host.
  • I am not sure whether Cubendo always converts the audio stream to 32f/64f or if the conversion only happens as soon as any VST parameter in the audio stream is “added” (e.g. volume change).
  • Personally I run Cubase on 32f and I also record everything on 32f. This way I keep conversion to a minimum.

Here is the video link:

That might not necessarily be true. Some plugins work internally with 64bit (sometimes even 80), especially if they are using IIR filters, because with recursive filters rounding errors can accumulate (I wouldn’t count on actually being able to hear those rounding errors, though… :wink:)
I am also pretty sure that some plugins that emulate older digital devices (think reverbs) work internally with integer instead of floats.

As soon as you modify the audio in any way, be it volume changes (fader, event volume, pre-gain), plugins, or summing (i.e. going through the master channel), the audio gets converted to float. It might be possible that, if you route the audio immediately through an external FX on the channel, it stays at the original bit depth for the time being.

You can of course do that, but there is no real benefit in recording at 32f if your AD gives you 24i. You’re just moving the conversion between integer and float to the “writing of the file” part instead of the “summing engine” part.

I think I didn’t express myself clearly. The plugin is like a black box for the host. If the host transmits 32f, it expects to receive 32f back from the plugin. If the host sends 64f, the plugin will return 64f. What the plugin does internally is a totally different question and is not under the control of Cubase.

It surely used to be that way. But since I could imagine an advantage in converting any audio stream from the driver or a file directly to 32f/64f even before any VST parameter changes the stream, I put it up there with question marks.

My DA is only for monitoring. Anytime I render in place I get the exact same resolution that the Cubase engine uses = there is no conversion when rendering and no conversion when the file comes back into the engine. I save two conversions on every bounce. I never lose any information.
On the downside I use 33% more disc space. Disc space is cheap, so this is a no brainer for me.
Basically I convert two times: On recording audio and at the very end for any mixdowns I create. If I work fully in the box (ie. no audio recordings) I only do one bit depth conversion: final mixdown.

Edit - Addendum:

Yes, I try to move the conversion from realtime to offline. A little bit less to do for the audio engine while playing back.

Using the Bitter plugin will help you find answers to your questions :slight_smile:
Put one pre fader and one post fader on one track, and other instances in various places in the signal path, and see how it behaves. Even though Cubase is set to 64 bit, some plugins will only use the 32 bit range, and if the audio happened to be 64 bit already before the plugin, then the plugin will output in 32 bit until something else converts it to 64 bit again, so in reality you still have conversion happening everywhere. :laughing:
And not to mention that the plugins actually use whatever bit depth they want to use (for example Kirchhoff-EQ can be set to 117 bit), regardless of the Cubase setting, so you may have 10 conversions in a chain of 5 plugins… The Cubase setting is only for Cubase’s own processing such as volume, panning, and all various editing like transpose, audiowarp etc, but not for insert effects.

What does that mean “no audio recordings” ? Importing audio files or using virtual instruments ?
If the library is using 44.1/16 samples and your project is 96/24, it has to convert both the sample rate and bit depth at the time the instrument outputs the audio… So whether this is done directly in the VST (before outputting the audio) or in Cubase (after the audio has been outputted) is definitely the same thing: it has already been converted once.

Just make music instead of ruminating about this :wink:

I simply set the processing precision to 64 bit and record at 24 bit. 24 bit gives more than enough dynamic range.

It is beneficial though to work at 96kHz in my experience, but that is a whole different discussion.

I have done that. Strangely the result is as per my above chart. The only plugin I could find that reduces the bit depth was Cubase’s Bitcrusher, and that only on particular settings, e.g. the initial setting.

Using only virtual instruments.
But even if I use audio files… upon import I can have them converted to 32f (non-realtime) which means a bunch of zeros are added. From there I am golden.

I admire your fine sense of humor.

I set Cubase to 64f, audio file is 24.
I insert Bitter Post-Fader.
With vanilla signal path, Bitter shows 24 bit.
If I tweak the volume fader, it shows 64.
If I insert a Waves plugin (or whatever plugin outputs 32f) before Bitter, but still Post-Fader, it will show 32.
If I instead put the 32f plugin Pre-Fader, and surround it with two more Bitter instances, I see that the first one shows 24, the second 32, and the third 64.
If I put another 32f plugin on the output channel, followed by Bitter, it will once again show 32.

This experiment can only be executed properly when Cubase is set to 64f. If you set it to 32f you will not be able to see the difference.

If I remember correctly, most of Cubase’s stock plugins can output 64f when Cubase is set to 64f, but the majority of 3rd-party plugins will only output 32f, even though their internal processing can be done at a higher resolution. For this reason, it is not possible to know a plugin’s operating bit depth unless it is stated in its documentation. A lot of modern plugins are indeed working in 64f, but only output in 32f. You simply cannot avoid conversion.

It’s like comparing a star that is 30 trillion km away with one that is only 25 billion km away: they look just as bright, and there’s no way to tell the difference in distance…
And how do humans even know the distance of stars anyway? They don’t, really - those are just axioms…

  1. I am smartly avoiding the trouble with 3rd party plugins by running everything on 32f. There are reasons why I personally don’t use 64f.
  2. Plugins can do all kinds of shenanigans between their input and their output. I do not have any control over that. I just go by sound and what some analyzing tools show me. If I don’t like what I hear/see, the plugin is out.
  3. As long as I remain with floating-point numbers I am good. All I really try to avoid is switching between floating-point and integers more than absolutely necessary as this is the point where degradation of the signal can happen on a significant level.

Which brings us back to the initial question: should dithering and noise shaping be inserted on a signal path that leads to an external effect?
I’d say use it when you send very quiet signals (-60 dB and less). On everything louder you probably won’t hear the difference. But most importantly: go by ears.