Cubase Processing Precision, 32 Bit or 64 Bit?

Thank you very much KHS,

I understand the difference between the Bit Depth of the Project Setup, the audio format and the internal Processing Precision.

Like you say, if it’s hard to notice a difference in CPU power usage, why not go 64 Bit? My computer is powerful enough to handle the insignificant extra.

Thanks again!

Ah - it’s complicated and I can’t say I fully understand it. 32 bit integer has an SNR of 193 dB, but anything recorded in 32 bit float actually can’t have an SNR greater than 144 dB, because only the 24 bit mantissa is used to store the signal. The other 8 bits provide a scaling factor. So, although the dynamic range is enormous, the SNR isn’t anywhere near as good.

Yes, you are correct, I meant that the dynamic range is above 1500 dB.
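For what it’s worth, that figure is easy to sanity-check. Here is a minimal Python sketch (my own, not anything from Cubase) that derives the dynamic range of 32 bit float from NumPy’s float32 limits, taking the smallest *normal* number as the floor, which gives the commonly quoted ~1528 dB; counting denormals would make it larger still:

```python
import math
import numpy as np

# Estimate the dynamic range of 32-bit float from its extreme values.
# f32.tiny is the smallest *normal* float32; denormals go lower still.
f32 = np.finfo(np.float32)
dyn_range_db = 20 * math.log10(float(f32.max) / float(f32.tiny))
print(f"32-bit float dynamic range: {dyn_range_db:.0f} dB")
```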

So that means the signal is still sent to the interface at 32/64 bit float, and the interface converts it in real time to the selected bit depth. I always thought the DAW would output the audio at the working bit depth directly, but this information isn’t documented anywhere.

Your interface’s converters work at the bit depth you select in the driver software; by default it’s 24 bit. Your interface’s converters don’t do float. If your project bit depth is different from what you select in the driver software, Cubase (or the driver, I’m not sure which) will convert it before it hits the converters.

But I’m talking about the internal precision, 32 or 64 bit float, not the project bit depth.
You said that the first thing Cubase does when playing audio is convert it to the selected 32 or 64 bit float format, and that it then remains like that for the whole signal chain.

I am asking if the signal remains at 32 or 64 bit float up to the interface, or if it is converted back to the selected project bit depth before going to the interface.

I am asking because I made that post and I want additional details, since another user told me it worked like this.

It is converted to the bit depth you set in the driver software, and I’m pretty sure it’s the driver that converts it. If your driver software doesn’t allow you to change it, then assume that your interface always works at 24 bit, and whatever you throw at it is converted to 24 bit.
The internal precision is converted to the project bit depth at the output.

Alright, many thanks ! That’s what I wanted to know.

There’s a big advantage in using 32 bit float rather than 32 bit integer. The signal can go beyond 0 dBFS and there is no risk of clipping. This data is stored in the 32 bit float file, which means you can always recover the material beyond 0 dBFS simply by lowering the gain. The audio will never distort.
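To illustrate that point, here is a small NumPy sketch (my own example, not Cubase internals): a float32 signal with a sample at +6 dBFS survives a gain reduction intact, while an integer-style clip destroys it for good.

```python
import numpy as np

# A float32 signal that peaks at +6 dBFS (sample value 2.0).
x = np.array([0.5, -1.2, 2.0], dtype=np.float32)

# Integer formats must clip anything beyond full scale (|x| > 1.0)...
clipped = np.clip(x, -1.0, 1.0)

# ...but in float the overs are stored intact, so a gain change is reversible.
recovered = (x * np.float32(0.5)) * np.float32(2.0)

print(clipped)    # the 2.0 sample is lost
print(recovered)  # matches the original
```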

I don’t know if 32 bit float SNR is -144 dB. Another website says it goes much lower than this.
I can’t find a definitive answer.
I have found this website

32 bit float is, quality-wise, the same as 24 bit integer. The added 8 bits are used as the exponent, which is what makes the dynamic range so high, but the SNR is still -144 dB, the same as 24 bit integer.
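You can measure this yourself. The sketch below (an illustrative experiment, not anything Cubase does) quantizes a full-scale sine to 24 bit precision and measures the resulting SNR. Note that conventions differ: 6.02 × 24 ≈ 144.5 dB, while adding the usual 1.76 dB sine-wave term gives about 146 dB.

```python
import numpy as np

n = 1 << 16
t = np.arange(n)
x = np.sin(2 * np.pi * 440.0 * t / 48000.0)   # full-scale sine in float64
q = np.round(x * (1 << 23)) / (1 << 23)       # round to 24-bit precision
noise = q - x
snr_db = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
print(f"measured 24-bit SNR: {snr_db:.1f} dB")
```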

Well, to be consistent, if my internal processing precision is set to 64 bit float and if the storage problem doesn’t matter anymore, wouldn’t it make more sense to set the Record File Format to 64 bit float?

And for more precision and audio quality, what would be the best sample rate to use with a 32 or 64 bit format, given that in the end all this audio will be converted to 24 bit or 16 bit depending on the needs?

Many plugins implement both 32-bit and 64-bit code paths, because the VST plugin API allows you to do one or both.

This might be true for 32-bit fixed-point values, as used by DSP chips from the distant past, but this is not true for 32-bit floating point.
There are also other problems than signal-to-noise in plugin processing. Read below.

So much mistaken math in this thread.

First: a “32-bit” floating point number has 24 bits worth of precision. (It has a 23-bit mantissa, but gets one extra bit “for free” because of the specifics of the encoding.)
24 bits of precision puts the quantization noise floor at about -146 dB.
Using 32-bit floating point precision in Cubase (or any other system) will give you quantization noise that’s at -146 dB, and you’ll probably want to add dither to get it to -143 dB but better sounding.

Second: -146 dB quantization noise is still quite imperceptible, because you need to be playing at full scale to even get to that point. If your signal is below full scale, the quantization noise scales down with it; this is what the “floating point” is all about. I measured my near-field Genelecs at about 112 dB SPL before they distort (if I remember correctly, this was long ago, and I never run them that hot because I like my ears), so the quantization noise would sit below 0 dB SPL even at maximum level. And ears mask signals: you can’t even hear a +20 dB signal when the room is blasting at 110 dB, much less a -36 dB signal, which you essentially can never hear, almost by definition.

A 64-bit floating-point number, which many VST plugins can process with these days, gives you 52+1 bits of precision, so a little under 320 dB of headroom above quantization noise at full scale. That’s ludicrous overkill, if all you’re thinking about is quantization noise.
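A quick back-of-the-envelope check of those headroom figures, using the simple 20·log10(2^bits) rule of thumb (my own sketch):

```python
import math

# Full-scale quantization-noise headroom for different precisions.
# 53 = effective precision (52 + 1 implicit bit) of a 64-bit float.
headroom = {bits: 20 * math.log10(2 ** bits) for bits in (16, 24, 53)}
for bits, db in headroom.items():
    print(f"{bits:2d} bits: {db:6.1f} dB")
```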

However, in signal processing, something called “recursive filters” (“IIR filters”) is very, very common. Almost all EQ, anti-aliasing, and even compressor responses are implemented using IIR. Recursive filters, especially at frequencies that are low compared to the sampling frequency, end up accumulating error in an exponential manner.

Thus, it’s actually very very hard to design an EQ/filter that is stable at 20 Hz (or even 100 Hz) when using 32-bit floating point, but it’s quite doable at 64-bit floating point. Back when VST only did 32-bit, many plugins would internally convert to 64 bit just to be able to run stable filters, and then convert back.
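Here is a toy illustration of why (my own sketch, with an illustrative coefficient, not a real Cubase filter): a recursive filter whose pole sits very close to the unit circle, as you get for resonances far below the sample rate, needs a feedback coefficient extremely close to 1.0. Rounding that coefficient to 32-bit float shifts the filter’s gain noticeably, because the tiny quantity 1 − a is what actually matters.

```python
import numpy as np

# One-pole recursive filter y[n] = a*y[n-1] + x[n], pole very close to 1.
a64 = 0.999999                   # illustrative coefficient (64-bit)
a32 = float(np.float32(a64))     # same coefficient rounded to 32-bit float

# The DC gain is 1/(1 - a): tiny rounding in `a` is hugely magnified.
g64 = 1.0 / (1.0 - a64)
g32 = 1.0 / (1.0 - a32)
print(f"64-bit gain: {g64:,.0f}   32-bit gain: {g32:,.0f}")
print(f"relative gain error at 32-bit: {abs(g32 - g64) / g64:.1%}")
```

In a higher-order filter the same coefficient error can push a pole outside the unit circle entirely, which is the silent/clicky instability described above.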

Constant conversion back and forth may eventually degrade the signal in theory, but in practice you’d need so many plugins that you couldn’t fit all their names on your screens before it mattered. But there’s some CPU load involved in the conversion, and the built-in Cubase filters/EQ will certainly work better in 64-bit than in 32-bit.

So, if you ever have a plugin that suddenly goes silent or clicky when you crank frequency low or feedback high, it might be that you’re running into the numerical stability limits of 32-bit floating point values, and if the plugin also supports 64-bit processing, changing your project to 64-bit may fix it.

Btw: what’s often done in practice is to add a small noise signal at around -120 dB, so 20 dB above the quantization noise floor, to prevent the plugin from reaching the denormal/numerically unstable state. This noise signal can theoretically be amplified by downstream plugins too, so your signal chain may actually become quieter when you switch to 64-bit processing, where the same kind of signal only needs to sit at around -300 dB to serve the same purpose.
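To put those numbers side by side, here is a small sketch (my own) comparing the linear amplitude of a -120 dB keep-alive noise floor with the level where 32-bit floats go denormal, which is roughly -758 dB for the smallest normal number:

```python
import numpy as np

noise_amp = 10 ** (-120 / 20)             # linear amplitude of -120 dBFS
tiny = float(np.finfo(np.float32).tiny)   # smallest normal float32
print(f"-120 dB as amplitude: {noise_amp:g}")
print(f"smallest normal float32: {tiny:g}  ({20 * np.log10(tiny):.0f} dB)")
```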

Not really, as the conversion from the stored 32 bit float only needs to be done once, not with every plugin.
Also, using the industry-standard 24 bit integer as the starting point, 32 bit float uses 33% more storage space, and moving to 64 bit float as the project setting uses 100% more on top of that, for no reason, as 32 bit float is already enough to avoid clipping.

As I wrote in my next post, I corrected myself. The SNR for 32 bit float is about the same as 24 bit.

This is not entirely true. How hot you run your monitors is just a matter of how far you turn your volume knob, i.e. how much amplification you apply. There is a difference between dBFS in the digital world and the SPL you get from your monitors.
To get the full benefit of the 144 dB SNR of 24 bit, in theory all you need is to make sure the signal hits 0 dBFS before AD conversion. But this is only in theory, as other factors come into the equation, and in practice even the best converters will not give you more than 120-ish dB SNR.

Thanks, that makes good sense to me.

So what you are saying is that the signal remains at its original bit depth after exiting a plugin?

On another Topic I got told :
(expand the quote to see my original statement)

Here @KHS seems to be of the same opinion: that the signal is converted to the internal 32 or 64 bit FP format for the whole signal chain.

What is the definitive answer?
Is the signal actually converted to Cubase’s internal bit depth between plugins and processing, or is the internal bit depth only used during mathematical operations, with the signal always returning to its original bit depth once the processing is done?

Thank you for the link.

Very clear as an explanation.

64 bit float, why not use it…

The definitive answer is: The audio stream is converted to 32 or 64 bit float format as soon as it enters the signal chain. And it stays there for the whole signal chain.

In the [Studio] > [Studio Setup] > [Audio System] > [Advanced Options] panel you can choose between 32 bit or 64 bit float processing. This setting affects the entire signal chain.

Let us assume that you have selected 32 bit float processing, and you have imported a 16 bit WAV file into a Cubase track.

Now put a bit meter into an insert slot. For this test I recommend the free “Bitter” by Stillwell Audio. When you play the file, Bitter will show that only 16 bits are used in the audio stream. In reality, however, the audio stream is already in the 32 bit float audio format.

A 32 bit floating point number consists of a sign bit, an 8 bit exponent, and a 23 bit significand field; thanks to an implicit leading bit, the significand carries 24 bits of precision. (The significand is sometimes called the mantissa, but strictly speaking that term is not correct.)

Because we have not applied any processing to the audio stream so far, the bit meter shows that only 16 bits of the fp number are used. But note that this is not a stream of 16 bit integers; it is a stream of 32 bit floats in which only 16 bits’ worth of the significand can be nonzero. These are just the bits that came from the integer samples. The remaining bits of the significand are zero, and the exponent merely scales each sample back to its original level.

Now let us apply some processing. We can simply use the gain fader for this. Put another instance of Bitter into the track, but this time post fader.

When the fader is in the neutral position (0 dB, meaning gain = 1.0), there is still no change. Only the original 16 bits of the significand are used. Now move the fader slightly. The gain value changes (e.g. to 0.9948 or 1.7493), and now Bitter shows that all 32 bits of the fp number must be used for the audio stream.
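You can reproduce the essence of this experiment without Bitter. The sketch below (my own illustration; the 0.9948 gain value is taken from the example above, and the sample value 12345 is arbitrary) dumps the raw bits of a float32: a sample that came from 16-bit audio ends in a long run of zero mantissa bits, while the same sample after a non-trivial gain change needs the full mantissa.

```python
import struct
import numpy as np

def f32_bits(x):
    """Return the raw 32 bits of a float32 as 'sign|exponent|mantissa'."""
    b = format(struct.unpack(">I", struct.pack(">f", float(x)))[0], "032b")
    return f"{b[0]}|{b[1:9]}|{b[9:]}"

# A sample that came from 16-bit audio: trailing mantissa bits are all zero.
clean = np.float32(12345 / 32768)
print(f32_bits(clean))

# After a non-trivial gain change, the full mantissa is in use.
gained = np.float32(clean * np.float32(0.9948))
print(f32_bits(gained))
```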

If you change your processing precision to 64 bits you will get similar results: As soon as you move the fader away from its neutral position, the sparse fp audio stream will change from 16 used bits to 64 used bits. But the audio stream is always in floating point format.

As long as only those 16 bits’ worth of precision are in use, the fp audio stream is fully equivalent to a 16 bit integer data stream. This means it can be truncated back to 16 bit integers without loss of precision, i.e. without the need for dithering.

As soon as processing is applied, all bits of the fp number are used. You cannot truncate to 16 or 24 bits without loss of data. The bits that represent very low signal levels will be lost.

In order to minimize the damage caused by the loss of precision, you can apply dither noise before truncating. Dither noise prevents unpleasant truncation distortion. On the other hand it raises the noise level a bit, but the trade-off is worth it.
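A small sketch of why dither helps (my own illustration, with amplitudes expressed in LSBs of the target format): a sine well below the 16 bit LSB is erased completely by plain rounding, while standard TPDF dither (the sum of two uniform sources, 2 LSB peak-to-peak) lets it survive as signal-plus-noise instead of silence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 48000
# A 1 kHz sine at only 0.4 LSB amplitude, i.e. below the quantization step.
x = 0.4 * np.sin(2 * np.pi * 1000 * np.arange(n) / 48000)

truncated = np.round(x)                                    # no dither: silence
dither = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(x + dither)                            # TPDF dither

print("undithered output is silence:", not truncated.any())
print("correlation with original:", np.corrcoef(x, dithered)[0, 1])
```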

Dithering is important when you go from fp to 16 bit integer format or even lower. With a 24 bit integer target format the difference will most likely be inaudible. I would still recommend dithering because it is so simple to do. But don’t worry if you have forgotten to dither in that case.
