This setting will NOT change how plugins process audio. Your plugins will process audio in whatever format they are coded to use.
Changing to 64-bit processing only affects Cubase's own processing, which is the summing.
Theoretically it is more precise, but you probably won't hear a difference.
It will take a tiny bit more CPU. However, as most plugins are already coded to do their internal processing in 64-bit, many people will actually see lower CPU usage with the 64-bit engine, because it saves the conversion from 32-bit to 64-bit and back again that otherwise happens every time audio passes through a plugin.
The difference the word length makes is to the level of quantisation noise. With 32 bits of resolution in the samples, the signal-to-noise ratio is about 193 dB. It would be hard to generate signals at such a low level, and then add massive amounts of gain, to create a problem with 32 bit in normal operation. You’d have to work really hard at it!
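To sanity-check those figures, here is the textbook 20·log10(2^n) rule in a few lines of Python (nothing Cubase-specific, just the standard formula for fixed-point quantisation SNR):

```python
import math

def fixed_point_snr_db(bits: int) -> float:
    # Peak signal to quantisation-noise ratio for an n-bit fixed-point sample:
    # each extra bit halves the quantisation step, adding roughly 6.02 dB.
    return 20 * math.log10(2 ** bits)

print(round(fixed_point_snr_db(16), 1))  # 96.3
print(round(fixed_point_snr_db(24), 1))  # 144.5
print(round(fixed_point_snr_db(32), 1))  # 192.7
```

So the "193 dB" quoted above is the rounded-up figure for 32-bit fixed point.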
So noise is unlikely to be the reason for introducing 64 bit. It may be, as KHS says, that many plugins work at 64 bit internally, and the idea is to minimise conversions.
How does that work, exactly?
Is the 32- or 64-bit processing only for the internal mathematics, or is the audio actually converted to 32 or 64 bit for the whole signal chain?
For example, if the project is set to 16 bit, is the audio converted to the internal 32 or 64 bit only while being processed and then truncated back to 16 bit, or does it stay at 32 or 64 bit for the whole chain until you render or export?
You forget that it’s not 32-bit fixed point but 32-bit float, so the S/N ratio is actually more than 1500 dB with 32-bit float.
Besides that, the 64-bit double-precision engine is more of a marketing gimmick: other DAWs have had it for a long time, so Steinberg probably felt they had to add it to Cubase.
Setting the project to 16 bit only affects how the files are stored on your drive. As soon as the audio is played back from the file, it is converted to either 32- or 64-bit FP.
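Roughly, that playback conversion is just a scale into the float range. Here is a sketch in Python; the divisor 32768 is the conventional one for 16-bit PCM, and the exact conversion Cubase performs isn't documented, so treat this as an illustration:

```python
def pcm16_to_float(sample: int) -> float:
    # 16-bit PCM spans -32768..32767; dividing by 32768 maps it into
    # the -1.0..1.0 range a floating-point mix engine works in.
    return sample / 32768.0

print(pcm16_to_float(16384))   # 0.5
print(pcm16_to_float(-32768))  # -1.0
```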
Thanks for the answer. So the bit depth you set in the project settings only applies when you record new files?
For example, if I create a dummy input and record one track to another in real time, will the resulting file be at the same bit depth as the one set in the project?
Then what about when the project is set to 16 bit? The interface will also be working in 16 bit, but when the 32- or 64-bit audio reaches the converters and is truncated to 16 bit, why is dither not necessary?
Well, yes, if you set your project to 16 bit you would need to apply dithering. But no one in their right mind would set their project to 16 bit. Most people set it to 24 bit to get the maximum dynamic range out of their converters, and a few lazy folks with bad gain-staging habits, like myself, set the project to 32-bit float to avoid any chance of clipping being printed to a file.
With both 24 bit and 32-bit float, you only need dithering when you export to a 16-bit format.
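For anyone curious what that export-time dithering amounts to, here is a minimal TPDF-dither sketch in Python. The function name is made up for illustration; this is the generic technique, not Cubase's actual code:

```python
import random

def export_sample_to_16bit(x: float) -> int:
    # x is a float sample in -1.0..1.0, as used by the mix engine.
    scaled = x * 32767.0
    # TPDF dither: the sum of two uniform random values, spanning about
    # +/-1 LSB in total, decorrelates the quantisation error from the signal.
    tpdf = (random.random() - 0.5) + (random.random() - 0.5)
    # Quantise, then clamp to the legal 16-bit range.
    return max(-32768, min(32767, round(scaled + tpdf)))
```

Without the `tpdf` term this is plain truncation, which turns low-level detail into correlated distortion instead of benign noise.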
If I understand your answer, the best choice for me would be 32-bit float processing: always record and render audio in that same format (disk space is no longer an issue today), and apply dithering only when I export to a 16-bit format.
I mention 32-bit float processing because, while using the “Show plug-ins that support 64-bit float processing” function of the VST Plug-in Manager, I noticed that a fair number of the free third-party plug-ins I have, such as those from Kleinhelm, TDR and a few others, had disappeared from the list, so not all plugin companies support the 64-bit float format.
So 32-bit float, which offers excellent resolution and limits 32-to-64-to-32 conversions, is the best choice to make, isn’t it?
Well, first off, don’t mix up the Cubase engine processing settings with the project settings; they are not the same thing. For your project settings I would use either 24 bit or 32-bit FP, depending on how careful you are with gain staging (but as you said yourself, disk space is no issue today).
As for the internal processing: while there certainly are plugins that don’t support 64-bit, I ran a test myself on a nearly finished project. I measured the CPU usage at both 32-bit and 64-bit and didn’t really spot any noticeable difference. So I just set Cubase to run at 64-bit, because why not?
I understand the difference between the Bit Depth of the Project Setup, the audio format and the internal Processing Precision.
Like you say, if it’s hard to notice a difference in CPU usage, why not go 64-bit? My computer is powerful enough to handle the insignificant extra load.
Ah - it’s complicated and I can’t say I fully understand it. 32-bit integer has an SNR of about 193 dB, but anything recorded in 32-bit float actually can’t have an SNR greater than about 144 dB, because only the 24-bit mantissa is used to store the signal. The other 8 bits provide a scaling factor (the exponent). So, although the dynamic range is enormous, the SNR isn’t anywhere near as good.
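You can actually reproduce both numbers being talked about in this thread: the mantissa sets the SNR, while the exponent sets the overall dynamic range. This is plain IEEE 754 single-precision arithmetic, nothing DAW-specific:

```python
import math

# IEEE 754 single precision: 23 stored mantissa bits + 1 implicit bit.
snr_db = 20 * math.log10(2 ** 24)
# Normal numbers span roughly 2**-126 .. 2**127 (denormals extend this further).
range_db = 20 * math.log10(2.0 ** 127 / 2.0 ** -126)

print(round(snr_db, 1))    # 144.5
print(round(range_db, 1))  # roughly 1523
```

So the "more than 1500 dB" figure quoted earlier is the dynamic range set by the exponent, while the noise floor relative to the signal is set by the 24-bit mantissa.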
So that means the signal is still sent to the interface at 32/64-bit float, and the interface converts it in real time to the selected bit depth? I always thought the DAW would output the audio at the working bit depth directly, but this information isn’t documented anywhere.
Your interface’s converters work at the bit depth you select in the driver software; by default that is 24 bit. Your interface’s converters don’t do float. If your project bit depth is different from what you select in the driver software, Cubase (or the driver, I’m not sure which) converts it before it hits the converters.
But I’m talking about the internal precision, 32- or 64-bit float, not the project bit depth.
You said that the first thing Cubase does when playing audio is to convert it to the selected 32- or 64-bit float, and that it then stays that way through the whole signal chain.
I’m asking whether the signal remains at 32- or 64-bit float all the way to the interface, or whether it is converted back to the selected project bit depth before going to the interface.
I’m asking because I made that post and want additional details, since another user told me it worked like this.
It is converted to the bit depth you set in the driver software, and I’m pretty sure it’s the driver that converts it. If your driver software doesn’t allow you to change it, assume that your interface always works at 24 bit and that whatever you throw at it is converted to 24 bit.
The internal precision is converted to the project bit depth at the output.
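Assuming that’s right, the output-stage step is basically a scale-and-clamp from float to fixed point. A toy sketch in Python for the 24-bit case (the function name is invented for illustration, and a real export would add dither before this step):

```python
def float_to_int24(x: float) -> int:
    # Scale the -1.0..1.0 float range to 24-bit integers and hard-clamp
    # anything over full scale; 8388607 is 2**23 - 1.
    scaled = round(x * 8388607)
    return max(-8388608, min(8388607, scaled))

print(float_to_int24(1.7))  # 8388607 (over full scale, clamped)
```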
There’s a big advantage to using 32-bit float rather than 32-bit integer: the signal can go beyond 0 dBFS with no risk of clipping. That information is stored in the 32-bit float file, which means you can always recover the audio beyond 0 dBFS simply by lowering the gain. The audio never distorts.
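A toy Python illustration of that headroom, assuming sample values above 1.0 stand for levels over 0 dBFS:

```python
def clamp_fixed(x: float) -> float:
    # A fixed-point path hard-clips at full scale; the overs are lost for good.
    return max(-1.0, min(1.0, x))

hot = [0.5, 1.8, -2.2]                   # peaks well above 0 dBFS

clipped = [clamp_fixed(s) for s in hot]  # [0.5, 1.0, -1.0]: distorted
recovered = [s * 0.4 for s in hot]       # float path: just lower the gain

print(clipped)
print(recovered)
```

In the float path the original waveform shape survives; scaling it back under full scale recovers it intact, while the clipped version has permanently flattened peaks.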
I don’t know if the 32-bit float SNR really is 144 dB. Another website says it goes much lower than this.
I can’t find a definitive answer.
I have found this website