Insert an EQ plugin (e.g. Frequency). Set the first four bands to +12 to +16 dB of gain. Don't use LIN mode.
Insert SuperVision and select the Spectrum Curve module. You'll see a lot of noise below 1 kHz at around -80 dB.
If you move SuperVision to a post-fader insert slot, you can also watch the channel EQ doing the same thing (remove the EQ plugin first). And if you look at the graphic EQ plugins, every band in use raises the noise floor.
Pro-Q doesn't produce any such noise; neither does my old Sonalksis SV-517, nor the Voxengo CurveEQ included with Cubase. But every Steinberg EQ adds unwanted noise to the audio path. Why? Please stop that!
Some may say it's below -80 dB, so no one will hear it. But:
- it sums up if you use many EQ plugins and channel EQs in a project, and
- it can increase considerably if you add a compressor plugin after an EQ.
Happens in 12.0.70 at 44.1 kHz and 48 kHz sample rates.
Happens if the processing precision in Studio Setup is set to 32 bit.
Does not happen if the processing precision in Studio Setup is set to 64 bit.
So I assume there's something seriously broken in Steinberg's EQ plugins when used in 32-bit precision mode. No other EQ plugin produces anything like this; even the 20-year-old SV-517 doesn't, and neither does the free ReaEQ.
For the time being, using 32-bit processing precision does not seem like good advice when working in Cubase.
1 - I'm not so sure it's that easy, having looked at it real quick. I did what you did for one track and sent it out to a group, but with the meter (Insight) on the group. Then, as I duplicated the source tracks, the absolute level in the group increased of course, but the noise level didn't go up that much: about a dB per added track. So, considering that you would likely turn things down as the level of the signal you want goes up, I'm not sure it's a big problem.
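For what it's worth, that roughly-one-dB-per-added-track observation matches what you'd expect from uncorrelated noise: powers add linearly, so N equal noise sources sum to 10·log10(N) dB above a single one. A quick sketch in plain Python (the -80 dB per-track floor is just the figure from this thread, not a measured constant):

```python
import math

def summed_noise_db(n_tracks: int, per_track_db: float) -> float:
    """Level of n uncorrelated noise sources, each at per_track_db.
    Powers (not amplitudes) add for uncorrelated signals."""
    power = n_tracks * 10 ** (per_track_db / 10)
    return 10 * math.log10(power)

floor = -80.0  # assumed per-track noise floor from the EQ
for n in range(1, 9):
    # doubling the track count raises the summed floor by ~3 dB,
    # i.e. roughly 1 dB per track once you have a handful of them
    print(n, round(summed_noise_db(n, floor), 1))
```

So the +3 dB per doubling flattens out to about +1 dB per added track by the time you have four or five of them, which is consistent with what the meter showed.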
2 - Doesn’t seem like it.
My guess, since it supposedly changes with processing bit depth (didn't check), is that some native plugins "dither" (note the quotes) at some points. This would explain why the behavior isn't entirely consistent, meaning the noise level doesn't change the way we might expect it to.
My guess is that the 32-bit mode adds a small amount of noise to keep the IIR filters in the EQs from going denormal. That's a technical term for what happens when "too quiet" values throw the CPU off its fast path, which suddenly tanks CPU performance and causes processing glitches. A little bit of noise prevents this, because the signal is never too quiet.
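To make that concrete (a toy sketch in plain Python, not anyone's plugin code): a one-pole filter ringing out after the input goes silent decays geometrically, and its state eventually falls into the subnormal ("denormal") range, where many CPUs leave the fast path. Injecting a tiny noise floor keeps the state at a normal magnitude:

```python
import random
import sys

random.seed(1)  # reproducible run
NORMAL_MIN = sys.float_info.min  # smallest normal double, ~2.2e-308

def ring_out(n_samples: int, noise_amp: float = 0.0) -> float:
    """State of a toy one-pole filter after n_samples of silence,
    optionally with tiny injected noise to keep the state normal."""
    y = 1.0   # state left over from earlier, louder input
    a = 0.5   # pole: each silent sample halves the state
    for _ in range(n_samples):
        noise = noise_amp * (2.0 * random.random() - 1.0)
        y = a * y + noise
    return y

silent = ring_out(1040)                   # decays to 2**-1040: subnormal
guarded = ring_out(1040, noise_amp=1e-6)  # hovers around the noise floor
```

Python itself handles subnormals transparently, so this only shows the value ranges involved; the performance cliff is a hardware-level effect in real-time DSP code.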
However, a developer would typically add this at about -120 dB in a 32-bit floating-point system. Seeing it at -80 dB is surprising, even with +18 dB of gain in the EQ.
The fact that it goes away in 64-bit mode further supports the theory, because in a 64-bit pipeline you'd typically add the noise at around -240 dB, and no matter how much you crank the gain, you won't get it up to audible levels from there.
The reason you don't see it in those other plugins is likely that they always use 64-bit processing, even when the pipeline is in 32-bit mode. It used to be that 32-bit processing was a lot faster, but these days there's much less of a difference, so third-party plugins (and some native hosts) don't bother coding both cases and simply always process with 64-bit data internally, converting to/from 32-bit on the way in/out if needed. The drawback is a little extra CPU usage for the conversion, compared to a fully 32-bit pipeline.
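A sketch of that pattern in plain Python, using struct to emulate the float32 bus (the one-pole lowpass is made up for illustration, not any plugin's actual code): the host hands the plugin 32-bit samples, the plugin widens them to 64-bit, keeps all filter state in double precision, and only narrows on the way out.

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python double to the nearest float32 (emulating the 32-bit bus)."""
    return struct.unpack('f', struct.pack('f', x))[0]

class OnePoleLP:
    """Toy one-pole lowpass: float32 in/out, 64-bit internal state."""
    def __init__(self, a: float = 0.99):
        self.a = a
        self.state = 0.0  # kept in double precision at all times

    def process(self, sample_f32: float) -> float:
        x = float(sample_f32)  # widening float32 -> float64 is lossless
        self.state = self.a * self.state + (1.0 - self.a) * x
        return to_f32(self.state)  # narrow only on the way out

lp = OnePoleLP()
out = [lp.process(to_f32(1.0)) for _ in range(100)]  # step input
```

Because the recursive state never passes through float32, neither denormal noise injection nor 32-bit rounding accumulates in the feedback path; the only precision loss is the single rounding at the output.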
So I'm not particularly surprised, other than that you see it at -80; I'd expect it at about -100 in the situation you describe. Maybe it's actually at -100 dBFS, and the Cubase "clip margin" accounts for something like those 18 dB of difference?
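That clip-margin guess is easy to sanity-check, since gains expressed in dB simply add along a signal path (the -100 and -120 dBFS figures below are this thread's guesses, not confirmed Steinberg numbers):

```python
def db_sum(*levels_db: float) -> float:
    """Gains expressed in dB add along a signal path."""
    return float(sum(levels_db))

# hypothetical reading: noise actually sits at -100 dBFS, and an 18 dB
# "clip margin" in the metering reference shifts it up on the display
assert db_sum(-100.0, 18.0) == -82.0   # close to the observed -80 dB
# typical 32-bit anti-denormal level plus the EQ's +18 dB of boost
assert db_sum(-120.0, 18.0) == -102.0  # would still sit well below -80
```

So the -100 dBFS plus 18 dB reading lands near the observed figure, while a textbook -120 dBFS injection would not, which is why the clip-margin explanation seems more plausible here.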
If you don’t like it, just run everything in 64-bit mode. It’s the modern and safe thing to do.
Linear-phase EQs use a different algorithm and are very useful in certain situations, but not as a bread-and-butter EQ. Obviously, the LIN algorithm in Frequency is not affected by this bug; every other EQ in Cubase 12 is.
I'm not sure this can be considered a denormalization strategy. Sascha wrote about this in quite some detail in the description of his Normalizer plugin, and adding broadband noise was not one of his solutions. But even if a developer decided to go this way, why add noise that loud?
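Right, broadband noise isn't the only (or quietest) way to keep an IIR out of the denormal range. A common alternative is adding a tiny constant DC offset to the state, which prevents underflow without raising the broadband noise floor at all. A toy sketch under the same assumptions as before, not anyone's shipping code:

```python
import sys

NORMAL_MIN = sys.float_info.min  # smallest normal double

def ring_out(n_samples: int, dc: float = 0.0) -> float:
    """Toy one-pole filter decaying on silence, with an optional
    anti-denormal DC offset added to the state each sample."""
    y, a = 1.0, 0.5
    for _ in range(n_samples):
        y = a * y + dc
    return y

plain = ring_out(1040)              # underflows into the subnormal range
guarded = ring_out(1040, dc=1e-20)  # settles near dc / (1 - a) = 2e-20
```

The offset only parks the state at an inaudibly small constant instead of spraying noise across the spectrum, which is why a -80 dB broadband floor looks so out of place as a denormal fix.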
From a Cubase user's POV, it's nonsense to first use the channel strip's HP filter to get rid of unwanted parts of the signal (for example below 80 Hz), only to have the channel EQ, used in a completely different frequency range, add noise right where you removed it a few moments ago.
In the meantime I have tried some more third-party EQ plugins. Not a single one of them shows this behaviour. TBH, I consider this a serious bug in Cubase's plugin set. Using 64-bit processing is a workaround, but it's not the default setting in Cubase, and I have yet to find a Steinberg document that warns about noise below 100 Hz when using EQs at the default processing precision. This can't be how it's meant to work.
The 24 bits of mantissa in a floating-point number put the quantization error at 0 dBFS at roughly -144 dB (about 6 dB per bit). I could see someone wanting a dozen or two decibels of dither above that; Cubase then adds 18 dB of "real max" clipping headroom, you add another twenty dB of gain on top, and, lo! you end up in the ballpark of what you're seeing. Not great, but not "unreasonable." (You could avoid the denormal problem with much lower dither/noise, because it doesn't need to work at the "full scale" level, but if these functions date back to very old versions, they will likely keep doing what they've always done, and they might have done this initially.)
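That back-of-the-envelope figure can be sanity-checked empirically: round-trip a full-scale sine through float32 and measure the error floor. A quick sketch in plain Python, no audio library involved, more an order-of-magnitude check than a rigorous measurement:

```python
import math
import struct

def to_f32(x: float) -> float:
    """Round a double to the nearest float32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

N = 4096
sine = [math.sin(2.0 * math.pi * k / N) for k in range(N)]  # 0 dBFS peak
err = [to_f32(s) - s for s in sine]
rms = math.sqrt(sum(e * e for e in err) / N)
err_db = 20.0 * math.log10(rms)  # quantization floor, dB re full scale
# a 24-bit significand gives roughly 6 dB per bit, so expect the error
# floor somewhere down in the -140s to -150s dBFS
```

Either way, the raw float32 quantization floor sits far below the -80 dB being observed, so quantization alone can't explain it; it has to be deliberately injected noise plus gain and metering offsets.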
It's more than a workaround: it's a real solution and, in general, the best way to ensure you get the highest sound quality.
Same as you'd probably want to use a 24-bit audio interface rather than a 16-bit one, if you care about sound quality and use gain staging.