Try testing with a known test signal, such as white noise, as @Johnny_Moneto did. This will give a clear picture of what the Frequency Cut 96 filter is doing to the signal.
Instead of using Audio / Spectrum Analyzer to monitor the result, insert the Supervision plugin in the track's inserts directly after the Frequency plugin and select a Spectral Domain / Spectrum Curve display. This lets you see the effect of any changes you make in the Frequency plugin in real time.
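The same sanity check can also be sketched outside the DAW in a few lines of Python (numpy/scipy assumed; the sample rate and FFT size are arbitrary example values). The point of using white noise as the test signal is that its averaged spectrum should be essentially flat, which makes any filter action easy to read:

```python
import numpy as np
from scipy.signal import welch

# Generate a few seconds of white noise at 48 kHz (the test signal,
# analogous to a DAW test generator set to white noise).
fs = 48000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 5)

# Welch-averaged power spectral density: for white noise, the
# estimate should be flat across the whole band.
f, pxx = welch(x, fs=fs, nperseg=4096)
pxx_db = 10 * np.log10(pxx)

# Flatness check: spread between the loudest and quietest bin is
# small (the DC bin is skipped, since it only reflects the mean).
spread = pxx_db[1:].max() - pxx_db[1:].min()
print(f"spectral spread: {spread:.1f} dB")
```

With this many averaged segments the spread stays within a few dB, so any systematic dip or slope you then see after inserting a filter is the filter, not the analysis.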
“Cutting” is something of a misnomer, because a standard EQ isn’t really “eliminating” frequencies below the cutoff point - it’s just greatly reducing them, at a rate set by the slope. There will still be sound down there; it just won’t be in the way or matter at all, since it’s not building up.
Helpful tip: don’t use a low-frequency roll-off at all by default. Leave it alone unless you’re listening to the mix in context and something genuinely needs to be filtered out. And if it does, use a low shelf instead of a high-pass filter, because it doesn’t mess with the phase as much. Whenever you EQ something, the nature of EQ introduces a phase shift that you wouldn’t have had if you’d left it alone. Many people don’t realize this, and they start cutting everything because they read it on the internet or were told to by people who don’t understand phase relationships.
Then, if you were to use a linear-phase EQ instead, you would be trading the phase shift for “pre-ringing” artifacts, which can negatively affect your transients, especially if you’re using it on a kick drum.
It does work, you’re right. The question is, does it work better?
There’s a great video by Andrew Scheps where he talks about the philosophy and mindset of mixing, which is more of what I’m interested in. In it, he gives an anecdote about how he was working on this one song and he realized by listening to all the tracks together (before cutting out anything in solo mode or any of that) that…oh heck, I just went and found the video for you to check out, and I’ve queued it up for you:
Watch the video where I queued it, and then pay close attention at 7:00 until the end of the video and he says something profound that will make us all better engineers for the rest of our lives.
Many thanks for suggesting using white noise. I did the following.
Step 1 - generate white noise
Step 2 - use the 96 dB/octave low cut at 100 Hz. As can be seen from the attached screenshot, the result is not as neat as it should be. Admittedly, the dB curve starts dropping from 100 Hz downwards. However, assuming the 96 dB/octave cut is correctly applied, and starting from about +10 dB at 100 Hz, the curve should reach about -85 dB at 50 Hz (one octave lower). That is far from the case here. So where is the problem: in the EQ plugin, or in the Audio spectrum analyzer?
Step 3 - send the result of Step 2 through a 96 dB/octave low pass at 50 Hz. If the EQ works properly, the combined result of Steps 2 and 3 should leave only a tiny residual of the initial signal. From the screenshot below, this appears to be the case.
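For reference, Steps 1 and 2 can be cross-checked outside the DAW. This sketch assumes a Butterworth-style filter (6 dB/octave per order, so order 16 for a 96 dB/octave slope); Cubase's actual filter shape may differ, so the numbers are only indicative:

```python
import numpy as np
from scipy.signal import butter, sosfilt, welch

# Step 1: white noise.
fs = 48000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 10)

# Step 2: a 96 dB/octave low cut at 100 Hz. A Butterworth filter
# falls at 6 dB/octave per order, so order 16 gives 96 dB/octave
# (an assumption for illustration; the plugin's shape may differ).
sos = butter(16, 100, btype="highpass", fs=fs, output="sos")
y = sosfilt(sos, x)

# Measure the way an analyzer would: averaged PSD, with a long FFT
# for fine low-frequency resolution (bin width fs/32768 ~ 1.5 Hz).
f, px = welch(x, fs=fs, nperseg=32768)
f, py = welch(y, fs=fs, nperseg=32768)
gain_db = 10 * np.log10(py / px)

at_200 = gain_db[np.argmin(np.abs(f - 200))]  # passband: ~0 dB
at_50 = gain_db[np.argmin(np.abs(f - 50))]    # expect roughly -96 dB
print(f"gain at 200 Hz: {at_200:6.1f} dB")
print(f"gain at  50 Hz: {at_50:6.1f} dB")
```

With a sufficiently fine FFT, the measured curve lands close to the theoretical -96 dB one octave below the cutoff, which supports the conclusion that the EQ itself behaves correctly and any discrepancy lies in how the analyzer resolves low frequencies.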
So the EQ appears to do its job properly. By contrast, the Audio spectrum analyzer is rotten. I simply wonder what makes it so bad at analyzing low frequencies correctly. Any explanation from the developers?
What I’m doing here is software testing. That works best when one knows the expected result in advance and can clearly identify any failures. In that respect, the 96 dB/octave LP/HP filters are pretty helpful.
The reason I did it is that I could not understand why the output I was getting from the spectrum analyzer did not really agree with the filters I knew I had applied.
Yes indeed. I simply wonder why.
Usually the error grows with the analyzed frequency, because the FFT has fewer and fewer points to work with within a single signal period. The lower the frequency, the more sampling points per period, so the performance should be expected to improve.
Does Cubase use a particularly weak algorithm?
The audible frequency range is calculated with a given number of points that are evenly distributed over the range.
The distance between the points is a fixed frequency value, so there are fewer points per octave at lower frequencies.
“The distance between the points is a fixed frequency value”
Precisely. So assume the sampling frequency is, e.g., 10 kHz. A signal with a frequency of 1000 Hz will be sampled with 10 points per period, while a signal with a frequency of 100 Hz will be sampled with 100 points per period. So I would actually expect the reverse.
It is better to think in terms of “ranges” and “bins” than of “points”. Simplified, an FFT divides the spectrum evenly into bins of a width of n Hz. The octaves, though, are not spread out evenly across the spectrum: the range from 5000 Hz to 10000 Hz is one octave and, with a theoretical bin width of 100 Hz, contains 50 bins; the range from 100 to 200 Hz is also one octave, but contains only one bin, so there is less precision in that range.
See e.g. “FFT bin width clarification” on Stack Overflow.
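That bin counting can be verified directly; the sample rate and FFT size below are just example values chosen so the bin width comes out to exactly 100 Hz:

```python
import numpy as np

# FFT bin width is fixed across the spectrum: fs / N Hz per bin.
# Example values: fs = 48000 Hz, N = 480 points -> 100 Hz bins.
fs, n = 48000, 480
bin_width = fs / n
freqs = np.arange(n // 2 + 1) * bin_width

# Count the bins that fall inside two one-octave ranges.
high_octave = np.sum((freqs >= 5000) & (freqs < 10000))  # 50 bins
low_octave = np.sum((freqs >= 100) & (freqs < 200))      # 1 bin
print(f"bins in 5000-10000 Hz: {high_octave}")
print(f"bins in   100-200  Hz: {low_octave}")
```

Both ranges are one octave wide, yet one is resolved by 50 bins and the other by a single bin, which is exactly why a plain FFT analyzer looks coarse at the low end.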
It is all very much expected, imho. An FFT analyzer is never 100% precise, and there are always trade-offs between precision, speed, and CPU usage; there are different window functions for different purposes producing different results, and possibly some kind of smoothing or averaging for the on-screen display, etc. Use the Supervision analyzer or the free SPAN for more configurability.
What is more interesting, imho, is that the display of the simple Cubase analyzer shows positive dB values. I wonder what reference they use.
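I don't know what reference Cubase uses either, but one general reason analyzers disagree on absolute dB values for noise is normalization: a display can show power per Hz (a density) or power per bin, and per-bin readings move with the FFT size. A sketch, with example sample rate and FFT sizes:

```python
import numpy as np
from scipy.signal import welch

# The absolute dB reading for noise depends on the normalization:
# power per Hz vs power per bin, where bin width = fs / N.
fs = 48000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 5)

levels = []
for nperseg in (1024, 16384):
    f, pxx = welch(x, fs=fs, nperseg=nperseg)  # PSD: power per Hz
    per_bin = pxx * (fs / nperseg)             # approx. power per bin
    levels.append(10 * np.log10(np.median(per_bin)))
    print(f"N = {nperseg:5d}: {levels[-1]:6.1f} dB per bin")

# A 16x larger FFT spreads the same noise power over 16x more bins,
# so each bin reads about 12 dB lower for the identical signal.
print(f"difference: {levels[0] - levels[1]:.1f} dB")
```

So without knowing the reference level and the normalization, the absolute numbers on a simple analyzer are hard to interpret; only the shape is reliable.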