The SRC algorithms in Cubase (versions 4-8) are fairly poor, …
I’m pretty sure I don’t entirely understand the graphs presented in the first link, but it seems there is a lot of non-black (i.e., quantization noise) and aliasing in Cubase 8 compared with, e.g., Pro Tools and yes, even Reaper. Further, if I’m understanding the graphs correctly, WaveLab’s SRC rocks, but Nuendo’s seems identical to Cubase’s.
So are people who are saying that DAW A just “sounds” better than DAW B actually hearing the difference in SRCs? Have they had a valid point all along?
How much of an effect does this aliasing and quantization noise have on what we hear?
I don’t think this ‘issue’ would arise if your project is at 44.1 and you then master to 44.1 (CD format). In other words, you’re not changing the sample rate.
But if you are converting to another sample rate, well then… interesting.
But it’s not really something new; SRC quality has been discussed to death over at gearslutz for many, many years. Still, I wouldn’t mind Steinberg taking another look at an SRC that has, I believe, been unchanged since Cubase SX.
I use Voxengo r8brain myself, just to feel better, because I have yet to notice any sound difference.
I’ve been wondering about that part … is it certain there is no “under-the-hood” SRC going on as a part of routine operation, even without upsampling/downsampling?
But more to the point … I haven’t read anything that says Cubase’s aliasing effects are audible … in other words, though the “Sweep” graphs in the link at the top of this post show Cubase’s aliasing to be very apparent visually, how audible are those differences … what dBFS values would the alias bars be on a frequency/amplitude plot?
Maybe they’re not sonically significant, analogous to Cubase’s SRC’s very visible non-linear distortions in the same plot, but with all of them being well below the audible range (-120 dBFS).
Thoughts from any geeky engineering types? … I look up to you!
When looking at the Infinite Wave graphs, please spend a moment trying to understand them. The distortion figures of all the converters shown there are completely inaudible, except for some which may create just-audible distortion at very low signal levels. (Cubase is not one of them; it handles low signal levels well.)
To put things in perspective: Cubase’s SRC’s distortion levels are about 1/100th of CD audio quantization errors.
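A quick back-of-the-envelope check of that claim (the -96 dBFS figure for 16-bit quantization noise is the standard textbook value; the rest is simple arithmetic):

```python
import math

# 16-bit audio has 2**16 quantization steps; the quantization noise floor
# sits roughly at the amplitude of one step relative to full scale.
cd_noise_floor_db = 20 * math.log10(1 / 2**16)   # about -96.3 dBFS

# "About 1/100th" of that error amplitude is another -40 dB down.
src_distortion_db = cd_noise_floor_db + 20 * math.log10(1 / 100)

print(round(cd_noise_floor_db, 1))   # -96.3
print(round(src_distortion_db, 1))  # -136.3
```

That puts the claimed SRC distortion around -136 dBFS, far below anything playback hardware can even reproduce.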
When it comes to aliasing, even Cubase’s SRC’s filter is down to -60 dB at 24 kHz, which is about the point where it starts to matter (aliasing to under 20 kHz). You’d need a horrible amount of ultrasonic content in your audio to make its aliasing effect audible.
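To make the folding concrete, here is a tiny sketch (my own illustration, not anything from Cubase’s code) of where ultrasonic content lands after conversion to 44.1 kHz: anything between Nyquist and the sample rate reflects back as fs - f.

```python
def alias_freq(f_hz, fs_hz=44100):
    """First-image alias frequency for content between Nyquist (fs/2) and fs."""
    nyquist = fs_hz / 2
    assert nyquist < f_hz < fs_hz
    return fs_hz - f_hz

# Content at 24 kHz (where the filter is already -60 dB) folds to:
print(alias_freq(24000))   # 20100 -- just above the limit of hearing

# Content at 22.5 kHz folds well above audibility for adult ears:
print(alias_freq(22500))   # 21600
```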
For just that same reason, I use SoX on final masters. For “pre-release” stuff I use Cubase.
Yes, that makes sense. I wonder if any of Cubase’s functions use “under-the-hood” upsampling/downsampling, along the lines of what some of the UAD-2 plug-ins do?
This also raises the issue for all those other plug-ins that do the up/downsampling thing. UAD-2 of course changes the sound intentionally (to mimic hardware), so any artifacts are “built-in” and presumably desirable. But what about other plug-ins that up/downsample automatically yet are meant to be transparent … I guess we are at the mercy of those SRCs as well.
All only important if the SRC aliasing artifacts are audible, which some here have suggested they absolutely are not, while the well-respected Technical Editor at SOS, as well as the link at the top of the OP in this thread, suggest they can be.
Thank you, I have done that. I didn’t think my post gave the opposite impression, if it did, please understand that was an error on my part.
That is something I recognized, and which I meant to convey when I wrote: “Maybe they (referring to Cubase SRC’s aliasing) are not sonically significant, analogous to Cubase’s SRC’s very visible non-linear distortions in the same plot, but all of them being well below audible range (-120 dBFS).”
Yes, but Cubase’s SRC transition filter is shown to only reach -6 dB to -12 dB at 22-23 kHz. I could easily see that possibly causing audible alias bands. The following quote from the SOS article I linked to suggests that Cubase’s SRC filter is not that good; maybe that is what he is referring to:
The filters in most modern A-D converter chips are not quite as steep as those in decent software SRC’s, and most actually allow some aliasing because they are only about -6dB at the Nyquist frequency. In contrast, most decent SRC algorithms have much better filters that really do stick to the rules (Cubase doesn’t, but R8 does!).
Here are some plots showing SRC performance, for Cubase and another DAW, from the link at the top of this post:
Cubase’s transition filter.
Ableton Live’s transition filter.
On the Sweep plots below, the curves are said to represent aliasing of the ultrasonic signal (the kind you’d get with close-miked brass, high strings, cymbals, etc. recorded at high levels) back into the audible band. Also, in terms of the background color, “black is good”; non-black is said to represent quantization noise:
Cubase 4,5,6,7 and 8 Sweep plot.
Ableton Live’s sweep plot.
**To me, the question remains: how do we know if these graphically obvious differences from the ideal, and between different DAWs, are audible?** I’m guessing that when not recording sources with a lot of high-frequency content at high levels, it should be just fine.
Why not use Cubase on final masters, as you make a strong case that any artifacts of Cubase’s SRC are inaudible?
Thanks much for clarifying, I always get a lot out of reading your technical explanations -
[Edited for clarity on the transition filter and to include graphics, sorry I had trouble with the graphics].
It wasn’t a response to you, but a general notification. If it looked like an attack against you, I’m very sorry.
But it doesn’t. When converting to a 44.1 kHz sample rate, audio in the 22-23 kHz range aliases into the 21-22 kHz range. I cannot consider that audible (unless we start talking about intermodulation distortion generated by the analog audio chain). There is a very good reason why the original CD audio standard is 44.1 kHz instead of 40 kHz: insurance against a non-perfect anti-alias filter.
We know if we stop for a moment to think about basic physiology of human hearing.
EDIT: just to clarify:
It’s absolutely impossible to hear distortion more than 60 dB below the current signal level
We cannot hear anything above 20kHz (for me as an old relic this is more like 15kHz)
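For anyone who wants the numbers behind the first point: -60 dB is a factor-of-1000 amplitude ratio. This is standard dB arithmetic, nothing Cubase-specific:

```python
def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

print(db_to_amplitude_ratio(-60))   # 0.001 -- 1/1000 of the signal amplitude
print(db_to_amplitude_ratio(-120))  # 1e-06 -- the -120 dBFS floor mentioned earlier
```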
For exactly the same reason peakae uses r8brain: just to feel better, not having to worry about some extremely unlikely situation that I haven’t taken into consideration. Just like:
a. I record (and mix and master) at 88.2kHz even though 44.1kHz is fine (just in case there are bad behaving DSP algorithms in my signal chain)
b. I do dither when reducing bit depth even though none of the music I produce requires it (because it doesn’t cost anything)
c. I record 24bit audio even though at least in 99% of my recordings 16 bits will capture all the details (so I don’t have to worry about those 1% of cases)
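Point (b) above, sketched for the curious. This is a hypothetical helper of my own using TPDF (triangular) dither, which is the usual textbook choice; it is not a claim about how Cubase implements it:

```python
import random

def reduce_24_to_16_bit(sample, rng=random):
    """Requantize a signed 24-bit sample to 16 bits with TPDF dither.

    One 16-bit step equals 256 24-bit steps; the dither is triangular
    noise spanning +/- one 16-bit step, added before rounding so that
    quantization error becomes benign noise instead of distortion.
    """
    step = 1 << 8                                    # 256
    noise = (rng.random() + rng.random() - 1.0) * step
    out = round((sample + noise) / step)
    return max(-32768, min(32767, out))              # clamp to 16-bit range
```

As the poster says, the cost is essentially zero: one random add per sample.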
OK, that makes a lot of sense, thank you. The thing that comes to mind though is summation tones … for example, referencing the Cubase Transition filter plot specifically: with no filtering (a 0dBFS response) at 21 kHz, and only -6dBFS filtering at 22 kHz, how audible would the resulting 1kHz summation tone be for that high-frequency source recorded at high levels?
Maybe it is the summation tones that are represented in Cubase’s Sweep plot by the visually obvious aliasing curves far below the audible frequency threshold … and the sum of all those summation tones that forms the basis of Hugh Robjohns’ comment (in the SOS link above, where he seems to imply the SRC artifacts can be audible)? Or maybe the math is such that even summing all the summation tones does not result in anything above audible volume thresholds, even for the closest-miked, loudest brass/cymbals/high strings, etc. …
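For reference, if such a product were generated somewhere in the analog chain, second-order intermodulation lands at the sum and difference frequencies, so the arithmetic behind that hypothetical 1 kHz tone is simply:

```python
def second_order_imd(f1_hz, f2_hz):
    """Second-order intermodulation products of two tones:
    the difference tone |f2 - f1| and the sum tone f1 + f2."""
    return abs(f2_hz - f1_hz), f1_hz + f2_hz

diff_tone, sum_tone = second_order_imd(21000, 22000)
print(diff_tone)   # 1000  -- the 1 kHz tone in question (a difference tone)
print(sum_tone)    # 43000 -- far above audibility
```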
Thanks again for your thoughts and discussion, Jarno!
Sample rate conversion has nothing to do with a DAW’s “audio engine”. An audio engine is just a piece of software which delivers/receives audio streams to/from different processing algorithms (plugins), boosts/attenuates them, and sums audio streams. Basic math plus clever scheduling of how to do these things. The only differences between “audio engines” are this “clever scheduling” part and the kind of number rounding used when converting dBs to additions/multiplications/etc.
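To illustrate how little is going on, here is a toy “audio engine” in a few lines (my own sketch, obviously not Cubase’s code): convert fader dB to a linear gain, multiply, and sum.

```python
def db_to_gain(db):
    """Fader position in dB -> linear gain multiplier."""
    return 10 ** (db / 20)

def mix(tracks):
    """Sum a list of (samples, fader_db) tracks into one output stream.
    This multiply-and-add loop is essentially the whole 'audio engine'."""
    length = max(len(samples) for samples, _ in tracks)
    out = [0.0] * length
    for samples, fader_db in tracks:
        gain = db_to_gain(fader_db)
        for i, s in enumerate(samples):
            out[i] += s * gain
    return out

# Two tracks, one pulled down 20 dB (gain 0.1):
print(mix([([1.0, 1.0], -20.0), ([0.5, 0.25], 0.0)]))   # approx. [0.6, 0.35]
```

Everything else a DAW does (routing, plugin hosting, latency compensation) is scheduling around this loop.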
IMO, DAW manufacturers should stop using the term “audio engine” as an advertising slogan. There is no sonic difference between different “audio engines”, because the differences are (almost) always only in the least significant bit of the audio sample (if even there). The real difference between “audio engines” is performance: the way they have been implemented to do their job as fast as they can.
I know something about this stuff, eh (a little tiny bit)! That’s why I used the quotes, but your clarification is good.
Although, since Steinberg (or any manufacturer) never fully explains what is changed with a new “audio engine”, one never knows!
To be honest, the little tech in me gets anxious and worried about tests like this (I saw it for the first time about two years ago). But then I just heard a couple of crappy recordings/mixes made in Pro Tools or another “major” DAW, and I stopped worrying.
OK! You went down the intermodulation distortion path … because a “summation tone” of 22 & 21 kHz (1 kHz) is intermodulation distortion. But the thing is: intermodulation distortion doesn’t happen in the digital domain, so you can ignore it. It doesn’t happen. The only worry we should ever have about Cubase’s very shallow anti-alias filter (compared to others) is:
Cubase’s filter isn’t very effective between 22-24kHz
If we have extremely powerful audio content in this range (no natural instrument has it, very few microphones capture it anyway, and any well-programmed VSTi won’t produce it either), this content aliases into the 20-22 kHz range
In analog audio hardware (D/A converters, amplifiers, speakers), this aliased audio creates intermodulation distortion in the audible range (20 Hz-20 kHz)
How much intermodulation distortion is already present, produced by legitimate audio content (20 Hz-20 kHz)? Probably something like 1000 times more.
This is not an issue, unless your music is some kind of strange, extremely ultrasonic-frequency-rich stuff without natural low-to-mid frequency content.
YES! That’s exactly the problem. They promote their “new audio engines” and never tell us what that means. The basic DAW “audio engine” concept is so simple: you can do it “right” (it’s transparent) or do it “wrong” (it has sonic artifacts).
Of course there are sonic differences between engines: 24-bit fixed point (old … eh … not-so-old ProTools, Cubase VST) was bad, 32-bit fixed point (my mixing console) is good, 32-bit floating point (modern Cubase and ProTools) should be considered perfect for human hearing, and 64-bit floating point (Sonar?) is definitely overkill. But advertising a “new audio engine” every time you just change the internal structures of the software is plain stupid. Why not tell the truth: “audio engine with improved performance”? (Or any other real reason why the “new engine” is better.)
Maybe it’s because customers want to think the new version will give better sound quality. It’s sad. We all know about snake-oil products in the hi-fi scene and laugh at them, but still some of us believe that a DAW manufacturer would somehow manage to produce a sonically superior product compared to the older one.
Psychologically, reading that Cubase is poor at something (from the SOS article), especially regarding audio quality in general, makes me wonder about Cubase’s “sound” … (A few days ago, at a studio where I work, I met a famous mastering and mixing engineer in my country who argued with me that Pro Tools sounds better than Cubase when mixing and mastering, and gives faster and better results … well, an old debate, but I don’t see any reason for him to say it other than his real experience!! I’ve never tried Pro Tools, though.)
I do expect Steinberg to be as good as possible in audio quality with its programs (even if it’s only numbers).
Although we are talking here about -60 dBFS or less, what if I convert all my tracks from 96 kHz to 44.1 kHz for mixing? What about the buildup of all that aliasing in volume? Maybe then it’s apparent? Or maybe we can’t hear it, but somehow “feel” it, and it changes how we perceive the music sound-wise!
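On the buildup question: uncorrelated artifacts across N tracks rise by roughly 10·log10(N) dB, and fully correlated ones by 20·log10(N) dB. A quick sketch of that worst-case arithmetic (my own illustration, assuming equal-level artifacts on every track):

```python
import math

def buildup_db(num_tracks, correlated=False):
    """Level increase when summing equal-level artifacts from N tracks.
    Uncorrelated noise sums by power (10*log10), correlated by amplitude (20*log10)."""
    factor = 20 if correlated else 10
    return factor * math.log10(num_tracks)

# 100 tracks each carrying -60 dBFS aliasing residue:
print(-60 + buildup_db(100))                   # -40.0 dBFS (uncorrelated)
print(-60 + buildup_db(100, correlated=True))  # -20.0 dBFS (theoretical worst case)
```

So even a 100-track mix leaves uncorrelated -60 dBFS residue around -40 dBFS, and the artifacts from different tracks are very unlikely to be correlated.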