Since no one understood my initial post, it became a discussion about imperceptible distortion

Renaming this thread to

“Since no one understood my initial post, it became a discussion about imperceptible distortion”

Judging from the implied expectation I’d say your view of humanity is charmingly innocent…




(and yes, I ‘feel your pain’ btw)

I think what he means is that you’re not only adding the distortion but also the part of the signal that you actually care about. Think about it like this:

98% of your signal is good. 2% is distortion (ignore the actual numbers for now, they’re just to make a point). Now let’s say you add 9 more signals with exactly the same distortion ratio. You would get:

98 + (9 × 98) = 980 good signal

2 + (9 × 2) = 20 distortion

What’s the ratio between 980 and 20? Exactly the same as between 98 and 2: 2%. So if you add a bunch of signals that all have 0.0002% distortion, then because you aren’t just adding the distortion but also the other 99.9998%, the ratio remains the same.
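A quick sketch of that arithmetic in plain Python, using the same made-up numbers (this is just the proportion argument, not a measurement of anything):

```python
# Toy illustration of the ratio argument above: every "track" is 98 units of
# wanted signal plus 2 units of distortion (numbers are arbitrary).
good_per_track = 98.0
dist_per_track = 2.0

for n_tracks in (1, 10, 100):
    good = n_tracks * good_per_track      # 98, 980, 9800 ...
    dist = n_tracks * dist_per_track      # 2, 20, 200 ...
    ratio = dist / (good + dist)          # fraction of the total that is distortion
    print(f"{n_tracks:4d} tracks: good={good:8.1f}  dist={dist:6.1f}  distortion={ratio:.2%}")
# Prints 2.00% every time: stacking identical tracks scales both parts equally.
```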

The importance of that argument can be seen if you look at what Larioso wrote:

The word “pure” sort of implies that that’s the only thing that happens, but what Jarno is trying to point out (I think) is that it isn’t “pure” or the only thing that happens at all. Why would you boost the signal in the first place? The reason is the other 99.9998% of the signal.

So if you have, for example, a guitar track with that level of distortion, and your guitar is peaking at -20 dBFS with an average of -30 dBFS (just as an example), adding another guitar track with exactly the same characteristics will indeed boost the distortion in this hypothetical example, but it will also boost the signal we care about, the one at -30 dBFS. And if you keep doing this, 10 times or 100 times, your average level will become so loud that you have to turn it all down. And because the distortion is a fixed fraction of the signal we care about, that distance won’t change relative to total loudness (once everything is summed). So you’re more or less back to where you started.

The important point to always remember is that this distortion, which lives far below the “instrument”, stays below the “instrument”. Raising one raises the other. They both live “in the same signal”. And the limit here is how loud we can raise the level of the “instrument”. We won’t ever make it so loud that the distortion matters. That’s the point.
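Here is a minimal sketch of that in dB terms, assuming the levels from the example above (signal averaging -30 dBFS with the distortion near -120 dBFS) and worst-case, perfectly correlated summing:

```python
import math

signal_dbfs = -30.0    # average level of the "instrument" (example from above)
dist_dbfs = -120.0     # level of the distortion riding on that same signal

for n in (1, 2, 10, 100):
    gain_db = 20 * math.log10(n)          # coherent summing of n identical copies
    s = signal_dbfs + gain_db
    d = dist_dbfs + gain_db
    print(f"{n:4d} copies: signal {s:7.1f} dBFS, distortion {d:7.1f} dBFS, gap {s - d:.0f} dB")
# The gap is 90 dB every time; turning the sum back down moves both together.
```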

PS: The addition of signals and how much gain you end up with at a given frequency is highly related to whether or not the signals are the same. And when it comes to distortion, one has to consider whether or not the distortion is correlated with the input signal.
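For illustration, a small numpy sketch (my own toy example, not anything measured from Cubase) of that last distinction: perfectly correlated content gains about 6 dB per doubling, uncorrelated content only about 3 dB:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def rms_db(x):
    """RMS level in dB (relative to an RMS of 1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

a = rng.standard_normal(n)   # one noise-like signal
b = rng.standard_normal(n)   # an uncorrelated one at the same level

print(f"one signal:           {rms_db(a):5.1f} dB")
print(f"a + a (correlated):   {rms_db(a + a):5.1f} dB")   # about +6 dB
print(f"a + b (uncorrelated): {rms_db(a + b):5.1f} dB")   # about +3 dB
# Distortion that is locked (correlated) to the input stacks like the first
# case, +6 dB per doubling; uncorrelated junk only stacks like the second.
```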

And therefore we should continue to add distortion in other places too - even when not needed?

My point is that we should eliminate what can be eliminated!!!

And also - since I use many hardware-emulating plugins, I am fond of the distortion from plugins being even harmonics - adding nice fluff to it all. :wink:
So we want distortion that we control and that is mostly pleasant sounding.

And what the SRC does with a pure sine sweep can be seen in my charts from the RMAA test - that is not something whose content we control.
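For anyone who wants to reproduce that kind of test outside RMAA or Cubase, here is a rough sketch using scipy’s polyphase resampler as a stand-in converter (the sweep range, length and the 48 kHz → 44.1 kHz conversion are my own assumptions); the idea is just to feed a pure sweep through the conversion and inspect whatever shows up in the spectrum besides the sweep:

```python
# A rough sketch of the kind of sweep test described above, using scipy's
# resample_poly as a stand-in converter (not RMAA, not Cubase's SRC).
import numpy as np
from scipy.signal import resample_poly, chirp

fs_in, fs_out = 48_000, 44_100
t = np.arange(0, 10.0, 1 / fs_in)
sweep = chirp(t, f0=20, f1=20_000, t1=10.0, method="logarithmic")

converted = resample_poly(sweep, up=147, down=160)   # 48k * 147/160 = 44.1k

# Anything in the output spectrum that is not the sweep itself (aliases,
# images, filter ripple) is conversion artifact; plot this to compare
# converters, much like the infinitewave charts do.
spectrum = np.abs(np.fft.rfft(converted * np.hanning(len(converted))))
```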

Great writeup, thanks MattiasNYC! It’s brought me to the point where your PS, which I’ve bolded for reference, seems to me to be key. If by chance the distortion “piles up” at a certain frequency, then it will be more audible than otherwise. Since it seems that is certainly possible, I understand the concerns expressed that under certain circumstances Cubase’s lower-quality SRC can indeed affect what we hear.

Is that point of view unsupportable in some way?

What DAC do you use?

I actually do think it is unsupportable. But I’m also taking the word “audible” literally.

If you’re listening to music in a decent room, your ambient noise may be anywhere from 30-50dBSPL (I’d have to double check that). You take whatever signal you’re listening to, whatever music there is, and you make it loud enough for you to enjoy. Since we start experiencing pain and discomfort around, what, 115dBSPL?.. you won’t have your 0dBFS above that. And then you look at where this distortion is relative to that 115dBSPL.

So far it’s been shown to live at around -120 dBFS or so. That’s below maximum level, meaning you can just go down 120 dB from your maximum 115, and that’s where the distortion is. Then look at how much room noise you have on top of that. So even if we ignore the music you actually want to listen to, just the room noise will cover the distortion.
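To make that arithmetic explicit, here is a worked version using the numbers from the posts above (0 dBFS played back at 115 dB SPL, distortion around -120 dBFS, room noise 30-50 dB SPL):

```python
# Worked version of the level argument above (numbers taken from the posts).
playback_peak_spl = 115.0      # 0 dBFS played back at the pain threshold
distortion_dbfs = -120.0       # measured level of the SRC distortion
room_noise_spl = (30.0, 50.0)  # typical ambient noise in a decent room

distortion_spl = playback_peak_spl + distortion_dbfs   # = -5 dB SPL
print(f"distortion plays back at about {distortion_spl:.0f} dB SPL")
print(f"room noise alone sits at {room_noise_spl[0]:.0f}-{room_noise_spl[1]:.0f} dB SPL")
# Even with nothing else playing, the room noise is roughly 35-55 dB louder
# than the distortion, before any masking by the actual music.
```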

The question really should be by just how much this distortion has the potential of building up. Just because you have distortion X at time Y doesn’t mean it’ll sum with other distortion at the same frequency, because in order to do so it’ll need to be in phase. And how often does it happen? If you’re playing a guitar part, how many times will you play a specific note that generates the exact frequencies at the exact same phase as something else so that the distortion adds instead of subtracts? So in a ‘perfectly bad’ scenario it’ll always add up, but how many tracks at how much gain does it take for that to go beyond room noise (and other system noise) and then not be covered/masked by the actual signal/instrument we care about (the guitar or whatever)?
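As a rough sketch of that ‘perfectly bad’ scenario, again using the example levels from earlier in the thread (music averaging -30 dBFS, distortion at -120 dBFS, 0 dBFS played back at 115 dB SPL, a 30 dB SPL room):

```python
import math

# Per-track levels from the example above: music averaging -30 dBFS with the
# distortion about 90 dB below it, played back with 0 dBFS = 115 dB SPL.
music_dbfs, dist_dbfs = -30.0, -120.0
ref_spl, room_noise_spl = 115.0, 30.0

for n in (1, 10, 56, 100):
    gain = 20 * math.log10(n)              # worst case: everything perfectly in phase
    music_spl = ref_spl + music_dbfs + gain
    dist_spl = ref_spl + dist_dbfs + gain
    print(f"{n:4d} tracks: music ≈ {music_spl:5.1f} dB SPL, distortion ≈ {dist_spl:5.1f} dB SPL")
# By the time the summed distortion just reaches the 30 dB SPL room noise
# (~56 perfectly in-phase tracks), the music itself is already at ~120 dB SPL,
# so in practice you turn everything down and the distortion sinks back below
# the room noise again.
```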

I just don’t see how this is anywhere near audible in a real world scenario.

And as another aside:

I think there are two more points to make considering what’s been said earlier:

  1. We actually don’t design SRC, or anything else, to be as good as possible; we make it as good as we can given other considerations. There are always compromises to be made. If it were possible to make a perfect SRC without any “penalty”, everyone would be doing it. Everyone. Same thing with AD/DA converters. If we could make them perfect without much effort, they’d all be perfect by now. I think the truth is that we get to a point in a system where one component or process is so good that making it theoretically better won’t help us practically, and I think this is one such case.

  2. That also means this doesn’t generalize to a bunch of other “details”: just because tweaking one “detail” that adds up is beneficial in practice doesn’t mean that tweaking any other detail will yield the same benefit. It just doesn’t follow.

Again, guessing it’s me - you have not found the quote button yet. :wink:

I followed this project:

Bought the same DAC for $100 on eBay and bought about $80 worth of caps - OS-CONs, and tantalum on top of those, which made the final touch just lovely.
So it’s a combination of his results. This guy builds DACs for sale starting from $1500.

I also loopback-tested an RME card against my own in RMAA - and my mods added a distortion figure only in the third or fourth decimal compared to the RME.
I also found, googling around, that some folks have modified the RME for even better results. Both follow RMAA so closely that they are hard to distinguish.

My build here (not in English):
http://www.faktiskt.se/phpBB3/viewtopic.php?f=3&t=35665&p=1760579#p1760579

Do me a favour and post an analog loopback plot with the same scope as your measurement. I’m curious about your noise floor and how it distributes across the frequency spectrum. Let’s continue the discussion after that.

Your assumption is that the particular frequency also has good audio that in turn can mask the artifacts.
This we cannot know - the artifact may be at an arbitrary frequency not otherwise present in that file - hence my wording “pure”, talking about the artifacts alone.

And what actually happens as we process this audio further, we know even less about.

And we don’t know the phase of every artifact, so we can’t know whether it actually adds to or cancels something else - other artifacts or the good audio signal.

But it is just good practice to eliminate any added artifacts - and it is such a simple thing to do, since everybody else did it already.
I assume the SRC is done as a separate call when doing an import and discovering that the sample rate does not match the project.

I downloaded the code for Secret Rabbit Code - which is public domain for your own use.
But I had issues hunting down every other library needed - and what I finally got crashed when I started it (not sure what the problem is).
I’m sure it can be licensed.

They also show the development from earlier versions on http://src.infinitewave.ca/, and already in the first one, v0.1.8, you can see no artifacts whatsoever in the chart.
On the site is v0.1.9; everything is below -140 dB.

I thought I would buy a Behringer rack module for SRC among other things, but have not been able to see if that is on par with r8brain or other software.

Interesting points that I need to ponder more before understanding completely, thank you. I must admit my gut instinct on this is heavily influenced by the strong criticisms of Cubase’s SRC from the chief technical editor at SOS. My thinking up to now has been that when we are entering the realm of “… it probably won’t add up to be audible”, then it really is incumbent on the DAW maker to include a much better SRC if it isn’t too much trouble (which it clearly isn’t, since all or almost all the other DAW makers have done just that, and indeed Wavelab has a much better SRC than Cubase does). Maybe I’ll feel differently after considering your post more!




As above, most SRCs are much better than Cubase’s, including Wavelab’s, so what seems odd at best (and possibly disturbing in the worst case scenarios) is looking everywhere else and seeing better ones than Cubase’s. I don’t believe anyone is bemoaning the lack of the hypothetically perfect SRC you have introduced into the discussion!

No, that’s not what I’m assuming. It’s not about one signal having both distortion and non-distorted content at a given frequency; it’s that, regardless of that, you’re going to have WAY louder components in that signal at other frequencies. Again, just look at the normal noise floor of a room and figure out just how loud a signal has to be in order for you to hear this distortion. It gets real loud real quick.

Yet you seem to assume that it will add up and cause audible distortion?

And at what level do you suppose dither lives?
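For reference, and assuming we’re talking about dither at the final word length, the quantization noise floor that dither linearizes sits at roughly 6.02 × bits + 1.76 dB below a full-scale sine:

```python
# Rough answer to the rhetorical question above: the quantization noise floor
# that dither is meant to linearize sits at about 6.02*bits + 1.76 dB below
# full scale (for a full-scale sine), i.e. in the same general region as the
# SRC distortion being discussed.
for bits in (16, 24):
    floor_dbfs = -(6.02 * bits + 1.76)
    print(f"{bits}-bit: quantization noise floor ≈ {floor_dbfs:.0f} dBFS")
# 16-bit: ≈ -98 dBFS, 24-bit: ≈ -146 dBFS
```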

It’s really up to whoever is complaining about distortion this low to show just how it can overcome all other content when added, to the point of being audible.

This.

https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)

If SRC can be done better than it is done at the moment, then it should be improved. There is absolutely no reason to accept it as it is just because it may not be audible in most cases. Cubase promotion says it is a professional product. Best Cubase ever. Professionals don’t accept tools that have been proven to give worse results than others. This result is absolutely no advertisement for Cubase’s/Nuendo’s audio engine. Who knows what else might be implemented worse than it could be, soundwise?

Why stop at 64-bit internal floating point calculations when we can have 256 bit accuracy and beat out all the other DAWs? Gotta think ahead more, you guys.

Nothing is “too much trouble” when someone else has already provided something better, except, as I said, when you factor in everything else. Wavelab doesn’t have ADR functionality or about a million other things that Cubase and Nuendo have. The question isn’t whether or not it’s “too much trouble”, it’s how much money it is worth, because messing around with that will take development time away from other things.

Ask yourself this:

  • How many users convert multiple audio files?
  • How many of the previous group convert enough of them for it to significantly add up?
  • How many of the previous group can hear a difference?
  • How many of the previous group would give up what functionality for sample rate conversion?
  • And then how many consumers notice in the end?

The cost/benefit analysis is less than clear I think.

The concept of a perfect conversion wasn’t aimed at you specifically; it was implied by Larioso.

Bring back the “like-button”!

Why? It has been proven that a 64-bit fp engine gives no measurable advantage in audio quality, just even more unused headroom than 32-bit. Regarding SRC, it indeed IS measurable that the signal gets worse when converting it.
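To put rough numbers behind the first claim (this is just the arithmetic of the two float formats, not a measurement of any particular DAW engine):

```python
import numpy as np

# Rough numbers behind the 32 vs 64 bit float comparison: the mantissa sets
# the rounding error relative to the signal itself, regardless of how loud or
# quiet the signal is.
for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    rel_error_db = -20 * np.log10(info.eps)   # rounding error below the signal, in dB
    print(f"{dtype.__name__}: {info.nmant + 1}-bit mantissa, "
          f"rounding error ≈ {rel_error_db:.0f} dB below the signal")
# float32: ≈ 138 dB below the signal, float64: ≈ 313 dB below it; either way it
# is far past audibility, which is why widening the engine buys nothing you can
# measure at the output.
```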

I thought the reasons given or implied were pretty convincing actually.

You’re just dead wrong about that. Professionals do exactly that all the time. Why? Because to a professional the bottom line is getting the job done within specific parameters.

I think you just proved my point actually.

I understand that this is a peace-of-mind thing. It’s nice to not have a worry on your mind even if it’s insignificant. But would the development time spent on this not be better used on actual noticeable improvements? In the end, my mind is more at peace if something like this doesn’t get an elevated priority.


I added you to my friends list because of your earlier reply, if that counts.