Cubase Internal Sound Quality Engine

Actually, we know that all DAWs null against each other on export. Still, some DAWs, especially Magix Sequoia and Magix Samplitude, are said to maintain phase along the signal path rather than shifting it through their algorithms, and therefore to play a very clear, clean mix during internal listening. But this applies only to internal listening: since all DAWs perform the same calculations during export, the rendered output of every DAW after export is the same. The question is purely about the internal situation.
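For what it's worth, the "all DAWs null" claim is easy to check in principle: invert one render, sum it with the other, and see whether the residual is digital silence. A minimal sketch in Python; the sample values and threshold here are made-up illustrations, not anything from an actual DAW:

```python
def null_test(render_a, render_b, threshold=1e-12):
    """Return True if two renders cancel to (near) digital silence
    when one is inverted and summed with the other."""
    if len(render_a) != len(render_b):
        return False
    return all(abs(a - b) <= threshold for a, b in zip(render_a, render_b))

source = [0.10, -0.30, 0.70, 0.25]
export_1 = [0.5 * s for s in source]   # "DAW A": one fader at x0.5 gain
export_2 = [0.5 * s for s in source]   # "DAW B": same math, same order
print(null_test(export_1, export_2))   # → True
```

Identical floating-point operations performed in the same order yield bit-identical samples, which is why exports that null tell you the engines did the same math.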

When I started programming VST plugins myself a few years ago, I learned how VST hosts and VST plugins work internally. According to Steinberg’s VST SDK (used by any VST-compatible software), audio data transfer from the host to the plugin (and vice versa) is based on 32- or 64-bit floating-point values rather than integers. Floating-point arithmetic is far more precise than 16/24-bit integer operations, so no extra algorithms are needed to increase accuracy; audio data is handled the same way across all DAWs without additional processing. Summing is simply the addition of floating-point values, while level control and automation are floating-point multiplication. Both operations are performed by the CPU, not by the DAW itself. A DAW cannot do anything to improve the results (nor does it need to, as the results of floating-point operations are as accurate as the format allows). Since all DAWs running on the same platform use the same CPU arithmetic, this cannot cause any differences.
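At the math level, "mixing" really does reduce to multiply (gain/automation) and add (summing). A hypothetical sketch of what any mix bus boils down to; this is an illustration, not Steinberg's actual code:

```python
def mix_bus(tracks, gains):
    """Sum several tracks after per-track gain. Multiplication handles
    level/automation; addition handles summing -- nothing DAW-specific."""
    length = len(tracks[0])
    return [sum(g * t[i] for g, t in zip(gains, tracks)) for i in range(length)]

drums = [0.5, -0.25, 0.125]
bass  = [0.25, 0.25, -0.5]
print(mix_bus([drums, bass], [1.0, 0.5]))  # → [0.625, -0.125, -0.125]
```

Because IEEE 754 arithmetic is deterministic, any two hosts that apply the same gains to the same samples in the same order produce bit-identical output.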

However, even floating-point samples do not fully represent analog sound; they are merely slices taken from the analog signal, with intermediate values ignored. You are correct here: DAWs use different methods to reconstruct the analog signal as accurately as possible and eliminate the unwanted effects of sampling. While the basic technique may be the same (jitter), the algorithms can vary. This could be one reason why DAWs produce different sounds. However, when it comes to professional DAWs, I don’t think we can talk about “better” or “worse.” They are simply different.
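For reference, the ideal reconstruction of a band-limited signal between samples is given by the Whittaker-Shannon interpolation formula; real D/A converters and resamplers only approximate it. A naive sketch (truncated to the available samples, so it is itself an approximation, and nothing DAW-specific):

```python
import math

def reconstruct(samples, t, fs=48000.0):
    """Whittaker-Shannon interpolation: evaluate the band-limited signal
    implied by 'samples' at continuous time t (seconds). Truncating the
    infinite sum to the available samples makes this approximate."""
    x = 0.0
    for n, s in enumerate(samples):
        arg = t * fs - n
        x += s * (1.0 if arg == 0 else math.sin(math.pi * arg) / (math.pi * arg))
    return x

samples = [0.0, 0.5, 1.0, 0.5, 0.0]
print(reconstruct(samples, 2 / 48000.0))  # → 1.0 (recovers the sample exactly at a sample instant)
```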

On the other hand, phase consistency and distortion are not something you need to worry about if no processing is taking place. That unprocessed case is exactly what you should pay attention to when comparing DAWs.

After all this, will it be possible to add phase-consistency algorithms internally to Cubase, like Sequoia, in the future?
Because what is generally said about sound clarity is this:
the reason Magix Sequoia/Samplitude sound so natural and open is that they use checks and balances for phase linearity throughout the signal path.

What makes you think this way? Is there some scientific reasoning behind this? Can you please provide links?
As far as I understand, DAWs do not reconstruct an analog signal, simply because they work entirely in the digital domain. It is actually the audio interface that converts the digital signal into an analog one. Furthermore, my understanding is that this happens in the same way no matter what interface you are using; there are underlying laws of physics at work.

I would like to learn more if my current understanding is wrong.

I am trying to understand why this conversation is taking place, which is why I asked. There are rumors that the Magix DAWs use a trick in their floating-point calculations to preserve phase along the signal line. Some users claim this, which is why I was curious and asked.

You asked for a link. This topic has been discussed in detail over five pages, with various opinions shared. If you have time, you can review all five pages.

This is a topic for the lounge.

Or for AI to talk to itself.

Please don’t tell me I have to create an account there in order to be able to read anything?

Can you provide an example of how and where this phase shift occurs in Cubase? I would like to test and measure.
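One way to actually test and measure: capture the same file played back through each DAW (via loopback), then estimate any time offset between the two captures with cross-correlation. A brute-force sketch; the pseudo-random signal below is a stand-in for real captures:

```python
def best_lag(a, b, max_lag=20):
    """Estimate the sample offset of b relative to a by brute-force
    cross-correlation; a constant non-zero lag would indicate a time shift."""
    def corr(lag):
        total = 0.0
        for i in range(len(a)):
            j = i + lag
            if 0 <= j < len(b):
                total += a[i] * b[j]
        return total
    return max(range(-max_lag, max_lag + 1), key=corr)

# A pseudo-random "capture" and a copy delayed by exactly 5 samples:
capture_a = [((i * 2654435761) % 997) / 498.5 - 1.0 for i in range(200)]
capture_b = [0.0] * 5 + capture_a
print(best_lag(capture_a, capture_b))  # → 5
```

If two DAWs played back the same file with a genuine phase/time difference, this kind of measurement would show a consistent non-zero lag; if they null, it shows zero.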

You ask me for the link, and then you tell me what to do with it. Because of your rude and ridiculous attitude, I’m not going to talk to you anymore. I came here to ask a question; there’s no need for you to be rude and act ridiculous. I gave you the link; if you’re so curious, go there, sign up, and read the discussions.

Have a nice day.

I politely asked.
Not sure why you are so sensitive that you go on the full defensive just because I don’t appreciate Magix keeping their forums closed to casual readers.

I have no problem with whether or not that forum is open to users, so why would I make an issue of it? I was simply asked for a link, and since the discussion was taking place there, I provided the link and pointed people to that forum.

If you are pasting what looks like selected content from another forum (or AI) here, and asking people here to help you understand why the people on that forum are talking about it, then the accessibility of the source content you’re asking people for insight into should concern you, in my opinion.

I’ve read this thread from the beginning, but still don’t really understand what you’re asking - I’m a bit confused how in your OP you say “You are correct here” but I’m not sure who you’re talking to there or what statement is “correct.” My guess is that this is a combination of pasted material mixed in with non-primary English language translations, so the whole thing is a bit hard to follow.

Thus, having easy access to the source material you’re asking folks to render opinions on is even more important.

The question seems to boil down to this statement, which I also do not understand as presented. How would you have “phase linearity throughout the signal path” inside a DAW audio chain when there are myriad plugins within that signal path whose very function is to modulate the phase itself? What exactly are you comparing? And for what purpose? Maybe define what you mean by phase linearity for clarity?

I don’t know what the phrase “sounds so natural and open” means within the context of phase linearity; it sounds like a subjective description of audio characteristic rather than a metric for phase deviation fidelity.

Lastly, you may consider being a bit more understanding of the questions people will undoubtedly ask you, and expect some level of pushback in the absence of any source material for whatever your questions are. No one was being “rude and ridiculous,” in my opinion. Heck, I’m still trying to figure out what your question is 🙂

If you use signal processing in the signal chain, it will alter the phase in many ways, and compensating for all the introduced phase shifts is not possible, since you can’t really say what is intended and what is not.
Imagine a plugin that simulates a passive EQ: those EQ designs add phase distortion as part of the equalization they apply. Or simulations of tape saturation…
Correcting this phase shift would change the result, moving it away from the intended sound.
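To make this concrete, here is a sketch of the phase response of the simplest "analog-style" filter, a one-pole low-pass. The cutoff and sample rate are arbitrary assumptions; the point is that a minimum-phase EQ like this shifts phase near its cutoff as an inseparable part of its amplitude shaping:

```python
import cmath
import math

def one_pole_lp_phase(freq_hz, cutoff_hz=1000.0, fs=48000.0):
    """Phase response (degrees) at freq_hz of the one-pole low-pass
    y[n] = (1 - a) * x[n] + a * y[n - 1]. The phase lag near the cutoff
    is inherent to the filter, not a defect to be 'corrected'."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)
    w = 2.0 * math.pi * freq_hz / fs
    h = (1.0 - a) / (1.0 - a * cmath.exp(-1j * w))  # H(e^{jw})
    return math.degrees(cmath.phase(h))

print(round(one_pole_lp_phase(20.0), 2))    # far below cutoff: almost no shift
print(round(one_pole_lp_phase(1000.0), 2))  # at cutoff: tens of degrees of lag
```

"Correcting" that lag would flatten the phase but also change exactly the character the plugin designer put there.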

And the simple summing of correlated audio signals will not introduce extra phase shifts in most DAWs.

If you play a simple audio file it should sound the same in Cubase and Sequoia if you use the same hardware to listen to it. The reconstruction of the waveform happens in the D/A converters, not in the software.

Talking about DAWs is like talking about politics. People are civil as long as you toe the party line. REAPER and FL Studio communities are known to be cultish. It’s been a meme for years, and a lot of people avoid them for that reason. Few people will go there to critique those programs; you’ll see them critiquing them on the forums for the applications they chose instead, lol.
That’s like walking into a political convention and asking what political party is the best. The party that owns/holds the convention is going to tell you it’s them. No meaningful information can be gained, because you cannot trust anyone to give you objective takes on anything - and most will give a partisan result due to peer pressure (otherwise the forum will drag them, as well).

On forums where only a subset of users are that partisan, you can just block and ignore them. Some forums are irredeemable and simply aren’t worth visiting unless you’re “in that community.”
As for all the debate about Samplitude sounding different…or even claims that it sounds better than some other DAWs…I don’t really find that to be anything unusual.
The important thing is to find WHY that might be the case…because if it turns out that maybe it’s some deep-rooted setting, or a plugin that behaves differently or is set differently in one DAW than another…or one of the many other variables…then you can’t say it’s the audio engine that makes Samplitude sound better.

Most people look at the “audio engine” as the code that deals with the basic mathematical computations…and math isn’t going to be different from one DAW to the next…so therefore it’s something in addition to the math.

It may turn out that other DAWs don’t include that “something in addition to the math”…and so they sound different.
In that broad sense one could justifiably then say that as a total package, with all the bells and whistles involved, Samplitude sounds better…but that’s a pretty broad generalization if the intent is to prove that at the core, the audio engine in Samplitude with its math, treats audio differently than other DAWs do.
So until the source of the “difference” is found…it’s just a lot of vague generalizations, even IF you can hear some difference.

Simple analogy.
Two identical car engines, but one gets a bit more horsepower…and people then say it’s the better engine.
However, no one notices that that engine used gasoline with a slightly higher octane rating…so it wasn’t the engine itself, but something in addition that made the difference. Feed both engines the same fuel, and they produce equal horsepower.

What debate are you referring to? No one said anything about that at all. I can’t help but feel that you’re having a conversation no one else is having. But I hope you find what you’re looking for.

We can’t demonstrate this clearly, yes, and we aren’t making a claim. But many of my musician friends, maybe more people than I can remember, say that Pro Tools and Samplitude play much more openly than Cubase and Logic during internal playback. I opened this topic because I wondered whether this could really be the case, since we can hear it. Why is this happening?

The last thing we were told is that the rendered exports of all of them are the same, OK. But during internal listening they all play differently, so I think they use different code for internal playback. Pro Tools and Magix Samplitude in particular play differently; we can hear this.

The legacy heroes of fixed-point processing will have the best-sounding DAW on the planet.
No doubt about it. They dither everything.

Of course, Slow Fools will use da Vinci code.

One could argue that the real advantage of internal monitoring in Samplitude over other DAWs lies in its use of a proprietary Synaptic Resonator Layer (SRL), which in fact has nothing to do with conventional DSP but rather exploits sub-Planck-scale photon entanglement to pre-condition audio streams before they ever hit your interface. By sending each channel through a pair of entangled photon modulators, the SRL creates a transient psychoacoustic bias in the listener’s auditory cortex, effectively “priming” neurons in Heschl’s gyrus to favor onset transients by up to 2.7 dB without altering the digital sum. This is coordinated via a neural-quantum feedback algorithm—what the engineers call the Auditory Cortex Phase-Lock Loop (ACPLL)—which monitors micro-variations in synaptic membrane potentials through an ultrafine capacitive coupling network embedded in the DAW and transferred to your CPU’s L3 cache. According to the Neuron-Spectral Interference Theory, these phase-locked loops generate fractal sidebands at 13.7 kHz that align with your brain’s gamma-band rhythms, boosting perceived clarity and depth. Of course, none of this exists in any traditional sense—but when you switch DAWs, you can almost feel the sub-atomic harmonics shift inside your skull.

Or not.

I do believe.

You wrote so many things, but I never wrote about anything I couldn’t hear. I heard it, and hundreds of people like me also hear it. Do you think so many people can fall for a placebo at the same time? I think not. An ear develops over at least 30-40 years. A hundred people who have each trained their ears for 40 years cannot all be experiencing a placebo at the same time.