Discrete Math or Other Variance When Rendering VST Performances in Different DAWs?


This post is not about whether the straightforward addition in summing differs between DAWs. It's also not about differences in obvious, listed features such as 64-bit vs. 32-bit processing.

I want to know whether there is some kind of proprietary math, such as discrete math, that happens behind the scenes when translating the output of a VST instrument (or even a VST effect) from its initial state into a rendered .wav file. The reason I'm asking is that I've been listening specifically to the VST output of the same presets on the same instruments in Cubase versus Bitwig, with all of the settings at intelligent parity, and I definitely notice a difference in sound character (maybe also quality). It's impossible to do a null test because I can't seem to find a VST instrument that renders without randomization, even when I render twice within the same DAW.

I have null tested a third-party compressor run in both DAWs working on a standard .wav file, and that seems consistent enough (there is a slight difference of around -90 dB on the master channel output for just a brief moment).

I can also null test both DAWs successfully when summing multiple tracks of rendered .wav files.
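For reference, the null tests I'm describing boil down to subtracting the two exports sample by sample and measuring what's left over. A minimal Python sketch of that idea, with a synthetic sine standing in for the two .wav renders (in practice you'd load the actual exports):

```python
import math

def peak_dbfs(samples):
    """Peak level of a normalized (-1..1) sample list, in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def null_test(a, b):
    """Subtract two renders sample by sample; return the residual's peak level.
    A very negative number (or -inf) means the renders effectively null."""
    residual = [x - y for x, y in zip(a, b)]
    return peak_dbfs(residual)

# Two hypothetical renders: identical except for a tiny constant offset,
# simulating a ~-120 dBFS discrepancy between the exports.
render_a = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
render_b = [s + 1e-6 for s in render_a]

print(round(null_test(render_a, render_b), 1))  # ≈ -120.0 dBFS
```

Identical renders return -inf (a perfect null); anything below roughly -100 dBFS is far under audibility.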

Additionally, I’ve tried offline rendering the VST instruments and then just listening to their outputs (and these are simple outputs that shouldn’t be affected by MIDI grid resolution), and I can still definitely hear a difference between the DAWs. So what gives? Is my computer rig primarily contributing to this difference, or is there actually some kind of proprietary math happening behind the scenes that makes Cubase sound better (at least with Diva and Serum, which is what I was testing)?

You can keep your secrets haha. I just want to know if SOMETHING sophisticated is happening under the hood that contributes to the rendered character of VST instrument sound, sophisticated enough that other DAW companies wouldn’t have blindly stumbled across the exact same method.

I know there is that SRC comparison website for DAWs (http://src.infinitewave.ca/), but is anything like that relevant here? I also play hardware synths, and I’ve learned some wild stuff about them over the years. For example, the sound quality of a digital (hardware) synth depends on how close the power feeding it is to a pure sine wave; some higher-end synths have some kind of transformation of the wave shape built in, but it’s night and day when you’re plugged into good power isolation. So, are there similar tricks when rendering VST synths within a DAW, where the connective tissue of the DAW and its architectural support for plugins actually makes a difference?

I want to know so that I can tell whether my computer is doing something or whether something is actually happening in the DAW. If you can give me some idea of what these proprietary techniques are, that would be even more reassuring.

Mr. Holypancake

Nah, don’t count on that. Doing anything special would only give a wrong result. The VST output is intended to be rendered 1:1.

Any sample-based instrument with round robin turned off should give you a result without randomization that you can use in your tests.

The short answer to your question is: no, there shouldn’t be a difference. The goal of a DAW is to render the audio transparently. There are plenty of artifacts you can add, of course, but unless you’re dithering or adding effects, a DAW does not intentionally color the sound when rendering without sample rate conversion.

Do an ABX test. If you can tell the difference repeatedly in a high percentage of trials (50% would be just as good as guessing), then maybe you’re on to something. If not, your mind really wants Cubase to sound better.
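To put a number on "high percentage": under pure guessing, each ABX trial is a coin flip, so you can check how surprising a score is with a one-sided binomial calculation. A quick sketch (the 13/16 and 9/16 scores are just example numbers):

```python
from math import comb

def abx_p_value(correct, trials):
    """Chance of scoring at least `correct` out of `trials` ABX trials
    by guessing alone (each trial is an independent 50/50 coin flip)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(13, 16), 4))  # → 0.0106, unlikely to be guessing
print(round(abx_p_value(9, 16), 4))   # → 0.4018, consistent with guessing
```

A common rule of thumb is to call the result meaningful when that probability drops below about 0.05, i.e. roughly 12 or more correct out of 16.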

It could be something as simple as a 1–2 dB difference in output levels between the two. Everything else being equal, the marginally hotter one will sound “better.”
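This is easy to check before comparing by ear: measure the RMS level of each DAW's export and gain-match them. A rough Python sketch, with a synthetic tone standing in for the two renders (the 1.5 dB offset is just an illustrative value):

```python
import math

def rms_dbfs(samples):
    """RMS level of a normalized (-1..1) sample list, in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# One second of a 440 Hz tone at 44.1 kHz, plus a copy boosted by 1.5 dB,
# simulating two DAW exports of the same patch at different output levels.
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
louder = [s * 10 ** (1.5 / 20) for s in tone]

print(round(rms_dbfs(louder) - rms_dbfs(tone), 2))  # → 1.5 (dB difference)
```

If the measured difference is non-zero, trim the hotter export down by that amount before any listening comparison; level-matched playback removes the "louder sounds better" bias.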