Playback: Dorico vs Cubase

From my first listening, I heard a difference. But it may be subjective. What I do know, and here I have done dozens of tests, is that Cubase (and Samplitude, I must say) is superior to Pro Tools (and Reaper) at rendering instruments (among other things), at least of the classical type. Here we are into nuances of algorithms. But if the driving force behind Dorico and Cubase is the same, and we stick with Steinberg, the differences may lie in the general setup of one and the other. I am really interested in taking the experiment further. Not because I like to waste my time, but because the ideal, from my point of view, as I have explained elsewhere, is to be able to get the full and pure playback (before mixing) from Dorico. We’re not far off, I think. Multiple outputs (I use the 6 mics, for example, for the Sacconi Quartet, and more in Spitfire’s BBC PRO - which need to be bounced into different files), plus the controller staging used for each instrument (something that Daniel S. planned for in the development). Dorico opens up fantastic avenues.

OK fine I’ll give you the royal treatment then, type “dorico cubase backend” into Google and click the second result :smiley:

Could you be more specific in your question?

Unless you’ve done a double blind test, you basically tested your bias.

It is.

Look, I’m sure you think you hear a difference, but what you are saying is in essence:

“There is a perceivable difference in sound quality between Cubase/Pro Tools/Reaper sending the same byte stream to a VST to render instruments.”

DAWs don’t render anything. VSTs do. And for the same input, using exactly the same configuration, they will produce the same output. So either:

  • your confirmation bias kicks in,
  • there is additional pre- or post-processing done by the DAW which you are not accounting for, or
  • the VST configuration is not exactly the same.

That is true, but the DAWs mix all the renderings together. So at the DAW’s main output you might indeed have a slightly different sound.

It’s a very old discussion, like recording itself. On the one hand, there is the response of the theorists. They are not wrong … in theory. Then there is the response of the practitioners, at least of some practitioners. The same people argued a few years ago about the usefulness of 32 bit floating point vs 24 bit, and of 96 kHz vs 44.1 kHz. The professional standards (in classical music) today are 32/96, if not sometimes 192 kHz. Yet there was supposedly no difference (especially between 96 and 44.1, since 32 bit is technically useful). I have colleagues who record star symphony orchestras, in Canada, the United States and Europe, and who hear differences between DAWs (many choose Pyramix for this - albeit for other reasons too). DAWs calculate; they’re not just interfaces. They are quite close to each other, but they are not identical at this level. Otherwise they would all have the same engine and the same algorithms, which is not the case. But we digress (it’s partly my fault!), because my original question was about Dorico vs Cubase. And there is a good chance that you are right on that question, with the caveat of what Ulf answered. I won’t have time for a good week, but I’m going to do my more advanced tests, the goal of course being to get the best bounces.

PS "I forgot to answer your important point: the Daw does not return anything, it is the VST that does. This is a very good point. But the VST (or the AAX) is hosted, it cannot be independent from its host, right? I would like a technical confirmation of that. Thank you for the information.

Forget about listening tests. Just export the audio and compare the two files. Either they are identical, or they aren’t.
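
If you want to make that comparison objective, here is a minimal sketch in Python that compares only the audio frames of the two exports, ignoring header metadata (which can legitimately differ). It assumes standard PCM WAV files; the file names are just placeholders:

```python
# Minimal sketch: compare only the audio frames of two exports, ignoring
# header/metadata differences. Assumes standard PCM WAV files; the file
# names below are placeholders.
import wave

def read_frames(path):
    with wave.open(path, "rb") as w:
        return w.getparams(), w.readframes(w.getnframes())

params_a, frames_a = read_frames("dorico_export.wav")
params_b, frames_b = read_frames("cubase_export.wav")

if params_a[:3] != params_b[:3]:
    print("Different channel count, sample width or sample rate.")
elif frames_a == frames_b:
    print("Sample data is bit-identical.")
else:
    print("Sample data differs.")
```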

I agree.

The main reason for the switch to 96/24 (I haven’t seen 32 bit in classical music recording yet, honestly…) is that it sells better, and hard drives got cheaper and faster. There are also some technical reasons, which are minor though.
Classical music enthusiasts are also very often audiophiles, who love 96k/24 bit recordings (or even higher); the reason is beyond my understanding :wink:

This might not work reliably, because the MIDI clock isn’t as precise as the sample rate. Depending on where your samples fall, as well as on MIDI jitter, the renders can differ in their waveform.

That’s the whole point. If MIDI jitter etc are different between Cubase and Dorico, that’s a real difference. If either program produces two different audio files if you export the same project data twice, that’s also a difference (and a bug, IMO).

Exporting audio should bypass hardware issues like clock jitter in your audio interface, and subjective differences like audio volume settings (psychoacoustic experiments show that “louder sounds better” subjectively, even when you can’t distinguish the difference in loudness objectively, for example).

Well, there is a market for monster cables, so there’s that :laughing:

There is a solid technical reason why 96k samples/sec gives more accurate audio than 48k. In any digital system there is noise caused by quantizing each sample to an integer value. In 96k that noise is spread across frequencies up to 48kHz and therefore at least half of it is inaudible.

For 48k, the same amount of noise energy is all contained below 24kHz, so it has twice the power in the audible band (about 3 dB more) and almost all of it is audible.
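
For the curious, here is a back-of-the-envelope version of that claim, assuming a uniform quantizer with step size Δ and the usual textbook white-noise simplification:

```latex
% Total quantization noise power is independent of sample rate:
\[
  P_{\text{total}} = \frac{\Delta^2}{12}
\]
% Spread evenly up to the Nyquist frequency f_s/2, the part landing in the
% audible band (roughly 0--20 kHz) is
\[
  P_{\text{audible}} \approx P_{\text{total}} \cdot \frac{20\ \text{kHz}}{f_s/2},
  \qquad
  \frac{P_{\text{audible}}(48\,\text{k})}{P_{\text{audible}}(96\,\text{k})}
  = \frac{48\ \text{kHz}}{24\ \text{kHz}} = 2 \quad (\approx 3\ \text{dB}).
\]
```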

In fact, using noise shaping algorithms you can get much better quantization noise reduction for 96kb than the explanation above suggests, but the details are beyond the scope of a “non technical” explanation.

A different argument is that the assumption that “you can only hear audio up to 20kHz” is not really true. You can only hear audio up to 20kHz using your ears, but that ignores sound transmission through bone, etc. The simple experiment of holding a mechanical watch between your teeth demonstrates how efficient sound transmission through bone is, compared with transmission through air.

Some musical instruments generate a significant amount of audio at frequencies up around 100kHz (i.e. well beyond the 48kHz limit of 96kb digital data), especially in the transients at the start of each note.

This is all pretty subjective, to put it mildly. People have done null tests with Pro Tools vs Reaper etc.: rendering a project in Pro Tools, then in Reaper, and then importing both renders into Reaper. After reversing the polarity of the waveform of one of the tracks, they cancel out, meaning it’s exactly the same audio information. When people claim that one DAW ‘sounds’ different … I tend to take it with a grain of salt.
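
For anyone who wants to reproduce that null test outside a DAW, a rough sketch (it assumes the third-party numpy and soundfile packages, and the file names are placeholders):

```python
# Rough null-test sketch: polarity-invert one render, sum, and look at the
# residual. Assumes the third-party numpy and soundfile packages; the file
# names are placeholders.
import numpy as np
import soundfile as sf

a, sr_a = sf.read("render_protools.wav")
b, sr_b = sf.read("render_reaper.wav")
assert sr_a == sr_b, "sample rates must match for a meaningful null test"

n = min(len(a), len(b))        # trim in case the exports differ in length
residual = a[:n] - b[:n]       # subtraction == summing with inverted polarity

peak = float(np.max(np.abs(residual)))
print("residual peak:", peak, "(0.0 means the renders null perfectly)")
```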

If the audio engine in Dorico is the same as the audio engine in Cubase, then the render of the same MIDI information should be the same as well.

Where the differences come in - whether Dorico vs Cubase, Reaper vs Pro Tools, or anything really - is that the plugins used and the way MIDI is automated are almost never the same, and that causes differences in audio.

There will always be a market for snake oil.

I used to do a bit of “moonlighting” electronics assembly work for a guy who had built up quite a reputation among “audiophools” for his hand-built audio amps (made with specially selected brands of components, of course). He had no delusions about the fact that he was selling snake oil, but if people were prepared to pay £2000 for a product that cost £100 to build, that wasn’t HIS problem.

You mean 96kbps (kilobits per second) vs 48kbps? As in MP3 encoding (or some other compressed format)? 96kbps MP3 sounds pretty awful.

As I said, there are some minor technical reasons, none of which really matter to the end user. (Also, wordclocks for sampling are a little more stable at 96.)
And MIDI jitter is something other than D/A jitter! Since your DAW has to render it in some sort of linear process, it still matters, even though the render is faster than real time.

All frequency tests have been conducted with sound-through-air transmission, as this is how we enjoy our music in 99.9% of cases. It’s like saying we can taste color when we put the ink into our mouths…

If you use a VST with a round-robin system, it’s not guaranteed that the same sample will always be triggered at the same point. This and other sources of randomness that can be implemented (algorithmic reverb, for example) make it quite unlikely that you will get a sample-exact copy with each render.
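
A toy illustration of why that breaks bit-identical exports - the layer names and note list are made up and this is not any particular sampler’s API, it just mimics an engine that randomizes its round-robin start position:

```python
# Toy model of a sampler with round-robin layers and a randomised start
# position. Everything here is made up for illustration.
import random

LAYERS = ["violin_A4_rr1", "violin_A4_rr2", "violin_A4_rr3"]

def render(notes, seed=None):
    """Pick one recorded layer per note, round-robin from a random start."""
    rng = random.Random(seed)
    start = rng.randrange(len(LAYERS))   # many samplers randomise this per playback
    return [LAYERS[(start + i) % len(LAYERS)] for i in range(len(notes))]

notes = ["A4"] * 4
print(render(notes))   # one run might start on rr2 ...
print(render(notes))   # ... the next on rr1, so the rendered waveform differs
```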

I didn’t say you could hear anything above 20kHz as “a frequency”. But that doesn’t mean your brain doesn’t react to it.

The frequency content of musical instruments up to 100kHz is transmitted through the air as well. Otherwise, microphones wouldn’t detect it.

The point is that ears are not the only human organs that act as sound transducers.

No, 96k samples per second vs 48k (with the same bit depth per sample, of course). Sorry for any confusion (and the sloppy proofreading).