Relating CPU Specs to VST Horsepower

If a CPU has a PassMark score of 13,026 (e.g., Intel Core i7-4930K @ 3.40 GHz), while another CPU has a PassMark score of 30,908 (e.g., Intel Core i9-10980XE @ 3.00 GHz), does it follow that the latter CPU should handle twice as many VSTs in Cubase?

I would’ve thought so, but when I search for head-to-head comparisons (e.g., userbenchmarks.com) of those two CPUs, it looks like the performance might be just 25-30% better on the latter CPU.
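For concreteness, the raw benchmark ratio the question rests on is easy to compute (and, as the replies make clear, real-world DAW gains rarely track it):

```python
# The raw PassMark ratio between the two CPUs named in the question.
i7_4930k_score = 13026    # Intel Core i7-4930K
i9_10980xe_score = 30908  # Intel Core i9-10980XE
ratio = i9_10980xe_score / i7_4930k_score  # about 2.37x on paper
```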

I’m thinking of upgrading my workstation, but only if the investment will be worthwhile. A 25% gain in VST horsepower isn’t worth the investment to me. But a 100% gain certainly is.

Any ideas?

For the sake of argument, assume system disks and memory pose no performance bottlenecks.

Caution: you are about to enter a limitless rabbit hole that becomes a preoccupation which sidelines the reason you got a music computer in the first place…:grin:

Multicore vs. clock speed; software optimisation; the particular demands of real-time audio processing; to hyperthread or not to hyperthread; the quality and compatibility of your audio interface’s driver; performance variation across the range of buffer sizes; the list goes on and on.

PassMark and other metrics are an important indicator of CPU grunt, but they need to be put in the context of the myriad other hardware and software variables. If you’ve read benchmark tests that seem to show a 25-30% increase in the ability to run VSTs, at the same buffer size, using the same audio interface, in very similarly specced machines, then I would consider that a more useful real-world indication of what you can expect.

Steve.

I more or less agree with plectrumboy on that one. The ability of a given system to handle a project with several FX/VSTis is sadly not reducible to a linear relation with any CPU benchmark. Everything that makes up a DAW system also comes into play: the interface driver, the HDD/SSD, the motherboard components, Windows settings (and I’m thinking of LAN/Wi-Fi here…), etc.

As an example, I just changed my system from an i7-870/HDD/DDR3-based one to the one in my signature, and yes, the new one’s ability to manage a rather heavy template of mine is very close to 100% better than the previous one. But that’s despite the CPU benchmark value of the 3700X (23,839) being something like 445% of the i7-870’s (5,334 - just checked it, out of curiosity). And I used the latter for nearly ten years: countless generations of components have appeared and disappeared during this time, and I’m not only talking about CPUs…

I went from an overclocked Intel 4790K @ 4.6 GHz to an AMD Ryzen 3950X and gained about 75%, i.e. a project that was using about 98% CPU is now down to about 25%. The new Threadrippers, like the 3970X, will probably go beyond that by a pretty large margin.
If you got the budget take a closer look at the 3990x with its 64 cores. :slight_smile:

First you need to understand that a single application thread can only run on a single logical core. Then, you need to understand where a DAW is able to split work among threads and where it is not. If you have a VSTi in an instrument channel, and you have VST effects running in insert slots on that channel, that all has to be in a single thread. You can’t start applying delay to a synthesizer before that sound has been generated, it must be done in serial. You can have a second channel with another VSTi that feeds a different insert chain and have that running in a different thread, as this can be done in parallel. The output of all these channel threads then gets fed to the master output thread to be mixed and fed through the master insert chain.
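That serial-within-a-channel, parallel-across-channels idea can be sketched minimally (this is NOT Cubase’s actual scheduler; the “plugins” below are trivial, hypothetical stand-ins for real DSP):

```python
# A minimal sketch (NOT Cubase's actual scheduler) of why one channel's
# plugin chain is serial while independent channels can run in parallel.
from concurrent.futures import ThreadPoolExecutor

def synth(block):          # the VSTi: generates/shapes audio for this block
    return [s * 0.5 for s in block]

def delay_fx(block):       # an insert effect: needs the synth's output first
    return [s + 0.1 for s in block]

def process_channel(block):
    # Within one channel the stages MUST run in order: you cannot apply
    # the delay to audio the synth has not generated yet.
    return delay_fx(synth(block))

channels = [[1.0] * 4, [2.0] * 4, [3.0] * 4]  # three independent channels

# Independent channels have no data dependency, so a thread pool can
# process them concurrently; the master bus then sums the results.
with ThreadPoolExecutor() as pool:
    outputs = list(pool.map(process_channel, channels))

master = [sum(samples) for samples in zip(*outputs)]
```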

Where things get weird is when you start running send effects, group channels, and sidechain feeds. These can start to create dependencies and limit your parallelism. Any time you have one channel depending on the output of another, there might be things that need to become serial. Otherwise, Cubase does a very good job keeping all your CPU cores busy, but you want to be careful not to sacrifice clock speed for cores because once you hit the limit of what one thread can do, the whole project tends to fall apart from there and you will need to start bouncing channels to audio.

This is why DAW performance doesn’t scale directly with a benchmark score. You can’t break this workload down into nice, neat, equal-size chunks of work and balance them perfectly across every CPU core like you could with, say, a 3D render, a file compression, or a code compile. You might have a heavy synth on one track with a heavy reverb and delay effect, while a simple drum sampler feeds eight channels into a light drum-verb group. This will load one core heavily while another is sleeping. Thankfully, another light channel will stack with it on that same core to balance as well as it can.
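The dependency structure described above can be illustrated with a toy routing graph (all channel names invented): nodes on the same level could run in parallel, while each later level must wait for the one before it.

```python
# Toy routing graph (all names invented): an edge "A -> B" means B needs
# A's output. Sends and groups push work into later, serial levels.
from collections import defaultdict

edges = {
    "synth_ch": ["reverb_send"],
    "drum_ch1": ["drum_group"],
    "drum_ch2": ["drum_group"],
    "reverb_send": ["master"],
    "drum_group": ["master"],
}

def schedule_levels(edges):
    deps = defaultdict(list)          # node -> nodes it depends on
    nodes = set(edges)
    for src, dsts in edges.items():
        for dst in dsts:
            deps[dst].append(src)
            nodes.add(dst)
    memo = {}
    def level(n):                     # longest dependency chain ending at n
        if n not in memo:
            memo[n] = 1 + max((level(p) for p in deps[n]), default=0)
        return memo[n]
    return {n: level(n) for n in nodes}

levels = schedule_levels(edges)
# The three source channels share level 1 (parallel); the reverb send and
# drum group must wait (level 2); the master mixes last (level 3).
```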

TL;DR: number of plugins in one channel = clock speed. Number of simultaneous channels running in real time = core count.

Ugh. Not what I wanted to hear but I understand what you’re all saying.

Whatever happened to DAWbench?

I wouldn’t assume it’s necessarily true that “If you have a VSTi in an instrument channel, and you have VST effects running in insert slots on that channel, that all has to be in a single thread.” I’d like to understand how this is known. Is it from experience and observation of CPU performance levels? I’d like to read up on it if there’s a source for the information provided. Please excuse my ignorance and curiosity. Thanks

To my knowledge, VSTs are not constrained to a single thread. I believe omniphonix is referring to the general necessity for serial processing BETWEEN the initial VSTi and each subsequent VST insert effect, since none of them may proceed without first receiving the fully processed signal from the stage before it.

I can’t point to a single source; I’ve researched how this is accomplished for years, ever since multi-core CPUs became commonplace. Here is the gist of it, though. You can’t process audio that doesn’t exist yet. If you have a VSTi synthesizer feeding a VST delay effect, how would you apply delay to a signal that has yet to be generated? What you can do is have two different VSTi synthesizers feeding two different delay effects. Then, if one channel is taking longer than the other, you delay the faster one so the audio all hits the master output chain at the same time. This is where plugin delay compensation comes into play; it’s just expanded to control the timing of multiple audio threads instead of a single thread.
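A toy illustration of that compensation step, assuming each chain reports its latency in samples (the numbers are made up):

```python
# Toy plugin delay compensation: every channel's chain reports a latency
# (in samples); faster channels are padded with silence so all outputs
# line up at the master bus.
def compensate(blocks, latencies):
    max_lat = max(latencies)
    return [
        [0.0] * (max_lat - lat) + block   # pad the faster chains
        for block, lat in zip(blocks, latencies)
    ]

ch_a = [1.0, 1.0, 1.0]   # chain with a look-ahead effect: 2 samples late
ch_b = [2.0, 2.0, 2.0]   # lightweight chain: reports zero latency

aligned = compensate([ch_a, ch_b], latencies=[2, 0])
# ch_b is now delayed by 2 samples of silence, matching ch_a's latency
```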

Now, with features like ASIO-Guard, a channel that is not record-enabled (live input) can render ahead and hold the output in a buffer until playback time. This results in even smoother playback, since the audio doesn’t need to be rendered “just in time” for every channel in the project.
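The render-ahead idea can be sketched roughly like this (a generic producer/consumer, not Steinberg’s implementation):

```python
# Rough sketch of render-ahead: non-live channels render blocks early into
# a queue, and the playback callback just drains already-finished blocks
# instead of computing each one "just in time".
from queue import Queue

def render_block(i):
    return [float(i)] * 4              # stand-in for real DSP work

prerender = Queue(maxsize=8)           # the "ahead" buffer

# Producer side: render well ahead of the playhead while the CPU is free.
for i in range(4):
    prerender.put(render_block(i))

# Playback side: each audio callback pulls an already-finished block.
played = [prerender.get() for _ in range(4)]
```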

At the moment the Ryzen 3950X is the most bang-for-the-buck CPU. I haven’t used it, but people seem to have good DAW experiences with it. So far the only drawback I can see is that it’s very hard to find motherboards with PCIe 4.0 and no fans; PCIe 4.0 is a way to double NVMe performance. These CPUs are also a step up for pro usage, since they can optionally be equipped with ECC memory. Too bad Apple didn’t put these CPUs in the new Mac Pro.

Here is a pretty detailed bang-for-the-buck comparison of two CPUs: https://cpu.userbenchmark.com/Compare/Intel-Core-i9-9900K-vs-AMD-Ryzen-9-3950X/4028vs4057

Multithreaded processing… aaahhh… While it’s obvious that audio that does not yet exist cannot be processed until it exists, it could (really should) be the case that multiple threads are available or spawned to do the work as it comes into existence, even upstream at the point that initiates its creation. My previous post was just a spark of interest that there might be something more than anecdotal info on this.

It does pique my attention a little when what sound like retellings, assumptions, or outdated information don’t seem to match how things should happen. In the early days of multithreading there was a lag until it was implemented, let’s say, better (to be kind). I think it catches my attention because in a previous software-engineering life I was responsible for demonstrating multithreaded processing functionally, proving that it worked correctly for multiple parts of the North American emergency communication infrastructure, so I have a fairly technical background. That’s also why, to suffer the brain damage of thinking about it much, I’d pretty much want to receive a big fat check. Some of my associates from those days are gone now: VERY early onset of stress-related diseases, pretty obviously. I’ll also try very hard not to respond too much when cases come up that I don’t have knowledge about and don’t want to research. Though if there were some informative summary of the information, I’d still give it a light, leisurely look to see if it would help me make some audio noise in Cubase. My interest in going down the multithreading rabbit tunnels has long since waned.

I don’t really want to know too much about how Cubase multithreads; I just want to use the heck out of it. Things like the number and types of VSTs/tracks per CPU model would be good. Hopefully Steinberg is doing its due diligence; there are a bunch of people entrusting them to. And for a not-so-small fee I might be coerced into proving it (not likely, eh?) :slight_smile:

One note, as a VERY strong indication that users really don’t need to think much about the multithreading: a quick look at the Windows Task Manager shows several hundred threads burning their way through your audio for Cubase alone.


Lastly, one quick tip for Windows users (and if it applies to your system, it’s a really important tip): have a look in the Windows Resource Monitor and make sure your CPU cores ARE ALL getting loaded. If some ARE NOT, do a search on “unparking CPU cores”; parked cores are one sure way to kill your system’s, and certainly Cubase’s, power.

So the short version: is Cubase Pro efficiently multi-threaded, or is it better to turn off hyperthreading in the BIOS?

Yes

or is it better to turn off hyperthreading in the BIOS?

That’s a different issue, and a complicated one to answer. Generally it is better to leave it on. But there can be circumstances where it’s better to turn it off.