The importance of multiple cores?

I did read the literature here on the Steinberg website about PCs and compatibility with Cubase, but I am wondering how important it is to have multiple cores when running Cubase 8 (regardless of whether it is Pro, Artist, or Elements).
Also, can individual cores be assigned exclusively to audio recording, plug-in management, etc.?

Furthermore, I was wondering: when is a partition necessary?

π.

More cores is always better, all else being equal, but a higher clock speed with fewer cores usually beats many slower cores.
So pick your tradeoffs wisely: a lightning-fast dual-core or quad-core can in practice outperform a very expensive octa-core at low-latency tasks like realtime audio processing.
Cubase automatically assigns different processes to different cores; as far as I know there is no way to do this manually. Even if you could, I doubt you’d do it better than Cubase does.

When is a partition necessary:
I personally only have a single partition on each drive. However, if you’re running Cubase on a computer that’s also used for other things, you could split a harddrive into two partitions and install an OS on both. That lets you boot into your ‘regular’ computer, with all your programs available, and when you need it you can boot into the second partition where you installed just Cubase. That way you have a much cleaner environment, which can improve performance. How much of a difference it makes depends entirely on how much junk you have on your computer :wink:

I am assuming you are still putting the OS, projects and samples on separate drives, as not doing so will likely lead to hiccups, and can become unworkable if there is too much going on.

With SSDs there are no heads that need time to travel across a disc platter, as on HDDs, so there is no time penalty based on where data sits on the drive, and partitioning for speed optimisation is unnecessary.

With HDDs, if you want to minimise head travel time, and thus achieve lower latencies, you are better off getting a larger drive and creating a partition just large enough (with some headroom) to hold the critical data. Larger drives have more heads, so there is less head movement for a given amount of data.

However, if you do partition an HDD, do NOT use two partitions on the same drive for the same purpose, as that forces longer head travel and leads to larger latencies. That is, use the first partition for samples, and the other(s) for anything except samples or projects: general data, backups of projects, or whatever.


I have two OS drives, booting from one for general usage and the other for the DAW. I installed the OS on each while it was the only drive in the computer, so I do not use the boot menu, as I have had issues with corrupted boot menus when I had to re-install an OS. The advantage of separate OS drives is that your computer is still usable if one OS drive goes AWOL.

I have all my SSDs in a 5.25" 6-drive bay, so when I want to boot to the other OS, I power down, open the bay door for the current OS drive (taking it offline), then restart the computer, which will boot from the other drive; if needed, I close that drive’s door again afterwards. This is only necessary when switching, as the computer always boots from the last-booted drive unless it is offline, in which case it looks for others.

I set both OS installs to use a common general data drive by pointing each OS at the same folders on it for Documents, Music, Pictures, Videos, Favourites and the desktop.

Re multiple cores, you can roughly compare the effective power of different CPUs by multiplying clock speed by core count. A 4 GHz 6-core CPU gives 24, whereas a 2.5 GHz 8-core gives 20, which makes the 6-core the better choice by this measure.
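For what it’s worth, that back-of-the-envelope comparison is just clock speed times core count; a minimal sketch (the CPU labels are placeholders, not specific models):

```python
# Rough heuristic only: effective power ~ clock speed (GHz) x core count.
# This ignores IPC, turbo behaviour, cache, etc.
cpus = {
    "6-core @ 4.0 GHz": 4.0 * 6,   # 24.0
    "8-core @ 2.5 GHz": 2.5 * 8,   # 20.0
}

for name, score in sorted(cpus.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:g}")
```

By this crude score the 6-core wins, which matches the comparison above; the next paragraph explains why the real picture is messier.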

Because there is so much interaction going on, DAWs don’t always behave the way such simple calculations indicate. DAWs are far more dependent on the minimum guaranteed performance per millisecond than on the maximum throughput per second. You have to cater for the worst case, as that is when glitches can ruin a take. This usually means running the system short of its peak performance, so it has some slack that can absorb instantaneous peaks.

In this regard, until Xeons started allowing overclocking, consumer CPUs were the best: while you might run at stock speed for most projects, being able to call upon extra CPU horsepower via overclocking for taxing projects could save a lot of freezing, or an emergency upgrade. Even so, Xeons don’t tend to allow much overclocking headroom, as they are still limited by having lots of cores generating heat in a small package.

Generally though, consumer CPUs give better performance per unit cost.