Sorry if that’s been asked before; searching didn’t turn up anything.
When I export a mix, I see that Cubase makes little use of the actual processing power at its disposal. I have a powerful 8-core, 16-thread CPU, and yet, at export time, it will use perhaps 25% of that at most, according to the performance monitor in Windows 10. Is there a specific reason for this, and can it be “unlocked”, so to speak, so that Cubase will use more of the available power and make renders faster?
Well, exporting uses lots of other resources besides CPU power. Some of them, like memory access and disk I/O, are very time dependent. While waiting for a time-dependent task to finish, the CPU might not have anything to do. Or one core may need to wait for another core to finish a task whose results it needs. There will always be some resource that constrains how fast a program can run - and which one it is can change from moment to moment. If you see low CPU usage, it just means CPU capacity isn’t the bottleneck.
It’s a bit like having a car that is capable of going 120 miles/hour, but you never drive at that speed (even if you ignored the speed limit) because you have to navigate among the other vehicles, stoplights, road conditions etc.
I have no idea. All I can tell is that the SSD doesn’t get busy at export time (just the usual small blink once in a while as Cubase writes the file as it goes along). Perhaps it is the RAM, but I’d be really surprised. For all real-time operations it’s a breeze.
The point of the post was mostly to check whether people have the same experience and, if not, what they might be setting differently.
There is always a bottleneck somewhere, and what is causing it is constantly changing. It doesn’t matter how beefy your computer is - something will be limiting its response time at every single moment. If nothing did, the results would occur instantly.
When you see that a CPU core is using 25% of its capacity, that is a simplification of what is actually occurring. That 25% is really an average over a very short period of time. At any given moment, the core is either working at 100% or at 0% - nothing in between. And the reason it’s at 0% is that it is waiting for some work to do. It can be waiting for a huge number of reasons. For example, even with an SSD, a write takes longer than a read, and both are significantly slower than the few CPU cycles it takes to execute an instruction. Or one core might have to wait for another core to finish a calculation before it can continue. Another aspect is that some tasks, like calculating reverb, are more complex than others, like adjusting the level after a fader - yet both calculation results eventually need to be aligned to the same point in time. Or the computation simply might not be able to utilize all the CPU capacity you have: say you have 16 cores, but the computation can only be split into 4 parallel threads - even if each thread runs at 100%, your overall CPU use would be 25%.
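That last case is easy to put numbers on. A minimal sketch (hypothetical function and numbers, just to illustrate the arithmetic):

```python
# Hypothetical model: a workload split into a fixed number of parallel
# threads can never keep more cores busy than it has threads, no matter
# how many cores the machine offers.
def overall_cpu_usage(parallel_threads, total_cores):
    """Fraction of total CPU capacity used, assuming each runnable
    thread keeps one core fully busy."""
    busy_cores = min(parallel_threads, total_cores)
    return busy_cores / total_cores

# 4 fully-busy threads on a 16-core machine:
print(f"{overall_cpu_usage(4, 16):.0%}")  # 25%
```

So a task monitor showing 25% overall can mean four cores flat out and twelve idle, not sixteen cores loafing.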
Peakae is right, there are things you can tweak that have potential to more efficiently load your CPU (which is another way of saying “minimize bottlenecks”). But once your computer is perfectly configured and tuned you likely will still see unused CPU capacity. That just means what you are doing needs X amount of CPU and your computer has more than X capacity.
I thought of another way to describe the various timescales at play. This isn’t exactly what’s happening. But as engineering folks like to say - it’s a good first order approximation.
What really makes our computers work involves moving electrons around. If you look at the CPU it’s a couple of inches square. But the circuit designers put the parts that interact the most with each other near each other. So a lot of the work done by the CPU involves moving electrons just tiny tiny fractions of an inch, really really small. Now compare that to how far electrons need to travel to reach that SSD. There’s maybe 2-3 feet (~2/3 of a meter for the non-Luddite world outside the US) of wire. That’s many thousands of times further than what’s happening inside the chip, and consequently takes thousands of times longer. And that’s ignoring having to stop and chat with the motherboard, disk controller, etc. along the way.
At human scale these travel times seem the same - instantaneous. But from the CPU’s point of view, sending out to the SSD is like ordering pizza delivery at 10pm on New Year’s Eve. Now, the designers do lots of clever things to mitigate the impact of these differences in scale (hey, they ordered pizza, let’s send 'em a few more 'cause ya know they’re gonna call for more). But underneath it all, different components of our computers run at different speeds, and often it is the CPU left waiting.
Agree with what people are saying, but just to add that audio processing is by its nature quite linear, i.e. you need the result of one process before you can apply the next one; it is a stream of data that needs to be processed serially. Computers are very fast at large parallel tasks, i.e. applying a calculation to a large amount of data where each result does not depend on a previous one.
In theory, one could chop a project up into many small chunks of time, process each chunk independently across many cores, and reassemble them into a single wave/aif, BUT that would lead to all sorts of problems with things like reverbs, reflections, compression delays, etc. Really, it all needs to be funneled down to a single master thread/stream at some point.
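A toy demonstration of why chunking breaks (not Cubase’s actual engine - just a trivial stateful effect, a one-pole lowpass, standing in for a reverb or compressor whose internal state carries across samples):

```python
# Stateful effect: y[n] = a*x[n] + (1-a)*y[n-1]. The "tail" (state)
# from earlier samples affects later ones, like a reverb tail would.
def one_pole_lowpass(samples, a=0.5, state=0.0):
    out = []
    for x in samples:
        state = a * x + (1 - a) * state
        out.append(state)
    return out, state

signal = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]

# Serial render: state flows through the whole stream.
serial, _ = one_pole_lowpass(signal)

# "Parallel" render: each half processed independently from state 0.
left, _ = one_pole_lowpass(signal[:4])
right, _ = one_pole_lowpass(signal[4:])
chunked = left + right

# The second chunk started from the wrong state, so its output differs:
print(serial[4], chunked[4])  # 0.53125 vs 0.5
```

Making the chunks agree would mean passing state across the boundary, which reintroduces exactly the serial dependency you were trying to avoid.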
Formula: CPU load = 100% × (time required to calculate the buffer) / (time required to play the buffer).
A simple example: we apply a filter to a 1-second buffer (at a 44.1 kHz sample rate, that’s 44,100 sample frames). If the filter finishes in half a second, the CPU utilization for this calculation is 100% × 0.5 s / 1 s = 50%. In other words, for the other half second the filter is idle, and that capacity is available to other processes.
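The same arithmetic as a snippet (hypothetical function name, just encoding the formula above):

```python
def dsp_load(calc_time_s, buffer_duration_s):
    """100% * (time to calculate the buffer) / (time to play the buffer)."""
    return 100.0 * calc_time_s / buffer_duration_s

# 44,100 frames at 44.1 kHz = a 1-second buffer; the filter takes 0.5 s:
print(dsp_load(0.5, 1.0))  # 50.0
```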
Translated from this German article about audio processing on multiprocessor computers.
Any chain of plugins on the same track can only use a single thread, simply because the next plugin needs to wait for the prior plugin to do its thing.
It has nothing to do with Cubase optimization, and for that matter Cubase is a ton more CPU efficient than Studio One has ever been.
For example, someone running only 2 tracks into their master bus can only ever make use of 3 threads - one per track plus one for the bus. This goes for all DAWs.
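The chain-is-serial point can be sketched in a few lines (an illustrative model, not any real plugin API - the plugins here are just functions over a buffer):

```python
# A plugin chain is a pipeline: each stage consumes the previous
# stage's output, so the stages cannot run side by side on one buffer.
def gain(buf, g=0.5):
    return [g * x for x in buf]

def invert(buf):
    return [-x for x in buf]

chain = [gain, invert]
buf = [1.0, -2.0, 4.0]
for plugin in chain:
    buf = plugin(buf)   # must finish before the next plugin starts

print(buf)  # [-0.5, 1.0, -2.0]
```

With one such pipeline per track, the independent tracks (plus the bus that mixes them) are the only units a scheduler can hand to separate cores.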