Average Load to CPU Utilization Ratio

After considerable tweaking I’ve managed to consistently obtain an Average Load (Audio Performance window) to CPU Utilization (Win10 Task Mgr) ratio of ~2 in projects with lots of plugins (both VIs and FX).

So that’s: Average Load / CPU Utilization ≈ 90–95% / 40–45% ≈ 2.

To be clear, I’m talking about Average Load (the top bar in the Audio Performance window), not Real-time Peak (the bar immediately below Average Load). For me, Real-time Peak is consistently low (<10%), and the bar labeled Disk shows no activity at all.

Thus it appears that I have fairly good real-time performance (confirmed by LatencyMon), but am not able to utilize all of my available CPU power. Does anyone know exactly what is measured and displayed on the Average Load meter bar and why there is such a large disparity between it and CPU Utilization?

Also, has anyone managed to obtain a lower Average Load to CPU Utilization ratio in plugin-heavy projects? By that I mean, have you been able to increase CPU Utilization while keeping Average Load the same? If so, would you please explain how you did that?

I should mention that I’m aware of the commonly discussed things such as tweaking BIOS settings, turning off unnecessary features (e.g. wireless), running LatencyMon, using large buffer sizes, enabling ASIO Guard, etc.

Here are my specs:
i7 8750H/16 GB Laptop
Windows 10 Pro ver 1909
RME Fireface UCX
Cubase Pro 10.5.20

The cpu load in cubase and the cpu utilization in the task manager are only loosely related. The latter is mostly an indication of load balancing. And load balancing is mostly about the signal routing in the particular project you’re using at the time. Also: to make any sense of the meter under most conditions, you’ll need to turn off asio guard, but for optimal performance, you’ll want that turned on.

Did you mean to say “The former is mostly an indication of load balancing?”

Interesting. Would you please explain how turning off ASIO Guard will improve the usefulness of the Average Load meter?

Thank you.

Latter. The ability to utilize your available cpu resource is mostly about the ability to balance the load among the cores. And that is mostly about the specific signal routing in the project you’re using at the moment.

I’m not sure how to answer your question about asio guard. The time slice utilization metric is ill-defined when asio guard is on.

Okay, I’ll disable it and see what happens. Thanks again…

I haven’t seen a CPU-limited project in 6 or 8 years (other than non-musical benchmarks). Modern DAWs are not CPU limited; they’re limited by real-time performance (which, as noted above, is only loosely related to CPU power these days). That’s why the real-time load meter is peaking out while CPU usage is relatively low.

So I wouldn’t worry too much about the CPU meter when it comes to DAW use. It doesn’t really have any bearing on anything these days. Latency is much more dependent on other system elements, like video card and network card. CPU isn’t really a factor for latency these days.

I did some tests a few months ago where I looked at how CPU usage relates to real-time performance via buffer size / latency measurements. I found that there is no benefit beyond 6-8 cores. And compared to only 4 cores the benefits are pretty small, small enough to be insignificant.

Of course 14 cores run at lower CPU usage. But 6 cores run the same projects just fine at the same latency. Even four cores were perfectly fine with a small bump in buffer size.

So, yeah - don’t worry about CPU usage unless you’re pegging it out. But as I said, I haven’t seen that in a long time.


Thanks for replying…

This was my motivation for posting; I’m constantly sitting at 90-95% Average Load in most projects now. This means I’m frequently experiencing dropouts. In my case it’s particularly painful b/c it’s not just a brief dropout and then right back to work. Instead the Average Load meter ‘convulses’ between 0 and 100% for quite a long time (as much as 30 seconds). And this can happen after playing just a few seconds of a project. I’ve found that disabling (alt-clicking) a few of the CPU-hungry plugins can break the convulsive cycling. Nonetheless, I’m eager to find a way to tap into what appears to be unused CPU power to give me a little bit of headroom to avoid the scenario described above.

Actually that’s what I thought too, but as I mentioned in my initial post, my Realtime Performance is actually consistently quite low (<10%). It’s the Average Load that’s consistently (and precariously) quite high. This is confirmed by 1) the favorable results reported by LatencyMon as well as, 2) the somewhat surprisingly weak correlation between buffer size and the problems I’m describing. Even with it set at the maximum value (either 1024 or 2048), my performance does not improve. Together, all this has me thinking that it’s not so much realtime performance but rather something related to CPU power.

I neglected to mention in my original post that the balancing across cores (6 physical, 6 logical) is nearly perfect. Most of the time they’re within 10% of each other; no single core dominates or is left idle. This too seems to point to CPU power. Perhaps it’s a reduction in clock frequency due to thermal throttling (especially given that I’m using a laptop workstation; HP ZBook Studio x360 1030 G5)? In a heavy project (many VIs and FX), core temps can reach the upper 80s (even with external fan cooling).

I was afraid that this would be the case, but then I ran into this: https://vi-control.net/community/threads/threadripper-3970x-build-notes-and-cubase-benchmarks.94892/

Scroll down to where it says:

Switch off hyperthreading: Out of the box, with hyperthreading on, Cubase’s ASIO-guard does not function properly. Turning off ASIO-guard improves performance. However, once hyperthreading is disabled in the BIOS, ASIO-guard can be turned back on and performance increases to a totally different ballgame allowing to reach CPU saturation of almost 100% without drop-outs (all 32 cores are fully loaded).

This gives me some hope that I may not be searching in vain.


Just because the load integrated over time is evenly spread across the cores, that doesn’t mean the rendering threads are balanced. In fact, since you’re not getting 100% utilization, they almost certainly are not.

It couldn’t hurt to try turning off hyperthreading, but a configuration with large numbers of cores like the one you linked to may not be relevant for you.

OK maybe I’m not following completely. The two main meters in Cubase are both real-time performance meters, one average (long-time) and one peak (short-time). The “Average Load” meter is still a real-time performance meter. There’s some weirdness with ASIO guard on because you can get longer-timescale averages higher than the peak (which is not mathematically possible) but it doesn’t matter: you’re still hitting real-time bottlenecks before CPU bottlenecks.

That’s consistent with my experience for the last number of years: a practical project just can’t make use of the full power of the CPU because of real-time limitations. You have to build some crazy benchmark project with 500 compressors or something like that to make use of all the CPU power we have these days.

So if you’re trying to tap in to that extra CPU power to get latency down I think you’re going to have a hard time. That’s been my experience, anyway.

Also FYI I’ve not seen any correlation between LatencyMon values and latency within Cubase so long as there are no issues reported by LatencyMon. In other words, if one system shows LatencyMon values of 100 us and another shows 5 us, they both perform the same in terms of latency in Cubase (in my experience). If one shows 10,000 us then that one has a problem, in general. So I’ve found LatencyMon useful to confirm major issues but I’ve not found it useful for optimizing performance.

Also also FYI I gave up on ASIO guard a while back. I try it every now and then but always end up turning it off because I get weird behavior with it. Sometimes it helps a tiny bit, sometimes it makes things a lot worse. But it’s always weird. Maybe you’re experiencing some of that.


Yes, that’s a good idea. While I’m at it I’m going to see how my performance varies with and without ASIO Guard, and how that interacts with hyperthreading.

I understand (and agree with) what you’re saying about short-time (peak) vs long-time (average) performance in theory.

But what’s confusing to me is that the relatively large value of the long-time (average) meter couldn’t possibly result from averaging the extremely small short-time (peak) values over any relevant timescale (I think you expressed the same thought below).

Also confusing is why long-time performance hardly changes when I press Play. This makes me think that long-time performance is really indexing something like CPU load. And this is consistent with the fact that long-time performance correlates strongly with the number of VI/FX plugins that I add to the project; the more I add, the more the long-time performance meter increases. This relationship is mirrored in CPU load (as measured by the TaskMgr). All these observations seem to suggest that the short- and long-time meters are actually indexing different things.

Perhaps I’m thinking about real-time performance – as it relates to audio processing – the wrong way. To me it’s a property resulting from the ability of the operating system to handle requests by Cubase to deliver data to the audio (hardware) buffers while competing with other requests such as interrupt service routines (ISRs). If, for example, an ISR takes too much time to execute, it will preempt the audio buffer request and an audio drop-out will occur. If true, this seems to be a ‘background’ property of the system that varies based on system hardware, peripherals, OS services, etc.
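The deadline picture described above can be sketched numerically. This is purely illustrative: the function names and figures are my own, not Cubase internals.

```python
# Hypothetical sketch of the real-time constraint described above.
# Names and numbers are illustrative, not Cubase internals.

def buffer_deadline_ms(buffer_size: int, sample_rate: int) -> float:
    """Time available to fill one audio buffer before the hardware runs dry."""
    return 1000.0 * buffer_size / sample_rate

# At 44.1 kHz with a 256-sample buffer, the audio engine (plus anything
# that preempts it, like a long-running ISR) must finish each callback in:
deadline = buffer_deadline_ms(256, 44100)   # ~5.8 ms

def dropout(processing_ms: float, interruption_ms: float) -> bool:
    """A dropout occurs when plugin work plus preemption exceed the deadline."""
    return processing_ms + interruption_ms > deadline

print(dropout(4.0, 1.0))  # plugin work fits, short ISR: False
print(dropout(4.0, 3.0))  # same work, long ISR steals the slack: True
```

In this picture the ‘background’ property you describe is the `interruption_ms` term: it is independent of how many plugins you run, but it eats into the same fixed deadline.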

The only way I can imagine long-time performance relating to realtime performance (as described above) is if all the plugins I’m running are taxing the CPUs to such an extent that they’re adding to the ‘background’ property of the system (resulting in poorer real-time performance). But if this is true, then the performance meters are really showing a composite of CPU load and ‘background’ activity.

I really wish Steinberg would chime in here and demystify this subject. They obviously know what the performance meters measure.

The performance meter is of limited use when asio guard is turned on, especially the peak value. That metric just doesn’t make any sense in the case you’re asking about, especially at lower levels.

In any case, when asio guard is off, the peak meter is a, well, peak meter :slight_smile:. So you shouldn’t expect a direct correlation between that and the average meter.

Many plugins do not change their cpu load when playing vs. not playing.

I agree with rgames: 100% utilization is something you only achieve in contrived benchmark cases.

Okay, here are the results from changing hyperthreading and ASIO Guard (values are approx):

See below for screenshots of the project I used.

Hyperthreading | ASIO Guard | Average Load | Real-time Peak
---------------|------------|--------------|---------------
ON             | OFF        | 100          | 100
ON             | LOW        | 95           | 5
ON             | NORM       | 85           | 5
ON             | HIGH       | 75           | 5
OFF            | OFF        | 100          | 100
OFF            | LOW        | 85           | 5
OFF            | NORM       | 75           | 5
OFF            | HIGH       | 67           | 5

Thanks to GlennO and rgames for their valuable input!


Amen to the first part - what, exactly, the meters measure has been a topic of discussion for years. Maybe more than a decade. There are all sorts of examples that just don’t make sense. More for the average meter than the peak meter.

But on the second part - I’m not so sure they know what’s going on! A while back someone pointed to the manual where it says that the “Average Load” is the CPU load. But clearly that’s not the case, as you and I and countless others have demonstrated. There is, at best, a weak correlation between Average Load and CPU load, particularly when ASIO guard is activated. So the people who wrote the manual are either just plain wrong or they’re making some unstated assumption that makes their statement true. But I can’t think of what that would be.

About the best way I’ve found to explain the two meters is that the average meter is related to how hard Cubase is working. The peak meter is much more influenced by how hard the hardware (via interrupts) is working. The Cubase software still plays in to the peak meter, but it depends more heavily on the system.

Here’s where I wound up as of a few years ago: I don’t really pay attention to the meters any longer. If I hear crackles and dropouts, that’s the only indicator I really care about. I’ve never seen a setup that doesn’t show some red indicators on the meter over the course of a day. But if you don’t hear them, who cares?

So yes, the meters are a mystery. But in the end it doesn’t really matter.

Bottom line is that you’ll hear crackles and dropouts long before you hit max CPU usage on any decently spec’d machine from the last five years or so.


It’s not really that mysterious. The peak meter is the highest recent percentage of the available time slice that has been used. The average meter is an integration over time of the time slice usage. Colloquially, both are called “cpu load”. Because one is an integration and the other is a peak, there is no direct correlation between the two. One tells you the worst case load, the other tells you the average load. One tells you if you’ve had any dropouts. The other tells you generally how much you are taxing the cpu. Both pieces of information are useful, so 2 meters are called for. Neither is a measure of total cpu utilization, which you would find on your system cpu meter.
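As a rough illustration of that distinction, here’s a toy model (my own sketch, not Steinberg’s implementation) of a peak meter versus an integrating average meter over per-callback time-slice usage. Note that in a model this simple the average can never exceed the peak, which is part of why the readings people report with ASIO guard on look so strange.

```python
# Toy model of the two meters as described above (an assumption about
# how they might work, not Steinberg's actual code). Each input value is
# the fraction of one callback's time slice that was used (0.0 to 1.0).

from collections import deque

class Meters:
    def __init__(self, window: int = 64):
        self.recent = deque(maxlen=window)  # recent per-callback loads
        self.total = 0.0
        self.count = 0

    def on_callback(self, slice_used: float) -> None:
        self.recent.append(slice_used)
        self.total += slice_used
        self.count += 1

    @property
    def peak(self) -> float:
        """Worst recent time-slice usage: tells you about dropouts."""
        return max(self.recent, default=0.0)

    @property
    def average(self) -> float:
        """Time-slice usage integrated (averaged) over the whole run."""
        return self.total / self.count if self.count else 0.0

m = Meters()
for load in [0.3, 0.4, 0.95, 0.35]:   # one near-dropout spike
    m.on_callback(load)
print(m.peak)     # 0.95 -- the spike dominates
print(m.average)  # 0.5  -- the spike barely moves the average
```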

These metrics only make sense when asio guard is off, of course, especially the peak.

This seems quite reasonable and intuitive, so I’ll accept it as the truth until Steinberg tells me otherwise.

What the average meter actually reflects is another story. Initially I too thought it was the average of the peak. However as rgames pointed out, this cannot be true given that often the average meter reads much higher than the peak meter.

So, as an example: when I run the project I used to obtain the data above (a typical project), the average meter fluctuates between 65 and 95 for the duration of the song, while the peak meter fluctuates between 5 and 10. Mathematically, you cannot get ~80 (the midpoint between 65 and 95) by averaging numbers between 5 and 10. So it is mathematically impossible that “the average meter is an integration over time of the time slice usage”.

And if you’re not convinced by the above example consider that immediately upon opening the project (and before running it) the average meter already reads ~70 while the peak meter reads ~5. If what you’re saying is true, the average meter should read zero, or ~5 at most.


The peak meter is meaningless in almost all cases when asio guard is on; it is only meaningful when asio guard is set to “off”. And by “off” I mean exactly that: not low, not medium, not high.

No, the average meter is not an average of the peak values. That’s not what I said at all. That would make no sense. The average meter is an integration of the time slice usage over time, which is a very different thing. No, you cannot draw conclusions about what the average meter should be showing by examining the peak meter.

I apologize if I’m not doing a good job of explaining this, but there is nothing mysterious about the meters, and there is nothing unusual about the values you reported.

I don’t understand why - can you explain? If it’s trying to catch short-time transient peaks, why wouldn’t that be a valid measurement with ASIO guard on?

Also, are you implying that the average meter is valid with ASIO guard on? If so, that really doesn’t make sense, because the average load over longer time scales (100 ms or so?) can’t change whether ASIO guard is on or off, yet I see pretty big jumps in average load when I switch ASIO guard on. If the program is playing back music, the average rate of “music calculations” must be constant because the sample rate is fixed. There may be larger or smaller delays when playback starts, but once it gets going the average can’t be much different with ASIO guard on or off; if it were, the computer would be doing vastly different amounts of calculation to play back the same music. The total amount of processing the audio system does is the same whether ASIO guard is on or off; it just adds latency where it doesn’t matter. So rises in the average meter when turning on ASIO guard don’t make sense if the average meter measures longer-time-scale calculations.

That’s one of those “mysteries” that pop up regularly in these discussions.

And to make it even more confusing, here’s the Cubase manual that says the average load meter is a CPU meter:


I wonder how many more years we’ll be discussing these weird Cubase performance meters… I bet some group of people at Steinberg are laughing like crazy right now: “Haha! They’re still falling for it!”



Because the maximum amount of work being done per callback is no longer an indication of the amount of work that must be done.

the average load over longer time scales (100 ms or so?) really can’t change whether ASIO guard is on or off

Maybe this is the source of your confusion. Nothing could be further from the truth. That’s the whole point of increasing buffer sizes: To reduce the total amount of work that must be done via amortization across fewer callbacks.

And to make it even more confusing, here’s the Cubase manual that says the average load meter is a CPU meter:

Yes, as I mentioned above, that’s a common way to refer to it. I’m sorry, I’m not following what’s confusing about that?

That’s not correct - buffers are related to when calculations are done, not the total number of calculations.

Let’s say Cubase needs to do 60,000 calculations for a 60 second track (it’s a lot more than that, of course, but let’s just say it’s 60,000 - scale it however you like, the point remains the same). The buffers are just the chunks that the audio system uses to transfer audio to/from the audio card. The total number of calculations is not dependent on the buffer size. Your 60 second track needs 60,000 calculations regardless of how you break it up.

If I owe you $60,000 for something, I can pay you in twenty dollar bills or I can pay you in 100 dollar bills. But the total will still be $60,000. How you break it up doesn’t affect the total. Likewise with buffers.

If you turn on ASIO guard and the average meter stays 4x higher over the entire length of the track then you’ve done 4x as many calculations to play back the same track (assuming the average meter is related to Cubase processing load). So your track that previously required 60,000 calculations now requires 240,000 calculations with ASIO guard on? Clearly that’s not what’s happening - that’s hugely inefficient! Plus, you can just look at the CPU meters to see that you’re not doing 4x as many calculations. Hence the discrepancy between CPU meters and the Cubase meters.

Buffering is related to the latency, not the total number of calculations required. The calcs use a bigger buffer and start later for “unarmed” tracks with ASIO guard on but you’re not doing more calculations.

And so I say again: alas, there’s nobody who really understands what those meters mean…!


Oh my. That’s not true at all. Due to the overhead of processing tasks, the buffer size has a significant impact on the amount of computational work that needs to be done to process a given number of samples. My apologies if you were hoping this was complicated and I’ve spoiled things by revealing it’s actually quite simple :slight_smile:.
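A sketch of the cost model being described here (the numbers are invented; only the shape of the relationship matters): fixed per-callback overhead multiplied by the number of callbacks, plus per-sample DSP work that is independent of buffer size.

```python
# Illustrative cost model for the overhead argument above. The constants
# are made up; the point is only the shape of the relationship.

def total_work(n_samples: int, buffer_size: int,
               per_sample: float = 1.0, per_callback: float = 100.0) -> float:
    """Total units of work: fixed overhead per callback plus per-sample DSP."""
    n_callbacks = n_samples // buffer_size
    return n_callbacks * per_callback + n_samples * per_sample

samples = 44100  # one second at 44.1 kHz
for buf in (64, 256, 1024):
    print(buf, total_work(samples, buf))
```

Doubling the buffer halves the number of callbacks, so the overhead term shrinks while the per-sample term stays constant; that is the amortization argument, and it is why total work can genuinely differ with buffer size even though the number of samples processed is identical.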