WaveLab 11.1 does not use all resources available on Apple Silicon

I have a Mac mini M1 with 16 GB of RAM running the latest macOS 12.4. I just installed the latest version of WaveLab Pro 11.1, which “supports” Apple Silicon. Additionally, I use some plug-ins from Waves (Restoration Bundle) and iZotope (RX9), all in their latest and greatest versions, all as VST3. So far, so good - everything works.

However, when I try to use this combo to perform some tasks, my issues become obvious: the maximum CPU usage I get is 140-180%. Even a background render utilizes only 138% of the CPU. In comparison, an optimized app (let's take a photo editor like DxO PhotoLab, or the iZotope audio editor) utilizes this machine at up to 750%. So roughly only one fourth of the machine's power is being used.

This leads to audio dropouts if I use just one (1) of iZotope's plug-ins, not to speak of a chain of plug-ins. In essence, this approach is worthless.

So please be aware that, as of now, WaveLab Pro 11.1 appears to be capable of handling only a “light” plug-in workload!

Real-time audio processing requires sequential processing (one process after the other).
The M1 is not a magic solution, yet some M1 users have good results, see:
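To illustrate what “sequential” means here - this is only a generic sketch of how a real-time plugin chain is typically processed, not WaveLab's actual code - each plugin in the chain works on the output of the previous one inside the same audio callback, so the whole chain's cost lands on one core no matter how many cores the machine has:

```cpp
#include <vector>

// Generic sketch of a real-time audio callback with a plugin chain.
// Each plugin needs the output of the previous one, so the chain runs
// serially on the audio thread - extra cores cannot shorten it.
struct Plugin {
    virtual void process(float* buffer, int numSamples) = 0;
    virtual ~Plugin() = default;
};

void audioCallback(float* buffer, int numSamples,
                   const std::vector<Plugin*>& chain)
{
    for (Plugin* p : chain)          // one after the other, same thread
        p->process(buffer, numSamples);

    // If this loop takes longer than the buffer duration, the device
    // runs out of audio and you hear a dropout, even if other cores
    // are completely idle.
}
```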

I understand that audio processing is a sequential task in general. However, WaveLab does appear to spread the workload “somewhat” across the four performance cores my M1 has, even today. During the audio clipping (when the audio processor bar is at full extent), there is still approximately 20% unused capacity on the most-used core of the M1, and the other cores have up to 80% of their capacity unused. I would understand your comment if WaveLab maxed out just one performance core. However, this is not the case.

Additionally, the same performance limitation applies when rendering a file. This is in contrast to the way iZotope's audio editor application performs: there, all cores are maxed out from start to finish. And the task both applications perform is essentially the same: offline rendering of one file using the very same plug-ins.

The comment you posted from the fellow user cannot be confirmed by my experience. So there might be something wrong/different between the two software setups, since all M1 processors use the same cores; some models just have more performance cores. And, as I have said, the performance cores in my machine are only used to about 25%.

Or is there another factor that limits the audio processing?
I store all files and applications on the internal SSD, and there is still plenty of space left on it. I cannot see what else would hamper the performance.


This screenshot shows the CPU usage while playing the file. It was clipping. None of the cores was utilized to the maximum.


This screenshot shows the CPU usage while rendering the file.


This screenshot shows the CPU load while rendering the same file in RX9 Audio Editor.

To make valid comparisons, use the same plugins (and not ones from iZotope) on the same audio file.

I’m sorry, but I don’t understand what you are saying. The three screenshots above are from the very same file, processed in three different ways:
a) played back in Wavelab
b) rendered in Wavelab
c) rendered in iZotope

The plug-ins used are always the same two - RX9 De-click and RX9 De-crackle (in WaveLab as well as in the RX9 Audio Editor, I have used just these two effects). These are the plug-ins I would like to use on the file, because it is a vinyl recording with some crackle that I want to remove. Since WaveLab itself doesn't provide these crackle-removal features, I don't see any other way than using them?!?

If you wish, I can use a bunch of “random” Steinberg-provided plug-ins and show CPU usage with them. This way, we can make sure the issue is not caused by the plug-ins (or their VST3 implementation).

The iZotope app does not use the iZotope VST3 plugins but its own versions of the processes.
This is why you can’t compare “rendered in WaveLab” and “rendered in iZotope” using iZotope plugins.

Hence, if you want to compare WaveLab and iZotope app rendering times, use plugins other than iZotope’s.

And if you want to compare the iZotope plugins alone, compare WaveLab with another DAW, not with the iZotope app. And compare rendering times.

Thank you for this clarification!

However, the point I want to make is not to compare WaveLab to iZotope, but rather the poor usage of system resources by WaveLab itself. My problem is that I cannot work in WaveLab with this many plug-ins (2 iZotope or 5 Steinberg plug-ins, see below). The clipping that appears when WaveLab reaches (whatever it considers) 100% of system resources makes my work impossible. If WaveLab used all the resources available (especially for playback), I would be able to do my work.

Please find here screenshots of the very same file in WaveLab, with only the following Steinberg-provided plug-ins loaded and used (all iZotope plug-ins were removed from the effects chain):

  1. Leveler
  2. GEQ-10
  3. RestoreRig
  4. DeReverb
  5. Octaver

I might not have made my main concern clear: I don’t care much about rendering times, but rather about the dropped audio / clipping that appears when I play back the song. This makes listening to the song impossible.

My comparison to iZotope is just meant to illustrate that this workload can be handled by another app on my machine, since it uses all the resources available. And I might add that its playback runs at around 220-250% CPU load. Since WaveLab doesn’t use more than 180% CPU load, roughly 40-70% is the CPU headroom that WaveLab lacks to play back the file without clipping.

You might have 100 cores; this does not change the fact that a single core is used when processing plugins in a sequence (no parallel processing is possible). And the CPU is consumed by the plugins, and very little by WaveLab.
The green bar displayed in WaveLab reflects the utilization of the core with the bottleneck (it is not an overview of all the cores).

In a plugin chain, it is common to see that a particular plugin is using most resources. Check which one.

If you have dropouts during playback, consider increasing the ASIO guard time to 100 milliseconds.
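To put rough numbers on this (a back-of-the-envelope sketch only; the 2048-sample buffer matches the setting mentioned later in this thread, while the 44.1 kHz sample rate and the per-plugin times are made-up assumptions):

```cpp
#include <cstdio>

int main()
{
    const double sampleRate = 44100.0;            // assumed sample rate
    const double bufferSize = 2048.0;             // samples per buffer
    const double budgetMs   = bufferSize / sampleRate * 1000.0; // ~46 ms

    // Hypothetical per-plugin processing times for one buffer.
    const double pluginMs[] = { 20.0, 18.0, 12.0 };
    double chainMs = 0.0;
    for (double t : pluginMs)
        chainMs += t;                             // serial chain: times add up

    // One plausible reading of the green bar: chainMs / budgetMs for the
    // bottleneck chain, independent of how many cores are idle elsewhere.
    std::printf("budget %.1f ms, chain %.1f ms -> %s\n",
                budgetMs, chainMs,
                chainMs > budgetMs ? "dropout" : "ok");
}
```

Under these assumed numbers the chain needs 50 ms per buffer but only about 46 ms are available, so playback drops out even though the machine as a whole is far from fully loaded.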

Thank you, I will try the tip of increasing the ASIO guard time!

I understand the concept of all plug-ins being processed on a single core. However, I still don’t understand why the green bar in WaveLab shows nearly 100% utilization, although even the most-used core still has approx. 30-40% capacity left. This can be seen in the last screenshot.

The green bar of WaveLab is exact for the most-used core. The graphs you display are general-purpose and don’t measure the DSP impact on the most-used core.

Ok, so thank you for your kind and very timely answers! They are very much appreciated!

I have tried out the ASIO guard time, but there was no audible effect. Even at the highest setting (200 ms), the same clipping appeared.

As you have pointed out, the issue will remain even on the most powerful M1 Ultra machines, since they simply have 16 of the same cores of which my M1 has 4. There would just be many more redundant performance cores that go unused.

This discussion has made it clear to me that WaveLab Pro is not suited to my needs. I will get in touch with your colleagues in sales and see whether I can get a refund, since I cannot use it for my purposes. I never imagined that a tool as widely appreciated as WaveLab wouldn’t allow me to de-click and de-crackle vinyl recordings in real time.

Again, thank you for your prompt and competent answers, Philippe!

Note that this is not a limitation of WaveLab, but of the resources. If you use another DAW with the same plugins, you will get the same bottleneck.

This is correct, of course!

Maybe I was indeed expecting too much from Apple Silicon-native applications. Last night, I played around with other DAWs (which was never my intention, since WaveLab’s functionality is pretty much perfect for my needs) and found all sorts of interesting things: there was one DAW that behaved even worse than WaveLab in terms of making use of the M1’s CPU resources, and thus clipped even more than WaveLab. However, there are other DAWs that take a different approach/use other techniques and thus don’t have the clipping issue. This is, of course, linked to other undesired implications (such as being able to play back with only one plug-in at a time).

However, it might be possible for you and your colleagues to further investigate ways to make better use of the limited resources of Apple Silicon processors (e.g. by pinning the plug-in workload to a core that is blocked from any other workload?) - just my hope!

Additionally, I think it is fair to comment that it is strange that WaveLab itself ships with two such resource-intensive plug-ins in the “Restoration” tab, RestoreRig and DeReverb. Each of them uses about 3/4 of the CPU resources (i.e. fills the green bar to 3/4). When both are activated together (or one of them is combined with any other plug-ins that need more than 1/4 of the green bar), the result is clipping sound.

Hi!

Do you render in real time or not?
What kind of audio hardware and buffer settings?

regards S-EH

This may seem curious, but sometimes certain DSP algorithms are faster in Rosetta mode, where some Intel-only algorithms can be used.
You can try it to see if that makes a difference for certain hungry plugins.

Hi S-EH!

I don’t render at the same time as I play back, no. I have two distinct use cases:
a) play back a file (in order to listen to the effect of the plug-in’s settings) - without any other task running in WaveLab and very few other applications
b) render a file (in order to create the final result in a new file)

I don’t have an issue with the fact that rendering a file takes x minutes more or less, since this is a one-time thing that can happen in the background while I do something else. However, I do have an issue with the fact that I cannot play back a file with said plug-in combinations in order to hear the effect of a certain chain of plug-ins.

My audio hardware is an Apogee Symphony with the latest “Control 2” software and related firmware. My buffer size and ASIO guard time settings have no real bearing on this issue; they just make the clipping appear a little bit less often (with a 2048-sample buffer and 200 ms ASIO guard) but cannot prevent it from happening.

Regards,
Andreas

I have just tried this, and in Rosetta 2 mode the “green bar” only fills up to 80-85% when both Steinberg plug-ins are activated (and no others than just these two), instead of 100% plus clipping in Apple Silicon mode.

Additionally, I have tried the two iZotope plug-ins as well (again, without any other plug-ins active). These are now also “ok” to use: they consume about 90-95% of the green bar, but never 100% plus clipping, which was their standard behavior in Apple Silicon mode.

In summary, I would say that these plug-in combinations definitely perform worse in Apple Silicon mode than in Intel mode. It is not a huge margin, but in my configuration it makes the difference between usable and unusable. And I might add that this experience is in contrast to any other application I have seen move from Rosetta 2 to Apple Silicon-native mode. Other applications get a huge performance boost from going Apple Silicon-native - like the fellow user reported. This is all very strange to me…