Throughout a 600+ track Atmos project, I had no problems with processing. ASIO-Guard (set to Medium) was running at about 80% of its capacity. Then, all of a sudden, it stopped coping: ASIO-Guard is at the end of its resources and there are dropouts, or the sound stutters. Yet my M2 Ultra Mac is only at 17% capacity. So how should I configure Nuendo? Why doesn’t Nuendo use all of the Mac’s capabilities?
ASIO-Guard is now set to High.
I’ve rendered many tracks and reduced the number of plugins.
Offline processing has been made permanent, as have the extensions.
What else could I do to improve things?
Mac M2 Ultra (Thunderbolt)
128 GB RAM
macOS Sonoma 14.4.1
Apollo x16
Nuendo 13
I’ll answer my own question (for the benefit of everyone, except those who already know).
Digging around in the preferences, I discovered the option that suspends VST3 plug-in processing when no audio signal is present on a track. That helps a lot.
But the question remains: why doesn’t Nuendo use all the computer’s resources? Could it be the UAD interface? Perhaps.
Usually the OS measures the total CPU resources available (I think it’s the same on macOS), whereas Nuendo’s performance meter tells you the load on its audio-processing capacity. If there are processes that have to be calculated in series, then by definition one has to end before the next one can start, so Nuendo does that work in a way that requires a certain amount of CPU resources, and that can be less than 100% of the available resources.
Thank you Matthias for your technical expertise (as so often). I’m trying to understand. There’s a kind of resource split, let’s say, between Nuendo and the system, and you’re telling me that the audio side is only one part of it, which explains why it can be maxed out without using all of the computer’s resources. That’s fine. But who decides on this balance, on each side’s share? My Mac is used here only for music, no other software except mail and a browser. Why should Nuendo’s (or audio’s) share be so low, at 17%? Perhaps the UAD interface is the cause.
That said, I’d heard that the M2 ultimately ends up weaker than the M1 because of how it uses its cores. Maybe that’s it? But something is missing from the equation. I can’t compare with Pro Tools, because I don’t have that kind of session in PT at the moment, and anyway I’ve always had the impression (and sometimes proof) that Cubendo is stronger on pure memory management. But what do Atmos studios do with a project like mine? Something must be wrong somewhere.
Let’s say you have only one channel in your project, and on that channel you have 10 plugins in series as inserts. Plugin #2 cannot process the signal before #1 has finished processing and handed its result over to #2. #3 cannot start before #2 is finished, and so on. So you end up with one long chain.
Then let’s say you have a CPU with one core. It takes this core exactly 1 second to finish computing, and while it is computing the CPU is used 100%. Now add one core. Does that help? If core A is running plugin #1, core B still has nothing to work on until core A finishes with #1. And then you can still use core A for #2, because there is nothing else to work on: everything is in series, one thing after another. So if this happens, your CPU is using one core at 100% and the other at 0%. In total your CPU usage drops to 50%, but it’s working as fast as it can.
That is obviously a really, really rough example, and reality is more complicated than that. But it shows how the work being done by software can’t always use 100% of a CPU.
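To make that concrete, here is a tiny toy sketch in Python. It has nothing to do with Nuendo’s actual audio engine; time.sleep just stands in for DSP work, and the plugin and channel counts are made up. The point is only that one insert chain stays serial no matter how many workers are available, while two independent channels really can use a second worker.

```python
import time
from concurrent.futures import ThreadPoolExecutor

PLUGIN_TIME = 0.1  # pretend each plugin needs 0.1 s of DSP time per pass


def plugin(signal):
    time.sleep(PLUGIN_TIME)  # stand-in for real processing
    return signal


def insert_chain(signal, n_plugins=10):
    # Plugin #2 can only start once #1 has handed over its result, and so on.
    for _ in range(n_plugins):
        signal = plugin(signal)
    return signal


with ThreadPoolExecutor(max_workers=2) as pool:
    # One channel: the chain is a single dependent sequence, so the second
    # worker has nothing to do. Wall-clock time is ~1.0 s, pool usage ~50%.
    start = time.perf_counter()
    pool.submit(insert_chain, 0.0).result()
    print(f"1 channel,  2 workers: {time.perf_counter() - start:.2f} s")

    # Two independent channels: now the second worker can help, so both
    # chains finish in roughly the same ~1.0 s instead of 2.0 s.
    start = time.perf_counter()
    futures = [pool.submit(insert_chain, 0.0) for _ in range(2)]
    for f in futures:
        f.result()
    print(f"2 channels, 2 workers: {time.perf_counter() - start:.2f} s")
```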
Also remember that audio is a “real-time” job that can’t glitch. So we have buffers (both hardware and ASIO), and they help. But eventually you run out of speed on a CPU if you have too much processing that has to finish in the time available. The buffer empties and you get dropouts. The performance meter takes that into account; the meter in the OS doesn’t care.
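To put rough numbers on that deadline, here is the back-of-the-envelope arithmetic. The 48 kHz rate is an assumption (typical for Atmos sessions), the 512-sample buffer is the one mentioned below in this thread, and the 12 ms chain time is an invented worst case purely for illustration.

```python
SAMPLE_RATE = 48_000   # Hz (assumed; typical for Atmos sessions)
BUFFER_SIZE = 512      # samples per hardware buffer (from this thread)

budget_ms = BUFFER_SIZE / SAMPLE_RATE * 1000
print(f"Time budget per buffer: {budget_ms:.2f} ms")  # ~10.67 ms

# Hypothetical worst case: the longest serial chain in the project needs
# 12 ms of processing per buffer. It misses the deadline even if most
# cores are idle, so the buffer runs dry and you hear a dropout.
longest_chain_ms = 12.0
if longest_chain_ms > budget_ms:
    print("Dropout risk: the chain can't finish before the buffer empties.")
else:
    print("OK: processing finishes in time.")
```

This is also why ASIO-Guard buys headroom: it pre-processes what it can ahead of that deadline, at the cost of extra latency.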
I think other people can explain this a lot better than I can, but just know that if you’re seeing different values it isn’t unusual at all.
PS: You can of course play around with the settings and maybe find a better setup: buffer sizes, ASIO-Guard, drivers, etc.
Your explanation is very clear, Matthias, and I now have a better understanding of audio processing via the DAW compared with the entire resources of the computer. Thank you for that.
As for the buffer, Atmos forces us to 512, so there’s nothing to play with there. As for ASIO-Guard, I could set it to its highest level and accept more latency (not important when mixing), but since I checked the option to suspend VST3s that don’t receive a signal, I’ve gained at least 20%. So it’s all right, and I can go back to ASIO-Guard’s medium setting (I like the middle ground in everything, even my steak ;-)).
My only question, and I’ll get the answer in the next few days as I work, is: will suspending VST3s that receive no signal interfere with my workflow? Will a suspended VST3 wake up quickly enough when a signal suddenly arrives that there’s no audible side effect?