…wondering why my processor barely shows 35% usage while my Nuendo project stutters at 100% DSP load (MacBook Pro 2019, Intel i9 with 8 cores, 32 GB RAM, Radeon Pro, Catalina 10.15.7, NU 10.3, max buffer size, 64-bit processing - going down to 32-bit doesn't make any difference -, ASIO-Guard on max, Ravenna)…
Have you tried with 10.2? I'm back on 10.2 here because 10.3 is unusable in some tasks, like the waveform edit window. Very sluggish.
I’ve been trying to figure out this issue as well. Not sure if it’s only the 2019 MBP (same config) which is affected by this or all Macs on Catalina.
There are 2 kinds of sessions where I encounter this problem:
Stem Mastering Sessions: These sessions are usually around 10 tracks, with heavy over-sampled plugin chains on every track. I’m not sure if this is the single-core spike issue which most DAWs used to have. I’ve gotten around this issue by creating multiple busses and distributing the plugin load, which does seem to help. Nuendo’s meter hits 100% and I usually resort to turning off oversampling on plugins.
Composition Sessions: These sessions have tons of heavy plugins and a lot of Kontakt instances reaching around 100 tracks. As Benoit mentioned, the MBP’s CPU will be hovering around 20-25% for me, temps stabilise around 65C and RAM usage is around 30-40GB. Once the session starts to crumble, I usually have to start freezing (Nuendo’s freezing is so bad) instances or render out tracks. Buffer is usually at 2048.
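To make the oversampling point in the stem-mastering case concrete, here's a back-of-the-envelope sketch (plugin counts and costs are made-up numbers, not measurements): a plugin running at 4x oversampling processes roughly 4x the samples, and a serial chain has to run on one core, so its costs stack up on that single core.

```python
# Rough model of why an oversampled serial chain overloads one core.
# All numbers are illustrative assumptions, not measurements.

def chain_core_load(num_plugins, base_cost_pct, oversample_factor):
    """Approximate per-core load of a serial plugin chain.

    A serial chain must be processed on one core, so the plugin costs
    add up there; oversampling multiplies the samples processed.
    """
    return num_plugins * base_cost_pct * oversample_factor

# 6 plugins at ~5% each: fine at 1x, but 4x oversampling pushes the
# chain past 100% of one core - even with 7 other cores sitting idle.
print(chain_core_load(6, 5.0, 1))  # 30.0
print(chain_core_load(6, 5.0, 4))  # 120.0
```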
But an even stranger issue I have is that the CPU sometimes drops below base clock (2.4GHz), all the way down to 1.5GHz, and the session barely plays. I'm not sure if it's a certain plugin in my chain which causes this, but restarting sometimes helps, and it happens way too often these days. @benoit - check your clock frequency when the load hits 100%.
My post-production sessions don’t use too many plugins, so they usually run fine.
I had a lot of these problems on 10.2 as well. I just keep trying to work around them. I could test further on my Mac Mini, but I prefer working on the laptop.
thanks for the hint Dolfo.
did the test: same thing - NU project blocked at 100% dsp-load; my mac's processor shows a total load of around 35%, RAM at two-thirds (one-third for this specific test project). multithreading activated in both cases.
wondering about the discrepancy between what the activity-monitor shows and what happens within NU…
…this is the way it looks here:
I could be wrong, but I think the meter in Nuendo measures things differently from the CPU meters in the OS. So it’s not unusual to see a discrepancy between the two.
…ok… but as much as this? that’s a pretty massive difference…
That is correct. CPU load versus usable audio engine power (which is about keeping the buffers filled) at buffer size XXX is comparing apples and oranges. It’s also extremely system specific.
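The "keeping the buffers filled" part has a hard per-buffer deadline that an average CPU meter never shows. A minimal sketch of that arithmetic, with example sample rates and buffer sizes:

```python
def buffer_deadline_ms(buffer_size, sample_rate):
    """Time the audio engine has to produce one buffer before a dropout."""
    return buffer_size / sample_rate * 1000.0

# At 64 samples / 48 kHz the engine has about 1.33 ms per callback.
# If the worst-case processing on ONE core occasionally misses that
# deadline, you glitch at "100% DSP load" even though overall CPU
# usage averaged over a second stays low.
print(round(buffer_deadline_ms(64, 48000), 2))    # 1.33
print(round(buffer_deadline_ms(2048, 48000), 2))  # 42.67
```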
Instead of going into tech-speak, here’s a recent real world example. My RME MADI card failed. The earlier versions don’t age well. So I bought a new RME MADI FX. Removed my old card and drivers then installed the new RME drivers (different for the MADI FX vs just MADI). Audio performance sucked at lower latency and the load was high overall. Bummed me out.
I removed my UAD-2 drivers (3x PCIe Quads) and the RME drivers. Swapped PCIe slots between one of the UADs and the RME card and reinstalled drivers. Night and day. I can mix very large sessions including many hardcore virtual instruments with nothing frozen at 64 sample buffer, no worries. Totally stable with excellent low latency performance.
What happened? A different connectivity path from the cards to the Chipset/CPU with all else being the same. I run Windows. If you understand motherboard architecture and switch the View in Device Manager to show “Devices by connection” then deep dive into opening the trees, you will see the different paths (with/without PCI Bridges, Root based PCI Express bus, etc). Due to the inexactitude of the PCI and PCIe standards, different cards have different preferences. That may sound strange, but it’s true. I knew which 2 PCIe slots to try switching.
There are so many variables and my system is large with many cards in it, including an expansion chassis, so it’s more prone to such issues. But it illustrates a point about all systems. Keep in mind that in nearly every way, electronically, a Mac is a PC. It just has a “dongle” on the motherboard that locks out any OS except Apple’s. Otherwise, how could a Hackintosh work? It’s easy to mistakenly blame software for electronic issues. I’ve done it many times myself.
Please take no offense. I in no way claim to be a wizard. But I have built many systems and stumbled onto an understanding of motherboard architecture. Sometimes it’s like a fun little puzzle to drive you mad!
OK, a little deeper…
Attached is a typical motherboard block diagram. Take a minute to check it out. A couple of notable items:
On this particular Intel-based motherboard (and ALL Mac motherboards are Intel-based), there are 2 different ways a PCIe card can connect to the CPU.
A. One way is directly to the CPU via a PCI Express Root port. These are at the top left of the diagram. However, note the “Switch”. That is to facilitate needless tomfoolery (at least for us) to allow multiple video cards to integrate together for mega-gaming machines. Yet another variable.
B. The other way for the PCIe card to connect is a less direct route through, in this case, the Intel X99 chipset which usually involves one or more PCI to PCI bridges along the way. These are at the middle left of the diagram.
Some PCIe cards are happier in one spot or the other, usually depending on whether you sacrificed a chicken or a goat. Should that be the case? No. Is it? Yes. Note that pretty much all audio devices use a single PCIe X1 lane, so they can go in any old PCIe slot you find laying around. So many choices!
Also note the variety of classes and paths to get a USB signal in/out of the computer.
A. Notice the mix of USB 2 vs 3.
B. Notice that besides the mix of fully integrated USB 2 & 3 ports directly on the Intel X99 chipset, there is also a 3rd party USB hub on the motherboard with four USB 3 ports that may or may not behave exactly the same as the Intel X99 USB ports. This happens on Mac motherboards as well, both desktop and Macbooks.
The sheer number of variables involved (these are only a few of them) is what leads to people believing in technical voodoo. And the software developer often becomes the voodoo doll as a result. And then, sooner or later, the pins start getting shoved into the poor dolls.
I’m sure this post will get this thread banished to the Geek Forum. I’m just trying to shed some light on why the results vary so much between users and their systems. This is why dedicated builders settle on certain components they find get along better with each other. And that’s why Mac users enjoy the good life. Because “It Just Works!”
Oh, wait. That is soooo 2012, isn’t it?
…thanks for those great points Getalife2. reminds me of my roughly 15 years with windows-machines - and why i also left those behind me!
I have to add… this is not a PC-based problem…
it’s an Intel-based problem (AMD has its own problems as well)
many chipsets perform strangely if not connected in the right way
some PCIe slots are different:
they look the same but don’t work the same, and sometimes an SSD in an M.2 slot disables another PCIe slot and vice versa
and the USB settings and chipsets are another mystery
and this affects the Mac as well
Intel has had bugs in its USB implementations over the last 10 years
these were never fixed for some systems;
many mainboard vendors fixed them over time with new BIOS versions or new drivers
but on some Macs they are still problematic - and unfortunately not something the average user can fix
since Win10 2004 (I think) it has been possible to get chipset drivers through the Windows Update service, which solved some problems
but I don’t know if the same has happened for the Macs
do you use a USB-C to Ethernet adapter for Ravenna?
Yet here you are asking about non-optimized performance on a Mac…
“do you use a USB-C to Ethernet adapter for Ravenna?”
…yep, no choice… it’s actually working great, i get much better performance with ravenna than with any other usb-based interface i’ve ever had.
and: i don’t notice any difference compared to the system i had before (straight ethernet).
"Yet here you are asking about non-optimized performance on a Mac… "
…used to be a hardcore pyramix user, working with custom-built and so-called optimized windows machines provided by my good friends @ merging. and, well… at a certain point, i just gave up. btw, i badly miss pyramix’ incredible editor, but moving to the mac exclusively was a huge relief.
and, yes, i try to optimize my macs… whatever that means.
A bit late to the discussion but a couple of points struck me. I apologize if I’m stating things you already know.
…wondering why my processor barely shows 35% of use while my nuendo-project stutters @ 100% dsp-load
Are you looking at the overall processor load of 35% or at the individual core usage? I ask because in a Nuendo session here on a quad-core i7 Mac with hyper-threading, the odd cores 1,3,5,7 are at approximately 85% and the even (virtual) cores 2,4,6,8 are at 10%, which shows up as an overall processor load of approximately 47%. For simplicity, let’s assume to a first approximation that the remaining 15% on the heavily loaded cores gets used first and the barely used cores are not additionally loaded at all. An assumption, but not too far off.
There’s very little headroom left, only 7.5% as the OS calculates it:
[(15% × 4 useful cores) / 8 cores = 7.5%]
But in reality it’s less, because the distribution among the four highly loaded cores is never completely symmetrical and one will overload first. In simple terms - Nuendo’s usage meter and overload point will always be significantly lower than the total CPU load suggests; you need to look at how each core is loaded. When one overloads, you get the issues you’re having.
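Those numbers can be checked with a couple of lines, plugging in the per-core loads from the example above:

```python
# Per-core loads from the example: odd (real) cores busy, even (HT) cores idle.
core_loads = [85, 10, 85, 10, 85, 10, 85, 10]

# What Activity Monitor reports: the average over all 8 logical cores.
overall = sum(core_loads) / len(core_loads)

# Headroom as the OS sees it: the spare 15% on each of the 4 useful
# cores, averaged over all 8 logical cores.
headroom = sum(100 - c for c in core_loads if c >= 85) / len(core_loads)

print(overall, headroom)  # 47.5 7.5
```
So the box looks barely half-loaded while the audio engine is nearly out of usable headroom.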
Hope this helps,
This is what I figured out as well. The 100% load we are seeing is due to a single core overloading. In my post above, I mentioned the stem mastering (10-track) example, where with a very low track count I can overload Nuendo just by having a lot of oversampled plugins on a single track.
Logic had a similar single-core overload issue, and I believe there is a video out there that explains how DAWs can’t distribute a single track’s load over multiple cores.
The way I distribute the load is by having a serial chain of tracks: Master Buss 1 > Master Buss 2 > Stereo Master
I then split the plugins between these tracks to pull down the CPU usage.
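A sketch of why this split helps, assuming (my numbers, purely illustrative) that each buss can be scheduled on its own core - how Nuendo actually pipelines serial busses is engine-specific:

```python
# Illustrative model of splitting one serial plugin chain across busses.
# Costs in "% of one core" are assumptions for the example.

plugin_costs = [20, 20, 20, 20, 20, 20]  # six heavy oversampled plugins

# All on one track: the whole chain lands on a single core.
single_track_peak = sum(plugin_costs)  # 120 -> that core overloads

# Split across Master Buss 1 > Master Buss 2 > Stereo Master:
# each buss is its own processing node, so each chain segment can
# (in this model) run on a different core.
busses = [plugin_costs[0:2], plugin_costs[2:4], plugin_costs[4:6]]
split_peak = max(sum(b) for b in busses)  # 40 per core

print(single_track_peak, split_peak)  # 120 40
```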
Smart solution to your issue!