Reverb is inherently a single-thread bottleneck. It has to wait for everything feeding into it to arrive, process it, forward it to the next stage, and so on. Once the audio gets there, it has to slam through the processing and hopefully finish without big buffers building up on either end. Sometimes it's definitely better, in terms of thread distribution, to use more instances of the plugin rather than sharing one instance on a bus. That can also change the sound of a mix significantly.
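To picture why the shared bus serializes, here's a toy Python sketch. The "reverb" is just a gain stand-in and the scheduling is nothing like a real DAW's engine; it only illustrates the dependency shape: a shared reverb is a join point, while per-track instances are independent work.

```python
from concurrent.futures import ThreadPoolExecutor

def reverb(buf):
    # Stand-in for a heavy reverb; a real one keeps seconds of state.
    return [s * 0.5 for s in buf]

tracks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

# Shared bus: every track must arrive FIRST, then one reverb runs --
# a single-thread join point no matter how many cores you have.
bus = [sum(col) for col in zip(*tracks)]
shared = reverb(bus)

# Per-track instances: three independent reverbs can run on three
# threads; only the cheap final sum is serial.
with ThreadPoolExecutor() as pool:
    wets = list(pool.map(reverb, tracks))
per_track = [sum(col) for col in zip(*wets)]
```

With this linear toy the two paths sum to the same result; real reverb instances with their own modulation and settings won't, which is part of why the mix can sound different.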
Oh well… at least you're not having to use a $4k external unit that drinks $20 worth of electricity a month, only works in real time, and only has half a dozen preset options. Set a huge buffer and mix it down if you're out of headroom, then shrink the buffer again if the project is choking.
What else can ya do, short of rerouting half the project so it uses more cores, or offloading some stuff to a secondary, synced-up workstation? (In 2021, sometimes it's better to have two machines with 4 cores each than a single one with 24 cores.)
Know of a way to divide that among different threads and put it all back together again? Lotta folks would like to know the secret.
The app isn't using more cores because it doesn't need them, or because you're routing some things in a way that makes it virtually impossible to multi-thread the task. If you follow the signal chain from beginning to end and it's all tapering down to a single output… there are a ton of things going on in that chain that just cannot be done discretely and stay in sync. At least not yet, with current CPU, memory, storage, and audio-device designs.
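One way to see the "tapering" point is to treat the routing as a graph. This is a hypothetical toy model, not anything Cubase exposes: nodes at the same depth could in principle run on separate cores, but the final output is always a stage of one.

```python
# Toy routing graph: edges point downstream toward the master output.
graph = {
    "track1": ["bus_drums"], "track2": ["bus_drums"],
    "track3": ["bus_synth"], "track4": ["bus_synth"],
    "bus_drums": ["master"], "bus_synth": ["master"],
    "master": [],
}

def levels(g):
    # Depth of each node = earliest stage it can run: a node can't start
    # until everything feeding it has finished.
    depth = {}
    def d(node):
        if node not in depth:
            preds = [n for n in g if node in g[n]]
            depth[node] = 0 if not preds else 1 + max(d(p) for p in preds)
        return depth[node]
    for node in g:
        d(node)
    return depth

# Tracks land at level 0 (4-way parallel), group buses at level 1
# (2-way), and the master bus sits alone at level 2 -- serial.
print(levels(graph))
```

However wide the top of the graph is, everything funnels to that last single-node stage.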
The more stuff you throw at a single bus that feeds the same end-point at the very end of the signal chain, the closer it'll get to maxing out a single core.
At least one core will need to be pretty taxed, since it's the one cobbling back together all the work that was threaded off, into your final destination for the stream.
Can Cubase do more discrete processing with more things? Maybe, but time-based effects like reverbs of more than a few milliseconds ain't one of them. A 3-second reverb, while also trying to achieve near-zero latency using device buffers of well under half a second, at crazy-high 21st-century sampling rates with 64-bit precision? Good luck multi-threading that processing demand on current system architectures! You'll need to redo ASIO, the buses that motherboards provide for attaching the hardware (USB/Thunderbolt/etc. ain't gonna allow it), the audio hardware, how the processors work, and more.
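The arithmetic makes the squeeze obvious. Assuming an illustrative 96 kHz rate and a 64-sample buffer (the post doesn't name exact figures, so these are stand-ins):

```python
sample_rate = 96_000    # illustrative "crazy high" modern rate
buffer_size = 64        # small device buffer for near-zero latency
tail_seconds = 3.0      # reverb tail length from the post

buffer_ms = buffer_size / sample_rate * 1000    # time budget per callback
tail_samples = int(tail_seconds * sample_rate)  # state the reverb must keep
state_bytes = tail_samples * 8                  # 64-bit (8-byte) samples, mono

print(f"{buffer_ms:.3f} ms per buffer")  # ~0.667 ms to do ALL the processing
print(f"{tail_samples:,} samples of reverb state "
      f"({state_bytes / 1e6:.1f} MB mono)")
```

So every 0.667 ms the engine must finish an entire block's worth of work while dragging around hundreds of thousands of samples of reverb state, and splitting that across cores adds synchronization cost to an already brutal deadline.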
Processor design, and the way it's configured, also has something to do with it for some things. LOADS of things are handled at the firmware, OS, and driver levels that higher-level apps like Cubase have ZERO control over.