I thought I would show the results of some experimenting I was doing while recording earlier.
After reading that the only way to utilize separate processor cores was to have inserts on a different track, I set up all of my projects and track archives with the inserts off of the audio and instrument tracks while recording. In this case an insert is treated much like a guitar pedal.
The first two screenshots show the configuration with the inserts on the instrument track. The second two screenshots show the configuration with the inserts on separate FX tracks.
Note on possible confusion: Contrary to the way most set their signal chains up, my order is bottom to top in the project window, and right to left in the mixer. (Other than final groups which are locked to the right before the CR in the mixer.) Think “Guitar pedalboard”.
Maybe this isn’t news to anyone else, but it confirms that I was doing it the…“right” way to begin with. (The separate FX tracks, not the track order.)
I used to think this had the added advantage of being able to easily control which effects were on from a MIDI footboard, but there is actually a way to do that through a Generic Remote even when the inserts are on the audio or instrument tracks. It would be nice to get the same performance either way. Though I am so used to it now, and it is visually more immediate this way.
Like you say, it’s no surprise - but good to qualify.
The one thing that you need to take into account, however, is that the audio performance meter mostly reflects the most heavily loaded core, i.e. one core overloads and you get issues. At least that’s my basic assumption.
So doing this test on a low track count simply raises that meter earlier, because a single core is receiving more load. It doesn’t necessarily mean that a fully loaded project would see such a wide difference. But it is well worth being aware of.
i.e. To graph this, let’s say you have 6 cores each with 10 units of CPU available, an instrument is worth 4 units, and each effect is worth 2:-
The test you’ve carried out is the two graphs on the left as a comparison to one another, and of course you’re seeing a larger difference in the ASIO meter: in one you’re maxing out a single core, and the meter responds as that core becomes the Achilles heel.
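To put rough numbers on that crude interpretation, here is a tiny sketch using the hypothetical figures above (6 cores, 10 units each, an instrument worth 4 units, each effect worth 2). The round-robin assignment is an assumption for illustration, not how Cubase actually schedules:

```python
# Hypothetical figures from the example above: 6 cores of 10 units each,
# an instrument costs 4 units, each insert effect costs 2 units.
CORES = 6

def max_core_load(tracks):
    """Assign each track (a list of plugin costs) to a core round-robin
    and return the busiest core's load. Crude model: everything on one
    track stays on one core."""
    loads = [0] * CORES
    for i, track in enumerate(tracks):
        loads[i % CORES] += sum(track)
    return max(loads)

# Serial: instrument + 3 effects all as inserts on one track.
serial = [[4, 2, 2, 2]]
# Parallel: the same plugins, each on its own FX/Group track.
parallel = [[4], [2], [2], [2]]

print(max_core_load(serial))    # 10 -> one core maxed, meter spikes
print(max_core_load(parallel))  # 4  -> load spread, meter stays low
```

If the meter really does follow the busiest core, this is why the low-track-count test exaggerates the difference: the serial layout hits one core’s ceiling long before the machine as a whole is out of headroom.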
Of course, this is only my crude interpretation, and “what’s best” entirely depends on how things fit.
The main issue comes when a core is used for live low-latency monitoring, of course.
Yes, the basic trick is to spread the work over different tracks/channels.
It works because Cubase (to date) only uses 1 thread per Track/Channel. (I think it’s thread rather than CPU btw.)
By splitting the work over many Tracks/Channels, a massively multi-threaded CPU ends up becoming much more load balanced.
So this principle should also work with Group Tracks. To split out a really heavy CPU limited FX chain, route the original audio track into a group or fx track, and that one into another group or fx track and that one into another one, etc.
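As a toy model of that chaining idea, assuming (as described above) one processing thread per track/channel, with made-up per-plugin costs in “units” per audio buffer. Cubase’s real scheduler is of course opaque, so this is only an illustration of the principle:

```python
# Toy model: one processing thread per track/channel (as the post
# assumes). Plugin costs are made-up "units" per audio buffer.
def per_thread_loads(channels):
    """Each channel is a list of plugin costs; each channel gets its
    own thread, so a thread's load is the sum of its channel's plugins."""
    return [sum(ch) for ch in channels]

heavy_chain = [6, 5, 4, 3]          # one CPU-hungry FX chain

# All inserts on the original audio track: one thread does everything.
single_channel = [heavy_chain]

# Chain split across Group/FX channels, one plugin per channel,
# each routed into the next in the desired order.
split_chain = [[cost] for cost in heavy_chain]

print(per_thread_loads(single_channel))  # [18] -> one hot thread
print(per_thread_loads(split_chain))     # [6, 5, 4, 3] -> balanced
```

The signal still passes through the plugins in the same order; the difference is that no single thread has to do all 18 units of work inside one buffer’s deadline.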
So as @oqion was trying to say (I think), to really thin out a track’s FX, move each FX to a separate FX or Group channel and route the signal through all of them in the desired order.
p.s. I’m not convinced that it’s right to call it ASIO load. To my knowledge, ASIO only comes into play when doing digital/analog and analog/digital conversion. Isn’t this entire discussion about CPU usage while doing DSP audio processing entirely in the digital domain? The Cubase popup calls it Audio Processing Load.
p.p.s. My hardware guitar pedalboard also has parallel routing chains (in addition to sequential ones), so sequential and parallel routing both exist in software and hardware.
When I moved from Logic back to Cubase, I read that the ASIO meter wasn’t directly attributable to CPU load, but more to the load on the ASIO engine, what it can handle within the allowed buffer, and driver efficiency.
No doubt there are semantics in discussing how much is directly attributable to CPU, but I do have a mental note tucked away that it’s not a direct CPU meter.
Actually, just searched the forums, and this was the thread I read:-
Must admit, I don’t overly think about it much anymore. I did when first switching or getting a new CPU; nowadays I just think to myself, “Ahhh, I can bounce it down if the CPU goes mad!” Keep it simple. lol
“VST Performance” is the terminology I should’ve used, clearly. I call it the ASIO meter for some reason.
But I also suspect that the meter would behave very much the same and the rest of the discussion would also be the same, even when not using an ASIO driver at all? (Haven’t bothered to test that, though).
You’d expect so. I’ve used 4 different audio interfaces on my machine but never thought of comparing, really. One was an RME too, which would’ve been interesting, as the others were a Focusrite Saffire 56 (FireWire), a UR22C and a Behringer UMC1820.
So it would’ve been quite interesting, as there’s a mix of different interfaces: RME’s renowned drivers, Steinberg’s own… and, urm… Behringer. (Which is my current workhorse, and actually doing an OK job - honest!)
No. It always uses ASIO, and has for about 25 years, I guess.
The Windows drivers get wrapped by the “Generic Low Latency” driver, and simple onboard audio systems perform very poorly, especially on many laptop computers.
DSP stands for Digital Signal Processing.
Anytime you process a waveform as bits, it’s “DSP”, so very much of Cubase is DSP.
Here is a textbook on the subject.
Unfortunately the acronym is also used for Digital Signal Processor, which is typically built using MOS integrated circuits and is in fact hardware. This is even more confusing when a hardware device performs digital signal processing in addition to containing a Digital Signal Processor: the processing of the digital signal is done in a processor chip, so it too gets referred to as a DSP. For this reason, in the software world as I know it, DSP is reserved for discussing the algorithms, and the hardware is referred to as “converters” or “hardware processors” - but what a hardware processor does (not the processor itself) is still called DSP.
This isn’t an uncommon miscommunication. Nico5’s use of DSP was correct, st10ss’s use of DSP is also correct.
I think you have a valid and important point. Your diagrams are explanatory, though the upper right quadrant probably isn’t the sort of allocation that would occur; it is likely much worse than that. But the lower right is the meat of your point, and I see what you mean.
The thing is, 3 simultaneous signal paths max out my portable rig in the “all on one track” configuration (lower right, Serial Arrangement); 2 are possible. But I can get 4 at once just fine with the “Parallel Arrangement”. I believe it’s a matter of throughput.
Thread throughput is about FPF - Fastest Process First. But you can’t process the fastest one first if the work isn’t divided up enough to yield any faster processes. It looks like Cubase treats inserts as a way to put more DSP on the same thread, and tracks as a way to put DSP on different threads, with signal management thrown in.
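The scheduling point can be illustrated generically (this is plain shortest-job-first on one thread, nothing specific to Cubase): when the work is divided into smaller items, a scheduler can run the quick ones first and lower the average completion time, but an undivided chunk gives it no such choice.

```python
# Generic shortest-job-first illustration (not Cubase-specific):
# finer-grained work items let short jobs finish early, lowering the
# average completion (turnaround) time on a single thread.
def avg_completion(durations):
    """Average completion time when jobs run back-to-back in order."""
    elapsed, total = 0, 0
    for d in durations:
        elapsed += d
        total += elapsed          # this job completes at time `elapsed`
    return total / len(durations)

coarse = [9, 1, 2]                # big undivided chunk runs first
fine = sorted(coarse)             # same total work, shortest-first

print(avg_completion(coarse))     # about 10.33: long job blocks the rest
print(avg_completion(fine))       # about 5.33: short jobs finish early
```

The total work is identical either way; only the ordering flexibility changes, which is why dividing a heavy chain into smaller schedulable pieces can help even before multiple cores enter the picture.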
The thing is, a Track has to carry all of the extra stuff that goes along with it: channel strip, sends, etc. All of that takes up some memory and some processing, at least to check whether it’s even needed. Putting each insert on its own thread should significantly improve performance. I imagine there is some lower-level object that Track is a generalization of, which doesn’t have all the overhead but already has the thread feature, and could be used to implement such a scheme.
But it loses the visual benefits when you don’t have the track selected. This could be resolved with little low-res icons in the track header that can be clicked to enable or disable a particular insert, and made available as key commands.
This likely hasn’t been done because - “people don’t use Cubase like that” -.
I don’t know. Tom Holkenborg, aka Junkie XL, has shown how he renders to audio anything he isn’t currently writing. His meter jumps up just as much. I have rather powerful machines to work with, and I don’t think that is the issue.
I am rather technical, but still willing to learn. The laptop is:
Intel® Core™ i7-9750H CPU @ 2.60 GHz, 16 GB RAM
If you think (A) I am doing something wrong and should get better performance from that machine, or (B) that it is not a powerful enough machine, then:
(A) What could I possibly be doing wrong? Please, if you can help improve the performance, I would be very grateful.
(B) Well, that’s the portable rig I have to work with, and likely rather above average to the common user I should think.
I will note that the much more powerful Desktop in the studio doesn’t really do much better while recording.
That’s just it, though: there IS signal attached. It’s all about how many of those signal chains you can run at once.