Inserts on FX tracks rather than on Instrument tracks provide much greater performance

I thought I would show the results of some experimenting I was doing while recording earlier.
After reading that the only way to utilize separate processor cores was to have inserts on a different track, I set up all of my projects and track archives with the inserts kept off of the audio and instrument tracks while recording. In this case an insert is treated much like a guitar pedal.

The first two screenshots show the configuration with the inserts on the instrument track. The second two screenshots show the configuration with the inserts on separate FX tracks.

Note on possible confusion: contrary to the way most people set their signal chains up, my order is bottom to top in the project window, and right to left in the mixer (other than the final groups, which are locked to the right before the CR in the mixer). Think "guitar pedalboard".

Screenshot 2022-02-05 013743 (inserts on the instrument track)
Screenshot 2022-02-05 013813 (inserts on the instrument track)

Screenshot 2022-02-05 013651 (inserts on separate FX tracks)
Screenshot 2022-02-05 013550 (inserts on separate FX tracks)

Maybe this isn’t news to anyone else, but it confirms that I was doing it the…“right” way to begin with. (The separate FX tracks, not the track order.)

I used to think this had the added advantage of letting me easily control which effects were on from a MIDI footboard, but there is actually a way to do that through a Generic Remote even when the inserts are on the audio or instrument tracks themselves. It would be nice to get the same performance either way, though I am so used to it now, and it is visually more immediate this way.

Your screenshots say nothing and the text is very confused…
What are you trying to say in the long run?

Like you say, it’s no surprise - but good to qualify.

The one thing that you need to take into account, however, is that the audio performance meter mostly tracks the single most loaded core, i.e. one core overloads and you get issues. At least that's my basic assumption.

So doing this test with a low track count simply raises that meter earlier, because a single core is receiving more load; it doesn't necessarily mean that a fully loaded project would see such a wide difference. But it is well worth being aware of.

i.e. To graph this, let's say you have 6 cores, each with 10 units of CPU available, an instrument is worth 4 units, and each effect is worth 2:

The test you've carried out compares the two graphs on the left with one another, and of course you're seeing a larger difference in the ASIO meter because in one of them you're maxing out a single core, and the meter responds as that core becomes the Achilles heel.
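To put some numbers on my made-up units (nothing measured, just the toy model above), here's a quick sketch in Python:

```python
# Toy model: 6 cores with 10 "units" of CPU each,
# an instrument costs 4 units and each insert effect costs 2.
CORES = 6
UNITS_PER_CORE = 10
INSTRUMENT = 4
EFFECT = 2
NUM_EFFECTS = 4

# Everything on one track: the instrument plus all effects share a single core.
serial_load = INSTRUMENT + NUM_EFFECTS * EFFECT          # 12 units on one core

# Instrument and each effect on its own track/channel, spread over the cores.
parallel_loads = [INSTRUMENT] + [EFFECT] * NUM_EFFECTS   # [4, 2, 2, 2, 2]
assert len(parallel_loads) <= CORES                      # enough cores for one each

print("one track, busiest core:", serial_load, "of", UNITS_PER_CORE)          # 12 of 10 -> overload
print("split up,  busiest core:", max(parallel_loads), "of", UNITS_PER_CORE)  # 4 of 10 -> fine
```

Total work is identical either way (12 units); only the busiest core changes, and that's what the meter seems to react to.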

Of course, this is only my crude interpretation, and “what’s best” entirely depends on how things fit. :slight_smile:

The main issue comes when a core is used for live low-latency monitoring, of course.


Yes, the basic trick is to spread the work over different tracks/channels.

It works because Cubase (to date) only uses 1 thread per Track/Channel. (I think it’s thread rather than CPU btw.)

By splitting the work over many Tracks/Channels, a massively multi-threaded CPU ends up becoming much more load balanced.

So this principle should also work with Group tracks. To split out a really heavy, CPU-limited FX chain, route the original audio track into a Group or FX track, route that one into another Group or FX track, and so on.

So as @oqion was trying to say (I think), to really thin out a track's FX, move each FX to a separate FX or Group channel and route the signal through all of them in the desired order.
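To illustrate the load-balancing idea, here's a rough sketch with hypothetical per-buffer costs and a deliberately naive "give each channel to the least busy thread" rule (my own toy model, not Cubase's actual scheduler):

```python
import heapq

def busiest_thread(channel_loads, num_threads):
    """Greedily assign whole channels to the least-loaded thread and
    return the load of the busiest thread (the one the meter follows)."""
    threads = [0.0] * num_threads
    heapq.heapify(threads)
    for load in sorted(channel_loads, reverse=True):
        lightest = heapq.heappop(threads)
        heapq.heappush(threads, lightest + load)
    return max(threads)

# Hypothetical per-buffer costs (ms) for one source plus five heavy inserts.
plugins = [1.0, 2.5, 2.0, 1.5, 3.0, 2.0]

one_channel = [sum(plugins)]   # everything in one insert chain -> one thread
split_up    = plugins          # each insert on its own FX/Group channel

print("all on one channel :", busiest_thread(one_channel, num_threads=6))  # 12.0 ms
print("split over channels:", busiest_thread(split_up, num_threads=6))     # 3.0 ms
```

The total amount of work is the same in both cases; splitting it just lets the engine spread it out, so no single thread becomes the bottleneck.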


p.s. I'm not convinced that it's right to call it ASIO load. To my knowledge, ASIO only comes into play when doing digital/analog and analog/digital conversion. Isn't this entire discussion all around CPU usage while doing DSP Audio Processing entirely in the digital domain? The Cubase popup calls it Audio Processing Load.


p.p.s. My hardware guitar pedalboard also has parallel routing chains (in addition to sequential ones), so sequential and parallel routing both exist in software and hardware :slight_smile:

When I moved from Logic back to Cubase, I read that the ASIO meter wasn't directly attributed to CPU load, but more to the load on the ASIO engine and what it can handle within the allowed buffer and the driver's efficiency.

No doubt there are semantics in discussing how much of it is directly attributable to the CPU, but I do have a mental note tucked away that it's not a direct CPU meter.

Actually, just searched the forums, and this was the thread I read:-

Must admit, I don't think about it much anymore - I did when first switching or getting a new CPU. Nowadays I just think to myself, "Ahhh, I can bounce it down if the CPU goes mad!" Keep it simple. lol :slight_smile:

“VST Performance” is the terminology I should’ve used, clearly. I call it ASIO meter for some reason :stuck_out_tongue:


Agreed, and that's the most important point.

But I also suspect that the meter would behave very much the same and the rest of the discussion would also be the same, even when not using an ASIO driver at all? (Haven’t bothered to test that, though).


When no ASIO driver is used, Cubase will not play anything.


True for Cubase 11 on Windows, but if my memory serves me right, older versions could use Windows sound drivers (with horrible latency).

You'd expect that. I've used 4 different audio interfaces on my machine but never thought of comparing them, really. One was an RME too, which would've been interesting, as the others were a Focusrite Saffire 56 (FireWire), a UR22C and a Behringer UMC1820.

So it would've been quite interesting, as there's a mix of different interfaces: RME's renowned drivers, Steinberg's own… and urm… Behringer. (Which is my current workhorse, and actually doing an OK job - honest! :slight_smile: )


haha - glad you’re having success with the big bad B. :slight_smile:

I've gone from Steinberg's MR 816csx using FireWire to RME drivers (using the relatively inexpensive Digiface USB ADAT/USB2 converter box).


No, it only ever uses ASIO - has done for 25 years, I guess.
The Windows drivers get wrapped by the "Generic Low Latency" driver, and simple onboard audio systems performed very poorly, especially on many laptop computers.


ah interesting - didn’t know it was still ASIO under the hood.

However, I believe my main point was still valid: ASIO is input/output, not all of the DSP audio processing in the DAW?

There is no DSP in Cubase…
A DSP is a hardware device.

guess you’re right - I should just call it “Audio Processing” like Cubase :slight_smile:

DSP stands for Digital Signal Processing.
Any time you process a waveform as bits, it's "DSP", so very much of Cubase is DSP.

Here is a textbook on the subject.

Unfortunately, the acronym is also used for Digital Signal Processor, which is typically built using MOS integrated circuits and is in fact hardware. This is even more confusing when a hardware device performs digital signal processing in addition to containing a Digital Signal Processor: that processing of the digital signal is done in a processor chip, so it too is referred to as a DSP. For this reason, in the software world as I know it, DSP is reserved for discussing the algorithms, and the devices are referred to as "converters" or "hardware processors", but what a hardware processor does (NOT the processor itself) is still called DSP.

This isn’t an uncommon miscommunication. Nico5’s use of DSP was correct, st10ss’s use of DSP is also correct.
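To make it concrete: any plain piece of code that transforms sample values is DSP in the first sense, no special chip required. A trivial, purely illustrative Python example:

```python
import math

def apply_gain(samples, gain_db):
    """Digital signal processing in plain software: scale every sample."""
    gain = 10 ** (gain_db / 20.0)
    return [s * gain for s in samples]

def one_pole_lowpass(samples, coeff=0.2):
    """A minimal one-pole low-pass filter: y[n] = y[n-1] + coeff * (x[n] - y[n-1])."""
    out, y = [], 0.0
    for x in samples:
        y += coeff * (x - y)
        out.append(y)
    return out

# A short test signal: one cycle of a sine wave at unit amplitude.
signal = [math.sin(2 * math.pi * n / 64) for n in range(64)]
quieter = apply_gain(signal, -6.0)      # the kind of thing a fader does
smoothed = one_pole_lowpass(quieter)    # the kind of thing a filter plugin does
```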


Skijumtoes,

I think you have a valid and important point. Your diagrams are explanatory, though the upper-right quadrant probably isn't the sort of allocation that would occur; it is likely much worse than that. But the lower right is the meat of your point, and I see what you mean.

The thing is, 3 simultaneous signal paths max out my portable rig in the "all on one track" configuration (the lower-right Serial Arrangement); 2 is possible. But I can get 4 at once just fine with the "Parallel Arrangement". I believe it's a matter of throughput.

Thread throughput is about FPF, Fastest Process First. But you can't process the fastest one first if the work isn't divided up enough to produce faster processes. It looks like Cubase treats inserts as a way to put more DSP on the same thread, and tracks as a way to put DSP on different threads, with signal management thrown in.

The thing is, a Track has to have all of the extra stuff that goes along with it: channel strip, sends, etc. All of that has to take up some memory and some processing, at least to check whether it's even needed. Putting each insert on its own thread should significantly improve performance. I imagine there is some lower-level object that Track is a generalization of, one that doesn't have all of the overhead but already has the threading feature, and that could be used to implement such a scheme.
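A crude way to picture the throughput side of it (all numbers invented, and assuming - which is only my guess - that separate channels can overlap on different threads while a single insert chain runs back-to-back on one thread):

```python
# Per-buffer deadline: at 44.1 kHz with a 256-sample buffer, each buffer of
# audio must be fully processed in roughly 5.8 ms or the engine glitches.
SAMPLE_RATE = 44100
BUFFER_SIZE = 256
deadline_ms = 1000.0 * BUFFER_SIZE / SAMPLE_RATE

# Hypothetical processing time per buffer for each insert (ms).
insert_times = [1.2, 2.0, 1.5, 2.2]

serial_ms = sum(insert_times)    # all inserts on one channel/thread: times add up
parallel_ms = max(insert_times)  # one insert per channel/thread: slowest one wins

print(f"deadline per buffer           : {deadline_ms:.1f} ms")
print(f"all inserts on one channel    : {serial_ms:.1f} ms  (over the deadline)")
print(f"each insert on its own channel: {parallel_ms:.1f} ms  (comfortable)")
```

This ignores the extra per-Track overhead mentioned above, so the real-world gain would be smaller.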

But it loses the visual benefits when you don't have the track selected. This could be resolved with little low-res icons in the track header that could be clicked to enable or disable the particular insert, and made available as key commands.

This likely hasn't been done because "people don't use Cubase like that".

You can disable and/or bypass individual inserts in the MixConsole:

And you can also do it with an external MIDI controller via the Generic Remote.

There are systems out there using hundreds of tracks with plugins on all of them, and you think it is a Cubase issue that your system is performing poorly?
I guess it's your system's problem.

I guess you will say that we should disable inserts that aren't needed. But with VST3 plugins this is not necessary: with no signal attached, VST3 plugins need no CPU cycles.

Yeah, I know. If you want to see the timeline, you can see it in the second screenshot.
But it could look something like this:

Effects.jpg
Obviously copied and pasted from Line 6.

That way you would have all of the visual information at once.

I don't know. Tom Holkenborg, aka Junkie XL, has shown how he renders to audio anything he isn't currently writing. His meter jumps up just as much. I have rather powerful machines to work with, and I don't think that is the issue.

I am rather technical, but still willing to learn. The laptop is:
Intel(R) Core™ i7-9750H CPU @ 2.60 GHz, 16 GB RAM
If you think either (A) I am doing something wrong and should get better performance from that machine, or (B) that it is not a powerful enough machine, then:

(A) What could I possibly be doing wrong? Please, if you can help improve the performance, I would be very grateful.
(B) Well, that's the portable rig I have to work with, and likely rather above average compared to the common user, I should think.

I will note that the much more powerful Desktop in the studio doesn’t really do much better while recording.

That’s just it though, there IS signal attached. It’s all about how many of those signal chains you can run at once.