Pros and Cons of using 1 VST instance for multiple parts of the same instrument?

I’m wondering about the pros / cons of using one instance of a VST instrument (e.g. BBCSO trumpet) with multiple MIDI channels for different players of the same instrument (trumpet 1, 2 and 3) vs creating 3 different instances of the trumpet VST and assigning each part to its own VST instance? Thanks!

Sorry if I wasn’t clear in my first post. Maybe these screenshots will help? In the top one I’m only using 1 VST “slot” (“instance”? sorry, not sure of the terms) and giving each trumpet part a unique MIDI channel. In the lower one I’m creating 3 different VST “slots” with only 1 MIDI channel each.
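
In MIDI terms, the two setups look roughly like this. A minimal sketch using the Python mido library; the port names (“BBCSO Trumpet” etc.) are made up for illustration:

```python
import mido

# Option A: one plugin instance, three MIDI channels (channels are 0-based in mido).
port = mido.open_output("BBCSO Trumpet")  # hypothetical single instance
port.send(mido.Message("note_on", note=60, velocity=80, channel=0))  # Trumpet 1
port.send(mido.Message("note_on", note=64, velocity=80, channel=1))  # Trumpet 2
port.send(mido.Message("note_on", note=67, velocity=80, channel=2))  # Trumpet 3

# Option B: three plugin instances, each listening on a single channel.
ports = [mido.open_output(f"BBCSO Trumpet {i}") for i in (1, 2, 3)]
for p, note in zip(ports, (60, 64, 67)):
    p.send(mido.Message("note_on", note=note, velocity=80, channel=0))
```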

I may be misunderstanding you here, but as the BBCSO is not a multi-timbral player, you have to create a new instance for each instrument anyway. Unless of course you use the NotePerformer playback engine plug-in, which many of us would recommend, or something like Vienna Ensemble Pro. As a very general rule (I haven’t carried out formal tests with the BBCSO), fewer player instances means less memory used and easier management of your setup. I quickly moved away from using the BBCSO in stand-alone mode for orchestral scores.


Yeah - we need to be somewhat picky about terminology for a minute. If it is truly a single instance - like you have the VST set to “Omni” and Dorico sending trumpet one, two and three to the same instance on different channels - you might as well have used one channel, because with a single instance, certain changes to one part can affect them all.

If it’s a single multi-timbral VST with multiple instances of trumpet inside of it, you’re giving the VST designer some options, like sharing a reverb, that might save some memory. But if memory is your main concern, better to address that directly IMO. There’s usually a major time advantage to upgrading that keeps blessing you over and over.

The biggest practical difference for me is when a multi-timbral VST lets you do interesting automation or creative effects.

With a single instance of the VST, with a single instrument loaded, set to Omni, and Dorico sending different instruments/signals to it on different MIDI channels, you will see that the legato patches will most probably have issues (if the library doesn’t support polyphonic legato), and you will hear only one note at a time… (I hope I didn’t misunderstand what you asked.)
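
As a toy model of why that happens, here’s a sketch (not any real library’s code) of a last-note-priority monophonic legato patch; every new note from any player steals the one available voice:

```python
# Toy monophonic legato patch: one voice total, last note wins.
class MonoLegatoPatch:
    def __init__(self):
        self.current_note = None

    def note_on(self, note, player):
        if self.current_note is not None:
            print(f"cut off note {self.current_note} (stolen by {player})")
        self.current_note = note
        print(f"sounding note {note} for {player}")

patch = MonoLegatoPatch()
patch.note_on(60, "Trumpet 1")  # sounds normally
patch.note_on(64, "Trumpet 2")  # silences Trumpet 1's note mid-phrase
```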

As mentioned, BBCSO can’t be set up this way, but I do it sometimes with Kontakt and Sine (Orchestral Tools), which do allow it.

Pros - easier to manage everything all from one plugin window; if you need to make adjustments or see what each one is doing, it’s nice to see it all in one place. You’ll have to be sure to set up a multi-channel output from the player (both Kontakt and Sine can do this) in order to spread each instance across the respective instrument channels in the mixer for separation; otherwise they will all come out of one channel. Also it is nice to be able to save a multi-instrument setup that you may use a lot, making it easier to recall something like a full woodwind section in one player, for example.

Cons - at the same time it can be a little more work to set up at the start, and it sometimes gets a bit confusing and opaque when you have multiple instances inside a single player, so when you go digging to make changes you have to remember where you put everything. Both methods work for me, but usually I will go with separate instances, mainly because it’s faster for me to set up (no fussing with multi-channel outs) and easier to see all my instances at once in the Play tab.
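
The bookkeeping for the multi-out approach boils down to a little routing table like this (a sketch only; the channel numbers and output names are illustrative, not any particular plugin’s labels):

```python
# One multi-timbral instance: each slot gets its own MIDI channel in
# and its own stereo output pair out, so each part lands on its own
# fader in the host mixer.
routing = {
    "Trumpet 1": {"midi_channel": 1, "plugin_output": "st. 1/2"},
    "Trumpet 2": {"midi_channel": 2, "plugin_output": "st. 3/4"},
    "Trumpet 3": {"midi_channel": 3, "plugin_output": "st. 5/6"},
}

for part, route in routing.items():
    print(f"{part}: MIDI ch {route['midi_channel']} -> {route['plugin_output']}")
```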

Performance-wise I have not noticed a significant difference between the two workflows.

Sounds like in your specific case, the advantage to fresh instances for each player or section is that you could pan each one independently. If the instrument has built-in staging and reverbs, you could give each voice unique settings. You could give each player/voice their own strip, effect plugins, and AUX send settings in the Dorico mixer. You could get more control over exactly where each voice sits in your mix.

If you don’t want or need any of that, and the instrument you’ve chosen doesn’t apply fancy auto-legato effects that force the instrument to be monophonic (only play a single note at a time), there’s nothing wrong with sending multiple players to the same instrument/plugin/channel. Just make sure the instrument is set to supply enough ‘voices’ so nothing goes missing from the mix.

As for performance/memory…it shouldn’t make much of a difference unless you’re running very processor intensive instruments and/or you’ve a very weak computer and big score.

As for overall plugin efficiency, and the eternal debate over one multi-timbral instance of something like HALion/Sonic vs lots of individual instances with only one voice/player each, etc…

It depends on the plugin, the sounds called up, host routing, and what effects (if any) you use, to be honest. For some it makes no difference whatsoever, because all the ‘instances’ of something like HALion/Sonic 7 typically ‘share’ a common engine/memory/etc. under the hood. I.E. If two different instances of Sonic are using the same violin sound, I believe they are integrated and smart enough to share the same sample cache memory pool where applicable. The HALion engine as a whole (all instances combined) tries its best to optimize the capabilities of your system before it sends sound out through the host.
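
That ‘shared engine’ idea is essentially a flyweight pattern. A toy model (not Steinberg’s actual implementation) might look like this:

```python
# Toy model of a shared sample cache: every "instance" pulls from one
# class-level pool, so two instances using the same violin sound don't
# load the samples twice.
class PluginInstance:
    _sample_cache = {}  # shared across all instances

    def load(self, sample_path):
        if sample_path not in PluginInstance._sample_cache:
            print(f"loading {sample_path} from disk")
            PluginInstance._sample_cache[sample_path] = bytearray(16)  # stand-in for sample data
        else:
            print(f"reusing cached {sample_path}")
        return PluginInstance._sample_cache[sample_path]

a = PluginInstance()
b = PluginInstance()
a.load("violin_sustain.wav")  # hits the disk once
b.load("violin_sustain.wav")  # shares the cached copy
```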

The only way to know for sure is to try things and run your own tests. In general, don’t worry about it unless you aren’t happy with the sound and want to try some different options, or the system starts choking; then you might bother to track down more resource-efficient ways to set it all up.

What can matter, sometimes, depending upon your combination of plugins, and your system, is multi-threading over multiple CPU cores.

I.E. If you had 6 really processor-intensive synth sounds with loads of layers each in a single HALion Sonic instance, and they were all set to use a single common stereo audio fader on the host (Dorico) mixer, it ‘might’ bottleneck one of your cores. Spreading those same 6 synth sounds over 12 (6 stereo) audio output channels on the host mixer, OR giving each one a unique HALion/Sonic instance (which would force new audio mixer channels), might lead to more efficient multi-threading over multiple cores for a smoother playback experience. Then again it might not make any detectable difference, or it could even make things worse.

Sometimes we push the limits of our hardware and just have to ‘try things’ and see what happens. In short, spreading the fancy processor-intensive synthy sounds out over more mixing faders on the host’s mixing console could theoretically lead to better multi-threading, get rid of some bottlenecks, and improve system performance. Or it might not! Sometimes the plugin does a better job at multi-threading its own loads. Sometimes the host does. So many factors can correlate or conflict over the full path of a sound from beginning to end. So ‘try things’ if you start pushing your system limits.
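
The scheduling idea behind that advice, reduced to a toy Python model (the ‘rendering’ here is fake work; real hosts do this in native code and the details vary by host):

```python
# Each mixer bus is a unit of work the host can hand to a core. Six heavy
# synths on one bus arrive as a single task; six buses give the scheduler
# six tasks it can spread across cores.
from concurrent.futures import ThreadPoolExecutor

def render_bus(sounds):
    return sum(len(s) * 1000 for s in sounds)  # stand-in for per-bus DSP work

synths = [f"synth_{i}" for i in range(6)]

with ThreadPoolExecutor() as pool:
    one_bus = pool.submit(render_bus, synths).result()             # one big task
    six_buses = list(pool.map(render_bus, [[s] for s in synths]))  # six small tasks
```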

For HALion/Sonic/Groove Agent, Opus, Play, Kontakt, Sforzando/ARIA, and some others, multiple instances vs a single instance using multi-timbral slots over many outputs is ‘usually’ a negligible difference. Once that first instance of plugins like these is up and running, I believe all the rest share a common core engine (each instance doesn’t need fresh copies of its entire UI and all in memory… they can share a lot of stuff from the same memory addresses), and code exists to attempt to keep all the different instances somewhat in sync, and thus optimize all those instances with the host and OS for efficient memory and thread management. What can sometimes matter is trying to stack too many ‘slots/instruments’ into a single audio fader on the host’s (Dorico in this case) mixing console.

For plugins like these, the main advantage of going with larger multi-timbral setups from a single plugin is that you get the ability to ‘channel bounce’ within the same plugin instance. I.E. Set up different articulations in different channels/slots. It’s usually not very resource-hungry; again, just make sure the instrument is set to have a higher number of voices that can sound at the same time. Give plenty of headroom if bouncing among articulations, as some of them really like to use reverb tails and such that overlap with the next note.
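
Channel bouncing is really just a lookup table from articulation to MIDI channel. A hedged sketch with the Python mido library (the channel assignments and port name are invented for the example):

```python
import mido

# Each articulation lives in its own slot of one multi-timbral instance,
# listening on its own channel (0-based in mido).
ARTICULATION_CHANNEL = {"legato": 0, "staccato": 1, "marcato": 2}

def play(port, note, articulation, velocity=80):
    port.send(mido.Message("note_on", note=note, velocity=velocity,
                           channel=ARTICULATION_CHANNEL[articulation]))

# port = mido.open_output("Trumpet Multi")  # hypothetical port name
# play(port, 60, "staccato")  # lands in the staccato slot, no key-switch needed
```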

The theory is, the more spread out sounds are across different ‘streams’ before the host ultimately mixes to your master output, the more flexibility the host can have in assigning things to different CPU cores in your system. It’s mostly trial and error though. No hard rules to follow on these things…

In contrast, some plugins might not share resources as more instances get opened. Each individual instance might be ‘truly sandboxed’ in its own little world. I.E. Opening two instances of something like an older SONiVOX Violin Section plugin (old VST2 code, but still sounds good) might actually require two totally unique memory pools, two independent sample memory caches, etc… even though it’s the same sound. Shouldn’t be a problem for most systems though, unless you have a pretty weak computer with less than 16GB, a really huge score (lots of instruments needing to actually play sounds at the same time), or both. In a case like this, if you run out of resources, then yeah, by all means, share that Violin Section instance with multiple players/staves, but don’t forget to set it to allow plenty of voices.

For some plugins, the more different ‘instances’ you use, the longer it might take to ‘launch/load’ a project/score… but once it’s loaded, there shouldn’t be much difference. For other plugins, it might not make any difference at all. Again, the only way to know for your particular system is to try your own experiments.

For sounds that are more ‘sample based’ and don’t need to do a lot of processing in real time, it shouldn’t matter much either way. They’re essentially D2D sample players, and won’t need much in the way of processor time. You’ll just need to make sure you allow enough ‘voices’ (how many notes a plugin instance, or the channels in said instance, can play at the same time) for each instrument to ensure notes don’t go missing. I.E. For a piano sound that uses a lot of sustain pedal, give yourself plenty of voices (32 to 64 minimum, and with most modern plugins it won’t hurt to go way higher) so dampers can work and notes can ring when sustain is stomped. For a trumpet sound, 2 to 4 should be plenty unless it’s a very fancy trumpet with layered effects like valve clatter, air noise, etc. For a violin part with occasional double stopping, go for at least 4 to 6 voices, etc. For most modern plugins, it’s better to allow a higher polyphony maximum than you think you’ll need (if you’re not using the voices, it shouldn’t rob resources; only apply ‘lower limits’ if you run into trouble someday on a massive score that needs to play more notes at once than your system can handle, and you’re trying to make ‘thoughtful voice limiting compromises’ to get a project working).
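
What ‘notes going missing’ looks like under the hood is voice stealing. A toy allocator (a sketch, not any real plugin’s algorithm) makes the failure mode concrete:

```python
# Toy voice allocator: once every voice is busy, the oldest note is stolen,
# which is exactly how notes vanish when the polyphony limit is too low.
class VoicePool:
    def __init__(self, max_voices):
        self.max_voices = max_voices
        self.active = []  # oldest note first

    def note_on(self, note):
        if len(self.active) >= self.max_voices:
            stolen = self.active.pop(0)
            print(f"voice limit hit: note {stolen} cut off early")
        self.active.append(note)

piano = VoicePool(max_voices=4)      # far too low for pedalled piano writing
for n in (48, 52, 55, 60, 64, 67):   # six notes held under the sustain pedal
    piano.note_on(n)                 # the last two steal the first two voices
```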

These days, if you have a computer with lots of ‘slower/weaker/cooler/power-efficient’ cores, I think it’s more about balancing out the load over ‘more mixer channels’ if you run into glitches. In theory that allows the host and OS to multi-thread better across the many mixer faders and put more cores to work before the main core out to the mains gets bogged down. Newer processors make a priority of using as little electricity as possible, not wasting energy through heat, etc. They’ll pump and throttle a good deal to conserve energy and avoid producing extra heat, so the more you can do to spread loads over as many cores as possible, in theory, the smoother things should run (again, every system, project, and scenario can be unique).

At the same time, some systems might have fewer cores, but they might be quite speedy and powerful cores (that also drink more power and make more heat). I.E. Some of the older Intel processors can be set to run wide open (full voltage/maximum clocks) all the time; they get and stay hot, but can take on quite a load of tasks and brute-force right through heavier loads before getting bogged down.

With all that in mind, it’s hard to say there is a hard and fast rule to follow. Every system can be different. Every score can be different. So… try stuff and see what you come up with :)

Yes, RAM and CPU use are big issues. I’m almost maxing out my 32GB of RAM and my i7-10750H processor.

Thank you! That is a lot of very helpful insight about how all of this works under the hood!

Yes, I do sometimes get lost when I have 30+ instruments in the VST tab! I mostly write for concert bands, and the percussion kit alone can take up 10+ VST slots. It’s interesting that you haven’t really noticed a difference in workflow or playback. Perhaps I just need to bite the bullet and get a bigger / newer machine.

Hmmm… seems like I’ll have to play with the legato patches in BBCSO and see what happens. Thanks!

Haven’t tried anything that complex (yet!). I’m mostly creating demo audio for pieces to be performed live.

Not sure if you’re using BBCSO Pro? That’s a real system-resource hog. At the current sale price, it’s not much at all for me to upgrade from Core, but I know my system wouldn’t run it. To say nothing of needing another hard drive to store it.

Yeah, I’m using BBCSO Core too. I didn’t see enough benefit for me in just adding all the extra mic positions and some articulations I’ll likely never use.

One thing that might help…

If it’s a big score with a LOT of sounds playing at once, gain-stage everything on the softer side. Give the mix some room to breathe. A lot of the modern instruments try to force us to work with very hot/loud signals, which can make it a nightmare for us non-mixing-engineers to manage a mix that has any ‘natural depth and dimension’ to it. Cinema and pop music mixing is notorious for using LOUD instruments and then using a series of compressors and a lot of psychoacoustic theory to ‘simulate’ dynamics, get it under control, and sound good in cheap earbuds or modern inexpensive surround-sound devices.

Meanwhile, when we compose we just want something that goes easy on our ears while we ‘compose’, and has some ‘actual’ dynamic range to it when it does come time to render results to ‘share’ with the world.

I don’t know about your particular instrument libraries, but here’s some stuff that might help…

If each plugin instance has built-in ‘reverb’ effects, disable them. Get a clean sound.

If some of the plugins offer several possible ‘mic positions’, only choose ONE, as some of the more epic sounds might open with more than one layer triggering at a time by default. I.E. If a close stage mic AND one of the mic trees further back in the room are both active, then triggering the sound could actually play twice as many layers of samples as it would with only one mic position activated.

Once you’ve disabled all the built-in reverbs and cut back to a single mic position for each instrument, you can use the AUX send feature of Dorico to set up some ‘shared’ reverb effects, and then build your ‘sound stage’. To pull an instrument forward so it mixes in a way that seems closer to you, send less signal to the reverbs and keep more dry signal. Pan ‘closer’ instruments more to the center so that more of the sound makes it through ‘both’ speakers. To push an instrument further back in the mix, apply more through the reverb send and less dry signal. You might also use more extreme panning to one side (the reverb will likely still leak some signal to both speakers, but in a less prominent/loud way).
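
As numbers, the depth trick might look something like this. A sketch only; the curve and constants are taste, not a formula from Dorico or any manual:

```python
# "Close" = more dry signal, less reverb send; "far" = the reverse.
def stage(depth):
    """depth: 0.0 = front of the stage, 1.0 = back of the hall."""
    dry_gain = 1.0 - 0.6 * depth     # even far instruments keep some dry signal
    reverb_send = 0.2 + 0.7 * depth  # and get progressively more reverb
    return dry_gain, reverb_send

for name, depth in [("solo trumpet", 0.1), ("horns", 0.5), ("timpani", 0.9)]:
    dry, send = stage(depth)
    print(f"{name}: dry {dry:.2f}, reverb send {send:.2f}")
```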

Finally, if your library offers them, take advantage of ‘section’ sounds if you’re low on resources but want to simulate big section sounds. I.E. In orchestral scores, using 2 to 4 ‘solo’ clarinets is perfect.

In a wind band arrangement, we usually want 14 to 30 clarinets, right? So dial up instruments where a full ‘section’ of clarinets was recorded/sampled playing ‘together’. You might have all the different parts sharing a single instance if you’re out of system resources. Also be aware that if you stack up more players on a single instance/instrument, special key-switching articulations can conflict, so instead you’ll want to use expression maps with ‘channel bouncing’ techniques, where the different articulations are hosted in independent instrument slots of a single plugin. If you’re really short on system resources, then keep it simple: get rid of the fancy extra articulations (I.E. bouncing between staccato and legato sample layers) and allow Dorico to shape things up by ‘interpretation’ using a single/simple clarinet section sound.

Add a touch of ‘chorus or stereo imaging’ effects to spread the section out and give the clarinets more body over both speakers.

Really, for huge arrangements, sometimes it’s easier to manage a mix by going back to the simplest General MIDI-style sounds you have (the stuff that comes with Dorico, using the HALion Sonic SE template, can sound surprisingly good… IF you take a minute to ‘stage’ it in the mix with some shared reverb and chorusing effects).

Another thing to try if you write a lot of ‘thick band arrangements’ and don’t feel like investing the time right now to take a crash course in ‘mixing’ is NotePerformer. You can try it for free, and it’s not expensive at all to own if you like it. With NP you’ll make some compromises on individual instrument quality in its simplest state, but it’s pretty much plug and play, usually sounds pretty darn good with larger arrangements of acoustic instruments, and is quite efficient on system resources. Optional playback engines exist for NP that can drive some of the big instrument libraries too.

NP can take a lot of the hassle out of using portable computers and modest desktop builds that either don’t have much horsepower and memory, or have problems ‘staying cool’ under big loads.

For what it’s worth, depending on the plugin you’re using, you actually could separate a single instance into independent mixer channels across Dorico to access individual volumes/pans/inserts, all from the same instance. I know both Kontakt and Sine can do this when configured for multi-out; you just select channel numbers and it automatically routes them to the mixer in sequence. The setup can get a little confusing because you have to remember where everything is going, but this way it’s possible to add individual reverbs, pan each part, etc. while still in one plugin.


This is a ‘must’ for more libraries than not (in lieu of key-switch-style instruments) if you plan to share a single instrument with multiple staves/players and have the phrases bouncing about playing different articulations. Why? If you have more than one player trying to toggle key-switches for different ‘articulations’ on the same instrument/instance, they can start ‘conflicting’ and gum up the sound in significant ways!

By spreading all the articulations out over different slots/channels in the same plugin instance…in most cases, all those staves will happily just use the channel they need at the moment, and you won’t encounter key-switch conflicts.

Note: when bouncing articulations, it’s usually fine to have them share the same mixer fader on output. It’s rarely necessary to give every single ‘articulation’ of an instrument its own mixing slot in Dorico; but it’s nice to know you can, if you want to give a given articulation some extra ‘mixing attention/processing’.


awesome tips, thank you!!

I did my obsessive research with HALion Sonic SE and concluded that one instance with 16 instruments consumes fewer resources than splitting the instruments across various instances.

I usually use 130 to 150 voices in some parts of the project, and to my surprise the amount of RAM is the least important variable. I notice spikes of CPU consumption when various instruments start playing at the same time, with long releases, the Flex Phraser, etc.

My hypothesis is that HALion handles the threading better if you use just one instance (I have 4 cores/8 threads, so I set the core use to 7).

My problem is that when it overloads the CPU, it happens in a second, leaving ASIO-Guard in shame. In a project with a simple intro and an orchestral arrangement that uses around 90 voices, it could do its “look-ahead” thingie, but… well.

I use an MBP with an Apollo x16 at a 2048-sample buffer for composing.