Huge template: Getting levels matched, how?

OK, so I have managed to get to base A with my Grand Master Template. I have well over 1000 sounds loaded, many keyswitched. All are disabled, and it's only showing 13% CPU and 5.9 GB of RAM in Task Manager.
These sounds are drawn from various libraries: EW, VSL, Kontakt, Spitfire, Spectrasonics, NI and more.
At the moment, level-wise, they are a mess, and I will soon start sorting out the levels so they are aurally comparable. I want to do this before using the mixer where I can, using the GUIs for the instruments. This, I believe, is the correct way: if you put garbage in, you get garbage out. It will also free up the mixer for musical purposes.

I found a little treasure of a button in Kontakt which will bring the level of ALL future Kontakt instances up or down at once (though not retrospectively). Go to the floppy disk icon, then, with an empty instance, tweak the master volume and 'Save as default multi'.

I want to do this absolutely right on the money.

Obviously a section of 30 violins is going to sound louder than a solo violinist (does it really, at all registers?). I am aware that, dB-wise, our ears are logarithmic. There is also significant disparity between libraries, and sometimes between different instruments within a library. The Halion Orchestral Library has spiccato samples which are far too loud, for example.
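For a rough sense of the numbers (general acoustics, nothing library-specific, and assuming the players act as uncorrelated sources of equal level): N such sources sum to roughly 10·log10(N) dB above a single one, so a 30-violin section comes out only about 15 dB above a soloist. A quick sketch:

```python
import math

def db_gain_for_section(n_players: int) -> float:
    """Level increase (dB) of n uncorrelated, equally loud players
    relative to a single player: 10 * log10(n)."""
    return 10 * math.log10(n_players)

def amplitude_ratio(db: float) -> float:
    """Linear amplitude ratio corresponding to a dB change."""
    return 10 ** (db / 20)

print(round(db_gain_for_section(30), 1))   # 30 violins vs. a soloist
print(round(db_gain_for_section(2), 1))    # doubling the players: ~3 dB
print(round(amplitude_ratio(6), 2))        # +6 dB roughly doubles amplitude
```

Real sections won't follow this exactly (registers, seating, mic placement all interfere), but it shows why "30 players" is nowhere near "30 times the level".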

Maybe there is some kind of engineering science for this, and even tools, maybe a VST or something within Cubase?

I am aware of the orchestral stage thing: simple amplification brings the instrument forward in the sound stage, panning is the horizontal axis; I can handle this aspect. I am working in 5.1 for the theatre, so the 'reality' of the orchestral sound stage is not the most important factor. In fact I intend to play with this reality. The thing that really puzzles me is: how do you 'balance' (wrong word really) a tin whistle with a bass drum, level-wise, so that each sounds 'natural'?

Bear in mind I am more musician than engineer. I have been around DAWs for years, but engineering stuff comes hard.

Like I say, I want this to be professional - right.

Can anyone enlighten me please?


The only answer I can think of is to simplify. I simplified my templates to improve my workflow, focus better and be more productive. Do you really need a grand master template? What about setting up a small divisi orchestra template and a large orchestra template? Then have other VSTs loaded but empty (like Omnisphere and Play) in those templates so you can flex genre and just load whatever the project needs. That's what I do. I record my own guitars and have a lot of analogue gear now, which frees me from an all-VST rig and is so much more fun and rewarding. Do you only do orchestral? Or are you doing trailers? That would explain why you're in 5.1.

Volume can be an issue, but all VSTs start in balance. Here are a few tips.

Use VCAs and group them by strings, brass, winds, VST instruments, pads, drums, percussion, hits & stingers, FX, and analogue, then ride the faders to get a better balance.

Leave 5 dB to 10 dB of headroom on the master to begin with, to give you room as you go.

Use a brick-wall limiter set very light (0 or -3 dB) on your most unruly audio tracks; this will keep things under control.
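For anyone curious what "brick-wall" means at the sample level, here is a deliberately naive sketch (real limiter plug-ins add lookahead and smoothed gain release; the figures are only illustrative):

```python
def brickwall_limit(samples, ceiling_db=-3.0):
    """Naive brick-wall limiter sketch: any sample whose magnitude
    exceeds the ceiling is pinned to the ceiling. Real limiters use
    lookahead and gradual gain recovery; this only shows the idea."""
    ceiling = 10 ** (ceiling_db / 20)  # dBFS ceiling -> linear amplitude
    out = []
    for s in samples:
        if abs(s) > ceiling:
            s = ceiling if s > 0 else -ceiling
        out.append(s)
    return out

# An "unruly" burst: two samples poke above a -3 dB ceiling (~0.71)
loud = [0.2, 0.9, -1.0, 0.5]
print(brickwall_limit(loud, ceiling_db=-3.0))
```

Quiet samples pass through untouched; only the overs get clamped, which is why a lightly set limiter is largely inaudible on material that stays under the ceiling.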

Meters can be deceptive, so also use your ears to mix levels; one violin in a DAW can be as loud as a full section, like you noticed. And higher frequencies can jump out. Use low- and high-frequency cuts as needed.

Thanks for reading, yet…

Yes, I do want a Master Template. No, I don't want to cut it down. I also do not want (at this stage) to use VCAs or group tracks. First I need to balance the sound inputs; then I shall use group tracks and VCAs for musical purposes.

"All VSTs start in balance" - not my experience here.

My question is: what would a pro engineer do to balance/calibrate so many VSTs?


I see. That's your preferred way to go, got it… I would just say, then, that professional engineers use large Pro Tools mixing boards; in my case I send music to a post-production facility…

Thanks for the tip, very useful.

I wonder if you are looking for something that isn’t really achievable. Consider how it works if recording real instruments instead of using VSTi’s. Whether recording a full orchestra, a quartet, or a solo piano you’ll almost always want to have the signal be hot enough to be easily used while not being so loud it distorts. In the old analog days it was as loud as possible (to minimize noise) before distorting. With modern digital recording noise is generally not an issue so we record a bit lower. The point is that when recording you are aiming to get a good overall signal level independent of how loud or soft that instrument naturally sounds in the wild. Mixing is where you become concerned with the relative levels of different instruments. Say I was recording a uke and a loud over-driven electric guitar (in different rooms to avoid leakage). I’d aim to have them both end up at about the same level on the recording - even though they are at different levels both in the real world and in the final mix.

But you’re using VSTi’s so the “recording” has already occurred by the folks who made the VSTi. And as you are aware the levels vary a lot between libraries. So what can you do about this? You could just ignore it and set the levels as needed when mixing. Or you could adjust the levels of each VSTi in your template so they are roughly all close to where you’d want them in a final mix. This may or may not be worth the effort. With 1000 tracks it is a lot of up-front work, but when you mix your initial levels will be closer to where you want them. However you will in most situations still need to adjust the levels a bit for most of your tracks. If you are going to need to adjust the levels on 100 faders does it really matter that you only need to adjust them in a 4dB range (because they’re already close) vs. a 10dB range (because the initial levels are further off)? And if that matters, does it matter enough to do all the upfront work to get there? Finally the “correct” levels will vary, sometimes a lot, by the context it is used in (this is the potentially unachievable part). For example the appropriate level for a cello section in a string orchestra vs. a full orchestra will likely be different. Which do you pre-set the VSTi to? Same goes for your Mozart knockoff vs. your Wagner knockoff etc. Also different libraries don’t respond to velocity the same. So maybe the cellos in libraries A & B sound equal at vel 100, and very different from each other at vel 50.

If you do want to adjust levels in your template, in addition to the controls in the VSTi you can also use the pre-gain control in the MixConsole - which is probably easier than opening 1000 VSTi's. Also keep in mind that the meters in the MixConsole show the signal level, not loudness, which is what we hear. You can use the Loudness Meter in the Control Room, and I bet there are some free/dirt-cheap loudness plug-ins out there.
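To illustrate the signal-level-vs-loudness point numerically: two signals can show the same peak on a meter while carrying wildly different energy. This is a stand-alone sketch; real loudness meters use K-weighted LUFS per ITU-R BS.1770, and plain RMS here is only a rough stand-in for "loudness":

```python
import math

def peak_db(samples):
    """Peak level in dB relative to full scale."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """RMS level in dB - a crude proxy for perceived loudness."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

n = 1000
# A sustained sine and a sparse click train with the SAME peak amplitude
sine = [0.5 * math.sin(2 * math.pi * i / 100) for i in range(n)]
clicks = [0.5 if i % 100 == 0 else 0.0 for i in range(n)]

print(round(peak_db(sine), 1), round(peak_db(clicks), 1))  # identical peaks
print(round(rms_db(sine), 1), round(rms_db(clicks), 1))    # very different energy
```

Both read the same on a peak meter, yet the sustained tone sounds far louder - which is exactly why balancing a template by peak meters alone misleads.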

If it were me I’d use the 80/20 rule and focus on the 20% of the tracks that I’ll use the most and pre-balance their levels and deal with the rest as needed. But I wouldn’t set them close to their final mix points. Instead I’d set them all to be about the same loudness via pre-gain. Then route all of the same sections from different libraries to a Group Channel for that section. So you’d have the cello sections from libraries A, B & C all going to the Cello Group, same for violas etc. Next take all the Groups that make up your orchestra and adjust their levels relative to each other. Now you can switch between libraries (which all sound equally loud) and have them close to where you want.

One side comment. Far more important than how loud something is, the ratio of the direct signal to the reverberant signal determines the distance to the sound source. The reflected signal is what really tells our brains that something is further away.

Hi Raino,
First thank you for a thoughtful and comprehensive answer, and thanks for your other post on the Surround Panner. My thoughts have been along your lines.

At present, I am just adjusting the levels coming out of the VSTI’s to make them comparable - they were way off. Also, Spiccato samples in Halion Sonic Orchestra are way too loud for the rest of the same orchestra.

Overall, I have, say, twenty violin 'instruments' (instances of Kontakt, Play, VSL, Spitfire), many of which are keyswitched, half of which are ensemble patches. Though I recognise that there will be additional mixing when I actually get to making music, at this stage I just want to make it so that I can pick a solo violin and it can play back at the level of its bedfellows (analogy: I can think of this as replacing Smith with Bloggs in the chairs).
Secondly, it would be no good at all if I achieved this for groups of instruments but in the orchestra they did not match, so that a flute drowned out a tam-tam (exaggeration).
I started this 'calibration' last night and it's easier than I thought.

I decided to design in one instance of a player (Play, Kontakt, etc.) per patch (keyswitched where possible). I did this so that when I load a particular sound from its disabled state, I don't have to load a whole multi. I was thinking of RAM.
It turns out I need not have worried too much. Although it's not possible to load all your violins into one multi (some are EW, some VSL, etc.), it is on occasion sensible to load instrument collections into one multi. With Cubase's disable function, all this has virtually no RAM consequences. You could go with the idea of one multi for EW Strings, another for Brass, another for Kontakt Woodwinds, but as soon as you enable one instrument in a multi, it seems to me that the whole player loads into RAM (judging by the appearance of the instrument). On smaller systems this is not so good, but I digress.

The way I am proceeding is simple enough: load a couple of instruments, compare the activity meters in the project window's track list, use either the master for the instrument's player or the gain for the actual instrument to broadly, aurally balance the sounds, add another instrument, bring that in line too… etc. When it gets too tutti, drop a few. Do this across groups of instruments, and make sure to compare levels between the major instrument sections too.

I have written four-bar scale runs in C for all instruments, demoing their ranges. I did think about using one instrument as a kind of 'gold standard' to work off, say an oboe, but how do you compare the loudness of an oboe with a group of timpani or double basses, aurally?

Level-wise, my understanding is that the signal in digital systems does not need to be as hot as in analogue systems, so a tad conservative is fine (just a tad), peaking at about -10 dB. Can you confirm?
Honestly, for former compositions I have largely ignored meter calibration once the sound is in the ballpark, though I understand basic dB/loudness.

Your point about "the ratio of the direct signal to the reverberant signal determines the distance to the sound source":
I think it's a combination of both. As reverb is traditionally added later to the whole mix, I have been thinking mainly about volume. Of course some samples have it baked in to a degree, and then there are the mic positions to consider - many instruments having options. It gets tricky. I think this just stays at default for now. Anyways, I am not too concerned about creating a resemblance to a stage; I intend to play with reality and work in 5.1. Think filmic.

Anyways, at the moment approximate parity gives me a level playing field to compose with. Achieving this will take me a few days, as I will be checking expression maps and writing them too, where required. I am getting pretty quick, but even so…

The next stage, will be grouping the sounds in the mixer. My final intention is to disable all the tracks except perhaps a set of goto instruments, then, bring in alternatives and extra sounds, as I need.

I hope to achieve group tracks for musical sections (e.g. Brass) in the mixer. I have not got a clue, as yet, what disabling and enabling tracks will do to the routings!

All this work is not in vain. From this grand template, I shall easily be able to create sub-templates, this modular approach pays dividends for large MIDI compositions.

Oh for a wizard Mr Steinberg!


Yeah, that's fine. There are folks who go even lower than that, like -18 dB. Personally, being an old tape guy, I go hotter than that (-4 dB-ish) because it just feels wrong to see the signal so low. But I am courting danger by doing so. :unamused:
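For reference, dBFS figures like these map to linear amplitude via 10^(dB/20) - a generic conversion, not anything DAW-specific - which shows how much of the scale each target actually uses:

```python
def dbfs_to_linear(db: float) -> float:
    """Convert a dBFS level to linear amplitude (1.0 = full scale)."""
    return 10 ** (db / 20)

# The peak targets mentioned in this thread
for db in (-4, -10, -18):
    print(f"{db:>4} dBFS -> peaks at {dbfs_to_linear(db):.2f} of full scale")
```

So even the "conservative" -18 dB target still uses about an eighth of full scale, which is plenty of resolution for modern 24-bit or floating-point audio paths.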

> The next stage, will be grouping the sounds in the mixer. My final intention is to disable all the tracks except perhaps a set of goto instruments, then, bring in alternatives and extra sounds, as I need.

> I hope to achieve group tracks for musical sections (e.g. Brass) in the mixer. I have not got a clue, as yet, what disabling and enabling tracks will do to the routings!

Having some go-to instruments available is sensible. Then, as soon as you create a new project, you can immediately start sketching musical ideas.

You probably should investigate sooner rather than later how disabling/enabling impacts different elements of your template. If it does make you change how you want to set stuff up, it’s better to know before you’ve done a bunch of setting up.

Good luck with it.

Thanks for your thoughtful replies

Draw out the positions where you want the various instrument groups to be, then send each group to a group track and pan that (and set volume) accordingly. Then look into convolution reverb impulse files captured at various locations of an orchestra pit, and load the relevant patch in a suitable convolution reverb VST on each group track. This way you should be able to achieve a realistic sound stage.
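At its core, a convolution reverb just convolves the dry signal with a recorded impulse response of the space. A toy sketch with a made-up impulse response (real plug-ins use FFT-based convolution on impulse responses seconds long; the numbers here are purely illustrative):

```python
def convolve(dry, ir):
    """Direct convolution of a dry signal with an impulse response.
    Output length is len(dry) + len(ir) - 1."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += d * h
    return out

# Made-up IR: direct sound followed by two decaying reflections.
# An IR recorded at a different stage position would have different
# reflection timing and a different direct/reverberant balance.
ir = [1.0, 0.0, 0.5, 0.0, 0.25]
dry = [1.0, 0.0, 0.0, 0.0]  # a single click
wet = convolve(dry, ir)
print(wet)  # the click now carries the room's echo pattern
```

Because the IR encodes the room as heard from one position, choosing per-group IRs from different stage positions is what places each group at a believable depth.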