Help with realistic mock-ups

I want to improve the realism of my orchestral mock-ups. I have NP, which is great for a demo, but I’d like to use some of my nicer sample libraries, like CSS Strings. I want to keep everything in Dorico without exporting to a DAW, so I’m looking for help setting up expression maps (yes, I’ve watched the videos) and hearing what tips and workflows others have used. I’m keen to see if I can make a backing-track orchestration that’s realistic enough to fool the average listener.

I’d appreciate any help from some users that have done this sort of thing, either here in public comments, or a Zoom session if that’s possible (PM me).

Thanks in advance.


I have done many mockups/film scores in Cubase entirely with virtual instruments. For me, there are four general guidelines for improving the realism of mockups (these are not Dorico-specific, but apply in general):

  1. Avoid robotic, overly quantized playback. Part of the problem with mockups in notation software is that when you enter notes with the mouse, the software plays everything back exactly on the beat. An orchestra that sounds too precise to be human will naturally sound artificial. In a real performance there are always slight timing offsets that players add naturally, and this is part of what the humanization algorithms in some notation programs try to emulate. For realism, it is ideal to enter all of your notes from a MIDI keyboard, playing them in (so you get the natural error of your own playing) and then adjusting the notes that are too far off. Often I will temporarily slow down the tempo to “cheat” while doing this. De-quantizing can be done after the fact, but it is a much longer process and the results tend to be less successful.

  2. Ride the CCs (e.g. CC1, CC11). If you look at really good mockups, they don’t just leave the MIDI CCs set at a fixed position. They shape them all the time, shaping each individual phrase the way a performer would. This video has a good example: Make MIDI Sound Real: Creating Orchestral Mockups, Part 2 - YouTube (some composers add even more than that!)

  3. Reverb / getting everything into the same “space”. This is really important when you are mixing different libraries recorded in different spaces with different characteristics. The most important thing is to have a high-quality reverb that gives you stage-positioning capabilities, ideally an algorithmic reverb, because those do not color the sound the way convolution reverbs do. For this, I use and recommend EAReverb 2. I use it for all my positioning, and I am able to easily mix wet libraries (such as Spitfire Chamber Strings) and very dry libraries (VSL and Sample Modeling) in a way that makes them sound like they are in the same space, with a clear sense of depth. Even aside from positioning, for a person doing orchestral mockups there is no single more important plugin to buy than a good reverb plugin. The built-in reverbs in DAW software are not good enough.

  4. Balance. Obviously, it is going to sound a little strange if your orchestra is not balanced the way a real orchestra would be. When working in Cubase, I usually create group tracks so that all of my strings go through one channel and I have a master strings fader, a master winds fader, etc., which helps to adjust overall levels between libraries. I’m not sure that Dorico has group tracks yet, but if you use something like Vienna Ensemble Pro as a VST host, you could use the master faders in there. When balancing instruments, you also have to consider the behavior of the instruments in different registers, because most samples are normalized, so you can have things like the low flute register being much louder than it would be in reality. Also, keep in mind that reverb (item #3) will affect balance. With my dry sample libraries, there often seems to be a huge difference between the quiet sounds and the loud sounds, but positioning the sound and adding reverb tends to compress the dynamic range, at least the perceived dynamic range. So, when it is dry, the default ppp note for a trumpet may sound too quiet and the default fff note too loud, but once you add the reverb to position it properly on the stage, the dynamic range is more constrained and realistic. Reverb can also change the perceived relative balance between the choirs of the orchestra and between individual instruments a bit, in my experience. As a result, I only fine-tune the balance once the reverb is right and not before, otherwise I typically have to redo some of the shaping/balance adjustments.

You can also use EQ if you have good mixing chops, and tempo shaping helps (lots of small tempo variations to help the music breathe, riding the tempo track like riding CCs), but I think the four things I listed above are the main items.


Thanks mducharme, this is very helpful! I’m definitely going to be following this thread.

Another trick some people use for the strings is layering - e.g. adding a solo violin to a violin section, as this can add more detail. The ideal way of doing this is to have two divisi violins 1 sections and two divisi violins 2 sections that are slightly offset from one another, and then solo players added on top that are slightly offset as well. If you listen to a string orchestra playing a broad, slow, lyrical legato melody, the players are not all going to change notes at exactly the same instant; there is a bit of a blur, because some players change notes a fraction of a second off from the others, and there are slight tuning adjustments after the arrival. In samples you lose that blur, because when they record the sample libraries there is a click track in the musicians’ headphones saying “everybody play G5 mezzo-piano in 3…2…1…” and everybody plays it at exactly the right instant. But in broad, lyrical material they would never be that super-precise, and the blurring is a nice realistic effect.

I don’t always bother to do that myself (it depends on the material), but if I was really trying to squeeze all I could out of a string line, I would definitely do some of that.
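Since entering those small offsets by hand is tedious, here is a minimal sketch of the idea in Python (the layer names, offset sizes, and note format are invented purely for illustration - in practice you would copy the line to extra divisi/solo tracks and nudge each copy):

```python
import random

# One legato violin line as (onset_ticks, duration_ticks, pitch) tuples.
line = [(0, 960, 67), (960, 960, 69), (1920, 1920, 71)]

# Hypothetical layers, each with a small fixed offset plus a little extra
# per-note randomness so the note changes "blur" like a real section.
LAYERS = {
    "violins 1 div. a": 0,
    "violins 1 div. b": 18,
    "solo violin": -12,
}

def make_layer(notes, base_offset, jitter=8):
    return [(max(0, on + base_offset + random.randint(-jitter, jitter)), dur, pitch)
            for on, dur, pitch in notes]

layered = {name: make_layer(line, offset) for name, offset in LAYERS.items()}
for name, notes in layered.items():
    print(name, notes)
```

The exact tick values matter far less than the principle: every layer plays the same line, but none of them line up sample-accurately.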

The quick and dirty way is to play the music for less-than-average listeners or rely on a PA system or room acoustics that you can blame for the less than optimum quality. :unamused:

More seriously, I sense that most of mducharme’s helpful suggestions are going to be a lot more complicated unless you use a DAW.

I’m not sure about that. I usually play parts in live, so offsets would already be humanized (ooo… would be nice to have a random “humanize” function…). It’s pretty easy to add a reverb. And can’t you record in CC changes using a fader? I think this is all pretty feasible stuff.

At the risk of oversharing, I’m going to post a demo here, first in NP, then making some of these changes. Stay tuned!

You know there are Humanize settings in Playback Options/Dynamics and Timing already, right? Pretty basic, but at least a start.

For reverb, I usually have at least four instances - one for strings, one for winds, one for brass, and one for percussion, plus a final tail at the end on the master track to tie everything together. I may have extra instances if I am using two different strings libraries (or winds/brass/perc libraries) together recorded in different halls etc., simply to better adjust the positioning. That gives you the sense of depth, as the further you move back on the stage, the more reverberant the sound will be. I actually go a bit overboard with EAReverb 2 in my DAW and I have something like 20 instances - one for violins 1, one for violins 2, one for violas, one for cellos, one for basses, one for each of the woodwinds players and brass players, and four different instances for percussionists. I send all of the sound in my case through EAReverb 2 (using it as an insert effect), so this takes care of my panning as well.

Cubase also has something like this (non-automated) - it has a powerful quantize function with an option to apply a random quantization error within a certain number of MIDI ticks (essentially a humanize function). The notes end up getting shifted by a random amount from their ideal quantized locations. The only issue with taking material that is 100% quantized and robotic and running the random quantize is that the function offsets each note by a random amount (within the limited number of MIDI ticks) in a random direction without taking the phrasing into account. Real performers don’t introduce timing errors note by note, randomly deciding for each note whether to move it too early or too late, the way the computer does. For instance, they probably wouldn’t play the first note 200 ms early, the next 200 ms late, the next 60 ms early, and the next 150 ms late, but that function will, because it doesn’t consider the phrasing. (I’ve maybe exaggerated the amount of error there, but you get the idea.)

When I’m stuck with 100% quantized material and have to humanize it in a DAW, I always run the random quantize first, then adjust the results manually with the mouse, playing it back over and over and listening for timing errors that a human would not introduce. Using less of the random quantize seems like it might work, but then it often doesn’t feel like enough. Once you introduce enough randomness that it starts to sound human, it easily starts to sound like a sloppy human in places, because the ratios between the durations of successive notes in a phrase become inconsistent with one another. Computer humanization can work, but I think it has to take the phrasing into account: a performer is likely to play a phrase so that the ratios between the note durations stay pretty rhythmically accurate (if you heard that performer’s line in isolation, it might sound correct), and it is only in the coordination with the other parts and performers (or against a fixed click track) that the errors can be heard.
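To make that difference concrete, here is a minimal sketch in Python (nothing Cubase or Dorico actually exposes - the note format and all the tick values are made up for illustration) contrasting per-note random jitter with a phrase-level offset that keeps the internal timing ratios intact:

```python
import random

# A short phrase as (onset_ticks, duration_ticks, pitch) tuples, 480 PPQ assumed.
phrase = [(960, 480, 67), (1440, 480, 69), (1920, 960, 71), (2880, 960, 67)]

def naive_humanize(notes, max_jitter=30):
    """Per-note random jitter, roughly what a basic random quantize does:
    each onset moves independently, ignoring the phrase."""
    return [(on + random.randint(-max_jitter, max_jitter), dur, pitch)
            for on, dur, pitch in notes]

def phrase_humanize(notes, phrase_drift=25, note_jitter=6):
    """Phrase-aware jitter: shift the whole phrase by one offset so the
    internal duration ratios stay intact, then add only tiny per-note noise."""
    drift = random.randint(-phrase_drift, phrase_drift)
    return [(on + drift + random.randint(-note_jitter, note_jitter), dur, pitch)
            for on, dur, pitch in notes]

print(naive_humanize(phrase))
print(phrase_humanize(phrase))
```

The second version can still drift against the click (as a real player would), but the relationships between the notes inside the phrase stay believable.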

I think mducharme has the bases covered! I’m trying the same thing, though I’m using Cubase for the realisation as I know it so well. The main extra thing I want to say is that, for me at least, hearing a near-realistic playback quite often leads me to refine the phrasing and dynamics. So it’s a two-way street.

The main risk with sampled instruments is that they can easily sound like the players simply can’t be bothered. The only way to overcome this is to make the same interpretation decisions the conductor and players would make for each phrase and sometimes even each note. This is really painstaking work. Even then, it’s not possible to achieve perfection.

I don’t know whether Dorico caters for this, but in Cubase it’s possible to delay everything on a track by a few milliseconds, or to advance it similarly. I find this helps when combining arco string passages with pizzicato instruments. It avoids the arco strings sounding a bit ‘lazy’.

I almost always end up using EQ on at least some instruments. Spitfire Chamber Strings violins, for example, can be excessively bright.

It’s also worth considering some very gentle ‘glue’ compression to pull everything together. Sometimes this helps, sometimes it doesn’t. I use the ‘Magic Death Eye’ stereo compressor plugin for this, with the gain reduction meters scarcely moving.

This is a very good point. Some libraries have a tightness controller which allows you to introduce some variation in intonation across the section. This can be quite effective too!

Spitfire has a good list of tips for mixing samples from different halls/sessions.

Reverb is an interesting one; they recommend a different reverb for shorts and longs. Another complication I’ve not heard mentioned is that a solo instrument (e.g. clarinet) is sampled in an otherwise empty studio, while the string sections fill the room with players - meaning greatly different reverb characteristics. You can hear it in libraries recorded in larger studios, such as BBCSO or all the rest of Spitfire’s catalogue done at Air (a horrible hall for reverb, but they exploit it). It reminds me of when I auditioned in Davies Hall in SF years ago: I almost dropped my instrument when I heard the accompaniment a second later. It was myself bouncing back - that never happens with a sound-absorbing audience in there. Anyhow, it’s something to be aware of: pay attention to reverb, because mostly it’ll wash things out.

Otherwise, put the most work into the strings. Twenty scratchy cat guts sound terrible over a mic. All the solo woodwinds sound great, the brass punch through and sound great, but the strings take a ton of work. Use the articulations like crazy - even if not strictly musical, just to improve the performance.

I’ve always had difficulty getting oboes to sound good. How do you achieve that?

To be completely honest, 99% of the time I’m just too lazy for this. I’m writing for actual musicians the majority of the time so the mockup has to be decent enough for them to get the idea, but won’t ever be used as a substitute for an actual recording or performance.

I’ve mostly been experimenting with Play menu settings, Aria settings, and Mixer settings. I’m still pretty new at dabbling with MIDI and mixing, but it is pretty amazing what a few simple tweaks can do. Obviously solo parts with slashes aren’t playing back, but as an example here’s a piece for 5 saxophones I wrote last summer in Dorico, using Garritan Jazz and Big Band saxes.

Here’s an MP3 of it that was just flat with Dorico defaults.

Here’s an MP3 of the same file straight out of Dorico with about two minutes worth of work loading plugins into the Mixer, and tweaking Play and Aria settings. No manual adjustments made to any Dynamics, Velocity, or any other CC in Dorico. (A few CCs changed in Aria that affect the instrument sounds.)

Here’s an MP3 of us actually playing it for anyone interested. Pretty low-fi recording with lots of little performance errors as we all did it remotely from our homes and 3 of the guys just used iPhones, but you can get the idea.

Most of the time, that’s the sort of thing I need to do. Just make a quick mockup that’s as realistic as possible without spending time drawing in and shaping CCs. Does anyone have any suggestions of things that would help that second file straight out of Dorico, without being time intensive? Just trying to see if there are any other quick fixes y’all would do to improve it, as the musicians are just going to use the file to get the general idea and will ignore any MIDI phrasing anyway.

(Obviously I couldn’t figure out how to embed those directly in the post or I would have. If anyone knows how to do that, I’ll go back and edit them.)

One issue for the relative timing of different instruments or sections is the physical arrangement of the orchestra. The speed of sound is roughly one foot per millisecond, so there is a measurable “latency” between the front and back of the orchestra from a normal listening position. In a live performance the players will compensate for this to some extent, and also for the fact that bass notes take longer to “speak” at full volume than treble notes.

However when samples are chopped up into individual notes and then reassembled, these timing variations can be lost.

The microphone positions for recording individual samples can also produce similar latency effects.

The balance of the direct and reverb sound levels also affects this.

This sort of detail can be important for making individual instruments be “heard” in a natural way in the ensemble. I suspect NotePerformer has some internal algorithm for this, but a sample player that doesn’t have NP’s long “delay time” doesn’t have the option of starting to play notes before the MIDI event that triggers them.
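As a rough illustration of the scale of that “latency” (the stage depths below are invented round numbers, and the one-foot-per-millisecond figure is itself only approximate), here is a tiny sketch converting each section’s distance from the front of the stage into a track delay:

```python
# Speed of sound is about 343 m/s, i.e. roughly 1.1 feet per millisecond,
# so a section's extra distance from the listener maps almost directly to
# milliseconds of delay. The depths are illustrative guesses, not measurements.
SPEED_FT_PER_MS = 1.125

stage_depth_ft = {
    "violins 1": 8,
    "violins 2": 10,
    "woodwinds": 20,
    "horns": 28,
    "trumpets/trombones": 32,
    "percussion": 38,
}

for section, depth in stage_depth_ft.items():
    delay_ms = depth / SPEED_FT_PER_MS
    print(f"{section:>20}: ~{delay_ms:.1f} ms behind the front of the stage")
```

Whether you actually dial numbers like these in as track delays is a taste question, but it shows why the back of the orchestra can afford to sound a touch later than the front.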

Honestly, I don’t know - they just do. Clarinet is my first instrument (I still play it), so I sat next to them for years. I haven’t heard a sampled WW section I didn’t like; being solo instruments, they just sound good out of the box. Same with others, like solo violin - as I say, it’s only when you put them in sections that it breaks down (to my ear at least). Thinking on it, I ride the CCs (either a FaderMaster Pro or a Hornberg breath controller) just like a wind player would (being one), so maybe that helps.

What specifically sounds bad about the oboes for you? It could be how you’re using them - not that it’s wrong in any sense, but maybe some orchestrations bring out the problems more. I just used them as a balance against the string section, which was internally doing some counterpoint thing, and it sounded great.

BBCSO is my favorite library; I also have EWHO and Spitfire Studio Strings. BBC is a cohesive library that was recorded in - what’s the name - Maida Vale? The old hall in London. Anyhow, it’s not bone dry like the Bill Putnam studios, and not crazy like Air. Something in between.

  • Something to consider: that wind column can’t change on a dime - it’s nothing like the keyboard you’re playing the notes in on. Playing LOUD--LOUD note to note, for example, is very hard. And generally wind players like to glide around and not do things abruptly, unless it’s musically called for. However, don’t micromanage the CCs - I see guys fiddling with them a lot, and I’m not sure that’s doing anything other than filling up the MIDI. The main thing is to always think of the line being played - where’s the peak? Think about where you want it to start, peak, and end; that’s most likely how the winds are thinking about it. And think intensity, not volume. Leading and trailing the beat is something we do, along with the breath, to achieve that. Ah! One more thought: when playing legato and you’re hitting the highest (intensity) note, the peak of the phrase, unless the score calls for otherwise, winds usually hit the actual peak mid-note, not at the start of the note the way a piano does.

Fascinating thread - I really hope we get some worked up examples we can study.

The issue is generally that people don’t do enough with the CCs. I agree the important thing is to find the peak intensity of the phrase and make sure you are shaping with that in mind, not just randomly running CCs up and down throughout the phrase, but I don’t think slight fluctuations beyond that are unhelpful. Part of the issue with the way the samples are recorded is that the players get hyper-focused on individual notes, whereas in real music they are focused on playing phrases. In sample recordings, they are super focused on playing that one note on time, in tune, with good tone, and with a completely stable dynamic throughout, never getting the slightest bit louder or quieter. This is an extremely artificial situation compared to reality, and to get out of it, continuous subtle CC shifts can be quite helpful. Simply chasing the line up to the peak with a very linear movement (e.g. drawn with the mouse, played by a notation program, or made by moving the fader at a very consistent speed in the same direction) and then coming back down still ends up sounding robotic, because there is not enough variation in level across the rest of the phrase.
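As a rough sketch of that difference (the curve shapes and all the numbers are invented for illustration - real phrases are shaped by ear, not by formula), compare a purely linear ramp to the peak with an eased arc that never quite sits still:

```python
import math
import random

STEPS = 64        # CC1 events across one phrase
PEAK_POS = 0.6    # phrase peak lands about 60% of the way through
LO, HI = 40, 100  # CC range for the phrase

def linear_cc():
    """Straight ramp up to the peak and straight back down - the robotic version."""
    out = []
    for i in range(STEPS):
        t = i / (STEPS - 1)
        frac = t / PEAK_POS if t <= PEAK_POS else (1 - t) / (1 - PEAK_POS)
        out.append(round(LO + (HI - LO) * frac))
    return out

def shaped_cc():
    """Same overall arc, but eased toward the peak and overlaid with a slow
    wobble plus tiny jitter, so the level never moves in a perfectly straight line."""
    out = []
    for i in range(STEPS):
        t = i / (STEPS - 1)
        frac = t / PEAK_POS if t <= PEAK_POS else (1 - t) / (1 - PEAK_POS)
        eased = (1 - math.cos(math.pi * frac)) / 2   # smooth rise and fall
        wobble = 3 * math.sin(2 * math.pi * 3 * t)   # slow "breathing" motion
        jitter = random.uniform(-1.5, 1.5)           # hand-on-fader noise
        value = LO + (HI - LO) * eased + wobble + jitter
        out.append(max(0, min(127, round(value))))
    return out

print(linear_cc())
print(shaped_cc())
```

The point is not this particular formula - it is that the second curve has one clear peak like the first, but it breathes around it instead of travelling in straight lines.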

I missed this before, but agree 100% with this.

I don’t know whether Dorico caters for this, but in Cubase it’s possible to delay everything on a track by a few milliseconds, or to advance it similarly. I find this helps when combining arco string passages with pizzicato instruments. It avoids the arco strings sounding a bit ‘lazy’.

From what I have seen, Dorico doesn’t have a method for this, but I agree it is really important. It isn’t only important for strings - certain libraries have a “fast” and a “slow” legato setting, and the “slow” legato works a lot better with slower-moving, broader lines. However, the slow legato generally means a late-sounding note if you just change notes right on the beat, so you have to start the note a bit early. It is possible to do this manually by bringing the onset of the note forward a bit so that it always plays slightly early, but it would be nice if there were an adjustment to automate this.
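For what it’s worth, the adjustment itself is just a blanket negative offset. Here is a minimal sketch, assuming a made-up table of per-articulation lead times (the millisecond values and the note format are illustrative guesses, not measurements of any particular library):

```python
# Hypothetical lead times in milliseconds: slow legato transitions speak late,
# so their notes get pulled earlier to land on the beat.
LEAD_MS = {"slow_legato": 120, "fast_legato": 50, "pizzicato": 0}

def ticks_per_ms(ppq=480, bpm=120):
    # One quarter note = ppq ticks = 60000 / bpm milliseconds.
    return ppq * bpm / 60000.0

def apply_lead(notes, articulation, ppq=480, bpm=120):
    """notes: list of (onset_ticks, duration_ticks, pitch) tuples.
    Returns the same notes with every onset shifted earlier by the lead time."""
    shift = round(LEAD_MS[articulation] * ticks_per_ms(ppq, bpm))
    return [(max(0, on - shift), dur, pitch) for on, dur, pitch in notes]

line = [(0, 960, 60), (960, 960, 62), (1920, 1920, 64)]
print(apply_lead(line, "slow_legato"))
```

In a DAW the same thing is usually just the track delay field; the sketch only shows why the right number depends on which articulation is playing.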

I almost always end up using EQ on at least some instruments. Spitfire Chamber Strings violins, for example, can be excessively bright.

I try to minimize EQ as much as possible myself, although I agree it has to be used sometimes. I typically EQ a notch out of the VSL oboe around 2000 Hz to make it a tad less nasal. I also use Spitfire Chamber Strings, and one thing there is to make sure not to take CC1 up too high. Once you get into the ff and fff range, the strings don’t normally play quite that loud, so sometimes the harshness you hear is really just due to the mapping between the dynamic and CC1, where it hits the ff or fff level too early. LASS (LA Scoring Strings) has the same issue, but it is even more pronounced there.

I am here for the knowledge… very interesting thread indeed.