Help with realistic mock-ups

I’ve heard that said before, and while I don’t have insight into how most people program (not having watched them do it or heard the results), I think there’s a better way.

Here’s my best advice: rather than relying on somewhat artificial technical directions (use CCs, don’t use CCs, do this or that as I said above), load up your DAW with sections and work with them. Playing samples is just like playing an instrument: you need to hook the ear up to the hands. My recommendation, if you have a Mac, is to use MainStage and set up templates with a split keyboard. Here’s an example (ping me on the forum if you want to know how to set this up).

It’s rather clever: it has an algorithm that dynamically changes the split point based on how you’re playing, extending a range as you work within it. For example, if you’re playing the basses in one hand, the bass split will extend as you go up, until switching over to the celli at a point you set. It works better with a single split; doing what I’m doing here can be hit-or-miss sometimes, but it works well enough for the purpose. Do this for your winds, brass and percussion. Then just improvise around and fiddle with all of your controls to practice being a sample musician. Practice!
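For anyone curious what a "floating split point" like that might look like under the hood, here's a minimal sketch. To be clear, MainStage's actual algorithm isn't public, and the zone names, thresholds, and note numbers below are all my own assumptions; this just illustrates the idea of the active zone extending toward the boundary as you play near it.

```python
class DynamicSplit:
    """Hypothetical sketch of a dynamic keyboard split: the zone you are
    actively playing in extends toward the boundary, up to a hard limit
    you set yourself. Not MainStage's real implementation."""

    def __init__(self, split=48, hard_limit=60):
        self.split = split            # current boundary: notes below -> basses
        self.hard_limit = hard_limit  # never extend the bass zone past this

    def route(self, note):
        """Return which patch a MIDI note number should trigger."""
        if note < self.split:
            # Playing in the bass zone near its top edge? Extend the zone
            # upward so a rising bass line doesn't suddenly switch to celli.
            if note >= self.split - 2 and self.split < self.hard_limit:
                self.split = min(self.split + 2, self.hard_limit)
            return "basses"
        else:
            # Playing in the cello zone near its bottom edge? Pull the
            # boundary back down (but not below MIDI note 36).
            if note <= self.split + 2 and self.split > 36:
                self.split = max(self.split - 2, 36)
            return "celli"

s = DynamicSplit()
print(s.route(40))  # "basses" -- well inside the bass zone
print(s.route(47))  # "basses" -- near the edge, so the boundary extends up
print(s.route(49))  # still "basses", because the boundary moved
```

The key design choice is that the boundary only creeps a couple of semitones at a time and stops at the hard crossover point, which matches the behaviour described above: the bass range grows as you ascend, until the handover note you set.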

Because it depends greatly on your sample library too. I tend to stick with one main library, I can’t claim to play them all well. And don’t forget the mics and other mixing techniques too!

I don’t know whether the delay/anticipation has to be applied to a whole track. But if you only need to do it on a single phrase, or perhaps on most phrases in a section as a matter of style, Dorico lets you select the notes in that phrase and move them by the smallest possible distance. I don’t know the minimum resolution, but if you zoom in far enough in Play mode, it seems you can work in very small units.
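In plain MIDI terms, that kind of per-phrase anticipation or delay is just a uniform shift of the note start times. A tiny sketch of the idea, independent of Dorico's internals (the tuple layout and tick values here are my own assumptions):

```python
def nudge(notes, offset_ticks):
    """Shift the note-on times of a phrase by a tick offset.

    `notes` is a list of (start_tick, duration, pitch) tuples; a negative
    offset anticipates the phrase, a positive one delays it. Start times
    are clamped at 0 so nothing lands before the start of the track.
    """
    return [(max(0, start + offset_ticks), dur, pitch)
            for start, dur, pitch in notes]

phrase = [(480, 240, 60), (720, 240, 62), (960, 480, 64)]
# Anticipate the whole phrase by 30 ticks (1/16 of a beat at 480 PPQ):
print(nudge(phrase, -30))  # [(450, 240, 60), (690, 240, 62), (930, 480, 64)]
```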


My goodness, the real recording is 150 million times better than what your computer is outputting.


LOL, yep, that’s why I’m following this thread! (Also having Steve Wilson playing lead on soprano and Ron Blake taking the tenor solo helps a bit too)

I’m finding there are some serious limitations at the moment in this regard.

I think I’m still going to look for a way to play these changes in and stay in Dorico from start to finish. Maybe I should just give up and switch over to a DAW…

Dorico will never be as good as a DAW, obviously, and until we get a live link connection to Cubase, or ideally to others like Logic too (unlikely), you’ll have to put up with Dorico, which is really meant as a notation tool.

Fortunately there’s a fairly simple workaround. I spelled out how to do this with Logic on Mac; if you look at my post history you’ll see it in the last month or so. Simply send all your MIDI over to Logic using the IAC driver, ‘print’ the MIDI from your score when done, and then do all your performance twiddling in Logic. It should work just as well in other DAWs. If you’re on Windows, there’s probably an equivalent approach.

Is that different than just exporting the MIDI from Dorico? I imagine I’d just then humanize it in Studio One (my preferred DAW).
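For reference, the "humanize" step mentioned here generally boils down to adding small random offsets to each note's timing and velocity. A generic sketch of the technique (not Studio One's actual implementation; the jitter ranges and data layout are my own assumptions):

```python
import random

def humanize(notes, max_tick_jitter=15, max_vel_jitter=8, seed=None):
    """Add small random timing and velocity offsets to quantized notes.

    `notes` is a list of (start_tick, velocity) pairs. Velocities are
    clamped to the valid MIDI range 1..127 and start times to >= 0.
    """
    rng = random.Random(seed)
    out = []
    for start, vel in notes:
        start += rng.randint(-max_tick_jitter, max_tick_jitter)
        vel += rng.randint(-max_vel_jitter, max_vel_jitter)
        out.append((max(0, start), max(1, min(127, vel))))
    return out

quantized = [(0, 100), (480, 100), (960, 100), (1440, 100)]
print(humanize(quantized, seed=1))  # each note now sits slightly off-grid
```

Whether this is done in Dorico's Play mode, in Studio One after a MIDI export, or via the IAC-driver route above, the underlying transformation is the same.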

#metoo :slight_smile:

Dan, as in Paul’s response in the other thread, there are things that Dorico does already, and that the developer of the sound library simply seems to ignore. You can already have a degree of humanization/smart randomization of the playback, and you can see it in the Velocity and CC1 lanes.

Being forced to export to a DAW for realistic playback is exactly what Dorico could finally spare us, and probably already does for most tasks.


I beg to disagree on this point. Or, at least, I would look at it from a different perspective.

The first point is that the MIDI editing side of Dorico is already very powerful. There are still many things to do, but it already allows deep fine-tuning of the playback data. Many things are done in a more “musical” way than in a DAW (like choosing articulations or creating dynamic variations).

The second point involves considering what “notation” is today. We have probably grown up in a world where the composer composed with pencil and paper (I’ve always used rollers), the copyist put it into Finale (or into better handwriting), and the publisher printed, promoted and sold it.

Times are very different now. A composer is chiefly responsible for the final look of the score, but is also asked to let a jury, impresario or publisher listen to a prototype of it. Architects have been asked for this for thousands of years. And composers have always had to present a mockup of their orchestral scores at the piano (either four-hands or on two pianos).

Purely graphical notation would no longer suffice, except for personal use. Notation plus everything needed to promote one’s own work is the current requirement.


Hi Dan, I have a mockup of my own I’d like some feedback on, in the spirit of this thread. Would you prefer I started a new thread for that?

No, please post it here. I’d appreciate learning from it as well. Thanks.

Here’s the PDF of an orchestration I recently completed.

Here’s the NotePerformer demo, m.15-83:

I didn’t do anything to it, just boosted it by about 1.5 dB in Audacity. (I omitted the piano.)

Here’s an audio demo using Infinite Winds, Infinite Brass, and Cinematic Studio Strings:

The audio levels were really low, so the brass and winds are boosted significantly. The strings are boosted as well. I have Valhalla reverb but didn’t use it here; I just wanted to start with a sort of “default” recording. I did make a few changes to the settings of the Infinite instruments in Kontakt:

  • limited the dynamic range of the brass and winds
  • set attacks to 0, so velocity follows the score
  • the trumpets sounded terrible by default, so I increased the ambient micing for them and lowered their volume a little

I know a lot of this process is learning to “play” the particular VSTs you’re using. I like Infinite a lot, but its creator has said it’s intended for playing in, not notation. However, I think there’s more I can do to make it at least as good as NotePerformer, maybe better.

Timing humanization is set to 40%. Some of the instruments I had played in via MIDI record, which is where Infinite shines, and some of them were written in.

I’m going to start recording in some CC changes next, as well as some reverb, but I’m posting this as an initial comparison.

Oh, and the CS Strings were waaaaay delayed because of their legato patch, so I had to put a latency compensator on that channel.
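As an aside, that kind of latency compensation is usually just a fixed negative track delay sized to the patch's lookahead. A small sketch of the arithmetic, where the 250 ms latency and 90 BPM figures are purely illustrative (CSS's actual legato delay depends on the patch settings):

```python
def track_delay_ticks(latency_ms, bpm, ppq=480):
    """Convert a patch's latency in milliseconds into the negative track
    delay (in MIDI ticks) needed to compensate, at a given tempo.

    ms per tick = 60000 / (bpm * ppq), so the required delay is the
    latency divided by that, negated.
    """
    ms_per_tick = 60000.0 / (bpm * ppq)
    return -round(latency_ms / ms_per_tick)

# e.g. a legato patch with ~250 ms of latency at 90 BPM, 480 PPQ:
print(track_delay_ticks(250, 90))  # -180 ticks of negative delay
```

Note that a tick-based delay only stays correct if the tempo is constant; many DAWs sidestep this by letting you specify the track delay directly in milliseconds.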

Feedback welcome!

It doesn’t sound bad - the strings in particular sound really good. The biggest issue to me right now is that nothing sounds like it’s in the same room, so the overall sound doesn’t really “gel”. The reverb will help with that. The sense of depth is also missing, so it sounds like everybody is the same distance from the microphones. And the brass is too quiet, and the winds are a bit too quiet compared to the strings. It’s hard to comment on how they sound because in places they can barely be heard. As an arrangement matter, you might also want to check your voicings - a few times I heard chords that I think you wanted to sound full but that were missing the third. The piano probably supplies it, but you’ll get a better sound if you work it into the orchestra as well.

I loved the orchestration Dan. I’d happily listen to that again and again.

I heard the NP version first and then the Infinite / Cinematic version immediately afterwards. I fully agree with mudcharme’s comments except for one: for me the Cinematic strings lacked the character they had in NotePerformer. The most obvious problem with the I/C version was balance; I was never really convinced I was listening to an ensemble playing together. The character present in the NP version (I found it stirring) was also lacking in the I/C version. I would suggest that most of the tips discussed in this thread are really aimed at dealing with those two issues: balance and character. None of these comments is in any way a criticism; I’m full of admiration for you trying to work your way to a solution on this.

As I said in an earlier post, I’d be interested in studying examples of files that take the tips suggested in this thread and show how they can work. Would it be worth posting the Dorico file of “I Will Sing” here, so that others with different libraries could show how they would tackle it, or parts of it?

PS - My wife has just stuck her head round the door and said “that was a lovely piece of music …”

Hi all,

as Dan has kindly allowed me to share his thread, here’s a mockup of a chamber strings piece that I would really like some feedback on. It’s called ‘Eyam’ (pronounced ‘Eem’). As some of you may know, Eyam is a place in Derbyshire, UK, that decided, when there was a plague outbreak in the 17th century, to completely isolate itself from other villages to stop the disease spreading. The piece tries to portray the villagers’ changing emotions as they confront the reality of their situation and decide what to do.

Here’s the score

Here’s the Cubase mockup rendered to audio

I have EQ’d the upper strings to remove some harshness and added some overall reverb. Unfortunately I don’t seem to be able to control the bowing on the solo violin that takes the melody at around 5.30.

All comments welcome!


I feel like this could be a terribly dumb question, but my default is always set to “silent” and I usually make recordings of my music manually elsewhere. As such, I’ve played around with Dorico’s renderings very little, and even then, just with NotePerformer. That said, what do you mean by “played in” the notes? Do you mean that you recorded them with the metronome, so Dorico retains some MIDI data from your keyboard while parsing out the notation, or that you did the notation and then went back and overdubbed a human performance? If it’s the latter, I would love to do this too. If that feature isn’t available yet, I’d love to make it an FR.

A little of both. I’ve started orchestrating more by playing in lines to the metronome. I find it goes much more quickly, and I get phrases that sound much more natural to the instrument. In those cases, the MIDI data is retained.

The difficulty is when the resulting notation isn’t entirely correct, and I have to clean it up. I agree, I’d love to be able to overdub a human performance on top of the notated one.

You can actually do this, although with a big score it might get a little cumbersome. Certainly for a smaller number of instruments it can work well - especially for piano renderings. Simply create an identical instrument. Record into the duplicate staff (they can share the same MIDI channel if you want), put the notation into the top staff and use ‘Staff Visibility’ to hide your ‘live’ performance. Hidden staves still play back! You can ‘Suppress Playback’ on the other staff. If you use the same MIDI channel, you can mix and match between sections that you want to play in live, and sections that you are happy with when played back from the notation only.

The other element that can be useful on a solo instrument staff is ‘Create Staff Below’. You can’t record directly into such a staff, but you can copy material onto it, suppress playback on your main staff and then use ‘Remove Staff’ to visually remove your recorded material - it will still play back! This is not as flexible as the first method, but can be useful in certain situations.

This is the only way to get really realistic performances. Quantizing and other note editing can adversely affect the notated side of a recording, so basically just split it out into two tracks and it will make life much easier!

It’s a lovely piece, Dan. The NotePerformer demo seems to play the phrases better, and the Infinite / Cinematic Strings demo has in most cases a better instrumental sound (apart from the trumpets, where I prefer the NP version). The low arco strings have the same characteristic mine always have, even with different sample libraries - they sound a little crude and clumsy. I wish I had a solution to offer here, but I don’t! The Infinite sound could also benefit from some integration between the instruments; reverb could certainly help achieve that. In any event, it’s a very moving piece and I really enjoyed it.