Help with realistic mock-ups

It’s interesting to me that even in this excellent video on using Dorico to score films directly with BBCSO:

At one point in the video, Roland acknowledges that while it should be possible to achieve DAW-like playback, in his examples even he started by adding expression (CC1, CC11, etc.) in Logic first and then exported the MIDI to Dorico before continuing. I've similarly found it much easier to work this way (even with Expression Maps and all of Dorico's Play Mode functionality), because the all-important expression parameters like CC1 and CC11 are so much easier to edit in Logic.

Part of the problem I’ve found is the slight confusion I experience between the hierarchy of the “Dynamics Lane” in Dorico and the (somewhat hidden) underlying CC11- and CC1-type data in Play Mode, and in figuring out which takes precedence to control expression at any given moment (depending on what was entered first, how the notes were originally entered, etc.). Automatic Expression Map-based Dorico playback requires this sort of additional expression tweaking, I’ve found, so you end up having to negotiate both the Expression Map programming (to interpret the notated symbols) PLUS still tweak the underlying MIDI further. So I’m still sorting this out…

  • D.D.

To clarify my earlier post, when I said the strings sound really good, I meant in terms of the samples, they sound much more like real strings vs. the NotePerformer ones, and quite nice. Everything needs much more shaping of course to get the emotion back. NotePerformer adds a lot of interpretation. Without that, if you just rely on the Dorico CC control to provide all expression, the virtual orchestra ends up playing in a rather timid, flat way, like they are afraid to get too quiet or loud.

From what I understand, the dynamics lane by itself does nothing; rather, it adds CC data into the CC lanes, which you can override with your own data. Of course, once you’ve overridden a passage with your own customized data, changing a dynamic there won’t have a playback effect anymore, because your manual CC data takes precedence in that passage.

Thanks for the encouraging words, David and Rich.

Here’s the original Dorico file. You can see I already had done what Grainer2001 had suggested, sort of: I added a second piano track and performed it in, pedaling and all (and played it through Keyscape, which I love). Anyone is welcome to monkey with the file as they wish (keep it CC BY-NC-4.0, please).

I do plan to return to this next week and try to incorporate both CC and reverb for greater realism (unfortunately this sort of thing is a “free time” pursuit for me, and I don’t have any more of that this week…! :sunglasses: )

That’s nicely expressed! It just gets confusing for me when I’ve performed things in originally (moving the mod wheel or expression pedal for a particular sound), or when some data (but not all of it) was originally imported from Logic with CC data already attached, while in other sections “automatic interpretation of dynamics” is happening, and also when remembering where to look to tweak (Dynamics lane vs. individual CC lanes). I’m actually teaching a Dorico course right now and ultimately had to write down the different ways things seem to work just to remind myself, including whether a dynamic in the score is actually played back:

Record live with CC1 set to control dynamics playback in the instrument’s Expression Map: adding dynamics to the score afterwards will not further affect notes that already have CC1 data recorded

Record live with velocity set to control dynamics playback in the instrument’s Expression Map: adding dynamics to the score afterwards will scale velocity appropriately to the dynamic symbol

Step-enter the score with dynamic symbols added: all dynamics are interpreted automatically according to the playback settings

Edit the step-entered score by drawing points in the Dynamics Lane in Play Mode: affects all dynamics played back correspondingly

Edit the step-entered score by drawing (or performing in) CC data: takes precedence over all other dynamics for those moments

OVERALL: adding dynamics ONLY in Play Mode will NOT change what’s already notated dynamics-wise in the actual score (perhaps needless to say)
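For my own reference, I found it helps to think of these rules as a simple precedence chain. Here’s a toy sketch in Python of that mental model - emphatically NOT Dorico’s actual implementation, just the observed behaviour (hand-entered CC data wins over the Dynamics Lane, which wins over the notated dynamic):

```python
# Toy model of the precedence rules listed above. This is NOT Dorico's
# internal logic -- just a sketch of the observed behaviour: explicit
# CC data beats Dynamics Lane edits, which beat the notated dynamic.

def resolve_dynamic(beat, manual_cc, dynamics_lane, notated_default=64):
    """Return the CC value in effect at `beat`.

    manual_cc:       {beat: value} -- CC points recorded or drawn by hand
    dynamics_lane:   {beat: value} -- points drawn in the Dynamics Lane
    notated_default: value derived from the notated dynamic symbol
    """
    if beat in manual_cc:       # explicit CC data takes precedence
        return manual_cc[beat]
    if beat in dynamics_lane:   # next: Dynamics Lane edits
        return dynamics_lane[beat]
    return notated_default      # fall back to the notated dynamic

# A notated mf (~64) with a Dynamics Lane bump at beat 2
# and a hand-drawn CC1 spike at beat 3:
manual = {3: 110}
lane = {2: 80, 3: 70}
values = [resolve_dynamic(b, manual, lane) for b in range(1, 5)]
print(values)  # [64, 80, 110, 64]
```

Note how the lane value at beat 3 is simply ignored once manual CC data exists there - which is exactly why changing a dynamic symbol has no audible effect in a passage you’ve already drawn CC data into.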

  • D.D.

Thanks for sharing your workflow. The difficulty I’m having is that I don’t always begin by playing it in. If I’m orchestrating, it’s a combination of real-time entry, MIDI step input, and good ol’ numbers-and-letters.

I’m wondering if, before starting my “mock up” phase, I should remove all playback overrides so I’m starting with all tracks on equal footing.

But then it seems that some instruments, like Infinite Brass Trumpet, simply have to be played in. I played around with it this morning. Played live, it sounds really, really good, but realized from notation, it sounds synth-y and obnoxious.

I’m wondering if the need to create a second ‘playback’ track is a sign that you’re stretching Dorico to its limit and it might be cleaner to do the realisation in a DAW - for a full orchestral piece, that’s a lot of extra tracks! That reflects my own personal workflow, so I am biased in that direction of course.

@Richard, that’s what I’m starting to think as well…

Hi everyone, very interesting discussion here (though I am sorry I didn’t have the time to read all the posts in detail). I hope this post adds something useful to the community.

I think Dorico is superior in terms of efficiency, that is, the time spent vs. the audio quality of the mockup. I used to use Cubase and then Logic Pro, and this has always been a headache in terms of productivity for those who write music instead of playing/recording it live. So the following does not apply to such a “live” workflow. If you are writing music in a DAW (vs. in Dorico):

  1. you need to “humanize” each entry (automated in Dorico)
  2. you struggle with articulations, deciding which sample to use for a particular duration (automated with the Dorico expression maps)
  3. you need to jump from one track to another to balance the overall dynamics and the cresc./decresc. of each track without a clear overall view (Shift+D and all dynamics/hairpins are set up at once in Dorico)
  4. you spend time playing with the CCs, drawing in most cases linear segments moving up and down to articulate your phrase (the hairpins in Dorico do the job perfectly - you can tweak the CCs a little in Dorico for micro-dynamics and vibrato adjustments, especially on long notes of the phrase)
  5. you struggle to enter glissandos, tuplets, grace notes, etc. (automated in Dorico)
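Point 4 is easy to see concretely: a hairpin is essentially a linear CC ramp. A minimal, library-agnostic Python sketch follows - the tick resolution, the choice of CC11, and the dynamic-to-value mapping are my own illustrative assumptions, not anything mandated by Dorico or a particular DAW:

```python
# Sketch: turn a crescendo/decrescendo "hairpin" into linear CC11 data.
# Pure Python, no MIDI library required; 480 ticks per quarter note and
# the p/f values are assumptions for illustration.

TPQ = 480  # ticks per quarter note (assumed resolution)

def hairpin_to_cc(start_beat, end_beat, start_val, end_val, step_ticks=60):
    """Linearly interpolate CC values across a hairpin's span."""
    start_tick = int(start_beat * TPQ)
    end_tick = int(end_beat * TPQ)
    events = []
    for tick in range(start_tick, end_tick + 1, step_ticks):
        frac = (tick - start_tick) / (end_tick - start_tick)
        value = round(start_val + frac * (end_val - start_val))
        events.append((tick, value))
    return events

# A two-beat crescendo from roughly p (~49) to roughly f (~96):
ramp = hairpin_to_cc(0, 2, 49, 96)
print(ramp[0], ramp[-1])  # (0, 49) (960, 96)
```

This is exactly the tedium a notation program removes: in Dorico the hairpin symbol itself generates the equivalent ramp, while in a DAW you draw every one of these segments by hand.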

So Dorico automates all the tedious work required by DAWs. The only requirement is to first spend a significant amount of time programming your playback templates and expression maps (it took me about 10 hours for a full symphonic orchestra with Spitfire Audio orchestral libraries).

And to be frank, at the end I am 10 times more productive in Dorico for a similar audio rendering. And really, I want to write music, not spend hours programming a DAW. Check this example of a composition #madewithdorico done in a few hours: Franck D | Music Composition on Instagram: "« Hans », a piece I started composing before my Covid that I hope to finish soon, in the style of Hans Zimmer. A quick video made this evening with an excerpt."

The improvements I can see are three-fold:

  1. There is only one send effect channel for the reverb (so no secondary reverbs or delays)
  2. Copying and pasting notes does not paste CC tweaks
  3. It is not possible to add audio samples

A good mockup is 1) good sample libraries, 2) good reverb, 3) smart use of articulations, 4) proper dynamic shaping of the phrase, 5) tweaks of dynamics and vibrato on long notes

Excellent example! I’m really curious how you entered the notes originally - did you “perform” any of them in, or just use step entry? Did you use the Expression Maps to do ONLY automatic interpretation? Or, if you tweaked, did you run into the issue of figuring out whether to tweak the Dynamics Lane (if you step-entered originally) or directly edit the CC data (if you performed it originally), etc.? Just curious about your workflow for this piece. Thanks for sharing -

  • D.D.

Hi @robjohn9999! My answers hereafter:

Excellent example! I’m really curious how you entered the notes originally - did you “perform” any of them in, or just use step entry?

Thank you! I write the music, that is, I enter it note by note in Dorico with the edit function (Enter + duration + accidental + A/B/C/D/E/F/G). I do not perform it in or use MIDI step entry; I don’t use a MIDI keyboard, just my MacBook Pro.

Did you use the Expression Maps to do ONLY automatic interpretation?

I spent quite some time working on my expression maps, especially the duration-to-articulation functionality offered by Dorico: this is tedious but critical. I also adjusted the velocity curves to match my “taste” for the difference between ppp, pp, p, mp, mf, f, ff, and fff. So I can check the rendering of my composition at any time while writing it and make adjustments.
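As an aside, that kind of dynamic-curve adjustment can be pictured as a power curve spread over the dynamic marks. A hypothetical Python sketch - the exponent and endpoint values are illustrative assumptions, not Dorico’s actual internals:

```python
# Sketch of mapping dynamic marks (ppp..fff) onto MIDI levels with an
# adjustable exponent, similar in spirit to a "dynamic curve" setting.
# The exponent and endpoint values are illustrative assumptions only.

DYNAMICS = ["ppp", "pp", "p", "mp", "mf", "f", "ff", "fff"]

def dynamic_to_level(mark, curve=1.0, lo=16, hi=127):
    """Map a dynamic mark to a MIDI value.

    curve == 1 spaces the marks evenly; curve > 1 bunches the quiet
    marks together and widens the gaps at the loud end.
    """
    i = DYNAMICS.index(mark)
    frac = i / (len(DYNAMICS) - 1)   # 0.0 for ppp, 1.0 for fff
    shaped = frac ** curve
    return round(lo + shaped * (hi - lo))

for curve in (1.0, 2.0):
    print(curve, [dynamic_to_level(d, curve) for d in DYNAMICS])
```

Tweaking a single exponent like this is the quick way to push a whole library’s response toward your taste, instead of setting eight values per instrument by hand.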

Or, if you tweaked, did you run into the issue of figuring out whether to tweak the Dynamics Lane (if you step-entered originally) or directly edit the CC data (if you performed it originally), etc.?

The given example has no CC editing; it just uses the dynamic markings and the hairpins.

Just curious your workflow for this piece.

I devise a motive, then build a phrase based on this motive using classical rules of composition (see William E. Caplin, Classical Form - the best buy for a composer). Then I decide on the thickness of the phrase and orchestrate the accents, decide on the middleground and background (ostinato, basses, etc.) and orchestrate them. Then I create and orchestrate variations. The best orchestration courses can be found on ScoreClub.net: together with William E. Caplin’s book, an absolute must-buy for a composer.

Take care

If you start from a score, I can see that using Dorico is much more productive for producing a mock-up. I guess the question is: if you want to produce a piece to be used, e.g. as a film or TV score, is it possible to do that in Dorico? I don’t know the answer to that, to be honest, as I’ve not tried. But there are advantages to using a DAW - you have many more sound-sculpting tools, a fully fledged mixer, group channels, flexible quantize, more powerful MIDI editing, proper metering, unlimited effects, rendering of tracks to reduce CPU load, etc. If you play your parts in, as I mostly do, you have dynamics built in, though they will need tweaking.

Looking at film and TV music, I don’t think it makes sense to use Dorico to deliver the finished audio.

For TV, and for short or independent films, most of the scoring is done with virtual instruments, so you don’t really even need a notated score in most cases. Also, in the present day there are a lot more synthesized elements - drum loops, sound design, synthesized sounds. You would be using Play mode as sort of a DAW substitute and never really going into Write or Engrave mode. It doesn’t make sense to me to use Dorico that way, because the Play mode toolset is certainly inferior to the toolset in Cubase, so it is more work to get the same sort of result. When you have scoring work to deliver, it has to get done quickly, so you need a UI that is efficient enough. Dorico also lacks things such as group tracks, so the routing isn’t as flexible, and not being able to do surround mixes, etc., is a limitation. For this sort of work you need to be able to deliver stems, and possibly have good control over automation and integration of audio tracks.

Occasionally, TV shows, short films, and independent films may have a few live musicians mixed with the samples, but it would generally be less work to export the MIDI or MusicXML from the DAW and import it into Dorico to notate those few passages than to write the entire thing in Dorico.

For big-budget films, the composers split into two groups. First, you have the composers who started in the 80s, who worked on paper before and still prefer to write by sketching directly on paper. That sketching process can be brought into Dorico (as in the case of Alan Silvestri), and then the composer gives that to the orchestrator, who creates the fleshed-out score with all of the parts. That fleshed-out score would then be handed to somebody to create a mockup in a DAW, and that person would play in all of the lines by hand, following the score and shaping all of the CCs.

Second, you have the composers who work in a DAW to begin with. They may do so because they make extensive use of sound-design elements, like Hans Zimmer, or they may come from more of a pop background, like Danny Elfman, and not really be comfortable with notation. In those cases, the composer creates the initial version in the DAW, which the orchestrator takes and translates into notation software to create the actual performance score and parts. There can be challenges where the samples the composer uses would not translate to a live performance scenario, so the orchestrator has to decide whether to adjust the orchestration in a way that has a similar effect, or to use mixing after the fact to work around the issue and preserve the original orchestration. I suspect part of the reason big film orchestrations often call for a crazy number of horns, like 12 or 18, stems from some composer playing triads with a 4- or 6-horn patch into a DAW, leaving the orchestrator to book 12 or 18 horns to make those triads sound the same.

A little off-topic, but here’s a screencast of the library I purchased, performed by the creator. UN. REAL.

Very impressive - but I would say don’t get too caught up in any one library. New and better things keep coming out, and there is a lot of work that went into that mockup besides the samples themselves.

I’ve gone through about 15 years of new libraries coming out that everybody says are absolutely amazing and blow everything else out of the water. Often they aren’t actually that much better than what came before.

That’s fair, but the most compelling videos are the ones where he just plays it live, no funny business. And I’ve played it myself over the past several days. I detest the “latest-and-greatest sample library” craze as much as anyone, but this one really is quite good. Just ride the fader with the left hand a little as you play, and it’s astonishing.

Now to figure out how to get it to play back a notated score that well…

I’ve been a big fan of Sample Modeling brass, which has a similar idea to IB where it is dry and allows for a lot of customization of the performance. I think it is more important to have control over the performance than anything else, even if it means more work for the person doing the mockup. I’m interested in IB too myself, but I don’t necessarily see it as a huge leap above the Sample Modeling instruments I currently use.

It’s such a simple and obvious idea, and I wonder why I didn’t already try something like this! Probably, as you say, it would simply be too untidy and unwieldy for full orchestra, and I wonder also how practical it would be for something like a string quartet, where instruments are usually set to respond to CCs. If you’re going to record live, you’ll either have to move controllers in real time or (easier in a DAW) overdub later. For piano or other velocity-based instruments, it’s going to be easier. Still, I agree this is probably the easiest way to get realistic performances.

Other than the weird trumpets, which are probably beyond repair, I think the second rendering has more potential than the first, though as it stands it seems a bit subdued. Unfortunately, not knowing the libraries involved, I’ll need to leave it to others to be more concrete. With NotePerformer, on the other hand, I’ve always found it essential to trim the vibrato level in strings in general, and particularly with smoother music like this; otherwise they sound ghastly. Around 30 should work here.

Just for fun, I took this and loaded up my Cubase template into Dorico to have this play back with my set of sample libraries. I host all of my instruments in Vienna Ensemble Pro in Cubase and also load my stage positioning reverb in there, so I was able to bring everything over to Dorico.

I have not added any shaping to yours. Like yours, it sounds really flat like the players are timid, so this does not properly demonstrate what the libraries are capable of:

Only the piano really shines, but that is because you played that in. Dorico is keeping the mod wheel for the rest of the instruments between 40 and 60 at all times, which is super constrained, so the shaping is very minimal. The phrasing is also way off, with notes suddenly cut off at full volume instead of tapering off. There are also occasional notes that are obviously much louder than they should be, particularly in the strings; I noticed one or two that suddenly pop out when they clearly aren’t supposed to.

Shaping is badly needed, in both cases, because Dorico is keeping all of the dynamics very constrained in a narrow range, and not factoring in the phrasing when it comes to the shaping - it is mostly just following the hairpins and dynamic indications and treating them quite literally.
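One cheap first pass before hand-shaping is simply to rescale a constrained CC curve into a wider range. A hedged Python sketch - the target range here is an arbitrary illustrative choice, not a recommendation for any particular library:

```python
# Sketch: expand a constrained CC curve (e.g. one stuck between 40 and
# 60) so it fills a wider range, as a rough first pass before shaping
# by hand. The target range is an illustrative assumption.

def expand_cc(values, target_lo=10, target_hi=120):
    """Linearly rescale CC values so their min/max fill a wider range."""
    lo, hi = min(values), max(values)
    if hi == lo:                 # flat line: nothing to expand
        return [round((target_lo + target_hi) / 2)] * len(values)
    scale = (target_hi - target_lo) / (hi - lo)
    return [round(target_lo + (v - lo) * scale) for v in values]

# A mod-wheel curve confined to the 40-60 band:
constrained = [40, 45, 52, 60, 55, 48, 41]
print(expand_cc(constrained))
```

A mechanical stretch like this obviously can’t fix phrasing - it won’t taper the ends of notes or follow the musical line - but it at least gets the dynamics out of that timid middle band as a starting point.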

Here’s a mock-up I’ve done using Cubase and Spitfire Chamber Strings, based on a Dorico score.

Any comments or suggestions for improvement are welcome.

Rich