My Journey Printing Stems from Dorico to Pro Tools

The Challenge:

In my final semester of grad school, our Scoring class re-scored a feature-length film that our instructor had been the original composer for. Me being me, a man who loves Dorico and hates piano rolls, I decided to battle-test Dorico to meet the following demands:

  • ~4-6 minutes of composed, fully mocked-up, lightly hybrid orchestral music delivered weekly before midnight
  • Delivery involved 14 BWF stems (e.g. High Winds, Low Winds, Perc, Synths), all with the correct timecode embedded and named according to specific filename guidelines
  • The stems had to include a printed click track that follows the tempo map
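For context on the embedded-timecode requirement: a Broadcast Wave file carries its start position in the bext chunk's TimeReference field, which is a sample count measured from midnight. A minimal converter sketch (the 24 fps frame rate and 48 kHz sample rate are my assumptions, not part of the delivery spec):

```python
# Convert a SMPTE start timecode to a BWF bext TimeReference value
# (samples since midnight, per EBU Tech 3285).
def bwf_time_reference(hh: int, mm: int, ss: int, ff: int,
                       fps: float = 24, sample_rate: int = 48000) -> int:
    seconds = hh * 3600 + mm * 60 + ss + ff / fps
    return round(seconds * sample_rate)

# A cue starting at 01:00:00:00 at 48 kHz:
start = bwf_time_reference(1, 0, 0, 0)  # 172_800_000 samples
```

A DAW that reads the bext chunk (Pro Tools does) uses this value to spot the file to its original timecode position.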

The Problem:

While Dorico can export individual instruments, it does so serially (i.e. flute 1 from beginning to end, then flute 2 from beginning to end, etc.). This becomes a problem when, like me, you're using a VEPro server on another machine, because then the export has to happen in real time. A three-minute cue with 60 individual instruments/sections would take 3 hours to bounce, and the results would still have to be combined into stems in Pro Tools. With multiple classes during the week, all with their own writing assignments, this was time I did not have, especially with a midnight delivery deadline.
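To make the time math explicit, here is a quick back-of-the-envelope sketch (the helper function name is mine, purely illustrative):

```python
# Serial real-time bouncing: each track occupies the full cue length,
# so total export time scales linearly with the number of tracks.
def serial_bounce_minutes(cue_minutes: float, track_count: int) -> float:
    """Total time to bounce `track_count` tracks in real time, one after another."""
    return cue_minutes * track_count

per_instrument = serial_bounce_minutes(3, 60)  # 180 min, i.e. 3 hours
per_stem = serial_bounce_minutes(3, 14)        # 42 min, if stems could be exported directly
```

Exporting 14 stems instead of 60 instruments would cut the bounce from 3 hours to roughly 40 minutes, which is exactly why direct stem printing matters.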

Options I considered were:

  • Soloing and bouncing returns from VEPro that had been organized to follow my stemming scheme, then renaming after every bounce (too time-consuming, with a lot of room for error in losing track of which stem was which)
  • Creating custom layouts in Dorico corresponding to stems and exporting them (still too time-consuming; custom layouts could not be selected from the audio export menu, and this would still require bouncing and renaming)

Ultimately, what I really needed was to find a way to print audio directly from Dorico into a DAW (in this case Pro Tools). More on that later.

Existing Setup:

By this point, I had a pretty solid Dorico and VEPro setup going, using CSSeries as my core orchestral library with expression maps I had made and refined over time to best suit my needs, per-articulation delays and all. My VEPro template did all the heavy lifting of processing, reverb routing, bus effects, etc., and my returns in Dorico were just my standard stems. I use a Wacom graphics tablet for drawing in my CC information and make full use of Dorico's awesome tools for batch-inputting CC values for multiple instruments and nudging things around. My classmates were frequently impressed with my mock-ups, especially considering they were done in a notation program and I tend to write in a way that strains what samples can convincingly pull off.

By futzing with the "number of audio outputs in the mixer" setting in the endpoint setup in Dorico, my returns in the mixer menu look like this:

This is what my destination Pro Tools session looked like:

Extra Tools:

In order to accomplish my goal of printing directly to Pro Tools, I needed a few extra tools. First, I needed Dorico to generate SMPTE that Pro Tools could follow via MTC. I used a tool that's been brought up on this forum quite often: TXL Timecode.

Next, I needed something that could pull audio from each return in Dorico and send it to a corresponding track in Pro Tools. Since Dorico does not have multi-out capabilities as of the time of writing, I needed to route at the insert level. Unfortunately, at least at the time, I could not find any free software that could do what I needed, so I used the following tools from Audiomovers:

  • Inject ($50 USD): A plugin insert that allows you to route to I/O in a variety of different playback engines
  • Omnibus ($200 USD): I probably could've gotten away with just using Pro Tools Audio Bridge, but I wanted to use this for other things as well. Basically, it gives you a lot of flexibility in how signals are routed between various playback engines

How did it go?

I'm not going to get into the nitty-gritty of all the trial and error it took to get this doing basically what I wanted, except to shout out Serge at Audiomovers. I had an initial issue where Inject was not remembering the output assignments when I would close and re-open Dorico. Serge got back to me really quickly with a previous version of Inject that did remember the output assignments, which basically saved this whole thing.

The results were mixed but ultimately saved me A LOT of time and got me through the project.

One of the things that ended up being a "time suck" was that the Inject plugin basically needs to be "woken up" by having audio pass through it while the plugin window is open. Before I printed, I had to open each plugin and play some audio, otherwise nothing would end up in Pro Tools. Something that made this harder is that Dorico does not augment the plugin header name based on the track name:

Another problem was that the sync between Dorico, TXL, and Pro Tools very quickly became unusable. I had structured all of my Dorico and Pro Tools sessions to correspond to each reel of the film, using flows for each cue in the reel. Things started to get wonky with multiple flows at different timecode offsets: TXL would no longer match the timecode that was synced between Dorico and the embedded timecode of the video file. In addition, aligning the tempo-track MIDI export that I imported into Pro Tools became dicey, because the lag between when I would start playback in Dorico and when Pro Tools would "catch on" and start recording made it hard to align the tempo track, which always contained two empty measures.

Ultimately, I had to abandon printing and syncing at the same time. I placed a transient snare hit on beat 1 of measure 1, gave Pro Tools a running start, and hit play in Dorico. I could then go back to Pro Tools, tab to the transient, splice, use Spot to move the transient to the exact timecode position, delete it, and reconsolidate the clip region. That process didn't take much time in the end, but it would've been nice to have it just print in place and not have to worry about it.

What could have helped? (the low-hanging fruit)

CAVEAT: I say "low-hanging fruit" from my perspective as a user, not a developer of this software. I don't really know how easy or hard these things are to implement; this is just my best educated guess as a user with some tech savvy.

  • Native MTC I/O support: I feel like Dorico should DEFINITELY have a way to send and receive MTC directly, tied to its own clock and not relying on any 3rd-party software. I think this would open a lot of doors for DAW integration, especially when working with recorded audio, sound design, phrase libraries, etc. in the DAW (things that don't make sense to notate). I'm currently working on a project where I'm only using Dorico to work out horn section parts; the rest is just guitar, bass, drums, etc. that I'm recording into Pro Tools, and this is hard to deal with between buffer sizes and whatnot, and the stop/start of the playhead tends to be very frustrating.
  • More Bouncing Options: It would definitely go a long way to be able to create custom bounce groups that correspond to your stems, or even just to allow custom score layouts to be selected in the audio export window so they could be uniquely named and exported all at once. A three-minute cue exporting 14 stems in real time would take about 40 minutes as opposed to 3 hours.
  • Multi-Out: This would be the holy grail and would cut down on stacking buffer sizes. If the user could just output their returns directly into a DAW, it would open up a lot of doors for the use of Dorico in the film music world. In addition, more routing options in the mixer would be incredibly beneficial.

What could have helped? (Pie in the sky)

  • Articulation Override: In the Key Editor, it would be really helpful to be able to override the playing technique/articulation triggered without affecting the score. I often use a different articulation in my samples than I would write on the score for a player; for example, if the fast legato isn't cutting it, I may use the Marcato patch. This would save a lot of time when working on a project that needs to be fully mocked up for client approval and will eventually be engraved for a recording session.
  • MIDI "clones": This is twofold:
    • An aX system similar to NPPE, whereby the if/then statement is something like "if 4 French Horns are playing a unison line, route to the a4 patch of your sample library"
    • Something that allows you to blend in different libraries for the same instrument without cluttering up the score: basically a master MIDI reference that outputs the same notes (and maybe CC information) using independent routing and expression maps. In my mind it looks something like this, but perhaps this could best be offloaded to Cubase with some sort of synchronization or "MIDI stream":

I understand this may create some tension between the people who want more playback features and the purely engraving crowd who are frustrated with what they perceive as software bloat. But perhaps having two modes, sort of like how Logic Pro X had a kind of "GarageBand mode" with simplified vs. complex play sections, could be a solution?

I look forward to hearing everyone's thoughts, and whether anyone thinks there might have been a better way to do this for future projects! I really believe Dorico can be a fully capable notation and mock-up software that can meet the high demands of the media music world!


Hi @Johnstakovich,
This is a very interesting post. I'm trying to sync a single flow in Dorico with ProTools. I wonder if you can post a quick screenshot of your TXL Timecode window - I have previously managed to sync Dorico with Cubase using the IAC Bus, but you mentioned using SMPTE, so I'm not sure if I am using TXL correctly. Secondly, what are the settings in ProTools? I have set the MTC reader in the ProTools Synchronization tab inside Peripherals to read from the IAC Bus, but nothing seems to connect between the two apps. ProTools just says: 'waiting for sync'. Thank you for any pointers!

EDIT: Well, a little more playing about, and I've got it working. It seems it needed to be on 25 FPS and not 24 FPS. There is a start offset that I can deal with - I found the same thing with Cubase. In Dorico, measure 10 seems to start Cubase/ProTools at measure 1. I think I was able to set a bar offset in Cubase, but so far I haven't figured that out in ProTools. In any case, I can work with this - it seems quite robust even if I put tempo changes in. I can use Loopback to route the audio. It really would be nice if Dorico generated MTC natively, or was even able to receive MTC and run as a slave. I'm sure this will happen at some point.

I'm sad I missed this thread earlier. In fact, I think I missed some key points in previous threads of yours. (I read them, but they spawned good discussion in different directions.) I have not picked up on too many others also looking for better ways of printing stems in a Dorico workflow. For now at least, I chose a form of one of your other options, approaching some of the problems you mentioned differently.

I don't want to litigate the use of VEP or a DAW, but I think I should mention how they factored into my own choice. Avoiding the need for real-time rendering changes the time calculation, and I think you have to weigh all the time factors when you decide.

By using an automated external script, the time required for renaming files and the error potential you mentioned under the first option are basically zero for me.

I'm acknowledging up front that it takes time to bounce individual instruments, so I'm giving some details because I think it's worth it, but I'm keeping my mind and ears open.

Rules in the script automatically group instruments or layers into stems. That had the additional benefit of enabling me to define simultaneous alt versions on the fly. Ex: by using a comment or text in the score, the script makes an alt stem where the oboes are down by 3 dB, or recognizes that you'd like an alt where the trombones DON'T double the horns in that one section.

In this context, their value for me is in potentially avoiding having to bounce again, given the time it would take to make them serially. It's a hedge against last-minute fire drills when your ears are tired and you're questioning yourself, or when you wake up and wonder what you were smoking, etc.

I don't think I typically need a DAW just for stems, so that's also a factor in how I look at the end-to-end time savings. Full transparency: I very much fight for a result that I'm happy with; it's just that I think the issue is always my skill, sometimes a library, and rarely is Dorico the limiting factor. There are always times in any kind of app integration when something gets cranky, usually when I can least afford it, and that's a time factor too.

I use MIDI regions for the sort of articulation overrides you mentioned, and they aren't visible clutter for live players. I bless whoever designed and built them. I disagree that it's better to keep synths or sound design in a DAW, though at first I disliked them a lot - I found them distracting and space-wasting until I started representing them in different ways in the score. I don't especially like using independent voices, but that's my clean solution for the a4-type changes you mentioned. It handles what you asked about changing maps, and for the most part doesn't add any visible clutter either. Honestly? My clumsiness with that feature is really that I foolishly haven't just paused and taken the time to learn the best ways/shortcuts etc. to work with it. There's always something more to learn, you know?

By using an automated external script, the time required for renaming files and the error potential you mentioned under the first option are basically zero for me.

What are you using for the scripting? I've been wanting to get more into scripting for Dorico, but I'm unsure where to start. Where is the script in the chain? Are you bouncing out individual instruments from Dorico?

I use MIDI regions for the sort of articulation overrides you mentioned and they aren't visible clutter for live players. I bless whoever designed and built them.

Interesting. I wish the MIDI regions allowed for connecting to existing expression maps, because part of the thing for me is that my expression maps all have per-articulation delay, and they're based on CC values, not key-switches. I think I could still trigger with a key-switch, but it would definitely be preferable to be able to point to the existing expression map for that purpose.

Yeah, that wasn't my clearest post, my apologies.

I used Python. The script monitors your export directory for changes and runs automatically about a minute after all your exports finish. It runs in the background, separate from Dorico.

It works with what you give it. If you output stems as you are now and use the script mostly for renaming, it won't complain.

Exporting every instrument will give you the most flexibility to create alts, but you may not care as much about that. The feature was originally intended to deliver all the different versions and cutdowns required by a music library, and ultimately to increase the chances of a piece being used.
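I can't share the actual script, but the behavior described above (watch the export folder, wait for exports to go quiet, then rename/group by rule) can be sketched roughly like this; the folder path, the stem rules, and the 60-second settle time are all assumptions for illustration:

```python
# Sketch of a background watcher that waits for Dorico exports to finish,
# then sorts the resulting files into stem folders by rule.
import time
from pathlib import Path

EXPORT_DIR = Path("exports")   # hypothetical Dorico export folder
SETTLE_SECONDS = 60            # "about a minute after exports finish"

# Rule table: substring of the instrument file name -> stem name
STEM_RULES = {
    "Flute": "High Winds",
    "Oboe": "High Winds",
    "Bassoon": "Low Winds",
    "Horn": "Brass",
    "Snare": "Perc",
}

def stem_for(filename: str) -> str:
    """Return the stem a file belongs to, or 'Misc' if no rule matches."""
    for key, stem in STEM_RULES.items():
        if key.lower() in filename.lower():
            return stem
    return "Misc"

def snapshot(folder: Path) -> set:
    return {p.name for p in folder.glob("*.wav")}

def wait_until_quiet(folder: Path, settle: float) -> None:
    """Block until no new .wav files have appeared for `settle` seconds."""
    seen = snapshot(folder)
    quiet_since = time.monotonic()
    while time.monotonic() - quiet_since < settle:
        time.sleep(1)
        now = snapshot(folder)
        if now != seen:
            seen = now
            quiet_since = time.monotonic()

def group_exports(folder: Path) -> None:
    """Move each exported .wav into a subfolder named after its stem."""
    for wav in list(folder.glob("*.wav")):
        dest = folder / stem_for(wav.name)
        dest.mkdir(exist_ok=True)
        wav.rename(dest / wav.name)

# Typical use:
#   wait_until_quiet(EXPORT_DIR, SETTLE_SECONDS)
#   group_exports(EXPORT_DIR)
```

A real version would presumably also apply the filename guidelines and build the alt-stem variants mentioned above, but the watch-then-rename loop is the core of it.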

Just as a point of reference, I exported 32 tracks with 4:21 of music from Dorico in 63 minutes today. It would have been around 134 minutes if it had to be done in real time. I definitely see the advantage of your VEP setup exporting only 14. What hurts is that it would still take around 56 minutes with VEP in real time to output those 14 tracks…

EDIT

I did a quick test with VEP running locally with Dorico. It's a simple setup, but locally it's 2.17 times faster than real time. A 14-track stem bounce returned to Dorico like the one you described, with 3 minutes of music, ought to take around 24 minutes to be completely ready to deliver, using scripted renaming and no DAW involved at all. I'm going to do a full test first, but I think that's a pretty good direction.