Request: Keyboard/Synthesizer orchestration tools (especially for musical theater)

Let me start by saying that as the music teams for Broadway musicals move away from Finale, a feature like this is going to be a huge selling point as they consider what to switch to.

When orchestrating for modern Broadway-style orchestras, there is a need for a way to assign sampled sounds to arbitrary places in a score. This can be done manually using text, but that doesn’t help with playback. The way I handle this currently is to have two players. One player reflects what will be on the page for the performer, and I disable its playback. The other player does not appear in the score, but it contains every instrument the keyboard will be playing. The notes I want to hear are entered into the separate staves assigned to this player. Then I filter those staves out in Galley View, and I can both see and hear what I’m doing. This approach becomes increasingly untenable as the number of instruments grows.

My suggestion would be to create a new system for keyboard parts somewhat like the percussion kit creator. The workflow could be something like…

  1. Make a list of patches that the keyboard is going to play in the kit designer.
  2. Write out the part.
  3. Select the notes you want to assign to a particular sound.
  4. Use a popover (let’s say ALT+SHIFT+P, since it’s related to playing techniques) to assign one of the keyboard sounds to those notes.
  5. Text for the patch would appear in a box at the moment in question. If you wanted to layer multiple sounds on a single pitch, you could add more than one, and the text box would update to show this automatically.
  6. You could optionally show the range of notes that the patch applies to as part of the sound.
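The kit described in the steps above could be pictured as a small data model. A minimal sketch follows; every name in it (`Patch`, `Assignment`, the beat positions) is invented for illustration and is not any existing Dorico API:

```python
from dataclasses import dataclass

# A patch in the keyboard "kit" -- analogous to one instrument
# in a percussion kit (step 1 above).
@dataclass(frozen=True)
class Patch:
    name: str       # label shown in the score, e.g. "Strings"
    low: int = 0    # optional playable range, as MIDI note numbers (step 6)
    high: int = 127

# One assignment: a selected span of the part routed to one or
# more layered patches (steps 3-5 above).
@dataclass
class Assignment:
    start_beat: float
    end_beat: float
    patches: list   # layered patches all sound simultaneously

    def label(self) -> str:
        # The text that would appear in the score at this moment,
        # updating automatically as layers are added (step 5).
        return " + ".join(p.name for p in self.patches)

strings = Patch("Strings", low=36, high=96)
a = Assignment(start_beat=0.0, end_beat=16.0, patches=[strings])
a.patches.append(Patch("Brass"))  # layering a second sound
print(a.label())  # -> "Strings + Brass"
```

The point of the sketch is just that the label in the score is derived from the assignment, so it can never drift out of sync with what playback actually does.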

Simplifying this process of building synth books would be a game changer for theater orchestrators. I’d recommend studying some real scores with complex keyboard parts, as well as working with theater orchestrators and the big Broadway copyists like Emily Grishman or Russ Anixter.

I would love to see how the Dorico team tackles this problem!


Thanks for the feature request, Jesse. You can already achieve something like this to some degree using independent voice playback: assign each voice to a different patch, then write the relevant passages in the voice corresponding to the sound you want to hear. But this is a bit of a faff, of course, and it wouldn’t produce the expected patch change labels in the score.

Another option would be to assign multiple instruments to the same player, and rely on instrument changes to produce the changes in sound. This would have the advantage of showing the expected patch change labels, because the instrument change labels would achieve that (you could rename each new keyboard instrument as Strings or Brass or Lead or whatever). But the disadvantage is that both staves of the instrument would have to change to the new sound at the same time.

Thanks for the reply! Those are both good solutions for certain situations, for sure. After talking about this in a thread on the Facebook group, the other solution I’m seeing people use is to create the synth setups in MainStage (since they’ll probably be programmed there anyway) and send MIDI out from Dorico. The part then just carries text labels, you get the sound you want from MainStage, and patch changes are sent from Dorico as MIDI triggers. It’s another good workaround; I’m looking into the specifics.
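For anyone curious what those patch-change triggers amount to on the wire: a MIDI Program Change is just two bytes. A minimal sketch, using only the standard MIDI message format (the channel and program numbers here are made up, since what MainStage maps them to is entirely up to your concert setup):

```python
# Build the raw bytes of a MIDI Program Change message: a status
# byte (0xC0 ORed with the channel) followed by the program number.
# This is the kind of trigger a notation program would send to
# MainStage to advance to a new patch.
def program_change(channel: int, program: int) -> bytes:
    """Channel is 0-15 (i.e. MIDI channels 1-16), program is 0-127."""
    if not (0 <= channel <= 15 and 0 <= program <= 127):
        raise ValueError("channel must be 0-15, program must be 0-127")
    return bytes([0xC0 | channel, program])

# e.g. tell MIDI channel 1 (0-indexed: 0) to switch to patch #5
msg = program_change(0, 5)
print(msg.hex())  # -> "c005"
```

In practice you wouldn’t write this by hand, of course; Dorico’s MIDI output or a playing technique mapped to a program change does it for you. The sketch just shows how little data a patch change actually is.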

I have a question and maybe suggestion.

Whatever show control software they’re using (lights, scenery, special effects, media, etc.) will include a facility for handling audio cues. I feel like there is every reason for a crew to want to use their system, their way. So…

If I understood you, some of what you describe sounds like the sort of audio cues you’d want to keep separate from the keyboards, with a way to hand them to the technical crew so they end up with the rest of the show’s programming. The best method will depend on their choice of software, but I don’t think they’re likely to prefer reading notes on a staff or messing with synths or MIDI. Giving them a clip they can drop in would beat patches all day long.

As far as playback in Dorico goes, there are VST sample players that let you load up a number of clips; that would be the list I’m suggesting. I think I would use a single one-line percussion part in the score for all of them, triggering them at the appropriate times with explanatory text above.

I think I’d try to use separate short flows to create/record the clips as required, and just use the player in the main score. I don’t see that as a hack, but as a way to deliver more value. I could definitely be crazy, though.

Hey Gregory – I’m not talking about using Dorico to trigger anything in a live theatrical context. As you suggest, there’s a lot of automation in live theater now; it’s usually based on common timecode synced between departments, and musical audio clips are usually handled by Ableton and triggered by the conductor along with a click track to keep the pit in time. That’s not what I’m talking about here.

I’m talking about the trend toward smaller and smaller pit orchestras, and therefore more and more complicated synth books that are actually playing virtual musical instruments. So for a given, say, 4-bar phrase in a show like The Book of Mormon, a keyboard book might have timpani and tuba layered with a glock doubling three octaves above in the low register of the left hand; a horn line in the middle register; a clarinet in the right hand; and a high string pad held by one finger at the top. Then you advance the patch and get a whole different set of sounds.
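That kind of layered split can be pictured as a mapping from key ranges to lists of sounds. A toy sketch follows; the MIDI note ranges and sound names are invented purely to mirror the Book of Mormon-style example above, not taken from any real synth book:

```python
# Toy model of one synth-book patch: keyboard zones mapping key
# ranges (as MIDI note numbers) to the layered sounds they trigger.
ZONES = [
    (36, 55, ["Timpani", "Tuba", "Glockenspiel +3 octaves"]),  # left hand, low
    (56, 71, ["Horn"]),                                        # middle register
    (72, 84, ["Clarinet"]),                                    # right hand
    (85, 96, ["String Pad"]),                                  # held at the top
]

def sounds_for(note: int) -> list:
    """Return every layered sound produced by pressing one key."""
    return [s for lo, hi, layer in ZONES if lo <= note <= hi for s in layer]

print(sounds_for(40))  # -> ['Timpani', 'Tuba', 'Glockenspiel +3 octaves']
print(sounds_for(60))  # -> ['Horn']
```

One key press in the low zone fires three sounds at once, which is exactly why notating such a part as plain piano music tells you nothing about what it will actually sound like.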

It might be silly to try to do this with playback in a notation program, and I acknowledge that. But I would love for the team to think it through and then decide it’s silly, if that makes sense. It’s a niche need, but it’s a large niche. There are a lot of musicians spending a lot of money on notation software to facilitate this work, and this feature could be a great thing if it’s done right, IMO.

Currently, the orchestrator simply indicates in the score what they want the instruments to be, and a keyboard programmer creates synth patches (usually in MainStage) containing those sounds. But when building the score, the orchestrator has to mute the keyboards if they want to use playback, since the sounds they’re writing for won’t be heard. This feature would solve that.

Or maybe I’m screaming into the void and nobody wants this. But I suspect they do. And I’d love for the Dorico team to do some research and find out.
