Request for improved multitimbral support within single instruments (organ, synthesizers, etc.)

Dear Dorico team,

First, thank you for your remarkable work. I’m a composer and long-time Finale user now transitioning to Dorico, and I greatly appreciate your attention to user feedback, your elegant architecture, and the excellent documentation (especially in French, which is rare and very, very helpful).
I understand and respect the core philosophy of Dorico — the clear mapping of player → instrument → sound (→ MIDI channel), which serves traditional notation and publishing standards very well. However, I’d like to point out a conceptual limitation that becomes significant in certain contemporary and hybrid compositional contexts.

The issue

Some instruments are inherently multitimbral, meaning that a single performer is expected to control multiple timbres simultaneously. Examples include:

  • Organs, with manuals and pedals playing independent voices or registrations.
  • Synthesizers, samplers, or custom-built electronic instruments with layered or split sounds.
  • Electroacoustic or experimental setups, where different voices trigger different patches or signal paths (e.g., in Reaktor, Usine, or modular environments).
  • Extended instrumental practices, where a single player is controlling multiple sonic identities, sometimes across MIDI channels.

In Finale, this was manageable via four independent layers, each assignable to its own MIDI channel, allowing precise polyphonic and multitimbral control within one staff. In Dorico, I understand that layers have been replaced with a more flexible voice model — which is excellent for engraving — but there’s currently no way to route voices within a single instrument to different MIDI channels.
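To make the Finale comparison concrete, here is a minimal sketch in plain Python (all names and values are illustrative, and MIDI events are modelled as dicts rather than sent to a device) of the four-layer idea: each layer on one staff is routed to its own MIDI channel, so each can carry its own timbre.

```python
# Sketch of Finale's four-layer model: each layer on one staff is
# routed to its own MIDI channel. Names and values are illustrative.

# layer number -> MIDI channel (0-based, as in the MIDI spec)
LAYER_CHANNELS = {1: 0, 2: 1, 3: 2, 4: 3}

def note_on(layer, pitch, velocity=64):
    """Build a note-on event for a note entered in the given layer."""
    return {"type": "note_on",
            "channel": LAYER_CHANNELS[layer],
            "pitch": pitch,
            "velocity": velocity}

# A chord split across two layers ends up on two channels,
# so each timbre can carry its own patch:
events = [note_on(1, 60), note_on(2, 48)]
```

The point is simply that the layer a note is entered in determines its output channel, and therefore its sound.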

What would be helpful

Without departing from Dorico’s elegant model, here are some things that would significantly improve usability for multitimbral instruments:

  • Allowing voice-specific MIDI channel routing (perhaps from the Play Mode or via expression maps).
  • Or introducing the idea of “sub-instruments” or “voice timbres” inside a single instrument, each with its own channel, but grouped under a single player.
  • Or simply offering a way to make MIDI output channels configurable per voice, similar to what Cubase allows within a single MIDI region.

Additionally — and crucially — there should be a way to visually reflect this in the score, as a registration change, patch name, or other marker for the performer. Even if MIDI output were more flexible, it would not suffice without visual notation of the timbral change.

Conclusion

Of course I’m not asking for a complete rework of Dorico’s model, just a more flexible and modern approach to multitimbral playing, which is especially important for contemporary composition, live electronics, and notation involving electronic instruments.
This would make Dorico not just the best tool for traditional publishing, but also a robust environment for current and future compositional practices.

Thank you for your attention and continued development.

Best regards,
Vincent

Jesper


You could set up playback techniques to change sounds, but that depends on which sound library you are using and how often you’ll repeat the same registrations. I think much of what you are asking for is available via playback techniques and multiple voices (see below). A search of the forums may turn up playback templates and/or playback techniques that people are using for various organ libraries (and synth libraries, for that matter). If you use NotePerformer, see its website as well as the manual regarding organ registrations.

For organ music I’ve had success with the following, which doesn’t require a playback template or playback techniques.

  • In Play mode turn on “Enable Independent Voice Playback”
  • In Write mode, use different voices when entering your music. Use a different voice for each registration change. (At this point, just use a generic sound when entering notes.)
    – For example, use voice 1 for one registration and voice 2 for another. Within each voice, both stems-up and stems-down are available, each able to have its own unique registration.
  • Once all your notes are entered, go back to Play mode. You will see, as in the picture below, a column that lists the staves with voice and stem direction. Click on one of them, click the “E” icon in the routing section, and select the appropriate registration. Repeat for each voice.
  • Add system text to indicate the registration change as is common in any organ music.
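Conceptually, the steps above build a routing table: each (voice, stem direction) on the staff becomes its own playback endpoint. A minimal sketch, with entirely hypothetical staff names, channels, and registrations (this is not Dorico’s internal model):

```python
# Sketch of what "Enable Independent Voice Playback" effectively gives you:
# each (voice, stem direction) on a staff becomes its own playback endpoint.
# All names here are illustrative, not Dorico's internal API.

routing = {
    ("Organ", "Voice 1", "up"):   {"channel": 0, "patch": "Principal 8'"},
    ("Organ", "Voice 1", "down"): {"channel": 1, "patch": "Flute 4'"},
    ("Organ", "Voice 2", "up"):   {"channel": 2, "patch": "Trumpet 8'"},
}

def endpoint_for(staff, voice, stem):
    """Look up the channel and registration assigned to one voice."""
    return routing[(staff, voice, stem)]
```

The system text added in the last step is what tells the human performer about the registration change; the table above only affects playback.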

I do find the way Dorico refers to voices a bit confusing. Remember that if you are only using, say, voice 1 down-stem, it will still display the notes properly (anything above the middle line gets stems down, anything below gets stems up). How many voices you need depends on how complex your music is.


jesele, JAMES_GILBERT, thank you very much for your detailed response and the technical suggestions. I appreciate the care taken to explain this method, and I fully understand how it can be implemented in the current state of Dorico.

That said, I would like to raise a few points.

This method relies on distinct voices differentiated by stem direction, but that approach quickly reaches its limits, especially when voices cross or intertwine. In such cases, the automatic stem-direction rules become unsuitable or even misleading. It seems more logical, and less error-prone, for voice differentiation to rest on musical logic rather than on a graphic device, which in some cases can hurt readability.

Moreover, having to manually assign voices to different channels in Play mode, then separately add system text in the score to reflect a registration or timbre change, is not a reliable solution. It can only lead to added work and an increased risk of errors, especially if the registrations or patch changes are likely to evolve over time. Contemporary music is dynamic, and such changes need to happen smoothly, integrated into the workflow.

Finally, assigning different MIDI channels to different voices, or even to distinct blocks of notes within the same instrument, doesn’t seem like an especially advanced or complex case, particularly compared with what was possible in Finale (including assigning colors to layers to help the performer). That made it easier to manage different timbres and registrations, and the fact that this must now be done manually in Dorico adds unnecessary complexity. While it is possible to change the color of notes in Dorico, that remains an additional, often redundant step for information that could be handled more simply.

Thank you again for your attention!

In addition to Independent Voice Playback (which is very useful), you can also use Custom Playing/Playback Techniques to trigger specific messages via an Expression Map. I have used this approach quite successfully with Synths.
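As a rough sketch of the mechanism being described (illustrative only, not Dorico’s actual expression-map format): an expression map is essentially a lookup from a playback technique to the MIDI actions it triggers.

```python
# Sketch of an expression map as a lookup table: a playback technique
# maps to the MIDI actions it should trigger. Entirely illustrative;
# technique names and message values are invented for the example.

EXPRESSION_MAP = {
    "Natural":     [("program_change", 0)],
    "Saw Lead":    [("program_change", 12)],
    "Open Filter": [("control_change", 74, 127)],
}

def actions_for(technique):
    """Look up the MIDI actions a playback technique should emit."""
    return EXPRESSION_MAP.get(technique, [])
```

A custom playing technique in the score then only needs to name one of these entries for the change to reach the synth.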

There are a number of threads on (e.g.) changing organ registrations that you could read.

Also, if you want to control a specific CC#, you can use the Key Editor to change synth parameters.
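For instance, a gradual synth-parameter change drawn in the Key Editor is, under the hood, just a series of CC events. A small sketch, assuming CC 74 for filter cutoff (the conventional brightness controller) and purely illustrative values:

```python
# Sketch: a gradual filter-cutoff change as a series of MIDI CC events,
# the kind of data a key editor lets you draw. CC 74 is the
# conventional "brightness/cutoff" controller; values are illustrative.

def cc_ramp(controller, start, end, steps, channel=0):
    """Linear ramp of controller values as (channel, controller, value).

    steps must be >= 2 (start and end points included).
    """
    return [(channel, controller,
             round(start + (end - start) * i / (steps - 1)))
            for i in range(steps)]

cutoff_sweep = cc_ramp(74, 20, 120, 5)   # five points from 20 up to 120
```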


This is no different than layers in Finale, only the number of voices in Dorico is unlimited.


Thank you Derrek for your message.

I want to clarify that I’m not asking Dorico to imitate Finale. I’m fully committed to understanding and working with Dorico’s own logic and architecture, which I find powerful and well thought-out.

What I am looking for is a clear and integrated way to assign different voices (associated with registrations, patches, or MIDI channels) within a single staff, so that this is visible in the score for the performer (via text or color coding, for example) and reflected in the MIDI playback — all without requiring redundant and error-prone steps between Write and Play modes.

My point is not to compare software features but to suggest a more streamlined workflow between musical notation and audio rendering, especially in fairly common situations such as for organ, synthesizer, or other multitimbral instruments. Dorico already has most of the necessary tools — it’s more a matter of coordination and accessibility.

In Finale one assigns notes to different layers, and then in Score manager one splits the staff into the various layers to assign a sound to each one.

In Dorico one assigns voices (with whatever stem direction one chooses), and then in Play chooses Independent Voice Playback to assign each voice to an appropriate sound.

They are pretty much the same.

There are parts of Dorico that are quite different from Finale, but this is not so much.


Actually, voices don’t need to have different stem directions. For example, you could have MIDI ch. 1 on up-stem voice 1, MIDI ch. 2 on up-stem voice 2, etc. You can also just select a second voice and flip its direction if you want.

As someone who does a lot of hybrid scoring involving synths, I do find the method reliable, and I’m in full control of every detail. But could it be a little easier/faster/smoother? Sure. I agree with some of your points, and I would like to see some finesse in this area myself, especially for working with synths. Personally, I’m even more looking forward to some method of introducing gradual playing techniques in the score that can also connect to playback: modulations, filter cutoff, and other complex timbral changes, which currently require workarounds and aren’t as binary as playing techniques triggering expression maps (i.e. using MIDI CC, etc.).

Regarding added staff text, I’m curious what else you have in mind when you say “This can only lead to added work and an increased risk of errors, especially if the registrations or patch changes are likely to evolve over time.” I’m having a hard time picturing what the solution would be there. Are you saying that Dorico would somehow extract information from a connected VST or MIDI instrument and parse it directly onto the score by showing patch changes, splits, layers, and CC information? If so, that could be an interesting idea, but it sounds pretty complex and quite niche for the development team to take on. I am hoping one day the API will be opened up to third-party devs to take on tasks like this in the form of plugins/extensions. Anyway, I’d be curious to learn more about what you have in mind. If you have any real-world examples of this kind of thing in action, either in other software or in scores, please feel free to share.

I’ve often thought it might be nice to have another system on top of the current solutions of independent voice playback and channel offsets with expression maps, this would have a use for layering of traditional instruments and could also potentially help with this sort of thing.

As an example, sometimes you may want to layer two string libraries on top of each other to achieve a certain sound and more realism, and might want to do that from one part. In some passages you may want to play back with only one library, in others with the other, and sometimes with both together. Currently you would have to use multiple staves to do this reliably, because you might need to make manual MIDI CC customizations and might want different curves for different libraries. You could maybe do it with independent voice playback, but that is more difficult if they are playing together - you’d have to hide the other voice or do other workarounds, and remembering which voice number belongs to which library is not very user-friendly.

If there was instead a way to associate multiple MIDI tracks, and therefore sample libraries, with a single staff in the score, there could be an interface letting the user choose whether to play the line on just one of them or on some combination. For instance, a single string line could have a series of checkboxes for different libraries - say VSL Duality Strings, Synchron Strings Pro, and Elite Strings - and I could have all three checked, or just one, or two. If these were separate piano-roll MIDI tracks, so that the user could customize the CC curves for each one independently (while the notes and dynamics stayed the same), it would be even better. Doing this now would practically require setting up one staff for each library and copying (or cutting) and pasting the notes and dynamics across, which is not something you want to do once your printed score and parts look the way you want - you don’t want to disrupt that nice engraving by cutting and pasting things around for the sake of playback.

Such a thing would surely not be an easy task, though - it would mean changing the architecture from a one-to-one relationship between score part and MIDI part (or between score voice and MIDI part, in the case of independent voice playback) to a one-to-many relationship between score part (or voice) and MIDI part.
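The proposed one-to-many model could be sketched roughly like this (hypothetical names throughout; `LibraryTrack` and `StaffLine` are invented for illustration): one notated line feeds several library tracks, each individually enabled and free to carry its own CC curve, while the notes stay shared.

```python
# Sketch of a one-to-many model: one notated line feeding several
# sample-library tracks, each individually enabled and with its own
# CC curve, while notes stay shared. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class LibraryTrack:
    name: str
    enabled: bool = True
    cc_curve: dict = field(default_factory=dict)  # beat -> CC1 value

@dataclass
class StaffLine:
    notes: list                               # shared by every track
    tracks: list = field(default_factory=list)

    def active_tracks(self):
        """The libraries whose checkbox is currently ticked."""
        return [t for t in self.tracks if t.enabled]

line = StaffLine(
    notes=[60, 62, 64],
    tracks=[LibraryTrack("Duality Strings"),
            LibraryTrack("Synchron Strings Pro", enabled=False),
            LibraryTrack("Elite Strings")])
```

Ticking or unticking a checkbox then changes only which tracks render, never the engraved notes themselves.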

Yes, please!

Danger, I’ll probably say too much on this one. 🙂

I think there is a pretty definitive zombie thread from 2016 where Daniel and lots of others weighed in on what makes a voice a voice versus a texture or layer. It comes down (in my understanding) to voices having melodic and rhythmic independence, whereas the intended characteristic of a layer is, in my opinion, kind of the opposite. I’m not saying either is right or wrong. I’m just suggesting that the independence of voices will get in your way when you want layers, and vice versa.

So visually how would you want to see this notated, and what is the easiest way to write it or play it?

I’m not expecting consensus or to be right or anything. I have a grand staff in Dorico (and just a single midi channel) going to the bottom half of my keyboard rig. I think of it as the Old Testament, because it contains a Prophet and 5 others. All six are driven simultaneously through the rig, though they are separate hardware or software instances. There is a volume knob for each one, and at “zero volume” no notes are sent to the corresponding synth.

I’ve come up with a lot of notation ideas and tricks that in the end just seemed dumb to me. Whereas it seemed straightforward to just read “add Brassy Pad” or whatever in the score, and I know to crank the corresponding knob. Or - confession - what I write is “add Brassy Pad {L1}”. The actual playing technique that Dorico understands is the {L1} or {L2} or whatever; the “add Brassy Pad” is pure text. The reason I’m currently doing this is that (IMO) the text is much better at communicating what is actually happening, but then how would a director look at you and signal that the pad is too loud? Combined with {L2}, they can look at you and sign “L2” or “2” with one hand and indicate a lower volume. And of course I also don’t want to create a new playing technique for every patch.

{L1} is meant to be used with a dynamic marking. If you use {L1} on a note that also has, say, an mp dynamic, that layer (and that layer only) will be set to mp. The {L1} playing technique has a line continuation, so it is possible to write crescendos or other longer dynamic changes for just that layer. And —{L1} is a playing technique to mute a layer.
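The {L1}/{L2} scheme described above can be sketched as a small interpreter: a technique names a layer, and a dynamic attached to the same note sets that layer’s level only. The dynamic-to-CC mapping below is an assumption made for illustration.

```python
# Sketch of the {L1}/{L2} scheme: a playing technique names a layer,
# and a dynamic attached to the same note sets that layer's level only.
# The dynamic-to-CC mapping is illustrative.

DYNAMIC_TO_CC = {"pp": 30, "p": 45, "mp": 60, "mf": 75, "f": 95, "ff": 110}

def layer_event(technique, dynamic=None):
    """Interpret e.g. ('{L1}', 'mp'), or the mute marking '—{L1}'."""
    if technique.startswith("—"):          # long-dash prefix means mute
        layer = technique[2:-1]            # '—{L1}' -> 'L1'
        return {"layer": layer, "volume": 0}
    layer = technique[1:-1]                # '{L1}'  -> 'L1'
    value = DYNAMIC_TO_CC.get(dynamic, 75)  # default roughly mf
    return {"layer": layer, "volume": value}
```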

As for zones, I prefer doing those not with MIDI but within the synth itself - though I find notating splits to be problematic, period. Take the PS 3300, for example. You can patch it so that any time you add, say, more than three notes to a chord, the extra notes add a different layer. Or remove a layer if more than two notes are playing, to keep things cleaner. Or have layers that fade in only after a note is held longer than a half note, while an OB-8 layer with up to eight splits simultaneously tracks the highest or lowest note within each of its areas…
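The chord-size patching described for the PS 3300 boils down to a simple routing rule. A sketch of just the logic (not any synth’s actual patch format): notes beyond the third in a chord go to a second layer.

```python
# Sketch of chord-size patching: notes beyond the third in a chord are
# sent to a second layer. Purely illustrative of the routing logic,
# not any synth's actual patch format.

def route_chord(pitches, threshold=3):
    """Return (layer_a, layer_b): extra notes above the threshold go to B."""
    ordered = sorted(pitches)
    return ordered[:threshold], ordered[threshold:]

a, b = route_chord([60, 64, 67, 72])   # a four-note chord
```

With a four-note chord, the top note spills into the second layer; a triad stays entirely on the first.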

What I’m really saying is that I don’t usually notate splits in detail within a score. I might write “Prophet Bass [B1:B3] Lead [A5:A8]”, but I don’t try to tie that to any kind of playing technique; I just save it in the patch. I’ve thought of using different noteheads as a reminder to myself, but mostly I just play the notes.

The upper half of my keyboard rig is the “New Testament” to me. It’s a Korg Wavestate hardware synth, and one of its features is that changing the patch while it’s being played does not affect notes that are already sounding. So you could play and hold a cello patch, switch to a flute patch and play/hold flute notes, switch to something else… Oh, and each patch has 4 layers with splits/zones… So this one gets its very own grand staff instead of trying to have layers inside of a layer. I notate its internal layers and splits, if needed, like the others. Wave sequences… 🤔

For patch changes I use playing techniques like PC:A4, which I know isn’t great, but A4 matches the bank and patch scheme of the Wavestate. I would like to see parameterized playing techniques as an enhancement - a playing technique to which the actual patch-change number could be passed.
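A parameterized patch-change technique could be as simple as parsing the marking itself. A sketch, where the letter-to-bank mapping (A=0, B=1, …) and the 0-based program numbering are assumptions made for illustration, not the Wavestate’s actual scheme:

```python
# Sketch of a parameterized patch-change technique: parse a marking
# like "PC:A4" into a bank and program number. The letter-to-bank
# mapping and 0-based numbering are assumptions for illustration.

import re

def parse_patch_change(marking):
    """'PC:A4' -> (bank, program), with bank A=0, B=1, ... """
    m = re.fullmatch(r"PC:([A-P])(\d+)", marking)
    if not m:
        raise ValueError(f"not a patch-change marking: {marking!r}")
    bank = ord(m.group(1)) - ord("A")
    program = int(m.group(2)) - 1      # 0-based, as in MIDI program change
    return bank, program
```

One generic technique plus a parser like this would avoid creating a new playing technique for every patch.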

Just me though.


Thank you all for your feedback!
Some of the points raised here (at least those not tied to personal setups of limited relevance to the broader development of the software) are interesting and could open up new perspectives.
As for me, I still believe that a coherent, integrated solution for managing multiple voices on the same staff, covering both MIDI rendering and performer readability, without redundant operations or the repurposing of existing tools, would be truly beneficial, particularly in the context of contemporary composition practices.

The question is whether this kind of multitimbral capability should exist within Dorico itself, or reside in a VST such as Garritan ARIA (which can channel a single MIDI input to multiple instrument slots), or even in a DAW (Ableton?) used as an output for Dorico.