First of all, let me say that coming to Dorico from Sibelius and MuseScore has been an absolute joy! The fact that Dorico seems to “understand” music, and isn’t just about putting symbols and text on a page, really sets it apart in my opinion.
One issue I’ve been running into:
I’ve been working on some solo piano pieces and have run into a minor frustration with how dynamics play back. In the piano music of Mozart, Beethoven, Chopin, etc., we typically see a single dynamic marking between the staves that applies to both hands.
However, the performer is not expected to play both hands at exactly the same dynamic; rather, to oversimplify, the melody should typically be “brought out” into the foreground while the accompaniment stays in the background.
By default in Dorico, the exact same dynamic ranges are assigned to all voices in both staves. In fact, if the melody is finishing a phrase while the accompaniment is beginning a phrase, the first beat emphasis may make the accompaniment even louder than the melody.
This is not an ideal situation for playback, and I have been looking into ways to fix this in my own projects. Some ideas:
1. Voice-specific dynamics (https://steinberg.help/dorico_pro/v3/en/dorico/topics/notation_reference/notation_reference_dynamics/notation_reference_dynamics_voice_specific_c.html)
This strategy works fine in principle for simple pieces (where we could just hide the lower dynamics for the left hand, for example), but would be very tedious for longer, more complex works. For example, imagine mirroring a complex, constantly changing dynamic line across multiple voices, with one voice always a dynamic level (or more) louder than the others.
Additionally, the limitations described here make this even harder: https://www.steinberg.net/forums/viewtopic.php?f=246&t=202049&sid=2cfae18fa25fea61707c48c66e6233e7
2. Enabling independent voice playback (https://steinberg.help/dorico_pro/v3/en/dorico/topics/play_mode/play_mode_tracks_instruments_independent_voice_playback_enabling_t.html)
This strategy, which I am currently experimenting with, seems better than (1). In theory, I can separate the voices for playback and use the mixer to bring the accompaniment down. This way, a single set of notated dynamics serves both voices, and a whole voice can be raised or lowered “in the mix” to achieve the desired dynamic balance.
The only drawback I can see so far is that we may need to artificially create new voices if, for example, the melody passes from the right hand to the left. But this is not insurmountable.
3. Feature proposal: the ability to “annotate” sets of notes as melody / accompaniment (or some other, more general terminology). This would accomplish something similar to (2), but more directly and with less manual setup.
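To make (3) a bit more concrete, here is a rough sketch in Python of how such annotations might map to playback. Everything here (the role names, the offset values, the `Note` structure) is hypothetical and not part of any actual Dorico API; it is only meant to illustrate the core idea of a per-role dynamic offset applied on top of the single shared notated dynamic.

```python
from dataclasses import dataclass

# Hypothetical role annotations and their dynamic offsets (in MIDI velocity
# units); these names and values are illustrative, not anything from Dorico.
ROLE_OFFSETS = {
    "melody": +12,         # bring the foreground voice out
    "accompaniment": -12,  # keep the background voice down
    "neutral": 0,          # follow the notated dynamic as-is
}

@dataclass
class Note:
    pitch: int       # MIDI note number
    velocity: int    # velocity implied by the shared notated dynamic
    role: str = "neutral"

def apply_role_offsets(notes):
    """Return new velocities with each note's role offset applied,
    clamped to the valid MIDI note-on velocity range 1-127."""
    return [max(1, min(127, n.velocity + ROLE_OFFSETS[n.role])) for n in notes]

# A single notated mf (velocity ~80) shared by both hands:
right_hand = Note(pitch=76, velocity=80, role="melody")
left_hand = Note(pitch=48, velocity=80, role="accompaniment")
print(apply_role_offsets([right_hand, left_hand]))  # [92, 68]
```

The nice property of this model is that when the melody passes between hands, only the annotation moves; the notated dynamics stay exactly as the composer wrote them.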
Anyway, I would love feedback on whether I’m missing something in what is currently available, and on whether the feature proposal in (3) is indeed something that could or should be implemented.