I’m taking my thoughts from scoredfilms’ thread and breaking them out to make this topic global to any virtual instrument.
This, this, this! There has been a lot of talk about “humanization” recently, and this is the first proposal that strikes a meaningful balance between automation and sculpting control, specifically with regard to dynamics. But it got me thinking about a more standardized application. What we’re all trying to do here is help the computer interpret notation the way a human would. Building an interpretive language between notation and musical meaning was an excellent step by the Dorico team, and something other notation programs lack. On top of this, they added further refinements, including mapping SFZ to CC data along with other articulations and playing techniques, and they recently exposed those mappings for manual editing in CC lanes on the Play tab. Separately, humanization options are baked into expression map configuration and into Dorico’s global dynamics and timing settings.
These are all fantastic developments that work together and lead to what is, in my mind, a logical balance of automation and sculpting control. Taken to the next stage, a universal method of humanizing playback would be to open up the interpretation engine entirely.
Instead of SFZ marks resulting in a fixed curve that can later be edited in the CC lane, allow editing in the expression map itself, or perhaps in what you might call the “interpretation map,” “performance map,” or “playback map.” Whatever the map is called, it makes sense to extend it far beyond controlling singular moments in time, as the Dorico team have already illustrated with SFZ.
To achieve the most realism, an expression map should allow user-defined CC curves for the life of the note, as scoredfilms suggests. Like SFZ and other closed Dorico interpretations, a performance defined in the map would be exposed in the Play tab for continued editing. These performances are defaults, like we have today, only far more human and editable. Sculpting defaults won’t eliminate the need to polish each note once written, but it will get you much farther along, perhaps far enough to make Dorico-wide settings like timing and dynamics much more meaningful. After all, applying a bit of randomization to a group of individually crafted human performance notes is better than applying randomization to static MIDI. But…
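To make the idea concrete, here is a minimal sketch of what a map-defined curve over the life of a note might look like. None of these names (`PerformanceCurve`, `render_cc_events`) exist in Dorico; this is purely an illustration of a user-drawn shape being interpolated into concrete CC events across a note’s duration.

```python
# Illustrative only: a hypothetical "performance map" entry defining a CC curve
# over the whole life of a note, rendered into timed MIDI CC events.
from dataclasses import dataclass

@dataclass
class PerformanceCurve:
    cc: int                           # e.g. CC1 (mod wheel) for dynamics
    points: list[tuple[float, int]]   # (position 0.0-1.0 within note, CC value 0-127)

def render_cc_events(curve, note_start_ticks, note_len_ticks):
    """Linearly interpolate the drawn curve into (tick, value) CC events."""
    events = []
    pts = sorted(curve.points)
    for (p0, v0), (p1, v1) in zip(pts, pts[1:]):
        t0 = note_start_ticks + p0 * note_len_ticks
        t1 = note_start_ticks + p1 * note_len_ticks
        steps = max(1, int((t1 - t0) // 10))   # roughly one event per 10 ticks
        for i in range(steps + 1):
            frac = i / steps
            events.append((round(t0 + frac * (t1 - t0)),
                           round(v0 + frac * (v1 - v0))))
    return events

# A default swell-and-decay shape for a sustained note:
swell = PerformanceCurve(cc=1, points=[(0.0, 40), (0.3, 90), (1.0, 55)])
events = render_cc_events(swell, note_start_ticks=0, note_len_ticks=480)
```

The point is that the shape lives in the map as a default, while the rendered events are what would surface in the CC lane for further polish.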
Adding access to CC data over the life of the note is just a beginning. Really, there are other tools that can leverage the power of automation and algorithms in conjunction with human touch. For example, timing can be made part of the performance map, and humans can decide (as opposed to Dorico) when notes should start a bit early or a bit late. It’s nice that we can manipulate length with a setting, but imagine how much more powerful it would be if you could literally draw the CC of the note in its entirety: where it starts, where it ends, and where it travels in between. There are different UI paradigms that could simplify these inputs, and there is no reason Dorico couldn’t port the Play tab’s “playback duration” model and let you visually establish default played durations directly in the performance map.
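The timing side of this could be sketched just as simply. Again, `TimingProfile` and `apply_timing` are invented names for illustration, not anything in Dorico: the idea is that the map, rather than a global setting, decides how early or late a note speaks and how much of its notated value it actually sounds.

```python
# Illustrative only: hypothetical per-performance timing controls in the map.
from dataclasses import dataclass

@dataclass
class TimingProfile:
    onset_offset_ticks: int   # negative = play slightly early, positive = late
    played_fraction: float    # sounded duration as a fraction of notated length

def apply_timing(notated_start, notated_len, profile):
    """Turn a notated note into its actually-played start and length."""
    start = notated_start + profile.onset_offset_ticks
    length = round(notated_len * profile.played_fraction)
    return start, length

# A legato-leaning default: lean in 15 ticks early, sustain 95% of the value.
start, length = apply_timing(960, 480, TimingProfile(-15, 0.95))
# start == 945, length == 456
```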
Consider other handy tools like “relative” vs. “absolute,” something that already exists for changing channels. When applied to a CC curve (i.e., a drawn curve is applied relatively, as offsets to existing CC values, or absolutely, resetting them), you gain powerful control. It means you can humanize a velocity curve and have it work at different velocities. You can take this even further with concepts like flattening the curve at lower velocities and exaggerating it at higher velocities to produce truly unique, but not random, performances. Or perhaps your performance map can include a “bucket” of human performances from which to draw in round-robin fashion. These could even be grouped by condition, so you could set up 3-5 unique performances for each note length, dynamic, or articulation.
As many have said before, the effort here is pre, not post. You invest in setting up your performance map so you can then notate without needing extreme post-processing. This is not possible today. Even with all the great existing Dorico features, a heavy amount of post-work is still required to get respectable output. I would rather have the option to sculpt the interpretation engine than the requirement to edit every piece of music I write until it’s performance-worthy. Like I said, some editing will always be needed, but it could be reduced to mere polish if the interpretation engine accepted more detailed human guidance and instruction.