Open Dorico's interpretation engine

I’m taking my thoughts from scoredfilms’ thread and breaking out into a new topic to make this discussion global for any virtual instrument.

This, this, this! There has been a lot of talk about “humanization” recently and this is the first proposal that strikes a meaningful balance between automation and sculpting control, obviously with regard to dynamics. But it got me thinking about a more standardized application. What we’re all trying to do here is help the computer interpret notation the way a human would. It was an excellent step for the Dorico team to build an interpretive language between notation and musical meaning, something other notation programs lack. On top of this, they added further refinements, including mapping SFZ to CC data along with other articulations and playing techniques. They recently exposed these mappings for manual editing in CC lanes on the Play tab. Separately, humanization options are baked into expression map configuration and generic dynamic and timing settings of Dorico as a whole.

These are all fantastic developments that work together and lead to what is, in my mind, a logical balance of automation and sculpting control. Taken to the next stage, a universal method of humanizing playback would be to open the interpretation engine entirely.

Instead of SFZ marks resulting in a specific curve that can later be edited in the CC lane, allow editing in the expression map, or perhaps what you might call the “interpretation map”, “performance map” or “playback map.” Whatever the map is called, it makes sense to extend it far beyond controlling singular moments in time, as the Dorico team have already illustrated with SFZ.

To achieve the most realism, an expression map should allow user-defined CC curves for the life of the note as scoredfilms suggests. Like SFZ and other closed Dorico interpretations, a performance defined in the map would be exposed in the Play tab for continued editing. These performances are defaults, like we have today, only a lot more human and editable. Sculpting defaults won’t get you away from needing to polish each note once written, but it will get you much farther along. Perhaps far enough to make Dorico-wide settings like timing and dynamics much more meaningful. After all, applying a bit of randomization on a group of individually crafted human performance notes is better than applying randomization on static MIDI. But…

Adding access to CC data over the life of the note is just a beginning. Really there are other tools that can leverage the power of automation and algorithms in conjunction with human touch. For example, timing can be made part of the performance map, and humans can decide (as opposed to Dorico) when notes should start a bit early or a bit late. It’s nice that we can manipulate length with a setting, but imagine how much more powerful it would be if you could literally draw the CC of the note in its entirety: where it starts, where it ends, and where it travels in between. There are different UI paradigms to simplify these inputs, and there is no reason Dorico couldn’t port the Play tab “playback duration” model and allow you to visually establish default played durations directly in the performance map.
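To make the idea concrete, here is a minimal Python sketch, with every name invented for illustration (nothing here is Dorico’s actual API), of what a user-drawn per-note curve might look like: breakpoints normalized over the note’s life, plus start/end offsets for the timing control described above.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceCurve:
    """A hypothetical per-note CC envelope: breakpoints given as
    (position within the note from 0.0 to 1.0, CC value 0-127)."""
    points: list                  # [(pos, value), ...] sorted by pos
    start_offset: float = 0.0     # seconds; negative = note starts early
    end_offset: float = 0.0       # seconds; positive = note releases late

    def value_at(self, pos: float) -> int:
        """Linearly interpolate the CC value at a normalized position."""
        pts = self.points
        if pos <= pts[0][0]:
            return pts[0][1]
        for (p0, v0), (p1, v1) in zip(pts, pts[1:]):
            if pos <= p1:
                t = (pos - p0) / (p1 - p0)
                return round(v0 + t * (v1 - v0))
        return pts[-1][1]

# A sforzando-like default: sharp attack, dip, gentle swell, soft release,
# played a touch early.
sfz = PerformanceCurve(points=[(0.0, 110), (0.15, 70), (0.7, 85), (1.0, 60)],
                       start_offset=-0.02)
samples = [sfz.value_at(p / 10) for p in range(11)]
```

Something like this, evaluated per note and written into the CC lane, would be the editable “default performance” the map hands to the Play tab.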

Consider other handy tools like “relative” vs “absolute,” something that already exists for changing channels. When applied to a CC curve (i.e. drawn curves should apply relatively [at existing CC values] or absolutely [CC values are reset]) you have powerful control. It means you can humanize a velocity curve and have it work at different velocities. You can take this even further with concepts like flattening the curve at lower velocities and exaggerating it at higher velocities to provide truly unique, but not random, performances. Or perhaps your performance map can include a “bucket” of human performances from which to draw in round-robin fashion. These could even be grouped by condition so you could set up 3-5 unique performances for each note length, dynamic or articulation.
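As an illustration only (these helper names are my own, not anything shipping in Dorico), relative vs. absolute application, dynamic-dependent flattening/exaggeration, and a round-robin performance bucket could be sketched like this:

```python
def apply_curve(base, curve, mode="relative"):
    """Apply a drawn curve to an existing CC lane of the same length.
    'absolute' replaces the lane outright; 'relative' treats each drawn
    value as an offset around 64 and nudges the existing values,
    clamped to the MIDI range 0-127."""
    if mode == "absolute":
        return list(curve)
    return [max(0, min(127, b + (c - 64))) for b, c in zip(base, curve)]

def shape_for_dynamic(curve, amount):
    """Scale the curve's excursions around its mean: amount < 1 flattens
    the shape (lower dynamics), amount > 1 exaggerates it (higher)."""
    mean = sum(curve) / len(curve)
    return [max(0, min(127, round(mean + amount * (v - mean))))
            for v in curve]

class PerformanceBucket:
    """A round-robin pool of 3-5 pre-drawn performances for one
    condition, e.g. 'short note, mf, staccato'."""
    def __init__(self, curves):
        self.curves = list(curves)
        self.next_index = 0

    def next(self):
        curve = self.curves[self.next_index]
        self.next_index = (self.next_index + 1) % len(self.curves)
        return curve
```

The point of the relative mode is exactly the one above: a single humanized shape keeps working wherever the underlying dynamic level sits, instead of stamping the same absolute values everywhere.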

As many have said before, the effort here is pre not post. You invest in setting up your performance map so you can then notate without needing extreme post-processing. This is not possible today. Even with all the great existing Dorico features, a heavy amount of post work is still required to get respectable output. I would rather have the option to sculpt the interpretation engine than the requirement to edit every piece of music I write so it’s performance worthy. Like I said, some editing will always be needed, but this can be reduced to mere polish if the interpretation engine accepted more detailed human guidance and instruction.

I agree completely. It would go miles towards better default playback.

Right, I tried to account for this in what I said. Thus dynamics and note lengths both affect the curve I’d choose to have used. Higher dynamics should come down a bit, lower ones need to swell, etc. Per-technique curves I hadn’t thought about, as Infinite is very consistent, BUT it’s still valuable to think about that. Complex, but… definitely relevant.

Editing will always be necessary, but you certainly don’t want to edit every last note to be realistic and human.

Thanks for the feedback. Certainly I think there is mileage in us making it possible to define these kinds of curves for automating controllers and then applying them automatically. We’ll discuss this internally. Obviously I can’t at this stage commit to a particular timeline for any changes, and we already have a number of important things lined up in respect of improving interaction with Play mode, but of course I can see the benefits of this kind of functionality, so we’ll give it due consideration.

It’s a thoughtful proposal. Taking a step back, a DAW approaches humanizing through micromanagement of the CCs, mixing techniques and such. I kind of hate that, so I try to do my work in Dorico as much as possible. I go cross-eyed looking at a piano roll and much prefer a score. So having more control over the notation-to-performance mapping would be very powerful indeed, and a more musical approach than the DAW micromanagement, which is accurately called “programming”.

an expression map should allow user-defined CC curves for the life of the note as scoredfilms suggests

Yes. And when a further touch is required, the ability to “conduct” the performance. Here’s my dream …

  • Dorico is playing the score and I’m following along
  • Dorico is applying continuous CC defaults for note durations, as suggested above
  • As I’m listening I’m conducting the music
  • Left hand is on sliders or the pitch bend for CC tweaking
  • Ideally this would be in Play mode, with the addition of a new center tab showing a read-only view of the score (so you can switch between score and piano roll, like Logic and others do)
    • Support a tweaking input - e.g. one of those common pitch bend roller wheels that centers at zero; that control would apply a ± offset to the present setting. E.g. if the Dorico-generated (or previously recorded) CC curve says 100, then you can ‘pitch bend’ around that value.
    • Also support sliders for absolute control
    • Important: support keyboard hits to change velocity. Ignore the note value; let any keyboard hit’s velocity be the velocity value. Bonus: support CC=velocity mapping. That would be amazing. I don’t think anybody does that, and I don’t know why; its absence forces the ‘play in’ method of music entry in a DAW (which leads to poorer composition in my experience)
    • Bonus: have a controller input which selects CC mapping - e.g. Pitch Bend -> Expression (so the pitch bend wheel can be used for multi expression). Say we set CC 50 to select which CC pitch bend is mapped to, then a simple table, e.g. 0=CC1, 1=CC2, 3=CC17 … This allows dynamic switching between what I’m conducting - expression, tightness, reverb, vibrato, …
    • Also make it obvious what CC we’re editing here. Put a display up by the automation on the top bar showing which CCs are being manipulated
  • Right hand is on either the whole ensemble, or some section or member
  • How to tell the computer who I’m pointing at? Some ideas
    • Notes on the keyboard (could be hard to remember)
    • Just put it on a CC and let us decide (e.g. I’d use a button panel). Just let us enter a CC (like 100), and so CC100 value 0 is the first player, 1 is the second, etc … with some provision for groups.
    • Screen selection (a little clumsy) - maybe with a custom popup
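The routing ideas above (one CC selecting which CC the pitch-bend wheel edits, another selecting which player you’re “pointing at”) could be sketched like this. The CC numbers and the remap table are just the examples from the list, and every class and name here is hypothetical:

```python
# Hypothetical controller routing for "conducting" Dorico playback.
# CC 50 chooses which CC the pitch-bend wheel edits (per the table in
# the post above); CC 100 chooses which player the right hand targets.
PB_TARGET_TABLE = {0: 1, 1: 2, 3: 17}   # CC 50 value -> destination CC
PLAYERS = ["Flute", "Oboe", "Violins I", "Violins II"]  # example roster

class ConductorRouter:
    def __init__(self):
        self.target_cc = 1          # default destination: CC1
        self.player = PLAYERS[0]

    def handle_cc(self, number, value):
        if number == 50:            # switch what we're conducting
            self.target_cc = PB_TARGET_TABLE.get(value, self.target_cc)
        elif number == 100:         # switch who we're pointing at
            if value < len(PLAYERS):
                self.player = PLAYERS[value]

    def handle_pitch_bend(self, bend, current):
        """Pitch bend (-8192..8191) nudges ± around the current CC
        value rather than setting it absolutely."""
        offset = round(bend / 8192 * 32)   # up to ±32 CC steps of travel
        new_value = max(0, min(127, current + offset))
        return (self.player, self.target_cc, new_value)
```

The key property is the last method: the wheel centered at zero leaves the generated curve alone, and deflection rides ± around whatever Dorico (or a previous take) already wrote.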

This would be a very musical and much quicker way to program. With the defaults in this thread we should have a nice, basic humanization. Then, by allowing us to conduct live using our two hands (and maybe feet on controllers), we can conduct with our hearts, using a skill we already have instead of having to learn a new one.

I think there are only two important questions: one, to what degree should Dorico become a DAW, and two, to what degree should notation be allowed to drive performance versus live playing (which in this context largely means CC/velocity/articulation editing)?

On the first point, to my mind sophisticated mixing, bussing, plugins and effects generally belong in a general-purpose DAW. Dorico has some minimal support already, which is fine. Conversely, many DAWs have some minimal support for notation, and even less support for turning notation into performance (e.g. knowing what to do based on slurs, staccato, etc.). So Dorico should take the industry lead in this area, and the ideas posted above would do that, I think.

On the second point, OK, in an ideal world it’s left up to the composer, but certainly at some point having a score which micromanages the player probably isn’t desirable. So the performance aspects of humanizing (with CCs, note velocity and articulation changes) are important here. Dorico already has a great foundation, and having thought this through, it seems to me that the above extensions would establish Dorico as the premier notation app, from score to mockup.

Given the above I could do 90% of my work in Dorico, then simply export the MIDI mockup over to a DAW for the final mixing process. That would be glorious. Putting these features in Pro would further enhance that tier, and they don’t make as much sense for the other tiers anyhow.

This is a perfect transition from individual humanized note performances to phrase performances. I wouldn’t be one to record multiple performance CCs simultaneously or attempt to record performance CCs across instruments in a single take, but having the option is nice. For me, even the basic flow of selecting a single instrument, choosing a performance parameter to record, and then conducting (or “playing”) that performance would be incredibly awesome. No DAW needed because, as you suggest, this performance is more appropriately handled in Dorico on account of its musical nature.

In addition to playing in velocity hits without affecting pitch or duration, I’d want alternatives as well. That is to say, allow “locking” of pitch, but play in duration without affecting notation. This way you can completely humanize the rhythmic feel. And there could be some guards in place to ignore performances that are too far outside of quantization settings, etc. But you get the idea that generally if you want to humanize a certain aspect, you should be able to do so via live performance. This could be done with any number of MIDI controllers, including drum pads for example. Quick controls to record performance data. Just imagine if you could select 1 or more instruments on the score, choose a performance parameter to record, hit play and then give a personal performance for that parameter only, without affecting anything else in the score.

  1. Humanize rhythm & duration = hit any key, drum pad or on/off midi controller during playback
  2. Humanize velocity = hit any key, drum pad or on/off midi controller during playback
  3. Humanize expression = ride any fader or breath controller during playback
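A “record note starts only” pass like item 1 might work as sketched below, with the quantization guard mentioned above discarding hits that stray too far from the notated grid. The function, its arguments, and the threshold are my own invention, purely to illustrate the idea:

```python
def humanize_starts(notated_starts, played_hits, max_dev=0.12):
    """Hypothetical single-parameter recording pass: each notated note
    (start times in beats) adopts the timing of the nearest played hit
    (pad/key strikes captured during playback), unless the deviation
    exceeds a quantization guard in beats, in which case the notated
    time is kept and nothing else in the score is affected."""
    humanized = []
    for start in notated_starts:
        nearest = min(played_hits, key=lambda hit: abs(hit - start))
        if abs(nearest - start) <= max_dev:
            humanized.append(nearest)   # adopt the human timing
        else:
            humanized.append(start)     # too far out: keep notation
    return humanized
```

The same shape of pass would apply to velocity or duration: lock every other parameter, record just the one being performed, and guard against wild outliers.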

These are musical concepts that take advantage of an underlying score. Consequently, they make more sense in Dorico than a DAW. So, finally, I agree with you that Dorico is the place where performance and humanization should be dealt with, and a DAW is the place where audio engineering should be handled.

Just to clarify what I mean here by using a drum pad, you could choose to record only the “note start” performance parameter and use a drum pad to trigger the note without affecting its duration. Or you could use a keyboard to perform note start, duration and end.

Good catch. Nobody supports this AFAIK, which necessitates playing in all the music against a click which I’d prefer not to do. It’s on my list to see if I can hack it with the MIDI javascript scripting that Logic supports but haven’t done it yet.

I think this is a capability sorely lacking. If you listen to much media music being written these days, it follows a similar template, which I think comes from the fact that it’s what’s most conducive to composing at the DAW. Personally I think if people would simply compose to score as we used to, better music would be produced. But this still leaves the most tedious step, which is humanizing the performance (which is why people don’t do it to score - not enough time in production schedules).

If Dorico could combine humanizing and scoring into these two steps (compose and conduct), I think it would appeal to a lot of media composers out there who presently don’t notate at all (except after the fact by an assistant if it’s going to be recorded).