Interpreting Human Performance in Music Notation Software

I am considering a Dorico purchase, but I would like to understand how the application handles (a) live recording and (b) importing MIDI files exported from Cubase. This ties into a specific workflow need. When I record in Cubase, I frequently record without a click track, because I want to capture a performance’s feel. Let’s set aside creating a tempo track in Cubase to align the freely recorded tracks to the grid; this rarely works out as expected, and it requires a large time commitment that disrupts the workflow.

Suppose I record freely into Cubase, using a combination of VSTi plugins and live recorded instruments. I realize that I will need to convert the live recorded audio into MIDI within Cubase before I export all the MIDI tracks to a MusicXML file. What I want to avoid is the tedium of “fixing” the live performance in its MIDI form so that it makes sense as musical notation. This typically requires quantization and MIDI note duration adjustments, both of which suck the lifeblood out of a performance. The whole process is tedious and a giant weight around the neck of a creative workflow.

What feature set does Dorico offer to intelligently interpret the imperfections captured in MIDI, so that the score does not look as though a fly spilled ink across the paper?

The second scenario is using Dorico to sketch out compositional ideas. At times I will want to work primarily in Dorico but do not want to spend time entering my ideas note by note. More often than not, I will have already sketched my ideas in pencil on score paper; I find this easier and more intuitive. However, I want to digitize my score/sketch with a professionally engraved look. Can I play my score/sketch into Dorico in real time, without being constrained by a metronome? Does Dorico define a set of rules to interpret the timing and note-duration imperfections of a live recording into readable music notation? If possible, I want to reduce the tedium of note input and focus instead on using Dorico to detail articulations, dynamics and score instructions.

Your feedback is appreciated. Thank you.

You can quantise the notation without changing the playback ‘timings’ of the notes.
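To make that separation concrete, here is a minimal Python sketch of the idea of keeping a notated (quantized) position distinct from the played (recorded) position. The `Note` structure, the PPQ resolution and the tick values are all hypothetical; Dorico’s actual internal model is not public.

```python
from dataclasses import dataclass

PPQ = 480  # ticks per quarter note (hypothetical resolution)

@dataclass
class Note:
    played_tick: int   # where the note was actually performed (used for playback)
    notated_tick: int  # where it is displayed in the score
    pitch: int

def quantize_notation(notes, grid=PPQ // 4):
    """Snap only the *notated* position to the nearest sixteenth-note
    grid line; the played position is left untouched."""
    for n in notes:
        n.notated_tick = round(n.played_tick / grid) * grid
    return notes

notes = [Note(played_tick=473, notated_tick=473, pitch=60),
         Note(played_tick=958, notated_tick=958, pitch=64)]
quantize_notation(notes)
# the first note is now notated on tick 480 but still plays back at tick 473
```

The point of the two fields is exactly what the post describes: tidying the score’s appearance without flattening the performance.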

Realistically, the further your live performance deviates from the metronome pulse, the less accurate the transcription is going to be. Any rubato is going to ‘confuse’ Dorico: is it a long note, or just a slow one? You can of course add things like rubato to the playback after you’ve entered the notes.

If you’re just trying to copy notes in from a score, then I’d do it to a metronome beat. You can change the sound of the pulse to be ‘softer’, if that helps.

I’ve recently been watching some of the YouTube videos on Anne-Kathrin Dern’s channel. Anne-Kathrin is a film composer with a string of well-known films already to her name. I would say that anyone interested in getting professional-quality results from samples should tune in, not just for a wealth of useful tips but also for good, down-to-earth advice.

I raise it here because I’ve always come at this from the angle that human performance is an essential component of creating a good mock-up. As it turns out - not necessarily so. Anne-Kathrin writes everything in with the pencil tool and then quantises it.

In this example she takes a short piece of music from The Lord of the Rings and creates her own mock-up. She’s using Cubase, articulations on different tracks (as opposed to expression maps) and several different libraries. The mock-up relies heavily on layering different libraries, and it’s that, along with the musical phrasing of each written line, that brings the piece to life.

Dorico could improve the tools to help users achieve the musical phrasing they want - but as things stand it’s already possible. Layering, however, is a different issue. We’re getting away from the score and into the realm of phantom players. And then there’s the mixer and routing etc…

Still, the thought that human performance is not such a critical requirement gives me hope that one day Dorico might find a way to enable users to achieve results like this. After all, writing it in is something Dorico is exceptionally good at.


What I’ve done with my DAW (I use Reaper) with pretty good success is to tempo map the recorded MIDI by adjusting the grid lines to it. This effectively creates a bunch of tempo changes, sometimes even one per beat, but importantly it aligns the notes to the rhythmic grid without adjusting their timing.

When the MIDI is imported into Dorico (including the tempo data), the notes will be displayed with sensible durations, but importantly they should play back at their recorded positions. From there, you may make spot adjustments in Write and Play mode to adjust the notes’ appearances, timings and velocities, as well as the tempo changes.
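The per-beat tempo-change idea above can be sketched in a few lines of Python. This is only an illustration of the arithmetic; the beat times are invented, and a real DAW would derive them from the grid lines you drag onto the recording.

```python
def tempo_map(beat_times):
    """Derive one tempo (in BPM) per beat from the real-time positions
    (in seconds) of freely performed quarter-note beats."""
    tempos = []
    for a, b in zip(beat_times, beat_times[1:]):
        tempos.append(60.0 / (b - a))  # BPM for the interval from a to b
    return tempos

# a hypothetical rubato performance: beats drifting around ~100 BPM
performed = [0.0, 0.62, 1.20, 1.85, 2.40]
print([round(t, 1) for t in tempo_map(performed)])
# → [96.8, 103.4, 92.3, 109.1]
```

Writing one tempo event per beat like this is what lets the notes sit exactly on the grid (so the notation is clean) while the audible result still follows the original, freely played timing.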


Whatever you do, don’t expect Dorico to easily tidy up freely played music. There is of course quantizing, and some basic performance styles, but it’s not that sophisticated as yet. You’ll need to find effective tempo mapping in a DAW; maybe @JesterMusician’s suggestion of Reaper does this better than Cubase.

Another approach is to use VSTs that have a lot of humanising programmed in. Modern libraries from vendors like VSL, Spitfire or CSS can sound pretty musical even when played in strict time, because of their various humanising features. But if it’s your own specific timing “feel” you want, you’ll probably need to go down the tempo-map route as things currently stand.

That sums it up very well. In this area, music software has not evolved at all over the last 25 years.

There actually has been some progress in this area, it just hasn’t come to fruition yet. Check out

I suspect the solution would involve artificial intelligence.

Hehe, have you tried it…?

Yes, and the handful of things I tried weren’t bad. Certainly better than the standard fare. XML export was hilariously bad.

All I’m saying is that I think there is some opportunity here, it’s just not yet developed to maturity.

It’s been around for years and I haven’t noticed any real development on the interpretational side… And XML export still requires their Pro subscription plan at $20/month ($139/year)… :roll_eyes:

That first part I didn’t know. I am intrigued by the concept of intelligent analysis. Everyone else basically just says, “You have to play to a click; get used to it.”

I think it’s worth remembering that a score is essentially just a set of instructions for a human to interpret. It’s a given that the performance of a piece of music will vary well beyond metronome markings, expression, etc., in real time. The ability of DAWs to capture a (MIDI) performance and play it back more or less exactly in real time confuses the issue. Why score something that is never going to be interpreted by a real musician (if your music is being rendered using VSTi, etc.)?

The aim is to score in such a way as to give the musician as much information to convey your wishes for its performance as is humanly (!) possible - but you’re never going to be able to get exactly what’s in your head on paper using a software program that relies on time-based management of data.

Way before music notation software was invented, composers were constantly adding techniques and vocabulary to the scoring process to better express their intentions - “…and still the beggars play it wrong!”

I think the best approach is to treat the notation process as a separate step. If you already have a musical version in the DAW, there’s no need to redo that in a notation program: quantize it to death and save your time for the notation. The same goes when recording directly into Dorico: play it the way you want it to read, not the way you want it to sound. It’s more like dictating a text than performing for an audience.