More human sounding quantization options for Dorico?

This is exactly my workflow: when Dorico finally came along I could chuck the DAW and start writing real music without going blind staring at a MIDI roll. But there are times when you need to humanize, and the recent addition of the piano roll has revolutionized that for me, as it's so easy to do now (finally marrying the MIDI directly to the score).

So I've been doing this for a long while in Dorico, and I'd question the need for this kind of quantization - is it really necessary, or is it a hangover from the fact we all come from DAWs? Dorico does a good job of all the basic humanizing, but on its own it's not good enough for real playback. What I've found, however, is that all that's needed is a few magic spices:

  • Shorten/lengthen actual notes in prominent phrases - basically, add breath marks. (If Dorico would do this for us from a breath mark notation, that would eliminate this step, as it could be notated.)
  • Draw in CCs for phrase shaping. I use straight lines, since that's how we wind players do it (none of this fader massaging people do). This usually follows the same phrases as the previous step, so they're done simultaneously.
  • Add in subtle rubato - push/pull your tempos, again usually at important phrases.
  • Tweak up any other miscellany (usually not much of this).
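The steps above can be sketched in code. Everything here - the note representation, the phrase boundaries, the CC number, the function names - is purely illustrative, not Dorico's actual API or behaviour:

```python
import math
from dataclasses import dataclass

@dataclass
class Note:
    start: float   # position in beats
    length: float  # duration in beats
    pitch: int

# Hypothetical phrase boundaries in beats; in practice these come from the score.
PHRASE_ENDS = [4.0, 8.0]

def add_breaths(notes, breath=0.25):
    """Step 1: shorten the note that ends each phrase, leaving room to 'breathe'."""
    for n in notes:
        if any(abs((n.start + n.length) - end) < 1e-6 for end in PHRASE_ENDS):
            n.length = max(n.length - breath, 0.1)
    return notes

def cc_ramp(start_beat, end_beat, start_val, end_val, step=0.25):
    """Step 2: a straight-line dynamics ramp (e.g. CC11) across the same phrase."""
    events, beat = [], start_beat
    while beat <= end_beat:
        t = (beat - start_beat) / (end_beat - start_beat)
        events.append((beat, round(start_val + t * (end_val - start_val))))
        beat += step
    return events

def rubato_bpm(beat, base_bpm=96.0, amount=4.0, phrase_len=4.0):
    """Step 3: gentle push/pull - a sine-shaped tempo deviation over each phrase."""
    return base_bpm + amount * math.sin(2 * math.pi * beat / phrase_len)
```

For example, `cc_ramp(0, 4, 60, 100, step=1)` produces the straight-line shape a wind player would draw, rising evenly from 60 to 100 across the phrase.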

That's it, and it gives far better output than any algorithm could, unless somebody comes up with a DNN to analyze the music. And notice the process is really just what a conductor would do - which is what I am at this point.


/highly opinionated view begin/
Put differently, quantization jitter was invented because we had DAWs without notation (never mind that they all add some form of notation; it's a hack because it goes the wrong way, starting from the granular (sounds) and attempting to reconstruct the score). Personally I believe the fact that MIDI and DAWs work this way is why we have so much music that is industrial - it clacks along like machinery instead of being really analog. I mean, the very word "humanizing" the MIDI says it all.

So personally I don't think this has a place in Dorico at all - let's not stoop so low and bring in a feature that isn't necessary and would spoil the beauty of being able to work with notes. Quantizing was needed because of the volume of MIDI data, which is essentially unreadable - it's meant for a pea-brained processor on a MIDI instrument, not for human consumption. It would be as if we programmers went back to reading binary code instead of the high-level languages we actually use.
/highly opinionated view end/

So yeah, I don't think we need it or want it. Keep your eye on notation, and continue to make the piano roll easy to use for the necessary MIDI work.


@DanMcL I agree with everything you said.

Dorico’s development resources should focus on the engraving mission (where much remains to be done), not fret over playback, for which other tools are, and always will be, better suited.


In practice with this workflow, all I'm using Nuendo for now is multichannel. I take in MIDI from Dorico and put it into an Atmos bed, which eventually gets rendered out to the various formats. In this way the Dorico file is the primary data file - sacrosanct and religiously VCS'd. All edits go in there, and if some rework needs doing I go back to Dorico and regenerate the follow-on data files (MIDI data and the Nuendo project).

Much like how in digital art, the (say) Blender file is the primary, but the *.fbx, texture files, alembic files and so forth are just generated as needed.

So one thing that would be very handy is if the long-rumored Dorico->DAW connection could be realized. If in one click I could just send my MIDI over to Nuendo/Cubase and have the instruments all land in their proper lanes, that would be a big time saver.


My English is not very good, so @janus - thank you.

I’ve been using Dorico much more for DAW-type things since 4.0 came out. On the engraving vs playback balance, I’m much more engaged with the latter, but improvements to the former are always welcome and actually there’s plenty of synergy there.

I use Dorico 95% for notation, but I suspect there is a rather large user base out there that would disagree with you.

Not disagree that notation is more important, or that there are better tools out there - but disagree that playback is unimportant enough not to merit better tools for more realistic renderings (which the team has been providing).

But this horse has received a post-mortem beating plenty of times already…

Perhaps. But I've noticed an uptick in 'complaints' about playback (and performance manipulation), as if it were a core raison d'être for Dorico, which it is not. And I fear the dev team might be browbeaten into taking a wrong turn.

There has also been a noticeable increase in the number of folk arriving with no clue about the rudiments of notation, but expecting they can just dump their DAW-exported MIDI/XML files into a sausage machine and out will emerge the score of their dreams. Whereas the savvy users (yourself included) have long accepted the need to use ancillary tools (e.g. InDesign) to achieve their goals.

I would far rather the dev team put their efforts into sorting out the relationship between flows and text frames (etc.), or resolving the 'condensing more than the first instrument' conundrum.


I'm afraid you're mistaken here: we do consider playback to be one of Dorico's core capabilities. If we didn't, we wouldn't have a dedicated Play mode, or be dedicating considerable effort towards building MIDI editing features. One of our goals for Dorico is that you should be able to use it to produce a workable mock-up without needing to take the project into Cubase. We're still some way from that goal – just as we are some way from another of our goals, which is that you should be able to use it to produce publication-quality inners for music-based print publications of arbitrary complexity, without needing to take it into InDesign.

Dorico contains multitudes, and has to satisfy the needs of users with very divergent requirements. Not all features will be of interest or use to all users. But we will do our best to address those requirements across the full breadth of use cases that are important to our users in the aggregate.


[Google translation]
@Janus and @dspreadbury
I must say that, reading this forum for some time, I too have some fear, and I somewhat share Janus's opinion about the future of Dorico for my use, which seems to be moving a little away from "Dorico, music notation software" (at least at the moment).

For my part, I let the team decide what they need to make a living, and also where to start and where to end… with or without me really isn't important.

Finally, the future will tell. And maybe one day After Effects or Premiere will move into MIDI, InDesign into video and music notation, etc… and once again, with or without me, it really doesn't matter.

If I had a golden buzzer, I'd use it on this post. Well said, well done. There may be some Avid readers who disagree :wink: but on the play side I think the quantize features in Cubase represent the gold standard - and if they landed here in an even better form, wow. The histogram feature, expanded to cover duration, with some other AI intelligence around it?

The print and engrave functionality is important to me as well…


Thanks for the suggestions. I have tried similar approaches. I like, though, to be able to use the parts I record as much as possible, and to really perform them with a breath controller for the CCs, because that gives more realistic results. Furthermore, it is also more efficient for those who are great performers :)

I have to say I disagree with this. To my mind, an audio file is a stream of data that always plays back at a fixed tempo (the sample rate); the individual events (the notes, the tempo) were baked in during the recording. MIDI, in contrast, exposes each of these individual events and lets you control them. MIDI is inherently more musical and fluid, as it runs on PPQ (pulses per quarter note), as opposed to samples per second with audio.

So the problem for any DAW is how to combine the rigidity of audio and the flexibility of MIDI in a single view, even though they use completely different methods of temporal progression. That is why the infamous "grid" came into existence. Audio and MIDI events can now be perfectly aligned on this grid, but the price is that we need a "click" to keep events in sync and a special "track" to manipulate the tempo. If we work only with MIDI, we do not need the grid at all - we just record MIDI with complete freedom and enjoy the result. But we can't escape it if the result needs to be aligned to something, like the event grid in a DAW.

I have no idea whether Dorico must have its own version of the grid (e.g. the bar lines), but it certainly seems so. In any case, the grid is now the starting point and the dominant framework, and so it makes perfect sense to "quantize" MIDI events in bulk with key commands. It's no different conceptually from drawing CC curves.

EDIT: In this sense, it would be really amazing if Dorico could "extract" the PPQ tempo track from any live MIDI playing and allow it to be applied, edited and manipulated as a separate element altogether.
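The "extract the tempo track" idea is simpler than it might sound: given performed onset times matched to their notated beat positions, the implied BPM of each segment falls out directly. A minimal sketch with made-up names (not any real Dorico or Cubase API):

```python
def extract_tempo_map(beats, seconds):
    """Derive a per-segment tempo track from a live performance.

    `beats` are the notated beat positions of each onset; `seconds` are the
    times at which they were actually played. Returns (beat, bpm) pairs -
    effectively the performed tempo track, ready to edit separately.
    """
    tempos = []
    for i in range(len(beats) - 1):
        # beats covered per second of real time, converted to BPM
        beats_per_sec = (beats[i + 1] - beats[i]) / (seconds[i + 1] - seconds[i])
        tempos.append((beats[i], round(60.0 * beats_per_sec, 1)))
    return tempos
```

For instance, `extract_tempo_map([0, 1, 2, 3], [0.0, 0.5, 1.0, 1.6])` gives `[(0, 120.0), (1, 120.0), (2, 100.0)]` - a performer holding 120 BPM, then relaxing to 100 on the last beat.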

Yes - it's performer notation, which is why it's so useful in Dorico to be able to draw CCs.

But it's still a level of indirection. If you understand the software compiler chain, it's much like Code->Assembly->Binary; here it's Notation->MIDI->Audio bits. Data chains typically have this three-level pipeline (as do many other things; three is a magic number). So DAWs are always deficient - IMO - in that they put you right in the middle of the chain, at MIDI. If that were so great, why don't composers play all the instruments themselves? It's a separation of concerns.

Feel free to disagree, but I work with this pipeline pattern every day in 3D art, music and programming, and I don't see MIDI as anything but derivative data. Which is not to say it's not important - and if that's how you happen to want to work, that's fine too.

Indeed. If we put MIDI into the performer realm - i.e. following notation, as you suggest - then we implicitly accept the primacy of the grid. Which is why I didn't get why you'd be against MIDI quantizing, since it's so similar to drawing CCs.

I’ve kind of skimmed through the thread so forgive me please if someone’s already offered this up:

Some sort of Groove track…

I'd suggest a global one, as well as per-track ones that can either override the global groove or average themselves out in relation to it.

  1. Design your own groove on a grid, or by importing MIDI. As a nicety there might even be a bunch of standard groove templates to choose from to further tweak. Such a groove could have hit-points for note-on timing, as well as optional note-off hit-points.

  2. Have some default settings for how strictly notes clamp to the hit-points in the user groove. E.g. 100% would cause notes within x ticks of a groove hit-point to snap to it on the playback timeline, 50% would move them halfway towards the closest groove-track hit-point, etc.

  3. Offer the ability to adjust this magnetic quality, as well as some randomization parameters for the positions and lengths of individual notes, or a selection of notes.

E.g. click/drag/whatever to select a bunch of notes anywhere in the score, then get options to 'randomize' how tightly the selected notes clamp to the nearest hit-points.

  4. Each stave could also have an independent groove track. It'd be optional to use. Why have it? Sometimes a given player/section might benefit from being independent of the master groove. These stave grooves could also have options either to override the master groove, or to 'average out' the hit-points between the two (perhaps with interesting randomizing effects as well).

  5. Document how to get at this stuff for Lua scripting.

  6. Provide some kind of track where Lua scripts can be launched and run in real time. This could also be helpful for lots of playback tasks outside the realm of timing and groove control.
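Points 2 and 3 above can be sketched quite compactly: pull each note toward its nearest groove hit-point, scaled by a strength percentage. This is purely a sketch of the proposed feature (beats as floats, a plain list of hit-points), not anything Dorico actually offers:

```python
def apply_groove(note_starts, hit_points, strength=1.0, window=0.25):
    """Move each note start toward its nearest groove hit-point.

    strength=1.0 snaps fully, 0.5 moves halfway (the 'magnetic quality');
    notes farther than `window` beats from any hit-point are left untouched.
    """
    moved = []
    for s in note_starts:
        nearest = min(hit_points, key=lambda h: abs(h - s))
        if abs(nearest - s) <= window:
            s += strength * (nearest - s)
        moved.append(s)
    return moved
```

With strength=0.5, a note at beat 1.1 against a hit-point at 1.0 lands at 1.05, while a note 0.4 beats away from its nearest hit-point stays where it is. Per-stave grooves (point 4) would just be a second hit-point list, with the two results averaged.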

I do realize that the processing power required to do real time processing via scripts might throw a kink in things. Is the playback engine designed in a way that it could be done in real-time anyway? Or does Dorico pre-calculate how it’ll translate a score before the transport even starts rolling?

Still, it’s something I’ve long wished for in Cubase. A special track type that the sequence itself could ‘launch’ things like logical editors, macros, key-commands, scripts, etc.


Thank you for this clarification, Daniel. +1 for more quantization options.


The idea of "groove templates" has been integrated into some DAWs (I can't remember which offhand), and seems like a nice way of organizing these ideas. Some virtual instruments that include built-in sequencing - for instance the virtual percussion instrument SKAKA, from Klevgrand - allow the user to adjust the timing/groove of hits in an easy-to-use-and-understand way.

This discussion puts me in mind of the way that some music producers assemble loops and samples whose grooves "disagree" with each other in order to create new feels. I think some drummers/producers have a term, "slumping", for some of these types of grooves. We've probably all heard this kind of music in movies/commercials/pop hits, etc.

This kind of thing goes very deep, but the tools needn’t be overly complex.


I would really love this! As well as the ability to connect MIDI quantizing to dynamics in Expression Maps - for instance, slightly ahead of the beat for a crescendo, slightly behind the beat for p/mp, and all kinds of other agogic nuances.


To get the best Dorico for their own particular purpose, users need to ride the bus with people whose purposes diverge from theirs. It's not a zero-sum game; it's a game of addition (to the customer base) that will keep Dorico and its team in healthy shape going forward.