One of the most revolutionary features of Dorico is the separation between notated and played duration. It's the inverse of the visual quantization found in DAWs, and one that can work better for composers who prefer to start from a written score rather than a played performance.
Has anyone tried to use this feature to make playback more expressive and realistic? What was your impression? Is it feasible, or does it take an excessive amount of time compared to playing the music in on a keyboard? Can the results be used for a mockup or prototype?
I'm just experimenting, but I wonder what others have discovered.
Of course it can, but, as you intuited, it will take longer than you might expect, because you aren't interacting with the parameters directly via a controller. At the moment you can enter the offsets numerically, via the Properties panel, or graphically, in Play mode. These methods are very useful and quick when you're applying specific fixes as you work; building every value from scratch, however, is unlikely to be speedy. It will be interesting when Dorico implements real-time MIDI recording, which will hopefully capture the played position and the straight quantized value at the same time. In fact, I believe you can already import a MIDI file and have it keep the offsets, though that's not something I've needed, so I can't speak from experience.
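To make the idea concrete, here is a minimal sketch of the underlying model: each note keeps its notated position and duration, plus independent playback offsets that shift where and how long it actually sounds. This is a hypothetical representation for illustration only; the field names and units (quarter-note beats) are my own assumptions, not Dorico's internal format.

```python
from dataclasses import dataclass

@dataclass
class Note:
    # Notated position and duration, in quarter-note units.
    start: float
    duration: float
    # Playback offsets, in the same units (hypothetical fields;
    # Dorico's actual internal units and names may differ).
    start_offset: float = 0.0
    duration_offset: float = 0.0

    def played_start(self) -> float:
        """Where the note actually sounds: notation plus offset."""
        return self.start + self.start_offset

    def played_duration(self) -> float:
        """How long it actually sounds: notation plus offset."""
        return self.duration + self.duration_offset

# A quarter note on beat 2, played slightly late and detached:
# the score still shows an on-the-beat quarter note, but playback
# starts 0.03 beats late and lasts only three quarters of its value.
n = Note(start=1.0, duration=1.0, start_offset=0.03, duration_offset=-0.25)
```

The point of the separation is visible here: editing `start` or `duration` changes the notation, while editing the offsets changes only what you hear, which is exactly why tweaking them note by note is powerful but slow.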