I’m excited about the workflow that Dorico’s playback engine will bring once it’s developed a little more. I too am eager to have more control over score interpretation as a user: to be able to fashion my own techniques, have fallback sounds, fully control libraries that use players other than HSSE, etc.
The Dorico Team does listen to users. We asked for a status lane to help test and troubleshoot our expression maps, and they gave it to us. We asked for controller lanes as an option we can use NOW for entering expressive data more easily and intuitively (a pragmatic, simple stopgap until we get a more advanced technique editor and fully implemented expression maps), and they gave it to us. The people who are actually putting scores in front of publishers and world-renowned orchestras/bands/vocalists DID ask for custom noteheads…and they NEED them more than they need a simulated audio mock-up.
There are long lists of things users are still ‘asking for’…and most of them will certainly come in due course. It’s all on the giant white board of ‘stuff to do’.
The thing is, I also use Finale, Sibelius, MuseScore, and Cubase Pro (Score module) pretty regularly. I make fairly deep playback expression templates for them where possible, and it’s NOT EASY (putting information in is, but getting it to sound good in a score is a different story), nor are any of them ‘complete’. They’ve been around for decades, and they all still have issues with setting up and implementing interpretive playback. There are simple things (some of which even constitute true bug fixes) we’ve been asking for in Sibelius SoundWorld and Finale Human Playback for more than 10 years that simply go ignored.
Third parties have stepped in to help those who don’t care to make their own interpretive maps by offering fairly expensive supplemental libraries and plug-in kits, but those did not happen overnight. Dorico will eventually get some third-party love at a price as well.
I’d rather Steinberg take their time to a reasonable degree, and do it RIGHT this time. I’m glad they have the placeholders for exclusion groups, velocity and transposition modifiers, and range limits in the expression map UI (even if they aren’t all functional yet). It just makes sense to us Cubase users. If things are done right, it’ll be fairly easy to port maps between Dorico and Cubase. Over time…we’ll begin to see the full implementation of the expression map system. The ‘foundation’ for the playback engine seems to be in place, but it’s critical that the Dorico team really does their homework and deep testing on how best to take advantage of it before releasing things to the public.
The notehead editor can actually be considered a prerequisite to much of what the expression map system is capable of doing for us. A different notehead shape on a score typically means you’ll be using some alternate technique or style to play the instrument or sing. These are prime candidates for making deep and extensive use of an expression map.
There will be other types of editors and markings that will need to be tied into the interpretive engine as well. So it makes sense if the team wants to get much of that in place on the visual score, well mapped out and understood, before they tie it all to the expressive playback engine.
They’ll need to map out and document some standards, or best practices, for how to build such expression maps (what takes precedence, how to fall back to the next most reasonable sound if something is missing, etc.), and all of that is rather difficult to do until the markings actually show up on a Dorico score where you can reason things out and run the tests.
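To give a feel for the kind of fallback logic I mean, here’s a purely hypothetical sketch in Python. The technique names, the lookup table, and the `resolve_sound` function are my own illustration of the general idea, not Dorico’s (or any library’s) actual implementation:

```python
# Hypothetical: a library provides sounds for some technique combinations.
# An expression map needs a rule for what to do when the exact combination
# isn't available.
TECHNIQUE_SOUNDS = {
    ("pizzicato", "con sordino"): "pizz_muted",
    ("pizzicato",): "pizz",
    (): "arco",  # the natural/default sound
}

def resolve_sound(techniques):
    """Try the full technique combination first, then drop techniques
    one at a time until something the library provides is found."""
    techs = tuple(techniques)
    while True:
        if techs in TECHNIQUE_SOUNDS:
            return TECHNIQUE_SOUNDS[techs]
        if not techs:
            raise KeyError("no default sound defined")
        techs = techs[:-1]  # fall back to the next most reasonable match

print(resolve_sound(["pizzicato", "con sordino"]))  # exact match exists
print(resolve_sound(["pizzicato", "tremolo"]))      # no combo, falls back
```

Even a toy like this shows why the rules (precedence order, what counts as “next most reasonable”) have to be nailed down and documented before third parties can build maps that behave consistently.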
Sure, they probably have flow charts as big as my house explaining the ‘theory’ of how it should all tie together as a finished product…but implementing it all takes time and has to come in stages. Sometimes the real world forces adaptations to the theory and to the ‘get it done’ schedule (where programmers are given specific projects).
Consider that they need to first:
- Build input and engraving features, for both the score-writing and Play modes.
- Make sure those features are solid, predictable, and consistent (usable).
- Optimize them to be efficient/fast across multiple platforms and hardware setups.
It makes sense that the playback features will come in cycles a few stages behind the score-making features. Since the scoring side of this product is a brand-new app built from the ground up, and it is massive in the scale of what it’s trying to do in such a short amount of time, I believe this dev team needs, and deserves, a little wiggle room on the expression maps.
Again, I’d rather they take a little longer making the hooks into the playback engine and DO IT RIGHT than rush something out that stays full of bugs for decades and leaves a lot of dead ends for library developers.
If they get this right…it will be very flexible and powerful. Users and library developers will have plenty of options to come up with just about anything imaginable. If they get it right…98% of the things we’d typically ask for in ‘feature requests’ will be things we users can build and share with each other ourselves! It will also make Dorico the leading scoring engine, setting the standard for VST3 and beyond.