I’ve long thought that the future for NotePerformer lay in splitting its two core functions:
- A synthesized/sampled playback engine with low RAM/CPU usage
- An AI shaping/humanization engine more capable than what conventional expression maps can do
And that’s basically what has happened in this release: the engine has been extended to support third-party libraries. I already had the sense that we were near the point where NotePerformer itself could not really sound better without major new recordings, so to me this makes a lot of sense and is a very welcome development.
As a composer, there is something further I would still like to see, though I understand how difficult it would be to implement, and it may not be practical at all. In some regards this ties in with what @Thurisaz was asking about, but I’m not going to word it the same way, or imply by any means that it would be easy; it may simply not be possible at all.
With all of these new features, it is almost possible to use NotePerformer to create finished pieces of music with samples. The biggest hurdle to using NotePerformer for a finished, polished product is the lack of control over what it is doing. I really appreciate the new CC110 that was added, as that is a huge step in the right direction. What would be even better, though, is if you could see, and override, the shaping NotePerformer is actually doing within the CC lanes in Dorico Play mode. For instance, if NotePerformer adds a crescendo and diminuendo over a long note, you would actually see that in the CC lane in Dorico. This would require the shaping details to somehow be passed back from NotePerformer to Dorico, which is not possible right now and which I cannot see being easy by any means.

One potential solution that comes to mind would be to split the interpretation engine and the playback engine into two different processes, so that Dorico could run the interpretation engine right after expression map processing. The CC lanes would then display the result of what NotePerformer is doing in addition to the expression map, rather than only what the expression map is doing. Obviously this would be incredibly difficult, if not impossible, to do in practice: the interpretive engine would need its own new plugin format, which every notation software developer would have to agree on, all just to provide the ability to see and control what is actually happening. One advantage is that the 1-second play-ahead might no longer be necessary, since the interpretation engine could look at the entire score and add all of the CCs up front, to be handled in real time by the playback engine. The interpretation engine would also still have to communicate very well with the playback engine, given all of the crazy wizardry and processing that you are doing.
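To make the idea concrete, here is a minimal sketch of what such an offline interpretation pass might look like. To be clear, none of these types or functions are NotePerformer’s or Dorico’s actual APIs; everything here is hypothetical, and only illustrates shaping being pre-computed as plain CC data that a host could display, and let the user override, before it ever reaches the playback engine:

```python
# Hypothetical sketch of a decoupled "interpretation engine" pass.
# None of these types are NotePerformer's or Dorico's real APIs;
# they only illustrate pre-computing shaping as plain CC data.

from dataclasses import dataclass

@dataclass
class Note:
    start: float      # position in beats
    duration: float   # length in beats
    dynamic: int      # written dynamic mapped to 0-127

@dataclass
class CCEvent:
    time: float       # position in beats
    controller: int   # e.g. CC11 (expression)
    value: int        # 0-127

def interpret(notes: list[Note], cc: int = 11, steps: int = 8) -> list[CCEvent]:
    """Offline pass: add a crescendo/diminuendo arc over each long note.

    Because this runs over the whole score ahead of time, a host could
    draw the result in its CC lanes, and the playback engine would not
    need a look-ahead buffer to know what is coming.
    """
    events: list[CCEvent] = []
    for note in notes:
        if note.duration < 2.0:           # leave short notes flat
            events.append(CCEvent(note.start, cc, note.dynamic))
            continue
        peak = min(127, int(note.dynamic * 1.25))  # modest swell above written level
        for i in range(steps + 1):
            t = i / steps
            # triangular arc: rise to the midpoint, fall back to the written level
            arc = 1.0 - abs(2.0 * t - 1.0)
            value = round(note.dynamic + (peak - note.dynamic) * arc)
            events.append(CCEvent(note.start + t * note.duration, cc, value))
    return events

# A whole note at mezzo-forte swells and relaxes; the host could show
# (and let the user edit) these points before playback.
for ev in interpret([Note(start=0.0, duration=4.0, dynamic=80)]):
    print(f"beat {ev.time:4.1f}  CC{ev.controller} = {ev.value}")
```

The point of the sketch is simply that once the shaping exists as ordinary CC events rather than something computed inside the playback plugin, seeing it, editing it, and overriding it all come for free.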
Even for composers working in DAWs, an AI-driven engine that automatically applied shaping on top of the written dynamics, whose details a composer could then tweak and adjust manually, could be a huge time-saver. Composers working on things like media scores are expected to take on more and more of what performers would normally handle, and to provide all of the shaping in every part, whether for a mockup or for the finished product. I know NP does not, and probably never will, address working in DAWs; I mention it only as a potential side benefit of separating an advanced interpretive engine from the playback engine.
What is being done right now with NP4 is excellent, and I realize that what I describe above is really pie-in-the-sky kind of stuff and probably not practical. Still, I would love to see something like that one day.