That would be dope
I often create expression maps based on MIDI channel, with multiple patches inside one Kontakt instance.
It’s so much faster than hunting for the keyswitches etc. I just load the single-articulation patches into the same Kontakt instance and use an expression map based on MIDI channels to switch between them.
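To make the idea concrete, here is a minimal sketch of that channel-per-articulation routing in Python using mido. The port name, channel layout and articulation names are made-up examples, not anything Kontakt or Cubase prescribes; in practice the expression map does this routing for you.

```python
import mido

# Channel (0-based in mido) -> articulation loaded on that channel in Kontakt
ARTICULATIONS = {
    0: "sustain",
    1: "staccato",
    2: "legato",
    3: "pizzicato",
}

def play_note(port, articulation, note, velocity=100):
    """Send the note on whichever channel hosts the requested articulation."""
    channel = next(ch for ch, name in ARTICULATIONS.items() if name == articulation)
    port.send(mido.Message('note_on', channel=channel, note=note, velocity=velocity))

# Usage (the port name is hypothetical):
# with mido.open_output('Kontakt Virtual In') as port:
#     play_note(port, 'staccato', 60)  # same note, different articulation = different channel
```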
Probably working on MIDI 2.0, which - once the big players like NI etc. adopt it - should make the whole concept of key switches obsolete.
So fingers crossed this is going to happen in the next few years.
I heard that MIDI 2.0 will have more than 127 velocity values.
I’m like … meh … what will that change for articulations and keyswitches etc.?
We need delay compensation per note or per articulation
That’s not even close to the full capabilities of MIDI 2.0. Higher resolution is just one thing.
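For what it’s worth, here is a tiny illustration of what that resolution jump means: MIDI 1.0 velocity is 7-bit (0-127), MIDI 2.0 velocity is 16-bit (0-65535). The plain bit-shift below is a simplification; the spec defines a slightly more elaborate bit-repeat scaling for exact round-tripping.

```python
def upscale_velocity_7_to_16(v7: int) -> int:
    """Naive upscale of a 7-bit MIDI 1.0 velocity to a 16-bit MIDI 2.0 velocity."""
    if not 0 <= v7 <= 127:
        raise ValueError("MIDI 1.0 velocity must be 0-127")
    return v7 << 9  # simplified; the spec's bit-repeat scaling maps 127 all the way to 65535

print(upscale_velocity_7_to_16(64))    # 32768 - one of only 128 possible values in MIDI 1.0
print(upscale_velocity_7_to_16(127))   # 65024 - MIDI 2.0 can address 65536 distinct steps
```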
The two main features from my perspective are:
1. Bi-directional communication. That way you can create a MIDI controller that asks the DAW for the track name and displays it on the controller. Yes, some of this is possible using NKS for example, but this basically opens up a lot of possibilities. Lose the idea of a controller being just a keyboard or some faders. You could for example display the state of a VST on a screen and then control its parameters from there - without setting up quick controls!
2. New MIDI messages. Basically you can now send per-note data, i.e., not just velocity and poly pressure (aftertouch) but technically any controller data you like, including pitch bend. You can even send which articulation this one note should use.
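For the curious, here is a rough sketch of how such per-note messages are packed as 64-bit MIDI 2.0 UMP Channel Voice packets, based on my reading of the spec. The field layout should be close, but the attribute-type value I use for “articulation” is purely an assumption, not an official mapping.

```python
def ump_note_on(group, channel, note, velocity16, attr_type=0, attr_data=0):
    """MIDI 2.0 Note On (opcode 0x9): 16-bit velocity plus an optional 16-bit attribute."""
    word0 = (0x4 << 28) | (group << 24) | (0x9 << 20) | (channel << 16) | (note << 8) | attr_type
    word1 = (velocity16 << 16) | attr_data
    return word0, word1

def ump_per_note_pitch_bend(group, channel, note, bend32):
    """MIDI 2.0 Per-Note Pitch Bend (opcode 0x6): a 32-bit bend for one single note."""
    word0 = (0x4 << 28) | (group << 24) | (0x6 << 20) | (channel << 16) | (note << 8)
    return word0, bend32

# A note that carries its own articulation id in the attribute field
# (using attr_type=2, "profile specific", is an assumption on my part):
print([hex(w) for w in ump_note_on(0, 0, 60, 0xFFFF, attr_type=2, attr_data=7)])
# Bend only this one note while other held notes stay untouched:
print([hex(w) for w in ump_per_note_pitch_bend(0, 0, 60, 0x8000_0000)])
```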
So if you can just assign articulations per note, what’s the point of key switches then?
You’d probably still have to create expression maps to tell Cubase how to display notes in the score editor, or whether something is a “direction” or a “per-note” articulation, but no more setting up key switches.
That would be up to Steinberg, then. If I were to program this, the VST3 standard would provide an interface for instruments to send back automation information, including delay - something that CLAP is probably already able to do.
Then it’s up to the DAW developers to have a big enough look-ahead to adjust for the note delay.
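As a sketch of what that could look like on the DAW side: if an instrument reported a delay per articulation, the playback engine could pre-shift each note by that amount, which is exactly where the look-ahead requirement comes from. All names and numbers below are invented for illustration.

```python
# Hypothetical per-articulation delays reported by the instrument, in milliseconds
ARTICULATION_DELAY_MS = {"legato": 120.0, "sustain": 60.0, "staccato": 10.0}

def compensated_start(note_start_ms: float, articulation: str) -> float:
    """Shift the note earlier so it *sounds* on the grid despite the attack delay."""
    return note_start_ms - ARTICULATION_DELAY_MS.get(articulation, 0.0)

# A legato note written at 1000 ms has to be sent at 880 ms,
# so the DAW needs at least 120 ms of look-ahead for this patch.
print(compensated_start(1000.0, "legato"))  # 880.0
```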
Not a dumb question, but not possible. The editor stores triggers (key switches) per articulation, while Cubase stores them per sound slot. While you can always combine triggers from articulations to form a sound slot, you cannot separate them from a sound slot back into the articulations - at least in some cases it would be ambiguous.
So technically it would be possible for a lot of expression maps, but it wouldn’t work all the time, and it’s a lot of work to support this feature. Let me think about it when I’m back from vacation.
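To illustrate the ambiguity with a toy example: two different per-articulation trigger layouts can collapse into exactly the same sound-slot trigger set, so there is no unique way to split the slot back up. The key switch notes here are made up.

```python
# Two different per-articulation trigger layouts (editor side) ...
layout_a = {"legato": {"C0"}, "vibrato": {"D0"}}
layout_b = {"legato": {"D0"}, "vibrato": {"C0"}}

def to_sound_slot(articulation_triggers):
    """Cubase-style sound slot: the union of all key-switch triggers it sends."""
    return set().union(*articulation_triggers.values())

# ... collapse into the same sound slot, so the slot alone can't tell you
# which articulation originally owned which trigger.
print(to_sound_slot(layout_a) == to_sound_slot(layout_b))  # True
```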