I just felt like giving some kudos to Steinberg for breaking ground and moving us forward with VST3 technology. Even knowing that certain companies would deny their customers access to this new technology on principle, they still went ahead and bit the bullet. THAT is cool.
Having used Note Expression with HALion 4 (and some with Sonic as well) for some time now, I have to say that access to this is absolutely brilliant. When I have a section of notes (i.e. with NE data), I can now take notes from it and reorganize them, or even create new sections, with the play feel of the individual notes still intact, without needing anything but the Key Editor. This way, the particular feel of some played notes can be kept and used in different contexts, or even combined with other compatible notes, which sometimes gives a piece a whole new atmosphere where it would otherwise have sounded “just like it’s typically played”.
NE data really enables a new level (or maybe rather a much easier way) of experimenting with how sequences were played, rather than “just” what was played. BIG applause to Steinberg from little me!
As more and more companies release VST3-compliant modules and start supporting the newer sub-technologies inside VST3, it will get really interesting. There is of course a clash with hosts that don’t handle individual note data, but at some point one has to realize that something like this was inevitable.
A drum sequence, for example, benefits an awful lot from it, since individually created notes (hits) can now have their own data attached, and that data moves with the notes!
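To make the idea concrete, here is a minimal sketch of why per-note expression travels with copied notes. This is a hypothetical data model for illustration, not Steinberg’s actual API: the point is simply that the expression data lives on the note itself rather than on a channel-wide lane.

```python
from dataclasses import dataclass, replace
from typing import Tuple

@dataclass(frozen=True)
class Note:
    pitch: int      # MIDI note number
    start: float    # position in beats
    length: float   # duration in beats
    # Per-note expression: (time offset within note, parameter, value)
    expression: Tuple[Tuple[float, str, float], ...] = ()

def copy_note(note: Note, new_start: float) -> Note:
    """Copying a note to a new position keeps its expression data,
    because the data is stored on the note, not on the channel."""
    return replace(note, start=new_start)

# A snare hit with its own tuning and brightness curve points
snare = Note(pitch=38, start=0.0, length=0.25,
             expression=((0.0, "tuning", 0.2), (0.1, "brightness", 0.8)))
moved = copy_note(snare, new_start=4.0)
assert moved.expression == snare.expression  # the feel travels with the note
```

With channel-wide controller lanes you would have to cut and paste the matching slice of automation separately and hope it lined up; here the hit and its feel are one object.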
I haven’t really played with NE but belatedly watching this video really brought the power of it home to me. It compares an audio sax section translated with VariAudio, once to plain MIDI and once to MIDI with automatically generated Note Expression. Impressive stuff.
I’m just a bedroom hobbyist, so apologies ahead of time for the lack of experience/vision behind the following question:
The cool video showed how much better it sounds to extract Note Expression data along with the MIDI notes. Elektrobolt pointed out that one advantage of this is that you can bring the MIDI controller data to different parts of the song when you cut and paste the MIDI notes.
The question is … if the idea is to cut and paste a part so that the “expression” goes along with the notes - why not just cut and paste the actual audio for the sax itself, instead of converting to and from MIDI (w/ expression data)?
I wish MIDI devices (hardware) would become note-based as well, instead of channel-based.
I think MIDI has outlived its usefulness. Why not adopt the USB 3 standard as the interface of choice? Audio and MIDI could ride the same topology.
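The channel-based limitation mentioned above is easy to show. In standard MIDI, a message like pitch bend addresses the whole channel, so every sounding note bends together; a note-based scheme can address one note at a time. A rough illustrative sketch (not real MIDI I/O code):

```python
# Hypothetical illustration of channel-based vs. note-based control.

def apply_channel_bend(active_pitches, bend_semitones):
    """Channel message: every sounding note on the channel bends together."""
    return [p + bend_semitones for p in active_pitches]

def apply_note_bend(active_pitches, index, bend_semitones):
    """Per-note message: only the addressed note bends."""
    return [p + (bend_semitones if i == index else 0)
            for i, p in enumerate(active_pitches)]

chord = [60, 64, 67]                 # C major triad, all on one channel
print(apply_channel_bend(chord, 2))  # [62, 66, 69]: the whole chord bends
print(apply_note_bend(chord, 1, 2))  # [60, 66, 67]: only the E bends
```

This is exactly the gap Note Expression (in the host) and, later, per-note hardware protocols aim to close: expressive gestures that belong to a single note, not to everything on the channel.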