`IMidiLearn2` and MIDI 2.0

I have some questions about the IMidiLearn2 interface and MIDI 2.0 support.

The new interface does not appear to allow plugins to disambiguate between messages targeting different groups. Is it purely up to the host and user to route groups, or is it logically one group = one event bus?

MIDI 2.0 defines the following channel voice messages that have no corresponding VST3 event, and I’m unsure how they would relate to the new IMidiMapping2 interface:

  • Note On/Off with manufacturer or profile specific attributes
  • Per-Note management (detach per-note controllers, set/reset per-note controllers)
  • Program changes with option flags
  • Per-note pitch bend

All of these seem pretty desirable to have as events, especially for synths.

There are also flex data messages, which are undefined by VST3. Are those going to be added as DataTypes in DataEvent (notably tempo, metronome, time signature, text messages, lyrics, etc.)?

Finally, the new interface requires you to create 2^19 parameters to handle MIDI CCs, using the same hack as for MIDI 1. Is there a better way? Maybe a UMP event, so plugins could choose to receive MIDI data directly and the host could convert (or not)?

There is a proposal for an extension to VST3 that supports MIDI UMP messages (aka ‘real’ MIDI).


Hi @mhilgendorf,
thanks for your interest in this topic.

In VST 3 the equivalent of a MIDI group is an event bus.
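For illustration, a minimal sketch of that mapping: one event bus declared per group. `MyProcessor` and the bus labels are placeholders; the processor is assumed to derive from the SDK’s `AudioEffect` base class, whose `addEventInput` helper is used here.

```cpp
// Sketch: one VST 3 event bus per MIDI group, declared in initialize().
#include "public.sdk/source/vst/vstaudioeffect.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

tresult PLUGIN_API MyProcessor::initialize (FUnknown* context)
{
    tresult result = AudioEffect::initialize (context);
    if (result != kResultOk)
        return result;

    addAudioOutput (STR16 ("Stereo Out"), SpeakerArr::kStereo);

    // Two event buses standing in for two MIDI groups;
    // each bus carries up to 16 channels, just like a group does.
    addEventInput (STR16 ("MIDI Group 1"), 16);
    addEventInput (STR16 ("MIDI Group 2"), 16);

    return kResultOk;
}
```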

This is currently in the works together with the MIDI Association, so it’s not yet part of the SDK.

Use VST 3 Note Expression

As far as I know, the option flags are only there to indicate whether a bank select is applied. Program banks are not supported in VST 3; you should instead provide a flat list of programs. The host will map a program change with bank select to the corresponding index in the flat list.
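To make the flat-list idea concrete, a hedged sketch of the usual pattern, placed in the edit controller’s initialize; `kNumPrograms` and `kProgramParamId` are hypothetical constants:

```cpp
// Sketch: a single list parameter flagged as the program change target.
// The host maps incoming program change messages (with or without bank
// select) onto its 0..kNumPrograms-1 steps.
parameters.addParameter (STR16 ("Program"), nullptr,
                         kNumPrograms - 1, // stepCount: N programs = N-1 steps
                         0.0,              // default: first program
                         ParameterInfo::kIsProgramChange | ParameterInfo::kIsList,
                         kProgramParamId);
```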

Use VST 3 Note Expression

Most of these events are already present in VST 3. See VST 3 Interfaces: Event Struct Reference.
Tempo, signature and other timing information is provided via the ProcessContext struct.
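As a hedged sketch of where both of these arrive in process() (standard SDK types; `MyProcessor` is a placeholder):

```cpp
tresult PLUGIN_API MyProcessor::process (ProcessData& data)
{
    // Notes, note expression, polyphonic pressure, etc. arrive as events.
    if (IEventList* events = data.inputEvents)
    {
        int32 count = events->getEventCount ();
        for (int32 i = 0; i < count; ++i)
        {
            Event e {};
            if (events->getEvent (i, e) != kResultOk)
                continue;
            switch (e.type)
            {
                case Event::kNoteOnEvent: /* e.noteOn.pitch, e.noteOn.tuning, ... */ break;
                case Event::kNoteExpressionValueEvent: /* per-note controller */ break;
                default: break;
            }
        }
    }

    // Tempo and time signature come from the process context, not from events.
    if (data.processContext && (data.processContext->state & ProcessContext::kTempoValid))
    {
        double bpm = data.processContext->tempo;
        (void)bpm; // use for host sync
    }
    return kResultOk;
}
```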

Why do you want to hide from the host which MIDI CC messages your plug-in supports? Even MIDI 2.0 proposes that devices report their capabilities via Property Exchange. That’s exactly what IMidiMapping2 is for.

Where is Property Exchange and MIDI-CI support in your ‘real’ MIDI proposal?

I really don’t understand why you all want to hide information about your plug-in’s use of MIDI, and why you expect a host to send you UMP directly without filtering.

In VST 3 the equivalent of a MIDI group is an event bus.

Got it, thank you!

This is currently in the works together with the MIDI Association, so it’s not yet part of the SDK.

Understood, if disappointing. It would be nice to have the data there so we could experiment and update as the standards evolve.

Use VST 3 Note Expression

Maybe I’m missing something: where is the equivalent note expression event for per-note pitch bend and for detaching/resetting per-note controllers? Do you need to create a parameter for these? (Tricky for detach/reset, because those affect other parameters.)

Tempo, signature and other timing information is provided via the ProcessContext struct.

What makes this tricky is that most plugins don’t care about timing changes, but because the process context lives for the entire callback, you can’t just send tempo/timing change events and let plugins handle them as they need. The host has to split callback buffers instead, which is workable, but it costs performance even in the common case of plugins that don’t care.
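For illustration only, a purely conceptual (non-SDK) sketch of what that host-side splitting means; all of the types here are hypothetical:

```cpp
#include <cstddef>

// Hypothetical host-side types, only to make the splitting concrete.
struct Block { float* samples; size_t numSamples; };
struct TempoChange { size_t offsetInBlock; double newBpm; };

// Stand-in for one plug-in process() call at a fixed tempo.
static void processSlice (Block block, double bpm) { (void)block; (void)bpm; }

// One hardware callback becomes two plug-in calls when the tempo changes
// mid-buffer, because the process context carries only one tempo per call.
void renderBlock (Block block, double currentBpm, const TempoChange* change)
{
    if (change && change->offsetInBlock > 0 && change->offsetInBlock < block.numSamples)
    {
        // Split: first slice at the old tempo, second at the new one.
        processSlice ({block.samples, change->offsetInBlock}, currentBpm);
        processSlice ({block.samples + change->offsetInBlock,
                       block.numSamples - change->offsetInBlock},
                      change->newBpm);
    }
    else
    {
        processSlice (block, currentBpm); // common case: one call, one tempo
    }
}
```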

Why do you want to hide which MIDI CC messages you support in your plug-in to the host?

It’s desirable for MIDI events and parameter changes to have different semantics. There’s no distinction between parameter automation that should persist (e.g. a change to a parameter delivered on the audio thread that should be saved in setState, become part of presets, etc.) and non-persistent parameter events that are part of the playback state. Users often use MIDI clips for the latter.

Essentially, parameter data is part of the persistent state of a plugin, while MIDI events are part of the live I/O that does not affect that persistent state, just as audio I/O only affects ephemeral state. The workaround to achieve those semantics is to create a dummy parameter for each MIDI event you plan to handle, which is a very large number in MIDI 2.0.
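For context, a rough sketch of that MIDI 1 workaround using the shipping IMidiMapping interface. `MyController` and the ID scheme are placeholders, and the controller is assumed to derive from EditController and implement IMidiMapping:

```cpp
#include "public.sdk/source/vst/vsteditcontroller.h"
#include "pluginterfaces/vst/ivstmidicontrollers.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

// Placeholder ID scheme: one hidden dummy parameter per (channel, CC) pair.
static ParamID ccParamID (int16 channel, CtrlNumber cc)
{
    return 10000 + channel * kCountCtrlNumber + cc;
}

tresult PLUGIN_API MyController::initialize (FUnknown* context)
{
    tresult result = EditController::initialize (context);
    if (result != kResultOk)
        return result;

    // 16 channels x kCountCtrlNumber controllers, hidden from the user,
    // existing only so CC traffic can reach the processor as parameter changes.
    for (int16 ch = 0; ch < 16; ++ch)
        for (CtrlNumber cc = 0; cc < kCountCtrlNumber; ++cc)
            parameters.addParameter (STR16 ("MIDI CC"), nullptr, 0, 0.,
                                     ParameterInfo::kIsHidden, ccParamID (ch, cc));
    return kResultOk;
}

// IMidiMapping: report which dummy parameter each CC lands on.
tresult PLUGIN_API MyController::getMidiControllerAssignment (
    int32 busIndex, int16 channel, CtrlNumber midiControllerNumber, ParamID& id)
{
    if (busIndex != 0 || midiControllerNumber >= kCountCtrlNumber)
        return kResultFalse;
    id = ccParamID (channel, midiControllerNumber);
    return kResultTrue;
}
```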


Hi Arne,

This proposal addresses the use-case of communicating a musical performance to an instrument plugin, i.e. notes and continuous controllers.

Why not the existing Steinberg note events?

These are fine; they do the job nicely and add note expression support, which is a great feature. But most plugin developers target multiple plugin standards, so the prospect of having to spend time and money supporting multiple competing note standards is a lost opportunity, especially now that MIDI 2.0 supports per-note controllers anyhow.
Secondly, MIDI provides a simpler and less burdensome mechanism for supporting continuous controllers (CCs).
There is nothing wrong with VST’s ability to have the host map CCs to parameters for the purpose of automating those parameters; it’s a great feature. But the need to create hundreds of ‘fake’ parameters just to support musical CCs like the mod wheel becomes a real pain point, especially with MPE, and especially when a DAW presents a confusing list of all those ‘accidental’ parameters to the user.

Why this particular implementation?

Since these musical performance events need to be timestamped and available to the process method, using the Steinberg event list seems an uncontroversial and straightforward mechanism. I doubt that you yourself would choose a significantly different implementation.

What about MIDI-CI?

MIDI-CI is a good feature. But it is fundamentally different from a musical performance: it is a bi-directional protocol, and it is used to configure an instrument prior to the actual performance.

The problem with using the Steinberg event list for MIDI-CI is that any reply from the instrument to a MIDI-CI inquiry would be subject to the plugin’s natural latency; i.e. it would be impossible to configure the instrument before the first note-on event (potentially) arrived in the event list. Therefore, this proposal is not a good fit for that use-case.

That’s why this proposal does not deal with MIDI-CI. MIDI-CI, in my opinion, requires a different mechanism, one that possibly runs off the real-time thread as part of the setup of the AudioProcessor (before IAudioProcessor::setProcessing(true) is called).

Secondly, while MIDI-CI might be a great feature, it is in no way essential. We’ve been getting on fine without it for a couple of decades.

I hope that explains my reasoning; I am open to any corrections or improvements.
Jeff


In VST3 the plug-in always has to describe what it is able to do. In this case the plug-in has to tell the host that it supports tuning via note expression; then the host can tune every voice individually over time. This is not a parameter, by the way: a parameter is not per note.
For detach and reset, it is the host’s responsibility to send the correct value for the note expression.
See [3.5.0] Note Expression - VST 3 Developer Portal
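For reference, a trimmed sketch of that capability report plus the matching event handling. `MySynthController` and `setVoiceTuning` are placeholders; the controller is assumed to implement INoteExpressionController, while kTuningTypeID and the event types are SDK-defined:

```cpp
#include "pluginterfaces/vst/ivstnoteexpression.h"
#include "pluginterfaces/vst/ivstevents.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

// Controller side: advertise per-note tuning so the host can offer its
// note expression editing UI for it.
int32 PLUGIN_API MySynthController::getNoteExpressionCount (int32 busIndex, int16 channel)
{
    return 1;
}

tresult PLUGIN_API MySynthController::getNoteExpressionInfo (
    int32 busIndex, int16 channel, int32 noteExpressionIndex,
    NoteExpressionTypeInfo& info /*out*/)
{
    if (noteExpressionIndex != 0)
        return kResultFalse;
    info = {};
    info.typeId = kTuningTypeID; // per-note tuning, i.e. per-note pitch bend
    info.valueDesc.minimum = 0.;
    info.valueDesc.maximum = 1.;
    info.valueDesc.defaultValue = 0.5; // centre = no detune
    info.flags = NoteExpressionTypeInfo::kIsBipolar;
    return kResultOk;
}

// Hypothetical per-voice lookup helper supplied by the synth.
void setVoiceTuning (int32 noteId, NoteExpressionValue value);

// Processor side, inside the process() event loop: tuning values arrive as
// note expression events, matched to the voice via the noteId from note-on.
void handleEvent (const Event& e)
{
    if (e.type == Event::kNoteExpressionValueEvent &&
        e.noteExpressionValue.typeId == kTuningTypeID)
    {
        setVoiceTuning (e.noteExpressionValue.noteId, e.noteExpressionValue.value);
    }
}
```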

That’s indeed a tricky discussion. 99% of plug-ins don’t need sample-accurate tempo change information, and even the remaining 1% work fine with a buffer size of 32.

I’m not sure I can follow you here. If I send a MIDI CC to my Prophet to change the filter frequency and afterwards make a dump of the internal state, that dump contains the change. How would I set the frequency without it becoming part of the next dump?
So I think if you want non-persistent parameters, you have to create them and simply not store them in the plug-in’s state.

What I’m missing in your proposal is this: if such a feature were included in the specification, the excellent editing experience you get with, for instance, note expression would be impossible for a host to implement. How should a host know that a plug-in supports per-note tuning? How should the host give meaningful feedback to a musician editing a per-note controller when no such information is available?
MIDI UMP is completely missing this stuff, the same as the old MIDI 1 byte stream. With MIDI-CI and MIDI Property Exchange, MIDI 2.0 is filling this gap. But all this informational stuff is already in VST 3.
And as you correctly said, that’s part of a bi-directional communication and should not be part of UMP packets. But you should not leave these important bits out if you want to support MIDI 2.0.