Cubase, PC and MIDI 2 (and maybe AI) - what is happening or likely in 2026?

This all suddenly feels a lot more interesting. The current sampling tech takes MIDI snapshots and glues them back together, which imposes a lot of limitations down the line. Yes, we know this technology, and yes, it should not disappear, but essentially velocity sampling is like taking rapid still pictures of something that is moving and then trying to paste these two-dimensional objects back together, often mocking up things like vibrato. Real-world things you bang (which are surprisingly versatile), things you blow (picture Coltrane, his mind, his lungs, his horn) and things you twang have nuances that only play out over time, that move in phrases.
Though a picture of a garden might fool you for a minute as a "real" garden, and even a short clip like Le Prince's "Roundhay Garden Scene" stutters into wobbly life and gives a "sort of" gist, it is robbed of its glories; present MIDI sampling methods are cardboard cut-outs of real sounds, early Charlie Chaplin technology.

Sure, we can bury them, and some folk can get fair resemblances if they stay within the rails, but the ear gets tired quickly and pretty soon it is exposed. AI brings the possibility of microphones with ears. AI can learn to listen. It can create sample banks which include motion and knit them together, not in a cut-and-paste way.
Yes, there is YouTube hyperbole here, and yes, it's not right yet, but the video below shows that DAWs can change radically.
One thing I can see as a barrier is input devices. If, for example, one examines the mechanics of orchestral instruments, one finds the transition from note to note differs widely. A "legato" for a trumpet means to blow a note and then, without starting a new breath or lip articulation, to move the valves until the target note is reached. Some legatos are possible and some are not, at least by these means. A saxophone has no legato, nor does an orchestral harp, and then of course on the mouth organ it is almost unavoidable. I think AI can know this and craft phrases with that knowledge. To some extent, visual human interfaces can compensate for these things, and haptics can improve (real drum skins have billions of sounds), but it is like polishing your shoes to comb your hair.

AI has a new technique called "context-aware stem generation":

When you listen, I would advise ignoring the "lift music" qualities. AI will improve this. At the moment, this is like the Cubase-on-Atari moment.

Anyway, this is primitive but shows possibilities:

Case in point:

Why would we need MIDI for this? Wouldn’t this all be handled by the plugin standard, in our case VST3?

No. I mean, I imagine the relevant MIDI note-on parameters will get abstracted to VST3, but it certainly isn't happening now.

Aren’t profiles in essence just a mapping technique for hardware?

No. MIDI 2.0 Orchestral articulation profiles are designed to replace keyswitches and all other forms of articulation selection for orchestral sound libraries. They define standard ID numbers for pretty much every articulation you can think of, and these get passed in the Note On message as part of the note itself.
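To make "part of the note itself" concrete, here is a minimal sketch of a MIDI 2.0 Note On packed as a 64-bit Universal MIDI Packet, with the articulation carried in the Attribute Type and Attribute Data fields. The attribute-type and articulation values used below are placeholders for illustration, not numbers taken from the published Profile tables.

```c
#include <stdint.h>
#include <stdio.h>

/* MIDI 2.0 Note On as a 64-bit Universal MIDI Packet (two 32-bit words).
 * Word 0: [msg type 0x4][group][status 0x9 | channel][note][attribute type]
 * Word 1: [16-bit velocity][16-bit attribute data]
 * The attribute values below are placeholders, not the Profile's real IDs. */
typedef struct { uint32_t word0, word1; } Ump64;

static Ump64 midi2_note_on(uint8_t group, uint8_t channel, uint8_t note,
                           uint16_t velocity16,
                           uint8_t attr_type, uint16_t attr_data)
{
    Ump64 p;
    p.word0 = ((uint32_t)0x4 << 28) |                    /* MIDI 2.0 channel voice */
              ((uint32_t)(group & 0xF) << 24) |
              ((uint32_t)(0x90 | (channel & 0xF)) << 16) |
              ((uint32_t)note << 8) |
              attr_type;
    p.word1 = ((uint32_t)velocity16 << 16) | attr_data;
    return p;
}

int main(void)
{
    /* Hypothetical: middle C, strong velocity, articulation ID 0x0007
     * standing in for, say, pizzicato; the real IDs live in the Profile. */
    Ump64 p = midi2_note_on(0, 0, 60, 0xC000, 0x02, 0x0007);
    printf("%08X %08X\n", (unsigned)p.word0, (unsigned)p.word1);
    return 0;
}
```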

Well, that's mapping then. Do you have any links to sources where this topic is described or explained?


All this is a combination of MIDI 2 and AI. MIDI 2 brings finer resolution: where previously there were 128 steps, there can now be tens of thousands for velocity and billions for controllers. The key point is that although before one could foresee development along current lines, now anything musical can develop AI attributes. A bar, for instance.
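For a sense of scale: MIDI 1.0 velocity and controllers are 7-bit (128 values), while MIDI 2.0 velocity is 16-bit (65,536 values) and controllers are 32-bit (about 4.3 billion). Here is a naive upscaling sketch; the official translation rules use a more careful bit-scaling so that minimum, centre and maximum line up exactly, so treat this purely as an illustration of the ranges.

```c
#include <stdint.h>
#include <stdio.h>

/* Naive upscaling of 7-bit MIDI 1.0 values into MIDI 2.0 ranges.
 * Illustrative only: the real translation rules use a more careful
 * bit-scaling than a plain left shift. */
static uint16_t up7to16(uint8_t v) { return (uint16_t)(v << 9); }
static uint32_t up7to32(uint8_t v) { return (uint32_t)v << 25; }

int main(void)
{
    printf("7-bit 100 -> 16-bit %u -> 32-bit %u\n",
           (unsigned)up7to16(100), (unsigned)up7to32(100));
    printf("value counts: 128 (7-bit), 65,536 (16-bit), 4,294,967,296 (32-bit)\n");
    return 0;
}
```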

Yes there is bad. But here I am trying to stay away from all that vibe. Everything will change.

A quick AI summary: key features that facilitate this "plug-and-play" experience are:

  • Bidirectional Communication: Unlike MIDI 1.0, where communication was unidirectional, MIDI 2.0 devices can send information back and forth. This allows them to query each other’s capabilities.

  • Profiles: Devices can communicate which profiles they support (e.g., a piano controller profile). When connected, they automatically configure themselves to work optimally together, simplifying the setup process for the user.

  • Property Exchange: This mechanism allows devices to exchange detailed information about their settings and controls, ensuring seamless integration without manual mapping.
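A minimal sketch of how that negotiation might shape behaviour, using invented types and names; the real exchange is carried in MIDI-CI Universal SysEx messages defined by the specification, which this does not attempt to reproduce.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical illustration of the bidirectional negotiation described above.
 * The structs and function are invented; real MIDI-CI uses SysEx messages. */
typedef struct {
    const char *name;
    bool supports_orchestral_profile;   /* learned via Profile negotiation */
    bool supports_property_exchange;    /* learned via Capability Inquiry  */
} Device;

static void negotiate(const Device *a, const Device *b)
{
    /* Both ends can ask and answer questions; MIDI 1.0 had no reply channel. */
    bool use_articulation_attrs =
        a->supports_orchestral_profile && b->supports_orchestral_profile;
    bool auto_map_controls =
        a->supports_property_exchange && b->supports_property_exchange;

    printf("%s <-> %s: articulation attributes %s, automatic mapping %s\n",
           a->name, b->name,
           use_articulation_attrs ? "on" : "off",
           auto_map_controls ? "on" : "off");
}

int main(void)
{
    Device daw = { "DAW", true, true };
    Device kbd = { "Controller", false, true };
    negotiate(&daw, &kbd);
    return 0;
}
```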

Any “calibration” mentioned in search results typically refers to:

  • Digital-to-Analog (MIDI-to-CV) Conversion: Manually adjusting physical potentiometers or using specific software to ensure the correct voltage output from a MIDI-to-CV converter matches the expected pitch in a modular synthesizer. This is a process for older or hybrid analog gear, not an inherent part of the MIDI 2.0 digital specification.

  • Input Device Sensitivity: Calibrating the sensitivity range of specific physical input devices like expression pedals, breath controllers, or velocity-sensitive pads to suit a user’s playing style or the requirements of a specific piece of software.

Thank you for the video, mdu charm. That's a fine example of MIDImanship, a fine piece of music and a fine explanation of MIDI 2 horizons.
My question is: how much skill and tweaking is needed? One of my dreams is, for example, that if a run of eight crotchets is written in by notation, instead of only having the choice of exact-value velocities, there could be an intelligent "performance mode" where, say, the first and third beats are given greater velocity ranges, staccato samples are given a suitable length (they sound terrible in the wrong tempo environment), and real instrument idiosyncrasies (valve movements, string bounce) are reflected and created, not solely in a regular way but with intelligent variance (a rough sketch of the idea follows below).
Thus we might have a strict MIDI performance, and a "performance mode" attempt by the sequencer/notation package with controllable parameters.
I see a move back to the situation where a real composer/conductor, having written something, leaves more to the "performer" - a more intelligent MIDI track. This could of course be modified by an interface, or, ridiculously, by an AI discussion: "Try bars eighteen to twenty this time with more cymbal." Yes, you may laugh, but language models are with us.
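Here is a rough, hypothetical sketch of such a "performance mode" pass over eight crotchets: beats one and three get a stronger, wider velocity range, the other beats a softer, narrower one, with a little random variance, and staccato lengths are derived from the tempo rather than fixed. All the numbers are invented to illustrate the idea, not any real algorithm.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical "performance mode" over eight notated crotchets.
 * The weightings and ranges are invented purely for illustration. */
typedef struct { int beat_in_bar; int velocity; double length_seconds; } Note;

static int humanized_velocity(int beat_in_bar)
{
    int stressed = (beat_in_bar == 0 || beat_in_bar == 2);  /* stress beats 1 & 3 */
    int base  = stressed ? 96 : 72;
    int range = stressed ? 20 : 10;
    return base + (rand() % (range + 1)) - range / 2;       /* small variance */
}

int main(void)
{
    srand((unsigned)time(NULL));

    double bpm = 120.0;
    double beat_seconds = 60.0 / bpm;
    double staccato_seconds = 0.4 * beat_seconds;   /* scale staccato to tempo */

    Note phrase[8];
    for (int i = 0; i < 8; i++) {
        phrase[i].beat_in_bar    = i % 4;
        phrase[i].velocity       = humanized_velocity(phrase[i].beat_in_bar);
        phrase[i].length_seconds = staccato_seconds;
        printf("note %d: beat %d, velocity %d, length %.2f s\n",
               i + 1, phrase[i].beat_in_bar + 1,
               phrase[i].velocity, phrase[i].length_seconds);
    }
    return 0;
}
```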

Z

One time, in early computing (yeah, I was there), when word processors were becoming a thing (Locoscript, anyone?), they were largely an emulation of paper. Then people started to realise that a "word processor" could spell-check, insert today's date, number things automatically. Yet if you had said that a word processor might help you write an essay?... Never!

…..AI can do this, why not in notes?

Thank you but I feel that has nothing to do with what I asked. Perhaps we are misunderstanding each other?

The MIDI-CI Orchestral profile is a standardized mapping for selection of articulations, yes.

Basically, to my admittedly limited understanding, the main idea is that since the articulations for each note are tied to its MIDI note-on message you can copy a MIDI performance from one orchestral Virtual Instrument following the standard to another, and the output (articulation-wise) should be similar. No need to change keyswitches between VIs.

This of course depends on the articulations available for each VI, and the specific variations each may have (which the standard allows for).

This would hopefully make it easier to use orchestral VIs from different manufacturers in the same work while maintaining the theme. E.g. a solo violin from one manufacturer, and a violin ensemble from another.

The basic MIDI 2 specifications (and the specification for the MIDI-CI Profile for Orchestral Articulation) are available from the MIDI Association; non-commercial membership is free. The site is at midi.org, though the specifications seem to be unavailable right at this moment.

There's also a NAMM presentation on the subject from last year available on the MIDI Association's YouTube channel, though as it's a trade-show presentation the sound isn't great and it's a bit unstructured.

As Yamaha is part of the manufacturer coalition driving the development of MIDI 2.0 forward, one can hope Steinberg is working on this at some level as well.


That would be General MIDI for articulations.
But unlike GM, this is part of the core MIDI standard?

Here’s the summary and goals from the Orchestral Articulation Profile v1.0 specification available at midi.org.

2.1 Executive Summary
There are many orchestral sample libraries in the market, and they are essential for film scoring, game audio, studio, and live MIDI applications. These orchestral libraries have many kinds of articulations.

For example, a string library might have a different set of samples for every articulation including marcato, staccato, pizzicato, etc.

However, there is no industry standard method - the method for selecting these different articulations has been different for each developer. Many developers use notes at the lower end of the MIDI note range for "key switching", but the actual keys used are different between different developers. Some developers use CC messages to switch between articulations, but again there is no industry-wide consistency. Some plugin formats now have the ability for per-note selection of articulations, but again the method for inputting that data is different for different applications.

It is the goal of the MIDI-CI Profile for Note On Selection of Orchestral Articulation to provide a consistent way to encode articulation information directly in the MIDI 2.0 Note On message, using the Attribute Type and Attribute Data fields.

In arriving at this Profile, a study was made of orchestral instrument families, choir, big band instruments, guitar, keyboard instruments, and various non-western instruments to evaluate the degree to which they share common performance attributes and sound production techniques. Notation symbols and performance indications were also considered to determine, for example, how successfully a violin note marked with a trill might result in a musically meaningful or analogous articulation when the part is copied to an instrument as far afield as timpani - all without the composer having to re-articulate the timpani part, at least initially.

The Profile provides a comprehensive yet concise system of articulation mapping that includes a wide palette of articulation types and supports articulation equivalence across eight instrument categories.

The Profile was designed to offer articulation equivalence — a system of articulation mapping that allows a passage articulated for one instrument to be copied to another track and played back with an equivalent or analogous articulation, regardless of the target instrument type.
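As a toy illustration of that equivalence idea (the articulation names, IDs and mapping choices below are invented, not the Profile's actual tables):

```c
#include <stdio.h>

/* Invented articulation IDs and an invented equivalence mapping, purely to
 * illustrate "articulation equivalence" as described in the Profile text. */
enum Articulation { ART_NORMAL, ART_STACCATO, ART_TRILL, ART_PIZZICATO, ART_ROLL };

/* What a timpani library might fall back to when handed an articulated string part. */
static enum Articulation timpani_equivalent(enum Articulation a)
{
    switch (a) {
    case ART_TRILL:     return ART_ROLL;      /* trill -> roll: musically analogous */
    case ART_PIZZICATO: return ART_STACCATO;  /* no pizzicato on timpani            */
    default:            return a;             /* keep whatever the target supports  */
    }
}

int main(void)
{
    printf("violin trill copied to timpani -> articulation %d (roll)\n",
           (int)timpani_equivalent(ART_TRILL));
    return 0;
}
```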

When implemented by sample library developers, the Profile will greatly aid composers in highly significant ways.

First, it will simplify the process of substituting or layering sounds from the same or different sample libraries; second, it will allow composers to quickly audition and orchestrate unison passages by copying an articulated part to other tracks and hearing them play back with equivalent or analogous articulations.

2.2 Goals
This Profile specification addresses the following goals, to benefit MIDI product designers and define mechanisms which benefit MIDI users:

  1. Defines standardized mechanisms, commonly usable by all MIDI Devices and sound libraries, for Notes to be tagged with the most common types of musical articulations.
  2. Swappable Libraries/Devices – This allows a musician to enter articulations for individual notes using one sound library or MIDI device and then later switch to a different sound library or MIDI device. In making that switch, the articulations remain musically useful.
    For example, a musician may create articulations for a violin sound from one library and then easily hear those notes with the same articulations on a violin from a separate sound library.
  3. Swappable Instrument Types – This allows a musician to enter articulations for individual notes for one instrument type and then later switch to a different instrument type. In making that switch, the articulations remain musically useful.
    For example, a musician may create articulations for a violin sound and then easily hear those notes with the same articulations on a clarinet.
  4. Atomic Message – Note Articulations are defined as an integral property of a MIDI 2.0 Note On message. Then if a sequence of Notes is edited in a sequencer or DAW application, the articulation remains fixed and attached to the Note, whether moved in time or transposition.
  5. Autoconfiguration – The MIDI-CI Profile mechanisms allow Devices to discover whether these Profile mechanisms are supported by a MIDI Device, helping users to configure and use devices which conform to the Profile.
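A minimal sketch of goal 4 ("Atomic Message") above: because the articulation travels inside the Note On itself rather than as a separate keyswitch event, editing the note cannot detach it. The field names and articulation number here are invented for illustration.

```c
#include <stdio.h>

/* Invented note representation: the articulation is a property of the note,
 * not a separate event, so edits cannot separate the two. */
typedef struct {
    double   start_beats;
    int      note_number;
    unsigned articulation_id;   /* rides with the note, unlike a keyswitch */
} Note;

static Note transpose_and_move(Note n, int semitones, double offset_beats)
{
    n.note_number += semitones;
    n.start_beats += offset_beats;
    return n;                    /* articulation_id travels along untouched */
}

int main(void)
{
    Note n = { 1.0, 60, 7 };                     /* hypothetical articulation 7  */
    Note edited = transpose_and_move(n, 5, 2.0); /* up a fourth, two beats later */
    printf("note %d at beat %.1f, articulation still %u\n",
           edited.note_number, edited.start_beats, edited.articulation_id);
    return 0;
}
```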

Once again, to the best of my understanding.

Three of the four core documents of the MIDI 2.0 standard concern MIDI-CI and profile standardization. The fourth document is the actual MIDI 2.0 protocol specification.

CI stands for Capability Inquiry, which means it allows devices to communicate and determine whether they both support a specific MIDI-CI Profile. So just because a manufacturer advertises a device as supporting, say, MIDI 2.0 Note-On high-resolution velocity data (which is in the Protocol specification) doesn't mean that it necessarily supports all the additional Note-On attribute data defined in the various CI profiles (e.g. articulations in the Orchestral profile, or string and soundboard resonance in the Piano profile).
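To illustrate that distinction, a sender might gate the articulation attribute data on the outcome of the MIDI-CI negotiation while still using the protocol's high-resolution velocity. The function and the attribute-type value below are assumptions made for illustration, not anything from the specification text quoted here.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sender: high-resolution velocity comes from the MIDI 2.0
 * protocol itself, but articulation attribute data is only attached once
 * the Orchestral profile has been negotiated via MIDI-CI. */
static void send_note_on(bool orchestral_profile_enabled,
                         uint8_t note, uint16_t velocity16,
                         uint16_t articulation_id)
{
    uint8_t  attr_type = orchestral_profile_enabled ? 0x02 : 0x00;  /* 0 = none */
    uint16_t attr_data = orchestral_profile_enabled ? articulation_id : 0;
    printf("note %u vel %u attrType %u attrData %u\n",
           (unsigned)note, (unsigned)velocity16,
           (unsigned)attr_type, (unsigned)attr_data);
}

int main(void)
{
    send_note_on(true,  60, 0xB000, 7);  /* profile negotiated: tag the note */
    send_note_on(false, 60, 0xB000, 7);  /* not negotiated: plain note-on    */
    return 0;
}
```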


The video that I linked to showed how that might work in the particular case of a MIDI 1.0 hardware controller feeding a MIDI 2.0-capable DAW and a MIDI 2.0 plugin, both supporting the orchestral articulation profile. The DAW itself would provide a means of articulation selection for the MIDI 1.0 keyboard, like keyswitches or something of the sort, but it would then insert the orchestral articulation data into the note-on messages that get sent to the plugin and strip out the keyswitch. This is similar to the way Cubase Expression Maps work today, with remote keys defining the keyswitches that the user can use to trigger specific map slots - the difference being how those are communicated to the plugin. Currently those are turned into other keyswitches or CCs by the map, which go to the plugin itself; in the future those should be handled by the new attribute data of the note-on messages.
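As a rough sketch of that translation step (the keyswitch range, articulation numbering and function names are all invented here, not Cubase's or the specification's actual values):

```c
#include <stdint.h>
#include <stdio.h>

/* Invented sketch: the DAW remembers the last MIDI 1.0 keyswitch it saw,
 * swallows it, and stamps the corresponding articulation ID into the
 * attribute data of the next MIDI 2.0 Note On sent to the plugin. */
static uint16_t current_articulation = 0;

static void on_midi1_note_on(uint8_t note, uint8_t velocity7)
{
    if (note < 24) {                      /* hypothetical keyswitch range      */
        current_articulation = note;      /* map keyswitch -> articulation ID  */
        return;                           /* do not forward the keyswitch      */
    }
    uint16_t velocity16 = (uint16_t)(velocity7 << 9);   /* naive upscale */
    printf("MIDI 2.0 Note On: note %u vel %u attrData %u\n",
           (unsigned)note, (unsigned)velocity16, (unsigned)current_articulation);
}

int main(void)
{
    on_midi1_note_on(12, 100);   /* keyswitch: select articulation 12       */
    on_midi1_note_on(60, 100);   /* melodic note carries that articulation  */
    return 0;
}
```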

This is going to have interesting repercussions for GUI design of instrument plugins because currently a large swath of the GUI is often taken up by articulation selection stuff (for configuring keyswitches) and that can now mostly disappear. Having the articulation in the note-on means of course that you could theoretically trigger many notes on the same channel simultaneously that all have different articulations, so I’m not sure it would make sense to show the “current articulation” in the sample player in the same way as today. You could layer, say, legato and staccato in unison on the same MIDI channel.

Thanks. So I got it right. It is GM for articulations.
Do you midi.org guys plan on creating a logo for this? I can imagine that library retailers will want to show customers that a product supports this standard.


Part of it is certainly like a GM for articulations (assigning standardized numbers to techniques), but giving numbers to the techniques would be useless without an actual system to communicate them over. GM can simply refer to the existing “program change” system in MIDI and use that, but this standard has to define a similar system for articulations, in this case happening in the note-on message. So it would be more accurate to say it would be like program changes and GM but for articulations.

The system for articulations also has some special regions where manufacturers can add their own custom IDs for things not really covered by anything else, in the same way that GM-compliant devices could add the ability to use bank changes for custom sounds instead of being limited to only providing the GM set.
