Cubase on PC and MIDI 2 (and maybe AI): what is happening, or likely, in 2026?

Yes, we have waited. Let’s not get into moans.

I gather there are developments for MIDI 2 on Apple platforms. Maybe 2026 for Windows?

MIDI 2 promises:

MIDI Capability Inquiry (MIDI-CI) is a bidirectional communication protocol that allows MIDI 2.0 devices to automatically discover and configure each other’s features, ensuring better compatibility and interoperability. Imagine, true plug and play!

MIDI 2 is backwards compatible.

MIDI 2 brings much finer control of MIDI parameters. In MIDI 1, CCs can only have 128 values.

MIDI 2 brings exponentially more refinement, and it is much more bidirectional.
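To illustrate the resolution difference: the MIDI 2.0 protocol widens controllers from 7 bits (128 values) to 32 bits, and the MIDI 2.0 translation guidance defines a bit-repetition algorithm for upscaling old 7-bit values so that minimum, center, and maximum all map exactly. A minimal Python sketch of that algorithm, as I understand it from the spec (treat it as illustrative, not authoritative):

```python
def scale_up(src_val, src_bits, dst_bits):
    """Upscale a MIDI value (e.g. a 7-bit MIDI 1.0 CC) to a wider
    MIDI 2.0 value using the bit-repetition scheme described in the
    MIDI 2.0 translation guidance."""
    scale_bits = dst_bits - src_bits
    bit_shifted = src_val << scale_bits
    src_center = 1 << (src_bits - 1)
    # At or below center, a plain shift is exact (center maps to center).
    if src_val <= src_center:
        return bit_shifted
    # Above center, fill the low bits by repeating the value's low bits
    # so that the maximum input maps to the maximum output.
    repeat_bits = src_bits - 1
    repeat_value = src_val & ((1 << repeat_bits) - 1)
    if scale_bits > repeat_bits:
        repeat_value <<= scale_bits - repeat_bits
    else:
        repeat_value >>= repeat_bits - scale_bits
    while repeat_value:
        bit_shifted |= repeat_value
        repeat_value >>= repeat_bits
    return bit_shifted
```

With this scheme a MIDI 1.0 CC of 127 becomes 0xFFFFFFFF (full scale) and 64 becomes 0x80000000 (exact center), rather than the slightly-short values a plain bit shift would give.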

MIDI 2 has Property Exchange. As I understand it, this is a feature within the MIDI-CI (MIDI Capability Inquiry) specification that allows MIDI 2.0 devices to exchange detailed information with each other, such as preset names, parameter settings, and controller mappings.
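For a sense of what Property Exchange actually carries: PE payloads are JSON text, transported inside MIDI-CI SysEx messages, for named resources such as DeviceInfo. A hedged sketch; the resource name comes from the PE spec, but the field values below are purely illustrative, not from any real device:

```python
import json

# Hypothetical PE "Get" inquiry header asking for the DeviceInfo resource.
inquiry_header = {"resource": "DeviceInfo"}

# An illustrative reply body a device might return (made-up values).
reply_body = {
    "manufacturer": "ExampleCo",
    "model": "ExampleSynth",
    "version": "1.0",
}

# PE payloads travel as JSON text inside MIDI-CI SysEx messages.
encoded = json.dumps(inquiry_header).encode("ascii")
```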

MIDI 2 Profile Configuration is the process of automatically mapping controls between MIDI 2.0 devices for a specific use case, eliminating manual setup. It is negotiated through the same MIDI-CI (MIDI Capability Inquiry) mechanism.

It would be good if we curated a thread about developments in hardware, which seem to be lacking so far, and developments in DAW capabilities, particularly Cubase 15.

2 Likes

The new Windows MIDI Services (supporting MIDI 2.0) is supposed to roll out early next year at the latest.

1 Like

Thank you paka. Is there any advantage to downloading the preview now, for Cubase 15, if you do not yet have any MIDI devices? Are there any VSTs that are MIDI 2 compatible? I understand Cubase has been ‘compatible’ since version 14 (according to Steinberg), but there were no big waves.

The preview only works with Windows Insider builds, so requires participation in the Insider program and a separate Windows installation.

Very few VSTs make use of the additional MIDI 2.0 specifications, to my knowledge. The most significant one would be Synthogy’s Ivory 3 (a piano), and I think that only uses the high-resolution velocity value.
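For context on what "high-resolution velocity" means: a MIDI 2.0 Note On travels as a 64-bit Universal MIDI Packet with a 16-bit velocity field (65,536 levels instead of 128). A Python sketch of the packing, following the UMP field layout as I read the spec; the helper name is mine:

```python
def ump_note_on(group, channel, note, velocity16, attr_type=0, attr_data=0):
    """Build a MIDI 2.0 Channel Voice Note On as two 32-bit UMP words.
    Word 0: message type 4 | group | opcode 9 | channel | note | attribute type.
    Word 1: 16-bit velocity | 16-bit attribute data."""
    word0 = ((0x4 << 28) | ((group & 0xF) << 24)
             | (0x9 << 20) | ((channel & 0xF) << 16)
             | ((note & 0x7F) << 8) | (attr_type & 0xFF))
    word1 = ((velocity16 & 0xFFFF) << 16) | (attr_data & 0xFFFF)
    return word0, word1
```

So middle C at full velocity would be the pair of words (0x40903C00, 0xFFFF0000), versus the single status/data byte triplet of MIDI 1.0.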

2 Likes

If you do not have any MIDI devices, there’s no reason to download the preview. The majority of benefits it provides to MIDI 1.0 are only realized when you have some hardware. The MIDI 2.0 bits require software changes in the DAWs once we release the 1.0 version of our MIDI 2.0 SDK.

As paka said, it only works on Windows Insider Canary releases today.

We’re looking to get this in-box in retail Windows in Q1 2026. Originally, it was going to be end of this month, but we found a couple last-minute issues with DJ equipment, and one performance issue with USB 2 / USB 3 SysEx transfer. We’ve fixed both, and have respun the release. However, we do not release anything in December, and January ends up packed, so that pushes us out to end of February.

As a note: MIDI CI can work without any OS changes. It requires that DAWs and the devices implement the functionality. CI itself is just a specially formatted set of SysEx messages.

Pete
Microsoft

5 Likes

Some great answers, Pete.

1 Like

As an aside, Pete: as a hybrid of AI and MIDI 2 refinement, I should very much hope that there are new, real ways for people to handle the question of emulating orchestral and other natural instruments. The way we do it now reminds me of the old paint-by-numbers approach: one could take a copy of some fine art, divide it down into basic areas of wash, then reproduce it. It loses so much texture. Vibrato, for example, drives me crazy. I play horns and keys; vibrato is so much more. When can we choose reeds in a VST? I have never heard a VST horn that comes anywhere near close; it’s lift music. What kind of drummer hits a skin in exactly the same way each time? Answer: only a VST drummer. AI can do this stuff; it can bring wobble, reality. Instead of sitting in a cafe looking at a postcard of a beach, I want to sit on the beach!

There’s always room for improvement, but the Audio Imperia folks make some great synth and orchestral libraries with lots of variation.

Pete
Microsoft

1 Like

What about multi-client? I thought I had read that even older devices will become multi-client with the new stack but maybe I’m mistaken. It is a fairly common issue where I need to open up multiple apps at the same time that all try to use my MIDI ports and then it becomes a bit of a problem.

Sorry, I wasn’t listing all the features of Windows MIDI Services in this thread. 🙂

If you’re asking if multi-client is useful without any MIDI devices, the answer is: probably not really. The multi-client limitation today is that only one app can access a MIDI 1.0 device. In the new system, the only direct client for the device is our new service, and all apps connect to the service. Therefore, every MIDI connection becomes multi-client-capable without having to have any custom drivers.

We’ve replumbed WinMM MIDI 1.0 and WinRT MIDI 1.0 to go through the new service, so they will get multi-client access to devices as well.

A quick video I did a while back, on an older version of the service, showing the built-in translation as well as the multi-client capabilities.

(The MIDI Console tool is part of the downloadable SDK runtime and tools)

Very brief network MIDI 2.0 demo (no sound) in case you’re interested. Also showing multi-client, and the console monitor and console “play notes” feature. REAPER hasn’t been updated for the new API, but it’s able to get the notes, which are being sent out over network MIDI, and then being looped back over Network MIDI and into an available port.

And finally, a walkthrough of an older rev of the Console. There’s also a GUI MIDI Settings app, but I don’t have a video of that handy.

Pete
Microsoft

Oh my apologies, I misread your previous answer. My brain somehow inserted a “2.0” in there where there wasn’t one, and so I read “If you do not have any MIDI 2.0 devices, there’s no reason to download the preview”. That’s why I was asking about the multi-client for MIDI 1.0 devices. Sorry for the confusion!

1 Like

Hi Psyche,

Continuing the aside: I have really fussy ears; it’s a curse. I have played in orchestras, brass bands, rock bands, jazz, blues, etc. I also own about ten or so top orchestral libraries. I have played, in chronological order: trombone, trumpet, cornet, guitars, flute, saxes, real Hammonds, real Rhodes, and so many keyboards I can’t count. The current state of sampling NEVER convinces my ears.

There is a long discussion here which we both understand, so here are a couple of quick points. If you hit a drum skin you never get the same oscilloscopic reading back, ever. The placement of the stick, the tension of the skin, the room reverb: it’s never the same. One might not be able to consciously identify it, but one hears it. Every real instrument has its character, and legato is different for every instrument that can do it. On a trumpet, there is a sound which is a kind of slide between one valve position and another, with no lipping. You can’t do this on a sax, or a harp. There are literally thousands of reasons why sampling in its current state does not work. Staccato in particular makes me shiver, because it’s such a personal thing, dependent on the setting (think James Brown horns, think Tchaikovsky’s Nutcracker). Staccato is phrased and has different dynamics; yes, you can alter this a bit.

Po: do you know this word? It was coined by Edward de Bono, the lateral thinking guy. It means entertaining the ridiculous as a solution, on the way to a new solution. Briefly, it butts into the scientific method: assess, develop a hypothesis, design a method to test that hypothesis, run the test, and if the solution is not satisfactory, develop a new hypothesis and go around the cycle again. Po is the creative energy of randomness, applied before the generation of the new hypothesis. Po breaks all rules: Cubase, for example, might become a dancing duck, the new Queen of Siberia, or whatever you wish. Po is choosing a random frame of reference and applying it inappropriately, in order to discover new aspects of its target. It’s related to brainstorming but is a more exact, defined tool. Using Po does not mean that you are not reasoning; if you build a house from Lego bricks, there may be a stage where you dismantle stuff and lay it on the floor. Po is an integral part of the scientific method which is often overlooked.

One human failing that can affect a whole industry is “solution-blindness”. By this I mean that once a solution to a problem has been found to be satisfactory, thinking about other solutions dries up. When people first realised that computers could type and print, the early solutions emulated paper, but later it was realised that they could do much more than a typewriter: check grammar, add pictures, include hyperlinks.

We now have AI, we have gaming tech, we have superfast CPU computing, we have language models, we have superfast GPUs. We are soon to have photonic computers, which are claimed to compute around 1000 times faster. The chips are here.

Enough preamble:

Po: a computer can produce a visit to a concert hall by a conductor-composer.

Now imagine a game, not a music device, a game. Uses GPU.

You walk into the room. You are Majestico, the great composer and conductor. All around you are musicians that you can select for your project. That’s Yehudi Menuhin sitting in the corner.
Today, though, you have decided that you want a jazz quintet.

You call forth the saxophonists. Before you sit Stan Getz, Coltrane, Ben Webster, Parker, etc. Feeling in a Latin mood, you gravitate towards Getz. You get him to play a run or two; a list of arpeggios comes up for your convenience. You change his mouthpiece and harden the reed (which gives a softer sound), and now you have something to work with.
You look for a trumpet. You have purchased Gillespie, Chet Baker, Bix Beiderbecke, Miles Davis. All the samples from each player have been curated from the net by AI. Well, you think, maybe Chet playing Latin would be interesting… you choose his mute.

Maybe you choose your bass by describing the qualities you desire: subtle, fewer notes, Latin sensitivities, emulating a surdo. A player appears matching your requirements.

You choose your band.

Next you choose your concert hall. Which place? You are presented with clips of a smoky Ronnie Scott’s, the Royal Albert Hall, etc. You choose your venue (and its reverb) and you position your people on the stage. You get to choose mics and mic positions, these mics now being given an AI treatment: they are “realer”.
So now you need music. You can simply import your notation, or you can click into a notation app like Dorico and do your business. Dorico can now be talked to. You can, for example, ask: “create me a Cuban Latin style, 32 bars AABA, key Db, tempo 85, use some swing and keep it dolce”. You may not wish to do this; you may wish to create your own lines, you can ask AI to do this, or you can write it in manually. Dorico now needs fewer buttons, as you can simply say “please accent all the quavers in bar nine and reverse their stems, please move them to voice 2”.

Cubase AI is also available. You can load a staff into C.AI and sample audiences clapping, for example, if you so choose. Cubase is now able to talk to you too; it can trawl the internet for samples of clapping according to your request (“clapping in a big hall, with yelps”) or, if you want something different, “create a reverb from a bat cave”.
Being game-based in its approach, one can enter a concert hall, view rooms, view notation, oscilloscopic views, and key editors as one pleases.

This is my Po of the future of Cubase and the music industry, once we realise we do not have to stick with current “Charlie Chaplin era” technology and solutions.

People need to think differently.

Z

1 Like

One of my main arguments against these benign uses of AI is that it normalizes its use. AI has the capability to do horrible damage to society (and already is), and we shouldn’t turn a blind eye to that just because it will make certain tasks easier. And that’s before it’s combined with quantum computing. Then there’s the incredible environmental damage that AI data centers are already doing, which is only expected to grow exponentially. So that said, I have a feeling Cubase 14 may be my last version.

5 Likes

I couldn’t agree more. Unfortunately, it’s too late to stop it; damage control is the best we can hope for.
It’s not about spoiling the party for the people who can’t wait for AI to take over and who think it comes with creative empowerment. It’s AI’s implications for culture and its impact on society that worry me deeply.

4 Likes

It is, but the “AI” there stands for “Advanced Integration”, since it was primarily designed to work with Steinberg and Yamaha hardware. It is a version of Cubase LE that can record twice as many tracks and has a few more effects and features than LE.

2 Likes

Don’t we kind of need synths and plugins that actually support MIDI 2.0 before we start freaking out about including it?

The biggest thing I personally want from MIDI 2.0 is the orchestral articulation profiles. Some of the other features are less important to me. But it is going to take support by one of the major orchestral plugin vendors and one of the major DAWs before it will start to gain traction. It will probably have to start with a DAW vendor, as an orchestral library vendor will need to be able to use the technology in something.

1 Like

Not really. VST3 already has high-resolution data internally, and VST plugins generally know nothing about the actual MIDI messages being sent. It’s the DAW doing the translation for them.

There are some things that can be added to VSTs, and Steinberg recently added some of that to the VST3 spec.

For physical hardware, it was nice to see all that high-resolution data coming in from my MIDI 2.0 hardware.

Pete
Microsoft

Andrew Mee (Chair of the Technical Standards Board) and I (Chair of the Executive Board) are working together in the MIDI Association to get hardware and software companies synchronized on the profile implementations. Right now, you need everyone to support it at the same time, and there isn’t the cross-company coordination. We’re working to help provide that. We’re going to discuss this more in our Annual General Meeting for MIDI Association members next week.

Pete
Microsoft

7 Likes

Maybe you’re right, but we cannot stop this; genie and bottle.