Here’s my term for it: Dorico AI “Performance Mode” (PM). I consulted my crystal ball.
Imagine we have two basic modes of playback: first, basic MIDI-based playback, and then an AI Performance Mode. Essentially this is where AI parses some MIDI input and “conditions” it to sound like the real thing. A quick example: take a MIDI trumpet. On the real thing, the transition between one legato note and the next is conditioned by the continuous blowing over a set of notes and the nature of the valve transitions. These are idiosyncratic; indeed, the term legato means widely different things on each instrument, subject to its mechanics. Some instruments cannot do it at all; a triangle or a woodblock would be examples.
Consider a trumpet “hat” mute: the way the hand moves the mute, musically, to shape notes is not easily replicated by samples. AI can replicate this.
Staccatos are another example. I was surprised to find that there are whole ranges of staccatos on each instrument; when I try the generic Orchestral Package X samples, they often end up not suiting the tempo, being either too long or too short, with too much or too little attack.
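To make the staccato point concrete, here is a minimal sketch of the kind of tempo-aware “conditioning” a Performance Mode could apply automatically. Nothing here is a real Dorico or sample-library API; the function name, the note tuples, and the thresholds are all invented for illustration.

```python
# Hypothetical sketch: tempo-aware staccato shaping.
# Notes are (start_beat, duration_in_beats, velocity) tuples.
# All names and numbers here are illustrative assumptions, not any
# real Dorico or MIDI-library API.

def shape_staccato(notes, tempo_bpm, articulation_ratio=0.4):
    """Cap staccato notes at a fixed fraction of the beat, and trim
    the attack (velocity) slightly at faster tempi."""
    shaped = []
    for start, dur, vel in notes:
        new_dur = min(dur, articulation_ratio)  # cap length at a fraction of a beat
        # At fast tempi, soften the attack a little so repeated
        # staccatos don't hammer; purely a heuristic for illustration.
        new_vel = vel - 10 if tempo_bpm > 140 else vel
        shaped.append((start, new_dur, max(1, new_vel)))
    return shaped

notes = [(0.0, 1.0, 96), (1.0, 0.3, 96)]
print(shape_staccato(notes, tempo_bpm=160))
```

The point of the sketch: because durations are expressed in beats, the same articulation automatically gets shorter in absolute time at a faster tempo, which is exactly what fixed-length samples fail to do.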
Take another example: a guitar playing open chords. On the real instrument, the notes that ring out and sustain are controlled by the mechanics of the fingers on the fretboard; often notes are left to ring, and sometimes they are deliberately damped. As one moves up the neck, the tonality changes, not just the velocity.
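The “let ring” behaviour above can be sketched as a simple rule: open-string notes sustain past their written duration until the next chord change damps them. This is a toy model under invented assumptions (note tuples, MIDI pitch numbers for the open strings), not a real implementation.

```python
# Hypothetical sketch of guitar "ring-out": open-string notes are
# allowed to ring until the next chord change instead of stopping at
# their written duration. Notes are (start_beat, duration, pitch)
# tuples; everything named here is illustrative.

OPEN_STRINGS = {40, 45, 50, 55, 59, 64}  # E A D G B E as MIDI pitches

def let_ring(notes, chord_changes):
    """Extend open-string notes to the next chord change."""
    ringed = []
    for start, dur, pitch in notes:
        if pitch in OPEN_STRINGS:
            # find the next chord change after this note starts;
            # if there is none, keep the written duration
            next_change = min((c for c in chord_changes if c > start),
                              default=start + dur)
            dur = max(dur, next_change - start)
        ringed.append((start, dur, pitch))
    return ringed

notes = [(0.0, 0.5, 40), (0.0, 0.5, 47)]  # low open E plus a fretted B
print(let_ring(notes, chord_changes=[2.0, 4.0]))
```

Only the open E is extended; the fretted note keeps its written length, which is the distinction samples alone struggle to make.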
Now, we can buy AI guitars and trumpets, but if Dorico had a Performance Mode which applied intelligent processes to the way a MIDI track is delivered, contouring it, for example, for a Purcellian non-vibrato performance or a Chet Baker cool drool, this would save so much unnecessary work. Instead of composers being keyswitch-engineering warriors, we could concentrate (more so) on the performance.
More than this, AI could do things like take real notice of markings like sostenuto or dolce, or it could give better authenticity to beats and crescendos.
Before anyone points out that this takes away creativity, I want to point out that a real composer, even when standing on the podium conducting their own work, hands over control of the actual delivery to the brains of the performers. I think AI will take us back to this, though I hope we will retain the ability to make finer edits.
There is no reason why an AI language model could not be used so that the composer could make remarks like “give more accent to the first beats” of an orchestral piece, or “play with more vivace”, or “bring back the brass”.
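To show how little plumbing such a remark needs once it has been interpreted, here is a sketch where a trivial keyword match stands in for the language model. The function name, the keyword matching, and the note tuples (bar, beat, velocity) are all hypothetical; a real system would have the model emit the transform.

```python
# Hypothetical sketch: turning a plain-language remark into a bulk
# edit of the performance. A trivial keyword match stands in for a
# language model here. Notes are (bar, beat, velocity) tuples;
# everything named in this sketch is illustrative.

def apply_remark(remark, notes):
    """Interpret a remark like 'give more accent to the first beats'
    and return adjusted notes."""
    if "accent" in remark and "first beat" in remark:
        # boost the velocity of every downbeat, clamped to MIDI's 127
        return [(bar, beat, min(127, vel + 15)) if beat == 1
                else (bar, beat, vel)
                for bar, beat, vel in notes]
    return notes  # unrecognised remarks leave the performance untouched

score = [(1, 1, 80), (1, 2, 80), (2, 1, 80)]
print(apply_remark("give more accent to the first beats", score))
```

The interesting part is not the velocity arithmetic but the scope: one sentence edits every bar of the piece at once, which is exactly what keyswitch-by-keyswitch editing cannot do.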
This is my idea of Performance Mode: a mode of Dorico which intelligently parses each instrument so that the sound is more authentic than a basic MIDI rendition, and a mode where, via a language model, we can control other parameters too, across the whole piece at a time.
This is beyond the notion of just AI instruments. It applies to the whole performance.
Here are some great examples of AI hugely improving over basic MIDI. The software is very primitive, but the direction is undeniable.