Expression Maps

IMO what we really need (via the scripting interface, to get it visible outside the core program?) is something that simulates NotePerformer’s “intelligence” but without NotePerformer’s kludge of requiring a fixed set of playback parameters for articulations etc. so that it can reverse-engineer what the Dorico file contained from the MIDI timing data.

Link that to a top-end library, and playback from a notation program might actually start to sound realistic for the first time ever :slight_smile:

Thanks Paul for the post. It’s always really useful to hear things from Dorico’s perspective.

In front of me now is an NI KK keyboard. 88 keys, two faders, multiple buttons and 8 knobs with LED screens showing what each knob does. I can load a library and have all the controls I need to get the sound I want right in front of me. Many libraries (Spitfire, Project Sam, etc.) have made maps for the instrument. The inevitable question is: why do they make these maps, but not expression maps?

I believe the answer goes to the root of the problem. These libraries are geared towards performance, not programming.

It’s performance that brings them to life and elevates them to sound the way manufacturers intended. Played without expression, without natural variations in dynamics and timing, they never actually leave step 1 of the process - playing the correct sounds. Step 2 is humanising and adding expression. Step 3 is correcting errors and fine tuning. To get the most out of these libraries and deliver a quality result you need to go all the way to Step 3.

Many DAWs come from the direction of building on live recording (performance) - thereby covering steps 1 and 2 at the same time. Dorico’s note input and other step-input programs leave you perfectly positioned at the end of step 1.

Yes, it’s possible to fudge step 2 by programming in parameters that try to humanise what you’ve created but a) you need the patience of a saint, b) a serious amount of free time and c) it still won’t sound as realistic as if you’d played it live.

Exactly right.

Hi Witold,
I just wanted to give my Project SAM Orchestral Brass Classic library a try in Dorico. I just found your post and I would be very interested in your map.
Heiko

I’ve considered it as a third party venture. Given the current state of score translation technology I’d be embarrassed to charge anyone for the work for quite a few instrument families. I could pour hours into making maps for some of the more popular libraries, but you’d end up having to rework them for each score anyway if you’re after a real high quality mock-up straight out of Dorico. Here’s why.

  1. Different tempos and styles of music can require different things (some instrument families are more complicated than others). While there ARE plans for it all in the future, currently neither Dorico nor any other score-interpretation software I know of keeps plugins updated with the current tempo/meter/etc. in real time. This makes it rather difficult to build a single set-up that’s going to sound good with every score.

  2. Even if Dorico supplied that information (meter/tempo) to all the plugins, not all sample library players know what to do with it out of the box (especially NOT those libraries built for orchestral music). The VST3 generation of instruments capable of taking advantage of note expression, tempo, meter, and other information that can be supplied over VST is still in the womb, or consists of quite expensive ‘power user’ shells (that at this time still require YOU to build and/or script it all in as you go).

  3. A list of issues with how the Dorico play-back engine currently behaves. I will not go into all of them here, but I do send my suggestions and opinions to people on the Dorico Team, and they ARE putting ideas on the drawing board and working diligently to address every single issue. It just takes a little time.

Example…
Say you’ve invested in one of the fancy libraries that has all sorts of nice articulations and variations for every instrument. How about bowing styles?

Sometimes you want notes living under a dot to be staccato, sometimes you want spiccato, sometimes martelé, sautillé, and so on. Not always, but in general, the choice is made depending on the tempo and surrounding articulation marks. If the expression maps themselves could make choices based on tempo…it’d bring older libraries into the 21st century. If that tempo information were simply provided to plugins (and possibly via RPN/NRPN MIDI events too) in real time, one could make much simpler expression maps for ‘smarter instruments’ which could take advantage of all those bundled articulations without the user having to go in and manually tag nearly every note in the piece (or create complicated score-specific multi-technique maps)! There are ways to build maps that can send events to instruments and make them quite smart (add things like Bidule, or Lua scripts in HALion)…but then the documentation to use the expression map would be a big book, with lots of fiddly things to accidentally break…and it won’t be universal across different systems either.
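To make the idea concrete, the tempo-dependent choice described above can be sketched as a few lines of script. The tempo thresholds, duration cutoff, and articulation names below are illustrative assumptions, not part of any real expression-map format:

```javascript
// Hypothetical sketch: pick a bowing style for a note marked with a
// staccato dot, based on tempo and written duration. All thresholds
// are invented for illustration.
function chooseBowing(tempoBpm, noteDurationQuarters) {
  // Very fast, very short repeated notes lean toward sautillé;
  // slower tempos leave room for spiccato, staccato, or martelé.
  if (tempoBpm >= 140 && noteDurationQuarters <= 0.25) return "sautille";
  if (tempoBpm >= 100) return "spiccato";
  if (tempoBpm >= 60) return "staccato";
  return "martele";
}

console.log(chooseBowing(160, 0.25)); // fast sixteenths
console.log(chooseBowing(72, 1));     // moderate tempo
```

A real implementation would also look at surrounding articulation marks, as the post notes, but even this trivial rule is something today’s expression maps cannot express because they never see the tempo.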

So…the end user is still having to go in and redo the maps if they want a good translation that takes advantage of all the bells and whistles of a sample library anyway.

So for the time being…I still believe it’s best to start with a very simple map, fresh for each stave, and build it up as a score develops. It seems to be less confusing, and less time consuming.

Maybe Johnathan Loving will get back in the game

There are many problems with expression maps and Cubase/Dorico’s implementation of them. But more generally, there is the problem of presuming that something like “expression maps” is the right answer for getting from a score to sounds in a VI.

There are two parts to the problem, and I think they should be dealt with separately. The first is conveying what is going on in the score - what notes are being played, with what dynamics and articulations, with what directives in the score. IMO this should be done generically and via MIDI - there should be a universal mapping from the score to 6-12 or so MIDI CCs. If you look at the emap for NotePerformer you will see this is exactly what it does (to the extent it can), mapping to 6 or so MIDI CCs, independent of the target instruments. Then it interprets these CC streams internally for each instrument.

There are many essential things that are not currently conveyed. E.g. it is insufficient to say “legato” for slurs; you need to distinguish between a note starting or ending under a slur (Notion can do this). MIDI is a real-time protocol invented for use with live controllers, thus ‘note on’ and ‘note off’ pairs delimit notes. But notation programs know the duration of notes in advance, critical information for choosing the right patch or crafting an expressive arc. The duration should be conveyed somehow, e.g. via a 14-bit CC one could convey note durations of up to 16 seconds with 1 ms accuracy. Notation programs also know whether a note is followed by a rest. This should be conveyed (Notion can also do this). Etc.
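The 14-bit duration idea works out because 2^14 = 16384 distinct values covers 0–16383 ms. A minimal sketch of the encoding, using the standard MIDI convention of a CC MSB/LSB pair (the specific CC numbers a score program would use are an open choice, not assumed here):

```javascript
// Encode a note duration in milliseconds into a 14-bit value split
// across a CC MSB/LSB pair, and decode it back. 2^14 - 1 = 16383 ms,
// i.e. just over 16 seconds at 1 ms resolution.
function encodeDuration14(ms) {
  const clamped = Math.min(Math.max(Math.round(ms), 0), 16383);
  return { msb: clamped >> 7, lsb: clamped & 0x7f }; // two 7-bit bytes
}

function decodeDuration14(msb, lsb) {
  return (msb << 7) | lsb;
}

const d = encodeDuration14(1000); // a one-second note
console.log(d, decodeDuration14(d.msb, d.lsb)); // round-trips to 1000
```

The receiving instrument would read the pair before the note-on and could then pick a patch (or shape an arc) knowing the full duration in advance, which a live note-on/note-off stream can never provide.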

The generic system should be smart, and only send the corresponding CC when the situation changes (e.g. mute changes, tasto/norm/pont etc), not sending every CC with every note as emaps do. Short of providing a generic system, Dorico could enable us to build our own by at least making it possible to associate messages with exclusion groups and thus avoid emaps with hundreds or thousands of entries.
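The "only send when the situation changes" behavior is essentially a per-controller cache in front of the MIDI output. A minimal sketch (the `send` callback and CC assignments are placeholders for whatever transport the host provides):

```javascript
// Wrap a low-level send function so that a CC message is emitted only
// when its value actually changes, instead of with every note as
// expression maps do today.
function makeCcDeduper(send) {
  const last = new Map(); // cc number -> last value sent
  return function setCc(cc, value) {
    if (last.get(cc) === value) return false; // unchanged: send nothing
    last.set(cc, value);
    send(cc, value);
    return true;
  };
}

// Usage: e.g. a hypothetical CC 32 carrying mute on/off.
const sent = [];
const setCc = makeCcDeduper((cc, v) => sent.push([cc, v]));
setCc(32, 127); // mute on  -> sent
setCc(32, 127); // unchanged -> suppressed
setCc(32, 0);   // mute off -> sent
```

This is the inverse of an exclusion group: rather than the map enumerating every combination, the sender simply remembers state, which is why it scales without hundreds of entries.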

FWIW you can build a reasonable facsimile of this generic system using Notion’s rules in fewer than 200 lines of XML, or graphically with their rule editor. By comparison, for those who haven’t looked, an export of NotePerformer’s emap for Dorico is over 45,000 lines of XML! This emap was almost certainly generated by a program, thus belying the myth that emaps are an answer for non-programmers. At the scale needed for orchestration, they are not.

The second part of the problem is - given what is happening in the score, what should be done about it? IMO this should invariably be handled via MIDI-transforming scripts, written in e.g. JavaScript or Lua. Any baked-in rule system will eventually run aground due to lack of expressivity (and Cubase style expression maps are just a very weak form of rule system). While not everyone can wrangle script, enough people can for the community to produce a wealth of library support. In an effort to be accessible to all, emaps end up being inadequate for all. A short script can replace a much longer emap, and accomplish a lot more. Scripts can embed conditionals, can be parameterized etc. A script engine with MIDI capability like Logic’s Scripter can do pretty much anything, turn one kind of controller into another, generate new CCs and keyswitches, move or change the duration and velocity of notes etc. Scripts can also make decisions based upon context, like how fast the notes are being played, or whether a note is being repeated. Of course, logic exactly like this is going on inside NotePerformer.
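As a taste of what such a script can do that an emap cannot, here is a sketch in the spirit of Logic’s Scripter: it watches note context (pitch repetition and inter-onset time) and inserts a keyswitch so fast repeated notes could be routed to a repetition patch. The event shape, the 150 ms threshold, and the keyswitch pitch are all assumptions for illustration, not any host’s actual API:

```javascript
// Context-aware MIDI transform: detect fast repeated notes and emit a
// keyswitch event before them. State persists across calls, which is
// exactly the kind of decision a static expression map cannot make.
function makeRepetitionTagger(thresholdMs = 150) {
  let lastPitch = null;
  let lastOnsetMs = -Infinity;
  return function process(note) {
    const isRepeat =
      note.pitch === lastPitch && note.onsetMs - lastOnsetMs <= thresholdMs;
    lastPitch = note.pitch;
    lastOnsetMs = note.onsetMs;
    return isRepeat
      ? [{ type: "keyswitch", pitch: 24 }, note] // hypothetical KS at C0
      : [note];
  };
}

const tag = makeRepetitionTagger();
tag({ pitch: 60, onsetMs: 0 });   // first note: passes through alone
tag({ pitch: 60, onsetMs: 100 }); // fast repeat: keyswitch prepended
```

A few dozen lines like this, parameterized per library, replace what would otherwise be thousands of emap entries trying to enumerate every context.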

A great way to deliver this power quickly is to add support for VSTs as MIDI plugins. There are already plugins that can transform MIDI like Blue Cat Plug 'n Script, Plogue Bidule, proto-plug etc. Of course, it would be great for Cubase and Dorico to have something like Logic’s Scripter built in.

As it stands, it seems to me that building upon playback support via emaps for Dorico today is like building on sand. Too many things are not implemented, poorly specified etc. Year after year goes by with incremental playback changes not reflecting a coherent vision. Visual CC and velocity editing forms a kind of escape hatch, but it somewhat reflects a failure to deliver on the marketing claim:

Dorico helps you to write music notation, automatically producing printed results of exceptional quality — and plays it back with breathtaking realism.

It plays it back with as much realism as you can manually summon by supplementing the notation with a ton of CC editing, using tools that pale compared to those of DAWs. I’m concerned about whether Dorico could ever move to a better playback system once people have become dependent upon various current behaviors (e.g. the global playback parameters, exclusion groups and their prioritization, dynamics mapping or lack thereof, etc.). If I invest in building on emaps today, how do I know my work won’t be negated once Dorico decides and delivers upon how it ought to work?

Dorico is a fabulous notation system, but it is currently (IMO) an inadequate playback system, saved only by the efforts of NotePerformer. Perhaps the marketing should be dialed back to reflect the reality of the product, not its aspirations.


Save your money so you can hire an orchestra. :wink:

…in a very large room, six feet apart.

The reverberations of that decision would be epic! :astonished:

I find myself in the position of perhaps needing to generate a very large expression map for Dorico. Is there documentation for the XML file format somewhere? Even documentation for the section would be useful.

Thanks!

No, there’s no documentation for the file format, I’m afraid. If you have any specific questions, we can try to answer them.

Have you seen this thread:

Unfortunately I don’t run Windows, so this is of limited use to me…

I have yes, thanks. I’m an experienced programmer so I can generate XML myself, given docs. I’ll poke around and ask questions if I get stuck :slight_smile:

I have Dorico 3 elements, tried to install and use the expression map but it didn’t work. Can someone please tell me the detailed steps of how to install and use expression map. Thanks.

Welcome to the forum, Ravindran. You don’t install an expression map, you import it via Play > Expression Maps. Try starting here in the operation manual.

I agree with richhikey in many respects; nevertheless, it seems to understate the work that creators of virtual instruments can do themselves. NotePerformer with its lookahead has perhaps got the furthest in intelligent interpretation of the score, but VSL for instance has better sound and increasingly sophisticated performance patches which slightly alter notes according to tempo and other criteria. Some randomisation of pitch and timbre is already built in. I would say it is the job of the library builder to create the musical performance and the EM to deal primarily with allocation of the keyswitches. Although a musical performance is at its most sophisticated and spontaneous when done live with a suitable MIDI controller deck, most things can equally well be written into the score.

I wonder if there could be a hybrid approach where NotePerformer uses its intelligence to invoke VSL and other popular libraries in a more sophisticated way than most human users do. I think there could be a lot of money in it for NotePerformer if they could make some of these expensive libraries sound better mostly automatically.

Thanks a lot Daniel, yes I could import it.
However I have another issue, I am not too sure if all the standard sounds that ship with Dorico 3 elements are installed, how do I check this?

Try running Steinberg Library Manager and see what sounds are listed there. You should see several HALion Sonic SE entries, plus HALion Symphonic Orchestra.

Hi Daniel
I checked the Steinberg Library Manager, and HALion Symphonic Orchestra is not appearing in there. I ran the Dorico Elements 3 sound installer once again, but I still don’t find the Symphonic Orchestra figuring in the Library Manager. I am enclosing a screenshot of the same. Kindly tell me if there is something missing.