extensive sound library expression maps vs. just using Note Performer (opinions sought!)

As a hobbyist, this statement captures something important for me.

Although I want to render the best real-life audio representation I can of my ideas and their development, when it comes down to it I need only please myself, in the hope that it will please others. There is quite a gulf between me and practicing or aspiring professionals, who must please those who would pay for their services.

Of course Dorico is professional software that must please professionals in order to flourish. Professionals are not only paying customers but also influencers. Also, many hobbyists, most much younger than myself, hope eventually to become professionals. But there are also plenty of long-term hobbyists in this customer base.

If I were a working professional, I have no doubt I would be invested in the pursuit of the ideal. As it is, I am invested in the pursuit of the best work I can achieve with my talent and resources. It need only be enough that, when I hear it back, I say to myself, “That’s good!”

The upshot is that Dorico must aspire to satisfy the professional, because the more serious hobbyist wants, if possible, to work with the best tools available. They hear what professionals do, and they want to emulate it.

Exactly – I wish I could put it so elegantly.

And on Rob’s post, I wouldn’t take issue with that at all. But even the best modelled instrument, which for me is Pianoteq, doesn’t quite have the depth of sonority of the best sampled instruments. And with orchestral instruments, NotePerformer notwithstanding, we’ve got yet further to go. We can only work with what we have now!

dko22,

You put it succinctly. Alas, as my wife, relatives, and friends will attest, I do not have that talent.

Couldn’t agree more. I can play a hundred different expressions on my cello that would have no name or technical description at all. I wouldn’t even want to have a sample library trying to catch them all.

Thanks, all, for the interesting discussion. I’m also all for more physically modeled instruments - even in my DAW (Logic Pro) I lean heavily on products by Sample Modeling and Audio Modeling, since they’re so easy to program (expression pedal for volume/timbral changes plus mod wheel for vibrato, plus tweaking after the fact for anything else), but I think they still have impressive realism and are exceedingly easy to play.

This is why I’m so fascinated by NotePerformer as a solution for notation: it bypasses the need to be a computer programmer just to get expressive playback in Dorico. I still wonder, in fact, whether they’re “barking up the wrong tree” in trying to fit the DAW approach (extensive sample library tweaking) into a notation program as the main focus, and whether they should instead accept that the context is different and expand upon the NotePerformer approach. To this end, I’m actually amazed that no notation software has tried to “buy out” the technology, make it part of their program, and work on expanding it, since it’s so much better than the built-in sounds of any notation program (at least for more “orchestral” music).

Would also love some ability to go directly back and forth with Cubase (or a DAW like Logic) and retain the best of both worlds, but (as some have commented) this might be wishful thinking for the near future, since the two programs’ underlying architectures are so different (though at least some sort of “ReWire” would be a start!)
Best -

  • D.D.

Having been trying to answer this question for myself, my current conclusion is that NotePerformer, for all its inadequacies, is the most efficient model for playback in Dorico. It falls down in some very specific areas. Playback of long notes and legato phrases in the strings can be erratic; sometimes this is fixed by the addition of a slur, whether or not it’s wanted by the score. The sound of the harps and mallet instruments is woeful - but you could use something else for those if you wanted. Happily, playback of brass in particular may yield better results than some sample libraries, and I’ve found that passages requiring very agile playing are often best represented by NotePerformer. Similarly, the winds can be quite convincing. Your best approach may be a hybrid setup where NotePerformer does the heavy lifting and you use your favorite libraries for specific categories of instrument. The headaches involved with setting up expression maps for your custom template are simply not worth it. In my recent experience, the time it takes to generate expression maps, the clunky implementation, and the unpredictable results with complex setups remind me of where much of the rest of the program was two or three years ago - improving but not yet ready for prime time.

Exactly! Hopefully the Dorico team has something up their sleeve to improve the ease with which high-quality playback is implemented, as it feels very difficult right now (as much as I appreciate their attempts to address it all with a piano roll, expression maps, etc.). I agree that NotePerformer seems to have the right idea (now if only NP’s samples could be improved and - in particular for me, at least - they offered more appropriate jazz and other non-purely-orchestral playback, etc.).

  • D.D.

And re: adding ReWire to Dorico: barring this, I’ve noticed that Sibelius’s video import allows for the importing of audio files as well (somewhat counterintuitively, as with many things in Sibelius, since it’s labelled “video import” :slight_smile:). Just allowing a single stereo audio file to be imported this way into Dorico would at least save the (slightly tedious) process of attaching an audio file to a static video image and then importing the video into Dorico (whenever I want to sync to something from Logic for which I’ve already imported the MIDI into Dorico) - just as a suggestion to the Dorico team…
Best -

  • D.D.

Yes, and no. We still want/need the palette of samples, but it’s also true that the modeling for those samples requires constant attention (or at least different templates for style/tempo/timbre/intonation/etc.). For the sustained strings example, you basically only ‘need’ about 4 samples, and you can ‘remodel’ them for different phrases (often in real time as the piece plays). Still, you’re going to want a sound shaped for faster aggressive bowing, slower lush bowing, different variances of pressure, routines to handle the tuning scheme being used, and an A/B set for each bow direction.

A good library isn’t just about having a lot of ‘samples’. In fact, libraries often have dozens of ‘patches/programs’ based on surprisingly few actual ‘samples’. It’s still a viable ‘palette’ for the studio musician: a beginning ‘reference point’ for emulating the various aesthetic characteristics of a given instrument throughout a phrase. Once you have the ‘sketch’ in place, you can use the expressive controls to put the polish on it.

I can use the SAME SAMPLE and work up dozens of variations for a sound: ADSR, exciting or filtering specific frequency ranges, tuning characteristics, LFO or looping/enveloped vibrato effects, and more. In the tracking DAW, we often use a lot of channels and just snip the portions where we want to use the sound, place them in a new track pointing there, and then pull up the VST and ‘tweak the thing’ in real time (maybe even using a slider/knob/breath controller/touch tablet screen/anything we want), using our ears, while the DAW is looping.
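To make that concrete, here’s a minimal sketch of the idea - several playable ‘programs’ derived from one source sample just by varying envelope, filter, tuning, and vibrato parameters. All the names, values, and the sample file are hypothetical, not anything from an actual HALion patch:

```python
from dataclasses import dataclass

@dataclass
class PatchVariation:
    """One playable 'program' derived from a single source sample."""
    sample_file: str        # the shared source sample
    attack_ms: float        # A of the ADSR envelope
    release_ms: float       # R of the ADSR envelope
    lowpass_hz: float       # filter cutoff to darken/brighten the tone
    detune_cents: float     # per-variation tuning offset
    lfo_vibrato_hz: float   # vibrato rate (0 = no vibrato)
    lfo_depth_cents: float  # vibrato depth

# Dozens of distinct 'patches' can come from the SAME sustained sample:
VARIATIONS = {
    "lush_bow":   PatchVariation("violin_sus_C4.wav", 120, 400,  8000, 0.0, 5.5, 12),
    "aggressive": PatchVariation("violin_sus_C4.wav",  15, 150, 12000, 3.0, 6.5, 20),
    "senza_vib":  PatchVariation("violin_sus_C4.wav",  80, 300,  7000, 0.0, 0.0,  0),
}
```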

Case in point… here is a screenshot from a macro-building phase where, ultimately, I’ve taught HALion to make something like half a dozen bowing variations using the same sustained sample layered up with a remodeled spiccato sample (samples from the HSO library)… but the dynamics (and sometimes more) are remolded. So I’m using only something like 8 or 9 samples to get ALL these variations.

It uses keyswitches to bounce among the bowing variations.
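In case it helps to picture it, the keyswitch logic amounts to something like this sketch (the note numbers and articulation names are hypothetical, matching the table in the earlier sketch; the real thing lives inside the HALion macro):

```python
# Keyswitch notes sit below the instrument's playing range.
KEYSWITCHES = {
    24: "lush_bow",     # C1
    25: "aggressive",   # C#1
    26: "senza_vib",    # D1
}

current_articulation = "lush_bow"

def on_note_on(pitch: int, velocity: int) -> None:
    """Keyswitch notes silently change the active bowing variation;
    ordinary notes play through whichever variation is current."""
    global current_articulation
    if pitch in KEYSWITCHES:
        current_articulation = KEYSWITCHES[pitch]
        return  # keyswitch notes make no sound themselves
    print(f"play {pitch} (vel {velocity}) with the {current_articulation} patch")
```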

And what it looks like loaded into HALion SE.

Here’s a quick rendering directly from Dorico. Every sample I’ve used is included in Dorico Pro/HSO. The vstsound supplement required to get this running in HALion SE is only around 13 MB (most of that is bitmap images and other information required for the macro page). It only takes a double-click to install the supplement.

The primary bowing techniques I’m attempting to mimic here are: clarity about when each phrase begins, smooth legato inside the phrases where it’s supposed to be, subtle changes/variations when the bow direction reverses, some degree of controllable vibrato application, and a martelé-like stroke on most of the inharmonic tones occurring within the first 2 beats of the measure (dots living under slurs in the score… ta de da). I’ve roughed in a marcato stroke as well, but it’s pretty bad right now (I should be able to come up with something better).

Applied effects are French Theatre Reverb on the aux bus, and Maximizer/UV22 16-bit dithering to MP3.

The score I’m attempting to interpret here is Mark Starr’s arrangement of Zortzico.

I know, it’s not that great a translation on my part, and it took a while to get it ‘this good’. Oh well…


I did this before we could ‘channel bounce’ and save sound templates! One added benefit is that I scripted it to delay the legato event, so legato slurred phrases make more sense now. Another is that I built a ‘cross-fade’ between tutti and solo sounds on the same stave. Now that we can channel bounce and such, I don’t really need to continue the project… I can do most of this with the bog-standard HSO library (I side-host it in a Bidule instance and fix the legato pedal issue there).
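For anyone curious how a tutti/solo cross-fade on one stave can work, here’s a minimal sketch of the usual equal-power math, driven by a single CC. The CC number and the exact curve in my macro may differ; this is just the idea:

```python
import math

def tutti_solo_gains(cc_value: int) -> tuple[float, float]:
    """Equal-power crossfade between a tutti layer and a solo layer.

    CC at 0 -> full tutti, 127 -> full solo. The cos/sin pair keeps the
    combined power roughly constant, avoiding the mid-fade level dip a
    plain linear crossfade would cause.
    """
    x = cc_value / 127.0
    return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)

# Halfway (CC = 64) both layers sit near 0.707, i.e. about -3 dB each.
tutti_gain, solo_gain = tutti_solo_gains(64)
print(round(tutti_gain, 3), round(solo_gain, 3))
```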

One can teach Dorico to make these adjustments with big lists of CCs in each expression map entry, sent to the same instrument on a single channel, or one can have several copies of the sample loaded, with the ‘choices’ easily available via simple channel bounces. The advantage of the latter is that during the work session you can start the score, isolate phrases, and then manipulate the VST (and/or controller data) while the score is playing (an audition process). Once you get it pretty close, it’s just a matter of bouncing channels as you need that base model for the sound.
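Schematically, the two approaches look something like this (the CC numbers, values, and channel assignments are all hypothetical, purely for illustration):

```python
# Approach 1: one channel; each expression-map entry fires a batch of CCs
# to 'remodel' the single loaded patch on the fly.
EXPRESSION_MAP = {
    "legato":  [(64, 127), (11, 90),  (1, 30)],   # (CC number, value) pairs
    "marcato": [(64, 0),   (11, 120), (1, 0)],
}

# Approach 2: several pre-tweaked copies of the patch, each on its own
# channel; a technique change is just a channel bounce, and each copy
# can be auditioned and tweaked live in the VST while the score loops.
CHANNEL_MAP = {
    "legato": 1,
    "marcato": 2,
}
```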

I agree that modeling is extremely important. Regardless of whether you are using 4 samples or hundreds to shape a phrase, the fact remains that Dorico has to send information about the score to the plugin, and the plugin has to interpret that information and ‘decide’ what to do with it.

Software today pretty much works on the concept of: if an event triggers me, I check all the data attached to the event; if these conditions are met, I do this or that to something in memory; if not, I pass it down the line for another ‘if/then’ check. Down the line it goes until there is nothing left to check. Then it finally takes all the information that has built up in memory and uses it to send the ultimate command that makes a sound (which goes through the whole process yet again, several times over, in different stages of sound production before a sound actually comes out of the speakers).
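A toy version of that ‘pass it down the line’ model might look like the sketch below. The handlers and event fields are invented for illustration; real plugin pipelines are far deeper, but the shape is the same:

```python
# Each handler either annotates the shared state or consumes the event,
# otherwise the event falls through to the next 'if/then' check.
def keyswitch_handler(event, state):
    if event["type"] == "note_on" and event["pitch"] < 36:
        state["articulation"] = event["pitch"]
        event["consumed"] = True   # silent keyswitch: stop the chain here

def velocity_handler(event, state):
    if event["type"] == "note_on":
        state["gain"] = event["velocity"] / 127.0

PIPELINE = [keyswitch_handler, velocity_handler]

def process(event):
    state = {}
    for handler in PIPELINE:
        handler(event, state)
        if event.get("consumed"):
            break
    return state   # the built-up info that finally drives sound production

print(process({"type": "note_on", "pitch": 60, "velocity": 96}))
```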

One thing computers don’t do very often is LISTEN to what is coming through the speakers, in a real room, while they play, and make constant adjustments (real musicians and conductors do this CONSTANTLY… it’s part of what makes music ‘musical’). So we must have the machine model based on theoretical sets of rules and make CHOICES. It can get pretty close to what a human might do in terms of what we hear, but humans still usually need to freeze the final performance and make touch-up adjustments and/or override some things.

I think about this a lot, in the sense of trying to crack the code of humanizing music without actually playing it first. At least I think it is a code. Some people would call it the mystery of artistic creation: the part you can’t quite put a solid formula on, but which is very important to musicality.

Yep, and an AI that can analyze after the fact adds latency. Sometimes that’s not important, but it can become a serious problem as more and more AI needs are piled onto a given scenario.

For the time being, I’m in full support of doing the best we can with theoretical real-time modeling.

Given that building libraries that are good at that sort of thing will take TIME, I’m also in favor of continuing to beef up the Play tab in Dorico, adding enough ‘tracking DAW’-like features for us humans to ‘touch up and polish’ the score interpretation as we see fit.

Until then…true audio shaping software such as Cubase will still be there for us as an optional stage for putting extra detail and polish on a mock-up.

Do you mean programs like NotePerformer, in terms of needing to “look ahead” to figure out how best to interpret the score? Couldn’t this be solved by being able to temporarily disable “look ahead” when entering new MIDI performances in real time (like Logic Pro’s Low Latency Mode button I mentioned previously), and then re-enabling it for playback afterwards? To me, this would be a reasonable trade-off (especially since, to achieve this currently, I’ve taken to simply turning off MIDI Thru and using an external piano plug-in to play in real time - not exactly ideal, since the sound doesn’t match the sound I’m trying to enter).

  • D.D.

I think that temporarily disabling look-ahead has been discussed before and it’s not something Arne can currently do. Happy to be corrected if this is not the case.

Has anyone taken a Dorico NotePerformer mockup and made it sound like “real music” in a DAW? Practical examples would be more illustrative than talk, interesting though it has no doubt been to follow the discussion.

Similar, but not really. I think our computers are fast enough nowadays to handle ‘theoretical’ modeling without much latency. The main reason NotePerformer has that designed latency is that neither Sibelius nor Dorico has the VST pins to send information like time signature and tempo, keeping it updated with each clock tick. It has to buffer a bit and ‘analyze’ what’s in the buffer, plus combine it with events sent by the Dorico/Sibelius expression maps, to guess at how best to model the sounds. If Dorico sent a bit more information as it translates the scores, they could probably cut that latency down quite a bit, making it easier to combine NP with other libraries in the same score.
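Conceptually, that buffering amounts to holding events for a fixed look-ahead window, analyzing the buffered context, and releasing everything a little late. Here’s a rough sketch of the mechanism - the one-second figure matches NotePerformer’s well-known delay, but all the function and variable names are invented:

```python
from collections import deque

LOOKAHEAD_SEC = 1.0   # NotePerformer-style fixed look-ahead window

_buffer = deque()     # (arrival_time, event) pairs awaiting analysis

def receive(event, now: float) -> None:
    """Incoming score events are parked rather than played immediately."""
    _buffer.append((now, event))

def emit_due(now: float) -> list:
    """Release events one look-ahead window late; until then the whole
    buffered context is available for phrasing/attack decisions."""
    due = []
    while _buffer and now - _buffer[0][0] >= LOOKAHEAD_SEC:
        _, event = _buffer.popleft()
        # ...inspect neighbouring buffered events here to choose attacks,
        # legato transitions, phrase shaping, etc....
        due.append(event)
    return due
```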

What I was referring to here is more this:

Imagine an AI intelligent enough to have mics hooked up, and LISTEN to itself playing, and make adjustments as it plays.

We humans do this constantly when playing in an ensemble. The sound reverberates around and in effect changes how we feel and how we contribute to the group’s sound. If the euphonium player sitting behind a sax section puts a certain inflection into his playing, the sax players may well pick up on it and mimic it, or fight it, or even feel compelled to do something totally different yet ‘complementary’ to the effect. These things also have a profound effect on the overall intonation of a REAL performing group.

The way we currently have affordable mainstream ‘machines’ doing AI… well, even if we could find a way to have the machine hear itself in a real room before reacting, the way our current mainstream AI software and devices work would add considerable amounts of latency…

I’m definitely not a programmer, but I would guess it could be disabled - it’s just that the score interpretation would temporarily sound less musically expressive (but obviously Arne can chime in!). If doing so were a way to avoid latency on real-time MIDI input, it would be a reasonable, temporary trade-off to me… (as it also is using Logic’s Low Latency Mode, which temporarily disables plug-ins)…

  • D.D.

I hope it’s not rude to bump, but in this case I’ve gone back and edited a bit, added images and renderings, etc.
A few people had already posted well before I got the edits in.

Here’s the post reference link.

That Google Drive link requires access (just to let you know): https://drive.google.com/file/d/1uygtQW … sp=sharing

Oops, is this better? (also edited above)

Thanks for the upload — I think I get a better idea of where you’re coming from with this Zortzico (quite a nice wee piece, by the way). It does look as if you’ve made HALion phrase in a considerably more musical way overall, and it’s set in a well-balanced acoustic. But there’s no getting round the fact that this library still often sounds like a barrel organ, particularly near the end, and the shorter notes are particularly artificial. I see trying to mould the timbre as wasted time, even for one as talented as yourself – would it not be better to start with more sophisticated samples in the first place? My feeling was also that the top line dominates too often, thus not letting the texture fully emerge. Higher notes naturally sound louder than mid-range ones, but of course you’ll know all this, and it might just be an artistic choice, which is fair enough.

I made the rendering a bit too hot as well - a little distortion and unwanted artifacts at the loudest parts of the piece. Nothing a fresh rendering with the main output down a bit can’t fix.

No, I have NOT put anything on a scope to balance it out yet. What you hear there was done by ear, mainly to show someone the Lua trick to delay the legato pedal. The native HSO macros were locked down, so I ended up having to do my own. I wanted to learn that process anyway, so I took off with it. While I was in there, I cherry-picked some samples (none of which are the default choices of Dorico) and attempted to balance them out quickly and shape up a few attack styles.
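The actual trick lives in a HALion Lua script, but the idea is simple enough to sketch generically (the field names and the delay value here are illustrative, not the real script): nudge the legato/sustain pedal events (CC64) slightly later, so the legato switch engages after the new note-on instead of retriggering the previous note.

```python
PEDAL_DELAY_SEC = 0.02   # illustrative; just enough to land after the note-on

def delay_legato_pedal(events):
    """events: list of (time_sec, event_dict) pairs. Shift CC64 messages
    later so the legato switch engages once the new note is sounding."""
    shifted = []
    for t, ev in events:
        if ev.get("type") == "cc" and ev.get("number") == 64:
            t += PEDAL_DELAY_SEC
        shifted.append((t, ev))
    return sorted(shifted, key=lambda pair: pair[0])
```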

My monitors are too far apart for the distance I sit from them, in a lousy room, tucked back into bookshelves that ruin any chance of getting a solid reference… on a PC I built around 2012 or so. They are small Equator monitors with very little low range at all, so I have to kind of guess at it… I’d rather it be too soft at the bottom than cause ‘booming’ or ‘rattling’ on the next user’s system (80% of the things I hear on SoundCloud, made with those $800 sample libraries, are BOOMY as heck, even causing average consumer speakers to CLIP… and almost always go WAY too wet with reverb). I do not have a sub at all.

Going from there to a set of Bose computer speakers that do quite a bit of sound coloring, also in the same crappy room with less-than-ideal placement… it’s pretty balanced. Oh well.

Why not fix my workstation and put the speakers in a better place? Well… let’s just say I’m barely allowed to keep a home studio as it is. It MUST be closed in a cabinet, out of sight (even the speakers), when it’s not in use.

As for starting with a ‘better library’: my students and clients don’t have $800 plugins and sample libraries on their rigs. I don’t ‘own’ many myself either (I sometimes get to play with borrowed or demo copies, or use someone else’s studio, etc.). So there’s that. If I can send a 20 MB supplement along with the score that has it sounding like this on the target user’s system… that’s a big plus for me.

I agree about the mix, but keep in mind that, other than my roughed-in HALion patches, a simple and short expression map to send slurs/legato pedaling events, and a few keyswitches, this is what Dorico made of it!

If you wanted to bring down the 1st and 2nd violins a bit you’ve got a few options here.

  1. Adjust the dynamic curve shown in the screenshot a couple of posts back.
  2. Pull the faders down.
  3. Set up a graphic or parametric EQ and roll off some frequencies.
  4. Adjust the dynamic curve in Dorico’s playback settings.
  5. Adjust the damping and such in the reverb plugin.
  6. Tweak the terraced dynamics in the score itself.
  7. Make changes in the play tab editor(s).

Other than CCs to bounce between tutti and solo, and one to turn some vibrato effects on/off, I have nothing else customized for expressive data. All the mixer faders are hard-set at 50%. All the pans are at center. The only things in the effect slots are the Steinberg convolution reverb on the aux, and on the main only the Maximizer and dithering (to roughly normalize the rendering). All the fader EQs are disabled.

Yes, there’s a lot more that can be done to make it sound better. That’s a rough template. I probably won’t bother though…people seem to like the default sounds and mixes better anyway.

You can probably tell that the sfz notes are a bit much. Of course they can be toned down a bit.

I ‘put’ it in a barrel on purpose. I felt it fit the piece to have that tight/cramped spatial feeling… like a dance tavern with lots of posts, corners, various objects between the listener and the stage, bodies absorbing a lot of the sound, a lower ceiling, etc. The samples are raw, dry, close-miked, steady, and in tune… there’s a lot one can do with them in terms of staging, or getting them ‘out of the barrel’. It also takes time, though, and I spent most of mine making scripts and macros to get this far.