extensive sound library expression maps vs. just using Note Performer (opinions sought!)

I agree that Dorico is quite good, and getting better with each release. When looking at other ‘scoring’ software…it truly holds its own.

No way, it’s far from clunky, and it’s SUPER user friendly. There isn’t a single aspect of the key editor UI that you cannot remote control, key-bind, and more. It is LOADED with power user tools (it requires reading the manual, as well as primers on the MIDI and VST protocols, though). It makes quick work of quantizing/humanizing/overlapping. It’s loaded with power tools for selecting things based on conditions, micro-sliding, working with grids, and so much more. You get dozens of ways to deal with velocity and controller data…including double clicking a note and drawing it right into a graph there in Note Expression format. I can’t think of a single MIDI key editor on the market that comes anywhere close to the power and efficiency of the one in all versions of Cubase, from version 5 or so through the present-day 10.5.

If they start dumbing it down, filling up the screen with idiot boxes, and taking away the 30-plus-year-old tried-and-true UI, that’ll be the day I stop upgrading and stick with legacy options. Cubase is a solid leader when it comes to working with anything MIDI or VST, bar none. I would however like to see improvements to the Logical Editor. The UI is perfect as it is; just add more types of events that it can work with (such as score editor symbols), add the ability to easily generate arbitrary events, and make a few minor improvements to the expression map interpreter. But…please, please, please, leave the UI alone for the most part, and make any changes to it optional.

See…I’d rather not have to waste half a day drawing in hairpins and hiding them (it’s a characteristic of March Trios…the hairpins are not wanted on such scores, but we do want the effect on playback for the melody and counter melodies).

In Cubase I select the range of events in the part I’d like to apply this effect to, and make a simple Logical Editor preset (or just call it up since I already have it made):

If event == note
then insert a CC1 (or 11, or whatever) with a value equal to the MIDI note number.

Optionally I might also copy the MIDI note number over to note-on velocity.

Poof, in less than a second it’s done.
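For anyone curious, that Logical Editor pass is trivial to prototype as a plain data transformation. The event dicts below are a made-up stand-in for a MIDI track, not any Cubase API:

```python
def add_cc_from_pitch(events, cc_number=1):
    """For every note event, insert a CC event whose value equals the
    note's MIDI pitch number (the 'if event == note, insert CC' pass)."""
    out = []
    for ev in events:
        if ev["type"] == "note":
            out.append({"type": "cc", "cc": cc_number,
                        "value": ev["pitch"], "time": ev["time"]})
        out.append(ev)
    return out

track = [{"type": "note", "pitch": 60, "time": 0.0},
         {"type": "note", "pitch": 67, "time": 1.0}]
processed = add_cc_from_pitch(track)
# Each note is now preceded by a CC1 event carrying its pitch as the value.
```

The optional velocity copy is the same shape of pass with `ev["velocity"] = ev["pitch"]` instead of an insert.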

Next I run another pass that scales each octave into a common range. Again, taking less than a minute.
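The octave-scaling pass can be sketched the same way: fold the pitch-derived values so every octave lands in the same usable CC window. The 40–90 target range is an arbitrary assumption for illustration:

```python
def scale_to_common_range(value, lo=40, hi=90):
    """Fold a 0-127 pitch-derived value into a narrower window so that
    the same pitch class maps to the same CC value in every octave.
    (A sketch of the 'scale each octave' pass; the range is invented.)"""
    pitch_class = value % 12            # position within the octave
    span = hi - lo
    return lo + round(pitch_class * span / 11)

# C4 and C5 now produce identical controller values:
assert scale_to_common_range(60) == scale_to_common_range(72)
```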

Next, I can pull up a CC lane in the in-place key editor and easily do batch scaling and sliding with lasso/drag tricks (or opt for more passes with the logical editors).

For things I find myself doing often, I can build macros that combine as many project and track logical editors as I like into a single command.

Note, Dorico does have scripting abilities in the works…which might eventually make this batch processing thing a moot point, but for the time being, for precision work in a fraction of the time, the tracking DAW is still king.

Also, in the Cubase track inspectors you can ‘stack’ humanizing or quantizing effects, plus do your destructive edits with the Logical Editor. You can micro-slide any individual event, or groups of events, in time VERY easily. You can instantly do length humanization/quantization, force instant note overlaps, and so much more.
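Humanize passes like these ultimately boil down to small random offsets applied to event times. Seeding the RNG is what makes a pass repeatable rather than different on every playback (a toy sketch, not Cubase’s actual implementation):

```python
import random

def humanize_onsets(onsets_ms, max_shift_ms=10.0, seed=42):
    """Shift each onset by a small random amount. A fixed seed makes the
    'humanization' static and reproducible run after run."""
    rng = random.Random(seed)
    return [t + rng.uniform(-max_shift_ms, max_shift_ms) for t in onsets_ms]

a = humanize_onsets([0.0, 500.0, 1000.0])
b = humanize_onsets([0.0, 500.0, 1000.0])
assert a == b   # same seed -> identical 'performance' every time
```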

Doubling parts/building layers, etc…very fast and intuitive in the tracking DAW. Add our custom user track presets and such to the mix, and it’s often no more than a simple key combo and a click or two…and poof…I’m doubling the horns, or building a textural layer for the strings, and more.

I agree, it’s already in league with, or above, other ‘scoring software’, and it will improve with each release, but for now, it is a LONG LONG LONG way from even remotely matching the power and flexibility of Cubase.

I’m optimistic that someday Dorico’s play tab will make it a true one stop shop for working with, and mixing down virtual instruments. It’s just not there yet, and it ‘seems’ to me to be a few years away from being powerful enough to replace the DAW in terms of ease and speed in sound shaping tasks.

Again, that’s part of the problem. We get used to sitting in front of a machine and hearing what it spits back at us. It’s often WRONG if a realistic mock-up is the goal. Before doing a mock-up, it’s a good idea to force yourself NOT to listen to ANY computer generated music any more than you MUST. Take LOTS of breaks in a TOTALLY different room from your rig, and only work on the project in chunks of about 15 minutes at a time. Listen to real musicians playing in real rooms. Heck, even set up some mics and just record the room when it’s totally empty…and work that into the mix later. SOMETIMES we even resort to tricks like playing the mock-up in a real room, and recording that with mics (especially brass tracks).

It’s easy to fall into a habit of sitting in front of those monitors and grinding for hours at a time…playing it over, and over, and over. We learn to ‘like’ things that should not be there, and stop missing the things that should be, but aren’t there.

The room helps shape a realistic groove, and groove is more than just tempo change.
There are three major dead giveaways that a score is being played back by a computer and not real musicians.

  1. Unless you jump through extra hoops, everything is more often than not tuned to bloody equal temperament. It’s annoying, and unrealistic. Orchestras rarely play in equal temperament…period…and there is usually a bit of detectable tug of war going on in the group where tuning and locking chords is concerned. A good mock-up ditches equal temperament unless it makes sense to have it there. It’s MUCH easier to pull off different tuning schemes, and nuances in pitch shifting, using a tracking DAW. Yes, it’s possible with a lot of extra staves and such, and the CC lanes in something like Dorico…but in my experience, the entire tuning scheme thing is many times easier in Cubase.

  2. It often sounds like a robot. The tempo is ‘too perfect’. The random humanization can be a help, but it’s going to be a little different every time you play the score. Sometimes you want that effect when doing a mock-up, but sometimes you really need it to be static…so you can control and forge it as required.

  3. The groove and spatial delays for the various instruments and sections do not fit the chosen reverb settings for the virtual room. Again, when in composing mode, we sit there with the score for HOURS, and learn to ‘like’ things that are pretty ‘bad’ in terms of realism. It’s often too perfect, and too machine-like…without natural variations that meld in ways similar to musicians sitting on an actual stage or in an actual rehearsal hall.
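On point 1, ditching equal temperament is easy to make concrete. The cent offsets below are the well-known 5-limit just-intonation deviations for the degrees of a major key; the conversion to a 14-bit pitch-bend value assumes a ±2 semitone bend range (both the table and the bend range are assumptions about your setup, not any particular library’s behavior):

```python
# Approximate 5-limit just-intonation offsets from equal temperament,
# in cents, per scale degree of a major key.
JUST_OFFSETS_CENTS = {0: 0.0, 2: 3.9, 4: -13.7, 5: -2.0,
                      7: 2.0, 9: -15.6, 11: -11.7}

def pitch_bend_for(note, key_root=0, bend_range_semitones=2):
    """14-bit pitch-bend value (8192 = center) that retunes `note`
    from equal temperament toward just intonation in the given key."""
    degree = (note - key_root) % 12
    cents = JUST_OFFSETS_CENTS.get(degree, 0.0)
    # full bend range = bend_range_semitones * 100 cents above center
    return 8192 + round(cents / (bend_range_semitones * 100) * 8192)

# The major third (E over a C root) gets pulled about 14 cents flat:
assert pitch_bend_for(64, key_root=0) < 8192
```

This also shows why the key signature matters: change `key_root` and every note’s correction moves with it.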

And in my experience, that’s the most important element to getting a good mock-up. Mixing is 90% of the battle. How and where to toss a notch filter, or when to give a little compression to a given frequency band, how to isolate and place things in the sound field, etc…

Again, Dorico already has one of, if not THE BEST mixing consoles of the ‘scoring software’ currently on the market. Still, a mid-range or high end DAW is light years ahead in all aspects of mixing/processing.

I agree that this is the ultimate goal, and every few months developers and users are making significant contributions towards this becoming a reality. As a collaborative tool, it’s more than amazing…great technology.

It still has a long way to go to be as efficient at shaping sounds as a user controlled DAW though…

I’ll say it one more time. Every year more and more scores played exclusively by computers hit the streets. Most of them are pretty BAD, and sound terribly ‘fake’ (even when the instrument libraries are high quality, the reverbs/mixing/groove-feel/dynamics, and most importantly THE TUNING, are a lazy mess). Nonetheless, people are getting ‘used to it’.

To me it is not so much a question of the relative benefits of Cubase vs. Dorico. Of course Cubase beats Dorico hands down as a recording, mixing and playback platform. But for me it is more a matter of which type of platform I feel more comfortable composing on, a DAW piano roll or notation. In this regard notation wins hands down…for me, not necessarily for others. Therefore, as Dorico has over the last several versions begun to come into its own as a composing platform using VSTs, it has caused me to increasingly leave Cubase behind.

As others have said above, the last few Cubase versions have been oriented to a different market than orchestral composers and arrangers. I stopped upgrading after Cubase 9.5, but I’ve purchased every Dorico upgrade. I plan to continue this path as the platform matures.

Obviously, Brian and I see many things (though not all) rather differently. This is partly because he has much more detailed knowledge of DAWs than I do and knows how to get the best out of them, but it may also simply be that we are looking for rather different things from our music. First of all is the actual process of composing, where DaddyO really hits the nail on the head. My ability to compose immediately took a considerable leap forward the minute I could use notation software to do it, because I could see visually how the individual lines flowed in and out, so it became much easier to write contrapuntally in a semi-competent manner. It’s possible to see immediately in an orchestral setting how various instrument combinations can sound together, assuming a decent VST.

On mixing being 90% of the battle, I see no evidence for this with the best modern libraries. With Sibelius, I did use Cubase’s mixer for modest adjustments to sound, but as Dorico has the main essentials built in, this is barely necessary any more. How to isolate things or best place them in the field has far more to do with good orchestration and a decent supplied acoustic – here the VSL Synchron stage player is close to outstanding in my view. As for shaping sounds, this is mainly down to the recorded samples. Of course if you’re using HALion or other basic libraries which do little shaping then it’s pretty difficult, but why make things hard for yourself?

Pitch and rubato are two other things which have quite correctly been mentioned. Advanced libraries have random pitch imperfections built in, so why would we want to waste time creating them with a DAW unless there are very specific requirements? Rubato has been supported in notation software packages for quite a while. OK, the quality is open to debate, but it will continue to improve, particularly when the tools can be applied to selected passages. As two live performances are never alike, why should computer ones be? Where tempo changes are explicitly required, they can of course be marked and exactly calibrated in the score.

On this March Trio thing, we’re talking about a) dynamics and b) patch change if we really want to completely change an attack. Dorico’s Note Length feature could easily be programmed to use a particular switch for notes of a certain length. Assuming you really want to use a specific cresc/dim combination for different notes (which to me sounds rather crude) then you can make it in one and then duplicate it in the automation lane. However, you can’t do all the parts at once and Note Length programming cannot yet be used for selected sections only. So yes, of course there are many things which can still be more flexible and faster with a DAW even within my sort of composing environment-- I doubt anyone disputes that!

I would agree that the majority of scores created by composers don’t sound that great. In general this is a mixture of a lack of expertise on the composer’s part in how to use the libraries, or simply using ones which can’t by their nature sound decent in the first place. However, we sometimes forget that only first class musicians can really make music come to life as well – others can be struggling to play the piece at all.

No, we’re not at odds on initial workflow choices.

When doing through-composed music, I do 99% of the composition in a Score Editor as well. Which one I fire up first depends on the client’s, or collaborative group’s, demands of course.

More often than not, I don’t even need a high quality mock-up. I’ll get the score playing the best I can in what comes natively with the target user’s system (typically using what comes with Dorico/Sibelius/Finale/MuseScore/Etc., in the case of Dorico perhaps with some of my own HALion SE vstsound additions that the user can just double click to install real quickly). It’s going to a conductor, or to an individual playing musician, who will decide how s/he wants to interpret and play it.

It’s pretty rare I pull it into the DAW until the score is done (or mostly done), and then only if extra/intensive sound shaping that I can’t do natively and efficiently in the initial workflow is required. In that case, I just save the slate of sounds required in each plugin, export the MIDI tracks, pull it all into the DAW, and get to work ‘sound shaping’.

As for those libraries with many gigs of ‘sample choices’…no, those don’t really cut it for me in any out of the box ‘automated’ form yet. I’ve looked into building some of my own expression maps for nice libraries, and there are just too many variations required across different scores/tempos/etc. to have a single plug and play map. No scoring package on the market sends the information required to the plugins yet (time signature, tempo, key signature, etc.) to script up a library that can decide on its own to shift tuning schemes on a key change, or whether it should pick martelé or sautillé based on the marks on the score…and it takes all kinds of time to poke them into a score manually, or sort out the logic in expression maps on a score by score basis to get it right…force overrides…etc. Spreading it out in a tracking DAW, in contrast, makes it pretty simple…and there is visual order to the game plan. And don’t get me started on the nightmare of dealing with legato phrases via expression maps.

Interestingly enough, HALion 6 has the capabilities to build some very smart instruments that could get a lot of this correct on the fly, and in real time…but the signals the plugin needs to get that information aren’t hooked up yet in any premier scoring package I know of.

With the monster libraries that have a fresh sample for every attack and release style, one still has to relentlessly audition all the options, and manually plug them into expression maps. Few of them are the correct length for the context, and they don’t include a standard for time stretching/shrinking them in real time. They STILL require tweaks (often even outright resampling them, and pasting the results into the mix on an audio track). They are still tempo/style dependent. They still need micro slides on the time-line to fit the chosen virtual room and seating arrangement (plus to offset cancellation effects inherent in sampled music and loudspeakers), and often even require quite a bit of custom tuning throughout the passages. They still require user intervention for sound staging and ‘mixing’.
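Those micro slides to fit the virtual seating arrangement are mostly just physics: the pre-delay for each section is distance divided by the speed of sound. A minimal sketch (the distances are examples, not prescriptions):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 °C

def section_delay_ms(distance_m):
    """Pre-delay for an instrument seated `distance_m` from the virtual
    listening position, so timing matches the chosen room and seating."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

# Back-row percussion ~12 m away arrives ~35 ms after the downbeat,
# while front-desk strings at ~3 m arrive in under 9 ms.
assert 34 < section_delay_ms(12) < 36
```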

Cause and effect variations relative to a group of performing musicians aren’t just ‘random’. There are physiological, and physical reasons for them. Instruments respond to temperature, pressure, etc…and musicians adjust as they play. Different instruments have specific tuning characteristics that are important to address in a high quality mock-up.

Most of these things ARE POSSIBLE to achieve in Dorico, but with a lot more time consuming manual labor at this time. It’s labor intensive in the tracking DAW as well; however, the DAW has tools to automate redundant sound shaping tasks, and build up a personal library of macros/presets/etc., to brush up and reshape every aspect of the virtual ‘performance’…at almost any level, from the point a sound is generated to the point it hits the speakers…all spread out over as many monitors as you can afford to plug into the PC. In the DAW, we stop being a composer/musician, and shift into the role of the audio engineer. That’s the point where, at this time, it’s still a major time saver to move the project into sound engineering software.

Actually, this paragraph does nicely illustrate what we’re still lacking in notation software. I guess that I have found it worth spending the time on Expression Maps with the aim of learning how best to utilize what we have and you already know many of the tricks for doing it in a DAW. But I really can’t say I’ve found anything seriously missing with the string quintet I posted elsewhere for instance which requires a DAW. No, not even the question of legato phrasing which is a real, though imo much exaggerated issue.

I don’t really think we do really have differing views in any radical way – simply differing areas of experience and perhaps also differing aesthetics and tastes.

It’s enough to make you want to go out and hire an orchestra!

This gets me back to the other parallel discussion I first raised in this thread re: Note Performer. What I’m finding is that after trying to program my “better” virtual instruments in Dorico, I get bogged down in the technical part of building appropriate expression maps, programming appropriate sound choices, etc. and it just feels easier to do it in a DAW to get the expression and realism I need (as Brian has also suggested). The only exception to this I’ve found is to simply use NotePerformer in Dorico for playback which - while sacrificing some depth vs. the deeper libraries, etc. - offers the advantage of effective, expressive interpretation of the score automatically based upon the notated articulations and dynamics.

So will it ever be fully “worth it” to extensively program all one’s virtual instruments in Dorico? Is it worth going down the “rabbit hole”? Or should Dorico perhaps be trying to find a way to expand the NotePerformer concept instead, somehow? (licensing Arne’s technology but building richer “built-in” libraries, expanding to other musical styles like jazz more effectively, etc.?) Again just curious what people think.
Thanks -

  • D.D.

it often seems to come down to whether one wants to invest more time in learning how to programme a VST or in becoming an expert in the many features which a DAW can offer. With a VST, much time will inevitably be spent learning and trying out all the available articulations to see which are the most valuable and how they work together. Until this is done, there’s no point in even trying to write Expression Maps. I can’t see how a DAW can be a substitute for properly learning your instrument but then I’ve already admitted that I don’t know by far all the features with modern DAWs.

NotePerformer, which was of course the original subject of this thread, is indeed a possible model for the future but currently has two major weaknesses – the read-ahead, which precludes even step time input being used properly, among other things, and the quality of the sound, which is generally too crude for chamber music (which is irrelevant for those who don’t write any). Of course there’s nothing to suggest that both issues cannot be overcome with the onward march of technology. NotePerformer brings a kind of vitality which was often missing in the sample world, and the arguments that sampled libraries are simply too sterile to work without a lot of manipulation still have some merit. Only some, though, as leaders like VSL have ever more scripting and “intelligence” built in. The most prominent all-in-one competitor these days, the Spitfire BBC Symphony, prides itself on the organic feel of its output, and yet NotePerformer still seems unrivalled for biting rhythmical drive in certain contexts.

Ideally we all want to just type standard instructions into the score and let it then just get on with finding the correct patches or modelled algorithms to produce human-like output. Until this state is reached, which could be decades away, we simply have to decide which compromises are acceptable for the kind of music we’re trying to produce.

This has led me over time to lean further and further on modeled instruments and highly expressive controllers (wind controllers). The burden of dealing with massive systemic programming is, in the end, not worth it for me, particularly when I can “play” a modeled instrument with a small set of switched articulations and get far more human results. If Dorico would allow recording live performance data for existing passages, that would make my workflow much easier and eliminate the need for a DAW. Example:

  1. Write a passage in Dorico Write mode
  2. Switch to Play mode
  3. Enable recording for 1 or more CCs of a given track and record live performance with breath controller
  4. Bonus: enable recording for note start/end bound to a single key (rhythmic values only, no pitch data) and allow tapping in a human performance rhythm without changing pitch and notation.
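Step 4 could look something like this as a data transformation: keep the notated pitches, replace their playback timing with tapped-in onsets (the function name and data layout are hypothetical illustrations, not any Dorico API):

```python
def apply_tapped_rhythm(notated_pitches, tapped_onsets):
    """Marry a tapped-in human rhythm to existing notated pitches:
    pitch and notation stay untouched, only playback timing changes."""
    if len(notated_pitches) != len(tapped_onsets):
        raise ValueError("need exactly one tap per notated note")
    return list(zip(tapped_onsets, notated_pitches))

# Three notated notes, three slightly 'human' tap times (seconds):
performance = apply_tapped_rhythm([60, 62, 64], [0.01, 0.48, 1.03])
```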

As I continue my post-3.5 efforts to finally implement VSL in Dorico, I find myself listening to the results with satisfaction at what I have accomplished so far. But the effort is daunting, and the results, though the quality of the instrument sounds is better than NotePerformer’s, are in some ways a step back.

I simply have to compose in notation, but I’m beginning to consider whether the best solution for me might be to have Dorico with NotePerformer for experimentation and composition on my left display and Cubase on my right for a better audition. Notes on the left, play it in on the right. Make a change on the left, make a change on the right. This way I’m not fighting Dorico to make it a DAW, and I’m not lost compositionally in Cubase’s piano roll environment.

Many others have expressed the strong desire for more integration between Dorico and Cubase. To me the idea of transferring MIDI or XML files assumes one is the kind of composer who hears everything and just notates it. That’s not me. I have to try an initial idea in notation, then audition it, then make changes because what I hear is not quite what I want. It’s a constant, repetitive cycle for me.

Perhaps for me the solution is not export and import; it’s a simultaneous process with both Dorico and Cubase at the same time, using each for its strengths.

I have no idea if this would work, but I’m thinking about it. I left Cubase behind with version 9.5, but 10 introduced high-res support, and since I now have two 4k displays that are identical (I didn’t have a single one three months ago) I might want to upgrade if I decide to try it out.

Comments appreciated.

It would be nice if one could retain the best of both worlds by having a DAW (hopefully not just Cubase but Logic as well :slight_smile:) and Dorico directly “talk” to each other via some sort of enhanced, super-charged, new “rewire” protocol, etc. but where changing something in one program affects the other in some way (and I know another thread has also mentioned the idea of a Dorico “plug-in” for notation). This way those of us used to micro-tweaking a mock-up in a DAW can continue to do so, but connect it to Dorico for notation. If this was offered I might even consider switching to Cubase (even as a long-time Logic user) if needed :slight_smile:

Barring this, I DO think that NotePerformer feels somehow like the “future” of notation playback if there were some way to improve the depth of sound playback and the diversity of algorithms beyond more classically-oriented scores. It’s just so “idiot-proof” to use and is such an excellent trade-off vs. the time I’m finding it’s taking to try and program Dorico with my other virtual instruments. I’d be curious if Dorico might have plans down the line to integrate some sort of automatic expression with an enhanced built-in library similar to what Note Performer has done, since many who use notation don’t necessarily have the computer-programmer mindset (or patience!) to tweak our libraries along the lines of what’s lately been suggested, Expression Map-wise, etc., which requires some serious “down in the weeds” thinking.

As far as the NotePerformer “look ahead” feature making it hard to do step or realtime MIDI input, maybe there would be some way to temporarily turn off the “look ahead score interpretation” when you’re recording/entering something in, similar to the “Low Latency Mode” button in Logic Pro (where invoking it turns off all plugins, reducing system latency when you record but - of course - also temporarily bypassing the overall sound of your mix until you turn it off again).

  • D.D.

Once these signals are connected, more instruments will come about that can better sort options automatically.

  1. Time Signature
  2. Key Signature
  3. Tempo/bpm
  4. Anything else helpful that the VST protocol has to offer.

Note Performer won’t need the delay anymore to attempt to ‘calculate’ these things on its own.

HALion, and other sound engines with scripting abilities can use that data to make more, and better choices out of the box.


For a bowed string player…sometimes we want a dot over a note to mean staccato, sometimes it should mean martelé, and so on. In general, tempo is what a musician uses to choose what that dot means. Often the time signature can play a role in the better articulation for that dot. Sometimes velocity or level of aggression comes into play. If the dot also lives under a slur it can give yet more hints, etc. Expression maps can also stack even more clues to help the plugin’s scripts make smarter choices, through attaching events to instructions like Dolce, Rubato, and so forth.
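As a toy illustration of that decision tree, a plugin script might look something like this. Every threshold and mapping here is invented for illustration; a real library would tune these against its own samples:

```python
def interpret_dot(tempo_bpm, under_slur=False, velocity=80):
    """Toy version of the decision a string player (or a plugin script)
    makes for a staccato dot. All thresholds are invented examples."""
    if under_slur:
        return "portato"      # dot under a slur usually implies portato
    if tempo_bpm >= 120 and velocity >= 100:
        return "martele"      # fast and aggressive: biting martelé
    if tempo_bpm >= 120:
        return "spiccato"     # fast but lighter: off the string
    return "staccato"         # moderate tempo: plain short stroke

assert interpret_dot(140, velocity=110) == "martele"
```

Feed it the time signature, slur context, and dynamic as extra parameters and you have exactly the kind of “smarter choice” the expression map clues are meant to enable.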

The same goes for using alternate tuning schemes. If one wants to use just tuning, then the plugin needs to know the key signature so it can make adjustments if that changes. Yes, there are ‘work-arounds’ using CCs and whatnot to convince a plugin to bounce to a channel that has the proper tuning set and ready…but the way expression maps work, we end up having to duplicate a LOT of entries in our expression maps, each with that one extra node (a good XML editor can help automate some of this, but even so, it’s a lot of work).
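Rather than hand-duplicating those entries in an XML editor, the combinatorial expansion can be generated. The dict layout below is invented purely for illustration; it is not Dorico’s actual expression map schema:

```python
ARTICULATIONS = ["legato", "staccato", "pizzicato"]
TUNING_CHANNELS = {"C major": 1, "G major": 2, "D major": 3}  # pre-tuned channels

def build_entries():
    """Generate the duplicated expression-map entries described above:
    one copy of every articulation per pre-tuned channel."""
    return [{"name": f"{art} ({key})", "articulation": art, "channel": ch}
            for key, ch in TUNING_CHANNELS.items()
            for art in ARTICULATIONS]

entries = build_entries()
# 3 articulations x 3 tunings -> 9 entries instead of 3, which is
# exactly the duplication burden being complained about.
```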


Once those VST pins are wired, and Dorico keeps that information updated in all the plugins as a score plays, library developers can use the scripting abilities of their plugins, and get to work sorting and building intelligent expression maps and marry those to scripts in the plugin.

It’ll take time though! Making scripts to rough in the choices might be relatively simple for someone who is quite intimate with the library and knows the samples and parameters provided inside and out, but running lots of scores through it, for various styles/tempos/grooves, and tweaking it out will take loads of time. Note Performer has been working on theirs for many, many years now. It didn’t just happen overnight.

A DAW doesn’t negate the need to become very familiar with your plugins, at every level possible. Especially if you plan to invest in the more expensive higher end libraries. This is something you’ll need to do regardless of the host you plan to use them in. The DAW just makes it easier to get at all your plugins’ bells and whistles, and to SEE what you’re doing…stretched out before you on the timeline.

Keep in mind that you can do a lot of these things in Dorico by using multiple staves, channel bouncing around in plugin instances, and so forth, but it doesn’t look much like a score anymore…it gets cluttered FAST (a tracking DAW can get cluttered as well, but it has features to easily show/hide, swap focus, tuck things out of the way…shift things to folders/groups/etc…assign different colors to anything on the screen…store and swap between many visual or page modes at any time…spread it out over multiple screens, etc.).

The main advantage to working with the tracking DAW in the final stages of doing a realistic mock-up is that you can lay it all out in a way that makes visual sense. You aren’t ‘stuck’ in a single workflow either…you always have several ways to attack an issue. Think of it like the play tab in Dorico, but many times more powerful, and much easier to control with pin-point precision. It’s going to have all the tools to do micro bumps and slides on the timeline down to the millisecond if you need that. If not, then you get nice configurable timeline grids to ‘snap’ events up against. You can work in meters/measures/etc., or you can shift over to a time code (both come in handy).
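That ‘snap’ behavior reduces to a one-liner. The `strength` parameter below models the partial quantize most DAWs offer; the names and the 125 ms default grid are my own inventions for the sketch:

```python
def snap_to_grid(time_ms, grid_ms=125.0, strength=1.0):
    """Move an event toward the nearest grid line. strength=1.0 is a
    hard snap; smaller values give the partial quantize a DAW offers."""
    target = round(time_ms / grid_ms) * grid_ms
    return time_ms + (target - time_ms) * strength

assert snap_to_grid(130.0) == 125.0            # hard snap
assert snap_to_grid(130.0, strength=0.5) == 127.5  # halfway there
```

Micro bumps are the same arithmetic without the rounding: just add a signed millisecond offset.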

You can start your score playing with the plugin open and shape up your downbow with the ADSR controls, key-switches, etc…provided in your VST plugin. Go to another channel in the plugin and shape up your sustained sound, etc…

In a serious mockup, it’s not unusual at all to have two or three instances with nothing but choices for the first violinist’s long bowing options! Sometimes they’re even based on the same sample, but you might do things with the attack/sustain…or simply use a comb filter to excite some frequencies to emulate a little more bow pressure for a given note, and so forth.

Changing sounds in the tracking DAW is as simple as snipping the part with a pair of scissors and dragging it to a track pointing to the proper plugin and channel. Rather than trying to teach your expression maps to send massive slates of events to ‘drive’ it all on a single channel (i.e., before Dorico would let us channel bounce, my Garritan GPO expression maps for Dorico were sending something like 12 CCs, plus a key-switch, for EVERY NOTE it played! Same for some of the HALion stuff I was working on before we got channel bounces)…you can spread it all out, and try things in real time as the score is playing.

You can tweak the actual dials and sliders in your Plugin directly…with your mouse, or even via remote control on your MIDI controller.

You can keep up with many variations as you go…muting things in and out to see which works best for your ultimate mix. If sections repeat in your piece, you might even cycle the variations so it’s a little different on each repeat.

We haven’t even gotten into expression controller data yet…but again, you can spread it all out in front of you, at whatever resolution you want visually, and take very fine and efficient control of all this ‘data’. Again, conditional editors, or scripting tricks can help you automate redundant tasks.

Please understand. I’m not suggesting that every composer should become an audio engineer. Spending too much time trying to master the process of making mock-ups…well, you’re getting less music written then!

Which brings us back full circle to Note Performer.

It’s perfect to sit down and compose a piece of music from beginning to end. It’s good enough to test ideas, collaborate, and show off your skills as a COMPOSER/ARRANGER. Dorico will spit out beautiful scores and parts for printing, or electronic displays.

If one has time, and ENJOYS the processes involved, then pick up a DAW, a nice library or two, and learn a little as you go. If not…GET HELP. You can hire people to do mock-ups (or record real performances with the real instruments)…and there are also students and hobbyists out there that’ll help you for FREE. Universities, community groups, even local music stores/studios can usually help.

At this time…IF you have already invested in some higher end sample libraries, and don’t already have a DAW, I highly recommend picking one up. Even if you go for a scaled down version like Cubase Elements or Artist. Load your fancy library in the thing, and just play with it. Load some scores you are familiar with into the DAW, and spend a little time auditioning your library’s potential.

Why? Even if you intend to work the library into Dorico…using expression maps and whatnot…having it spread out in a tracking DAW should help you get to know the library much better. It can give you many good ideas on how to lay out a plan to get the types of sounds you require at your fingertips, and teach your staves to happily bounce around among the many options available. It can help you get a visual in your imagination to go with all those ‘numbers/values/parameters/dials/etc.’ It’ll also help you find the annoying flaws inherent in the library…so you’ll know better what to avoid, or how to work around those flaws.

Excellent post, Brian. So - given the detailed, numerous advantages you’ve rightly outlined that DAWs have to allow us to micro-tweak mockups, etc. and milk maximum expression, do you think it’s more realistic to try and get Dorico to adopt more of this sort of functionality - or, instead, for them to find a way to achieve it using a (currently non-existent) “supercharged” Note Performer approach, where a lot of the fussiness is taken care of automatically, but perhaps with better built-in samples “tuned” not just to classical but other styles of music, etc.? My instincts are that it’s too much trouble to try and reproduce what I do in a DAW in Dorico as things now stand - I’d rather stick to Logic, export MIDI (and maybe sync to Dorico a bounced Logic audio track from the same session, attached to video) and use Note Performer/Dorico to check my work as I notate the score afterwards. But I COULD envision Dorico “running” with Note Performer to offer something much easier to use, with great expression, but without the intense programming currently required to achieve these results…

  • D.D.

My approach?

When time, energy, and interest allows…

I work with Dorico’s sound engine, and base most of my learning around the HALion 6 engine, using the content that comes with Dorico (or supplementary vstsound archives that I have the right to share). I’m always learning something new and building things that are useful for me and save some time down the road. In each project I find some awe-inspiring abilities in the software, and I also run up against some rather limiting brick walls. I do believe this technology has a place, and it needs to evolve and improve. Composers/arrangers and students of music need and deserve environments where they can focus on the higher levels of music making, and not be bogged down with the daunting challenges of audio engineering.

For those who don’t know it…that HALion engine is really, really nice! The thing that was holding it back in the past was a platform to easily release libraries for a ‘free player’. Since HALion 6 and HALion Sonic 3, the world now has a free player available, and HALion builders can easily make libraries for it that’ll work in pretty much any 64-bit host on the planet.

It’s worth it to me to invest in the Dorico interpretation system, to be learning it, and participating in the communities that help shape its future. Having said that, to me, it’s not good enough yet to replace linear style tracking DAW sessions, and I think many of our old school techniques will still be around for a few years yet.

Sadly, I don’t share much of it (HALion instruments and expression maps) with others, simply because while I can make scripts and instrument programs relatively quickly, it can take many hours to document how and why something works, and when/where/how to implement it. They also really NEED to be put on a scope and properly balanced so they’ll mix and blend with everything else, and I don’t have time to do that for hundreds of sounds in a ‘shareworthy’ format. So for the time being, I tweak things to go with each score, adding one sound at a time to my personal library, so my initial setup when starting a project is super simple…as in almost General MIDI kind of simple. I build it up as the score demands.

In short…I understand what I’ve done, but explaining it to a random user can take more time than it took to build the thing.

When I hit a brick wall, I communicate the challenges to someone on the Dorico and HALion teams, or here on the forums. User feedback is important to that team as they assess challenges, study possible solutions, and sort out when and how to prioritize the resources to design it, code it up, test it, and document it.

This is how I’ve managed to form some opinions thus far. By digging in and experimenting.

From what I can see thus far, I personally believe that anyone serious about making commercial-grade mock-ups will ultimately save time and money doing that phase of the project in software that is specifically built for shaping audio. I can see Dorico eventually becoming a true one-stop shop for superb mockups. It’s going to take a few years though. Some on Dorico’s side, and much, much more on the part of instrument makers.

The composition and arranging stages…for people that don’t want, nor care to fiddle with all this playback stuff…again, I think technology like Note Performer is already a good investment, and that technology will improve with time as well.

For me personally, a good stage to start with on Dorico’s end would be having Dorico at least be able to sync via MTC timecode as master and/or slave, plus support for midiloop files. A digital ASIO bridge of some sort for getting audio from Dorico into the DAW would be nice as well (meanwhile we can keep using patch cords…real or virtual)…but at this point I’d settle for nothing more than the ability to send timecode from the transport. I already use things like jack3 and virtual MIDI ports to get different apps into a common sound matrix…but we’re still missing a way to get Dorico synced with anything!

I’m not sure what industry DAWs support the midiLoop format, but in Cubase/Nuendo, it’s similar to a midi file, but with a big plus. It keeps up with the plugins that were used, and their state when you saved the file. It was meant to be a librarian feature of the Steinberg Media Library. Just click it and audition things instantly…it’s really nice, it plays/sounds exactly as it did when you saved it, using the same plugin(s)…without having to set everything up in the DAW first. These files can be as simple as one track/channel, or they can be many tracks/channels. In Cubase world, you can easily save any selected instrument tracks into one of these instantly audition-able midiloops.
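The idea the midiloop format captures — MIDI data bundled together with the plugin that rendered it and that plugin’s saved state — can be modeled conceptually in a few lines. This is purely my own sketch of the concept; the real format is Steinberg’s, and every field name here is hypothetical:

```python
# Conceptual model of a "midiloop"-style bundle: notes plus the
# instrument and its state, so the file auditions exactly as saved.
# This is NOT Steinberg's actual format -- names are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class MidiLoop:
    events: list        # e.g. (tick, "note_on", pitch, velocity) tuples
    plugin_id: str      # which instrument plugin rendered the loop
    plugin_state: str   # serialized preset/state, so playback matches
    tracks: int = 1     # can be one channel or many

loop = MidiLoop(
    events=[(0, "note_on", 60, 96), (480, "note_off", 60, 0)],
    plugin_id="HALion Sonic SE",
    plugin_state="base64:...",  # placeholder for the saved plugin state
)

# Save and reload: auditioning restores both the notes and the sound,
# without having to rebuild the instrument setup in the DAW first.
blob = json.dumps(asdict(loop))
restored = MidiLoop(**json.loads(blob))
print(restored.plugin_id)
```

The key contrast with a plain MIDI file is that `plugin_id` and `plugin_state` travel with the notes — which is why such a file plays back exactly as it did when saved.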

Also, the main thing that prevents me from doing more score work in Cubase initially and porting that into Dorico via XML or MIDI is percussion parts. Even if Cubase beefed up the XML export considerably for Dorico’s sake, the percussion staves would still most likely have to be redone back in Dorico. If you go XML, it might look pretty close, but won’t play back anywhere close to what it should. Go the MIDI route, and it ‘might’ sound right, but look a total mess. Go two tracks…one with XML on a drum stave…and another on a piano track for playback (endpoint changed to your drum kit). Well, any way we slice it, it’s a mess to sort out and clean up.

Cubase is great for anything having to do with percussion. It’s powerful, and easy/fast to use. Sadly, the pro scoring applications almost never import percussion parts properly from tracking DAWs, and they also make it difficult to just import and keep a separate ‘playback track’ alongside a dumb/silent visual stave. Building percussion staves is a royal pain in Dorico, Sibelius, and Finale, all three! I do wish some of them would take a look at Cubase in this respect and get some ideas…such as: there’s no reason your UI has to be so small and clunky! Sometimes a simple spreadsheet/graph all laid out in front of you, with fields for your kit pieces and scoring parameters, is MUCH easier to work with. Pop-ups and sliders that only show tiny bits of information at a time kind of defeat the purpose of having a PC…we’re long past the age of working with a 50x10 LCD and 6 buttons. Show the WHOLE DRUM KIT in a single spreadsheet-like format and let us go in and tweak the fields from the same vantage point, more like Cubase! If we could do that, we could quickly ‘fix’ bad imports from other apps, and never have to go moving notes around with the mouse, screwing things up even worse.

If Dorico could save and load these ‘midiloop’ files, keeping the instrument setup intact…it’d be a HUGE time saver. It’d make exporting to a DAW that supports it a breeze! A few clicks later, and you have an identical-sounding setup in your DAW to what you last had in Dorico!

For importing them…even if they went into a sidecar of some sort that doesn’t even try to display the staves/notes…but simply plays them back, it’d be super useful in terms of producing a mock-up and distributing it with the score in a load-and-play format. Later there could be routines to optionally attempt to lift that data and transcribe it onto score players/staves.

I.e., if each stave had an ‘optional static MIDI lane’ where we could toggle between what Dorico is generating, and play back something we’ve imported in its place. Such a lane could also be used to ‘freeze’ Dorico’s interpretation into a static version, or for recording a real-time performance to replace the one we ‘see on the score’ (but only in terms of playback).

I’m thinking…a sidecar into which we can import things for playback that won’t be seen on the score (unless we specifically ask Dorico to attempt to transcribe them onto a player/stave).

Eventually, each stave could have at least one of these optional import/export/playback lanes, and a switch to toggle between what lives on said lane and Dorico’s own play tab elements that it uses for its ‘live generated’ interpretation.

Heck, as a ‘starting point’ I’d even settle for a lane that only takes MIDI type 0…that would allow distributing an alternate playback ‘mixdown’ along with the score itself. Have an icon somewhere that can toggle between soloing the type 0 mix, what Dorico generates, or both mixed together. (A big plus if this lane can pull in midiloops, with their plugins intact and ready to go.)

It might even be that something like these midiloop files could be kept in a bridging sidecar app of some sort, which both Dorico and a partner DAW can manipulate together. I.e., it shows up in Cubase like any other track. In Dorico, simply hit a ‘freeze’ key, and you get the translation popping up in the DAW. Edit it in the DAW, hit freeze, and it gets updated in Dorico.

All that leads to the next stage…which is a combination of simply making Dorico’s play/editing and scripting features so good that all one will ever need is to sync it to a DAW for multi-media sessions, and to continue improving the import/exporting abilities.

If we users continue to experiment, and share our sound engine experiences, then WE can contribute to shaping the architecture and destiny of Dorico development. The team only has so many people, but some of them are REALLY GOOD at qualitative and quantitative research. They’re listening to us, and building bridges both in theory and in practice towards industry needs and demands. That’s the main reason I work with Dorico’s built-in features when I can. I have some idea of the powers and limitations, and for the moment…if a deadline is involved, I compose/print in Dorico, and worry about playback in Cubase if/when it’s needed.

Brian, every time you post I learn something. Many times there are parts that only resonate in a cloud-like way given my experience, but I always get a better sense of how things fit together.

Percussion remains a challenge for every notation program I have tried. I shall have to explore my copy of Cubase to give myself a better understanding for what you are lobbying to influence Dorico in that area.

At any rate, thank you for continuing to post such informative summaries of your experience.

The drum mapping elements of Cubase in particular.

For the most part, it’s just a big spreadsheet that shows all 128 possible keys of a drum kit down a list (it can actually be more than 128, with kit pieces split by velocity ranges, or pointing to multiple channels/plugins). Of course you can change the order of the fields, name them anything you like, etc.

Across the spreadsheet are all sorts of simple values you can control, such as:

  • The name of the kit piece
  • The line or space you want it drawn on when using the score editor, and the shape of the note-head to use
  • The channel it should broadcast over
  • Which plugin to use (when importing a map from something like Groove Agent, you’re limited to a single plugin or MIDI port per map, BUT…with a simple little XML tweak to your drum map, you can force it to accept ANY plugin or MIDI port, so kits that use multiple plugins/ports on a single stave/track are indeed possible)
  • …and a whole lot more.
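To make the ‘spreadsheet’ idea concrete, here is a small sketch of a drum map as plain rows, with each kit piece carrying its display position, note-head, and output routing. The field names and plugin names are my own illustration, not Cubase’s actual drum map schema:

```python
# Hypothetical drum map as rows: one entry per kit piece, each with
# its own display info and output routing. Field/plugin names are
# illustrative only.

drum_map = [
    {"name": "Kick",   "note": 36, "line": -1, "head": "normal", "ch": 10, "out": "Groove Agent"},
    {"name": "Snare",  "note": 38, "line": 1,  "head": "normal", "ch": 10, "out": "Groove Agent"},
    {"name": "Hi-Hat", "note": 42, "line": 5,  "head": "x",      "ch": 1,  "out": "OtherPlugin"},
]

def route(note):
    """Look up where an incoming MIDI note is drawn and sent."""
    for row in drum_map:
        if row["note"] == note:
            return row["out"], row["ch"], row["head"]
    return None

print(route(42))
# → ('OtherPlugin', 1, 'x')
```

Note how the hi-hat row points at a different plugin and channel from the kick and snare — that’s the multi-plugin, multi-port kit the XML tweak makes possible on a single stave/track.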

Cubase sets these maps up as a track insert…and you can instantly apply or remove it from any track/stave you like.

Combine that with the diamond-based Drum Editor, and it’s pretty much MIDI in…properly mapped drum kit out!
Essentially all the drum editor does is treat each MIDI note kind of like a track of its own (in a visual sense).
The diamonds are better suited for seeing what’s going on on a trap set, or in a percussion section.
It has modes for showing duration, but since that’s not necessary for most types of percussion instruments, the main mode is all about easily inserting and dragging things to the precise spot on the timeline you’d like them to live. You get a lot of options for coloring things…as in, color by velocity, by instrument, and so on.

One can literally build an entire drum kit or percussion section, and start throwing in rudiments (sticking and velocities considered) in a matter of minutes. Just plug in the bits as you need them…no matter what plugin, preset, channel, or individual note/key the instrument lives in.

Again, those logical editors are a big time saver when dealing with the subtle velocity-inspired chinks and pings of overhead cymbals and the like. It’s the sort of thing that can drive one batty trying to do it one note at a time, through an entire through-composed piece.
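The shape of such a logical-editor pass — select notes by condition, then act on them — can be sketched like so. This is my own illustration of the idea, not the actual Project Logical Editor; here the condition is ‘pitch is a ride cymbal’ and the action is a small random velocity nudge:

```python
# Illustrative "logical editor" pass: select notes by condition
# (pitch membership) and nudge their velocities, instead of editing
# one note at a time through a whole piece.
import random

def humanize_velocities(notes, pitches, jitter=8, seed=42):
    """Randomly vary velocity for notes whose pitch is in `pitches`."""
    rng = random.Random(seed)  # seeded so the result is repeatable
    out = []
    for tick, pitch, vel in notes:
        if pitch in pitches:
            vel = max(1, min(127, vel + rng.randint(-jitter, jitter)))
        out.append((tick, pitch, vel))
    return out

RIDE = 51  # General MIDI ride cymbal key
notes = [(i * 120, RIDE, 80) for i in range(8)]  # flat, robotic line
for tick, pitch, vel in humanize_velocities(notes, {RIDE}):
    print(tick, vel)
```

Eight identical velocity-80 hits go in; eight subtly varied ones come out, while any note outside the selected pitch set would pass through untouched.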

I don’t really expect a notation program to have a diamond editor and all, but what I’d love to see carried over is how easy it is to design/tweak a drum kit, assign it to a stave, set up the note shapes, and control the endpoint (plugin/MIDI) of each individual kit piece.

The little dialog with a few pop-ups and scroll bars is truly annoying. I could say similar things about the expression map, technique, and glyph editors as well (put more options in the same window…spread it out more…currently it’s as annoying as those web sites that make you click ‘next page’ for 30 minutes just to get at less than a paragraph of actual information), but that’s for another time and thread.

Many thanks.

This comment invites another side discussion about whether this is not perhaps taking things a bit too far. There is music, nowadays in particular, where the actual sound is more important than the music. Indeed we could start on how do you define sound v music but I’ve been there and don’t intend to revisit! I regard a “serious” mockup as no more than one that does enough justice to the music to give full rein to the feelings contained therein. This does not have to mean messing around with every note. What I’d love to hear is an audio clip of a mockup which shows the difference these wonderful DAW features can make as opposed to something only processed in Dorico. Any offers?

What you are really saying, of course, is that enormous sample libraries are a dead end.

Human violinists don’t have a bag of 20 or 30 different playing techniques and “switch” from one to another. Everything is a continuum, except for a very few either/or options like arco and pizz.

Of course violinists and their teachers have invented a lot of different names for points along that continuum, but isolating them into separate sample sets or key switches doesn’t correspond to the way humans play.

What the world needs is fewer samples and more (and better) modelled instruments, IMO. Don’t lose sight of the fact that all this tinkering with huge sample libraries is just a workaround :slight_smile: