Hi Ben (I’m guessing that’s your name). Is there any way we could work for just 5 minutes over TeamViewer to fix my issue with dynamics not working for legato? I don’t know if you’ve read the latest, but it’s not working again. Since your setup sounds very similar to mine, it shouldn’t take long…
Ok, so it’s sort of working, but not correctly. For the cello’s legato articulation, I set it to use xFade instead of CC1 and the dynamics now respond, but why? Why would CC1 not be working as advertised?
I don’t know anything about Studio One, I’m afraid… I did do a quick test in Logic, and CC1 worked there. Using xFade switches the dynamics to Note Velocity instead of CC1, as you may know.
Have you set the Volume Range over to the right?
No, let me clarify. The problem is which setting to use in the SINE player interface. Yes, I tried the volume slider and it makes no difference in this case.
I found this in the OT documentation, which led me to try the xFade option, which I discovered works:
“Most collections have true legato (interval sampling) for all sections and solo instruments. To achieve the highest level of realism and detail, we have recorded every interval up to an octave. This means you will hear the natural transitions from one note to another. Every true legato patch is sampled with one (Soloists) or several dynamic crossfade layers. Capsule uses an Adaptive Legato technology. While you play, the instrument analyses your playing style and adjusts the transitions automatically.”
I have to say I’m still struggling to understand the ‘special Legato’. However, there’s this bit:
If you’re using Note Velocity for dynamics, then can you crescendo through a held note?
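(That’s the key difference: a note’s velocity is fixed the instant the note starts, while CC1 can keep moving while the note sounds. A minimal sketch of the two behaviours, using Python’s mido package; the port name is hypothetical:)

```python
import time
import mido

# Hypothetical port name; list the real ones with mido.get_output_names().
port = mido.open_output('SINE Player')

# Velocity mode: the dynamic is fixed the moment the note starts...
port.send(mido.Message('note_on', note=48, velocity=40))   # starts soft
time.sleep(2.0)                                            # ...and stays soft
port.send(mido.Message('note_off', note=48))

# CC1 mode: ramp the mod wheel while the note is held -> a real crescendo
port.send(mido.Message('note_on', note=48, velocity=80))
for value in range(10, 128, 4):
    port.send(mido.Message('control_change', control=1, value=value))
    time.sleep(0.07)
port.send(mido.Message('note_off', note=48))
```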
No, I think you’re missing the point. Given the way they recorded them, I think you have to use the xFade option.
Crossfading:
Imagine a violin. It has different characteristics when played loudly with heavy bow pressure than it does when played softly.
So, for each key, a library might have several different ‘velocity layers’ for each sample zone.
For strings it’s pretty common to have at least 2 layers (p and mf), but some can have 4 to 8, or even more!
As the instrument plays louder (strike the key harder in velocity mode, or send higher expression/volume CC values in that mode), it plays different layers than it does when played softer.
If crossfading is enabled, then as the zone layers change, one is phased out as the other is phased in. For a string instrument, it probably wouldn’t make much difference in what you hear unless:
A. you were playing rapidly on the SAME NOTE but at a lot of different dynamics.
or
B. You were doing a lot of expressive crescendos/decrescendos and the instrument were set up in a particular way. Typically they’d just alter the volume of the last sample triggered and still sustaining, but in ‘some cases’ an instrument might trigger all the layers on every note-on (with only the needed one at an audible level) and might try to change layers while notes are sustaining. It’s pretty unusual for this on things like strings, winds, etc.
If crossfade on layer change is disabled, then the layers simply change without any crossfading effect. Still, even with crossfade on, consider that it might not change the way it sounds for ‘sustained instruments’, or it ‘might’. It all depends on how the instrument was designed. Multi-layered instruments that crossfade between layers tend to use a lot of voices (though some samplers have ways around this too), so it’s probably not implemented in your run-of-the-mill orchestral instruments.
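To make that concrete, here’s a toy sketch of the gain math a sampler might run when crossfading between dynamic layers from a CC1 value. The layer split points and the equal-power curve are illustrative assumptions, not any particular sampler’s actual algorithm:

```python
import math

# Hypothetical dynamic layers, with the CC1 value at which each is centred.
LAYERS = [('pp', 0), ('mf', 64), ('ff', 127)]

def layer_gains(cc1: int):
    """Return (layer, gain) pairs for a mod-wheel (CC1) value of 0-127.

    Between two adjacent layer centres, an equal-power crossfade phases
    one layer out while the next phases in. With crossfading disabled,
    a sampler would instead pick one layer and switch abruptly at the
    boundary, which is what you hear as a 'gear change'.
    """
    for (lo_name, lo), (hi_name, hi) in zip(LAYERS, LAYERS[1:]):
        if lo <= cc1 <= hi:
            t = (cc1 - lo) / (hi - lo)                     # 0.0 = lower layer, 1.0 = upper
            return [(lo_name, math.cos(t * math.pi / 2)),  # fading out
                    (hi_name, math.sin(t * math.pi / 2))]  # fading in
    return [(LAYERS[-1][0], 1.0)]

print(layer_gains(20))   # mostly 'pp' with a little 'mf' blended in
print(layer_gains(127))  # effectively 'ff' alone
```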
Which is best? Experiment… it also depends on whether a given instrument has internal ‘scripting’ and such that might ignore these settings anyway and handle the layer changes on its own.
Next, consider that some instruments can be configured to independently control the sample layer and attack envelope via velocity, while volume itself is handled by a CC.
E.g., with Garritan GPO4 strings (GPO5’s new orchestral strings are different), I can leave the mod wheel alone and notice attack differences based on how hard or soft I tap the keys, yet the ‘volume’ stays the same. In this case, opcodes are manipulating the attack time in the instrument’s dynamics envelope (and sometimes where a sample begins playing back). If I strike a key really hard… the attack is FAST, and it has a more aggressive punch to it; however, there is little if any change in the ‘volume’.
Now I can start moving the mod wheel while tapping keys. Moving the mod wheel causes the instrument to actually get louder/softer. For instruments that have multiple layers/zones for different dynamics, moving the mod wheel also changes these layers. When my wheel is pushed way up, it plays the ff recorded sample. When way down, it plays the pp recorded layer.
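Conceptually, that decoupling looks something like this (a simplified sketch; the attack curve and layer thresholds are invented, not GPO’s actual opcodes):

```python
def note_on_params(velocity: int, cc1: int):
    """Map velocity to attack character and CC1 to loudness, independently.

    Mimics the GPO4-style split described above: velocity shapes the
    envelope attack (a hard strike = a fast, punchy attack), while the
    mod wheel alone decides the actual volume and dynamic layer.
    All constants here are invented for illustration.
    """
    attack_ms = 200 - (velocity / 127) * 180   # vel 1 -> ~200 ms, vel 127 -> ~20 ms
    gain = cc1 / 127                           # loudness follows the wheel only
    layer = 'ff' if cc1 > 96 else 'mf' if cc1 > 48 else 'pp'
    return attack_ms, gain, layer

print(note_on_params(velocity=120, cc1=30))   # fast, punchy attack, yet quiet ('pp')
print(note_on_params(velocity=30, cc1=120))   # slow, gentle attack, yet loud ('ff')
```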
It can get even more complicated when legato comes into play. With some instruments, you can introduce CC68 (legato pedal) or CC64 (sustain) to force a special mode that crossfades subsequently played notes and optionally introduces ‘portamento’ effects. It’s been a while since I’ve tried to use these things in Dorico, but last time I did, Dorico wouldn’t implement them properly. In Dorico, I just allow it to handle legato its own way (the general MIDI way) by overlapping the notes a bit (there’s a sketch of this below). I do NOT enter anything in the expression maps for legato at all (or use manual kludges). It may be that improvements have since been made to the way expression maps handle legato, but on my last visit, no matter what I tried short of programming my own instruments with special scripts, I could only make it go:
la la la ta (four notes slurred together)
instead of:
ta la la la
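For reference, the note-overlap trick mentioned above is just making each note’s release land slightly after the next note’s attack. Here’s a rough sketch of writing such a file with Python’s mido package; the overlap amount is arbitrary, so tune it to taste:

```python
import mido

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

pitches = [60, 62, 64, 65]   # four notes under one slur
beat, overlap = 480, 60      # quarter-note grid; ~60 ticks of overlap (a guess)

track.append(mido.Message('note_on', note=pitches[0], velocity=80, time=0))
delta_to_next_on = beat
for prev, cur in zip(pitches, pitches[1:]):
    # Start the next note on the beat...
    track.append(mido.Message('note_on', note=cur, velocity=80, time=delta_to_next_on))
    # ...then release the previous note slightly AFTER the new one began,
    # which is what triggers the sampler's legato transition.
    track.append(mido.Message('note_off', note=prev, velocity=0, time=overlap))
    delta_to_next_on = beat - overlap   # later deltas start from that late note_off
track.append(mido.Message('note_off', note=pitches[-1], velocity=0, time=beat - overlap))

mid.save('legato_overlap.mid')
```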
In my DAW however, I use the legato/portamento switches a good bit.
I hope this helps conceptualize things a bit more clearly.
Thanks Brian. Yes, after doing some heavy reading in the following books, it’s all starting to make sense…
- Modern MIDI by Sam McGuire (a little dated, but good so far).
- The Guide to MIDI Orchestration by Paul Gilreath (I bought it knowing there would be substantially dated material towards the end of the second half, but it still has some really good information, especially on orchestration as it relates to MIDI).
- Music Production with Cubase 10 and 10.5 by Pierluigi Bontempi.
- I’ve also bought Darren Jones’s book on Cubase 11 (yes, I’m fully committed to learning Cubase now).
- I’m also returning the much older book Cubase Power (I didn’t research it enough to realize it was super old, Cubase 5 I think), which I bought on amazon.com.
If anyone is interested, the book “Form in Music” by Wallace Berry is particularly impressive to me, as a music major. It has nothing to do with MIDI or technology, but it is the best book I’ve ever read on musical form in general. Straight and to the point, but it covers all the details.
One quirk I’ve noticed with exporting MIDI from Dorico (3.5 Pro) to Cubase (11 Pro) is that the tempo doesn’t come across with it. Also, setting, say, q=90 in Dorico and the bpm in Cubase to 90 results in Cubase playing back much slower than Dorico. More than likely it’s my own ignorance of Cubase/Dorico. I know there is something about Tempo Maps somewhere, so I ought to have a look. In general I like composing the basic idea in Dorico, then doing the VST tweaking, expression maps, FX and so forth in Cubase (especially with the Eric Whitacre Choir, which has a huge constellation of articulations/techniques). Using Dorico I can then produce a really nice score for the resulting masterpiece.
Tempo in Dorico is certainly exported when exporting MIDI files, so I would suggest you check a couple of things: are you perhaps using fixed tempo mode in Dorico, so you’re not hearing the written tempo? Or have you got a sample rate mismatch between the two applications? (Typically the difference between, say, 44.1kHz and 48kHz won’t cause a huge change in tempo, but it would be a noticeable one nonetheless.)
Thanks Daniel. When I get home I’ll follow your advice with some experimental exports. I’d never even thought about sample rates.
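In the meantime, a quick way to verify whether the tempo actually made it into the exported file is to dump its set_tempo events (a minimal sketch using Python’s mido package; the filename is hypothetical):

```python
import mido

mid = mido.MidiFile('export_from_dorico.mid')   # hypothetical filename

# In a type-1 file the tempo usually lives in track 0; scan all tracks to be safe.
for i, track in enumerate(mid.tracks):
    for msg in track:
        if msg.type == 'set_tempo':
            # mido stores tempo as microseconds per quarter note
            print(f'track {i}: {mido.tempo2bpm(msg.tempo):.1f} BPM '
                  f'({msg.tempo} µs per quarter)')
```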
I use Cubase and Dorico together at the same time when composing.
I have the silent playback template selected in Dorico while I have both programs going, so all the sound is coming from Cubase.
I’m a piano player, and the way I compose mostly starts with improvised piano and vocal ideas; I find Cubase is far better for that kind of scenario. I will use Dorico to refine ideas as I go, e.g. to harmonise a particular passage, write a countermelody, whatever… I find that the visual clarity of having something correctly notated to analyse and work from is an important part of the process.
Most of the time I find it faster to manually transcribe the parts directly into Dorico, although I definitely use XML transfer a decent chunk of the time. Some of Cubase 11’s new score editor features make the XML transfer a bit easier, but I still find it clunky, so I’m still more inclined to enter things manually in Dorico.
Most of the music I write is destined to be played by live musicians, who are reading most of the time, and it’s a bit of a time saver having a back-and-forth with Dorico while writing, as most of the musical ideas get notated in the process, ready to be refined into a readable score. At the same time, the played-in Cubase track develops a more live feel, as I play all the parts in rather than notating them. Often I will refine something in Dorico and play it back into Cubase.
So the upshot is, working in this fashion, you get a reasonable-sounding audio representation of the track that musicians can practice along to, plus many of the ideas are prepped in Dorico, ready to be refined.
Anyway, that’s how I find the two useful together.
I must give that a try, thanks for the tip.