Feature request: Articulation latency correction

There are some sample libraries where different articulations have inconsistent amounts of attack-transient latency. This is a necessary evil of keeping the entire attack transient for certain sampled instruments. Cinematic Studio Strings is a prime example: different articulations within the same instrument can range from 0 ms up to 330 ms!!

Many sample libraries have this issue. See the following forum post and Google Sheets spreadsheet from VI-Control, where contributors are providing values for different articulations in various libraries:

https://vi-control.net/community/threads/negative-track-delay-database-spreadsheet.105332/

Typically, when dealing with this issue in DAWs, people put each articulation on a separate track and apply negative track delay… that's one approach.

My feature request would be to be able to specify in the expression map a known latency value for each articulation, and have Dorico shift the start time of each note earlier according to the specified latency for that articulation.

One further detail: the start time of the note needs to be earlier in order to compensate for the slow or late attack, but the note's end time should NOT be made any earlier… it's just the start time of the note.

This is an issue where a lot of people using DAWs for mockups are spending time nudging notes forward manually, which is a PITA… and/or using negative track delays when articulations are isolated on their own tracks, etc. But Dorico could easily handle this particular problem, since it's already possible to humanize the start time of notes. Please add an articulation-by-articulation factor for "starting early".
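As a minimal sketch of the requested behavior (times in milliseconds; the latency table and values below are hypothetical illustrations, not measured from any library):

```python
# Hypothetical per-articulation latency values in milliseconds.
# The numbers are illustrative only, not measured from any library.
ARTICULATION_LATENCY_MS = {
    "staccato": 0,
    "legato": 330,  # e.g. a slow legato transition
}

def schedule_note(start_ms, duration_ms, articulation):
    """Return (note_on_ms, note_off_ms) for playback.

    The NoteOn is pulled earlier by the articulation's known latency
    so the attack transient lands on the beat; the NoteOff keeps its
    original position, so the written release point is unchanged.
    """
    latency = ARTICULATION_LATENCY_MS.get(articulation, 0)
    note_on = start_ms - latency        # compensated start
    note_off = start_ms + duration_ms   # NOT shifted earlier
    return note_on, note_off
```

So a legato note written at 1000 ms with a 500 ms duration would sound its NoteOn at 670 ms, but still release at 1500 ms.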

4 Likes

Yes, Paul intends to add a per-switch offset value to expression maps in a future version, though I can’t say exactly when this will be done.

1 Like

Thanks for getting back to me. Just to be clear, I didn't mean a "per-switch" offset… If I understand you correctly, that would be an offset between each switch and the note it is switching.

What I am asking for is per-articulation: the entire "articulation", with all of its switches and/or channelizing behavior, should have a single offset value for the sounding NoteOn itself, which would mean all the switches would also go earlier along with it.

And as I said above, the matching NoteOff specifically needs to NOT be offset. Just the NoteOn.

You do misunderstand me. In Dorico’s expression maps, each “technique” is typically handled by way of a switch, i.e. in the Base and Add-on Switches section of the Expression Maps dialog. That is the granularity at which we expect to make it possible to provide an offset for the start of each note.

2 Likes

It doesn’t sound like you are understanding me, and I can’t tell for sure whether I am understanding you, but what I am speaking about has NOTHING to do with the actual action switches. It has to do with the actual sounding, musical notes, which need to have negative delay applied to the actual musical NoteOn events, regardless of whether there are one or more (or no) switches applicable, at a per-articulation granularity (which I guess Dorico labels a “playing technique combo”). It doesn’t sound to me like you are talking about the same thing, and you may or may not understand the issue I have tried to explain. EDIT: see the later posts below clarifying that the word “switch” is being used for two different things.

I do understand the issue. You want to be able to tell Dorico to automatically play, say, legato notes a hair earlier than staccato notes within the same patch. These different sounds are triggered by different switches in the expression map. When a particular switch is active, the notes should thus be offset by a specific amount, as specified in the expression map, on a per-switch-type basis.

No sir. When a different playing technique is active, different latency may apply. Different playing techniques may or may not involve actual switches, and if they do, there may be one switch, or a combination of several switches may be required to engage the one playing technique. Switches do not have a 1:1 correspondence to playing techniques.

And if someone is using channelizing to direct notes to articulation sounds, with no actual switches at all, the same issue applies. Each playing technique (or rather each separate sampled sound in the instrument, regardless of how you get to it: with switches, without switches, with one switch or several) may have a different amount of latency than the others.

You say “when a particular switch is active”, but that statement doesn’t make sense to me. I say: when a particular articulation, or particular playing technique, is active, regardless of the switches or non-switches used to get there.

While we’re on this topic, it’s also important to note that some sample libraries have different amounts of latency not only per articulation (or per playing technique), but also at different velocity levels within some articulations. Cinematic Studio Strings is a good example: the legatos have different latency depending on the velocity used (the velocity determines how fast a legato transition to use).

So, actually, a granularity of one latency per articulation may not always be enough, unless you provide, for example, a condition for velocity, so that there is a row in the list for each velocity range of a particular playing technique, each row having its own latency specified.
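As a sketch of that velocity-conditioned lookup (the technique names, velocity ranges, and latency values below are hypothetical, not taken from any library’s documentation):

```python
# Hypothetical rows of (low_velocity, high_velocity, latency_ms)
# per playing technique. The values are illustrative only.
LATENCY_RULES = {
    "legato": [
        (1, 64, 330),     # slow legato transition
        (65, 100, 250),   # medium transition
        (101, 127, 100),  # fast transition
    ],
    "staccato": [
        (1, 127, 0),      # no compensation needed
    ],
}

def latency_for(technique, velocity, default_ms=0):
    """Pick the latency of the row whose velocity range matches the note."""
    for low, high, latency in LATENCY_RULES.get(technique, []):
        if low <= velocity <= high:
            return latency
    return default_ms
```

A note-scheduling routine would consult this table instead of a flat per-articulation value, so two legato notes at different velocities can get different start-time compensation.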

If you combine playing techniques to trigger an articulation, you’ll still have one switch.

No, you won’t. You will have x number of switches defined in the actions list of that “playing technique combo”. Maybe you guys should start by defining what you mean by the word “switch”? Verb or noun?

I have always seen it used to refer to key-switch, CC-switch, PC-switch, etc. A noun: they are designated MIDI events used by instrument plugins to change their sounds from one articulation to the next. But sometimes a combination of several switches is required; there is absolutely no rule that says a single MIDI-event switch will change an instrument to each sound available. VSL, for example, uses X and Y dimensions in their older VI player to isolate each articulation patch, and their newer Synchron player uses even more than two dimensions, which requires even more than two MIDI-event switches to change the player to the desired sound.

I believe that Daniel is referring to switches that affect Dorico playback and you are referring to switches that affect your VST options. Perhaps that is the misunderstanding.

If I am wrong, no doubt Daniel will be able to clarify. I think you have made your concerns clear already.

@Derrek - Thanks for clarifying that…

The switches I have been referring to are configured here:

I think Daniel may have been referring to the following area as switches:

Perhaps that is part of the Dorico vocabulary, and if so, sorry for not understanding you, Daniel! I didn’t realize that each line in the left-hand list is also considered a “switch”.

In any case, back to the discussion at hand: yes, what is needed is latency lookahead per “Dorico switch”. With the caveat that, as I mentioned earlier, there is at least one very popular sample library out there (Cinematic Studio Strings, also Cinematic Studio Brass and Cinematic Studio Woodwinds) which uses velocity not for dynamics but for determining legato speed. In that case the latency varies within the one playback technique combo (one Dorico switch) depending on the note velocity. I guess Dorico would need to be enhanced to allow note velocity as a condition in the conditions list, in order to isolate a different “Dorico switch” for each velocity level and apply a different latency adjustment.

Just saying, but next time you might start by asking yourself whether you understood before assuming that the guy who heads the program has no clue what he’s talking about.

Not to beat a dead horse, but I am just trying to make sure that the communication on this feature request has been accurate. What you are suggesting goes both ways. I stated the distinction about “switches” very clearly several times until we got to the bottom of that communication. I would also like to suggest that Dorico is using confusing language by conflating the term “switch” for several different things. Can we please move on, or do you wish to continue slapping my wrists for trying to communicate clearly?

This is a VERY IMPORTANT feature request and I hope the Dorico team will take it very seriously. Take a look at the various forums where people are dealing with this particular issue and you will understand why it’s important to get right the first time.

And by the way, you are inferring that I think Daniel has no clue what he is talking about. That is your opinion, not mine. Please don’t put words in my mouth; I never said that. Sheesh.

I too hope that per-articulation latency in ms will get implemented at some point. A way to automatically detect the first note of a legato phrase would also be necessary for correct adjustments.

For the time being, are you aware of this script for the Cinematic Studio libraries?

http://alexjevincent.co.uk/css-control-panel/

I have been using my own modified version of that script with Dorico, and it has worked fairly well. I recommend keeping your expression maps as simple as possible and changing velocities for legato speeds by hand. I tried using note-length conditions for automatically selecting the correct legato speed, but that caused problems with CC1 resetting to 64 mid-phrase.

1 Like

One way to handle the first note of a legato phrase is to use a different playback technique for the first note than for the secondary notes. That can be cumbersome to do, though, so I agree it would be a good thing for articulation management systems to somehow isolate the first note from the secondary notes in a legato phrase, and allow us to configure a separate sound slot (cough; er, “Dorico switch”, I mean) in the expression map as we desire, including a different latency correction.
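The “first note vs. secondary notes” distinction can be sketched as a simple classification over slur-joined flags (a hypothetical representation for illustration, not Dorico’s internal model):

```python
def classify_legato(joined_flags):
    """Label each note in a phrase.

    joined_flags: list of booleans, True when the note is joined to
    the previous note by a slur. The first note of each legato phrase
    gets its own label, so it could be routed to a separate sound slot
    (attack sample, little latency), while joined notes get the legato
    transition sample and its larger latency correction.
    """
    return ["legato_continue" if joined else "phrase_start"
            for joined in joined_flags]
```

For example, in a five-note line where notes 2, 3, and 5 are slurred to their predecessors, notes 1 and 4 are labeled as phrase starts and the rest as legato continuations.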

The reason the expression map needs to be more aware of velocity-as-legato-speed is not to actually have it provide the velocity; it is to provide the right amount of latency adjustment based on the user-provided velocity. As I said above, the expression map would need to SEE the velocity that you program into each note, as a condition, and use that to determine exactly which “Dorico switch” is active and which latency to account for.

That said, automatic playback is still a goal with Dorico, I imagine, so another interesting thing would be to have Dorico use the playback technique combo to identify different types of legato and then isolate a different “Dorico switch” for each one. In some libraries those might be different key switches, but in libraries such as CSS it would need to affect the note velocity, so setting the velocity of the sounding note would need to be another thing allowable in the ACTION section. That is an interesting idea, but outside the scope of this request for latency correction.

I am aware of the Kontakt KSP script for CSS, which is also an interesting solution targeted at that one specific library, but I understand a lot of people have had issues with it, and I’m not confident it’s handling NoteOffs correctly, among other things. That’s why you have had to edit it yourself, and it underscores the need for articulation management systems to address this issue directly, without the need for hacked-on scripts.

It very well could work fine for CSS if set up right. I am proposing a more general solution that should help most libraries, including CSS. VSL also has libraries where velocity can be used to influence legato, etc.

On the specific point of CSS, which is the worst offender by far of the libraries I’m familiar with: an update will include a low-latency mode, which should reduce or eliminate the issues there, though I hope it’s not too much at the expense of legato transition quality. The KSP script is mostly OK, but can trip over itself in fast music on occasion, causing a line to get stuck or an articulation change to be mistimed or indeed missed altogether.

The request in general is something which has been taken on board, and Daniel’s use of the term “switch” is perfectly correct and understandable to me within the Dorico terminology, even if it might (perhaps not unreasonably!) confuse some. And yes, with legato in CSS for instance, the different velocities having different delays is an important consideration for Paul when deciding how to implement the offset policy.

CSS’s planned low-latency mode would mainly be for when you are actually recording your parts in. The intention is that during playback you’d turn it off, in order to let the full lovely quality of the library shine through.

These articulation latencies exist for valid reasons; they are not simply the result of poor programming. So yes, you will lose some of the beautiful sound of that library when you turn on low-latency mode.

I have found quite noticeable latency issues with Kirk Hunter’s libraries too, and not consistent ones either. It doesn’t get talked about a lot because, truthfully, hardly anyone seems to use KH anymore… CSS is the flavor of the day. I asked KH himself about this once, and he explained that this is necessary to fully capture the string attack transient. It’s possible to compress or truncate part of the attack to avoid this latency, which is what you might find in, say, a PCM-based synth with string sounds that seem to have no latency. But the fact is those samples have had part of the attack compromised in order to eliminate the latency. If you want fully realistic string sounds, you need the full string attack present, and it will have latency, and the amount of latency will depend a lot on the specific articulation being played. It is in the nature of the actual physical instrument.

Anyway, if you want to track the ongoing discussion about which libraries have which latencies and the various complications, here is the thread, which will continue to develop over time:

https://vi-control.net/community/threads/negative-track-delay-database-spreadsheet.105332/

It also links to a Google Sheets spreadsheet where they are keeping a list of libraries and known articulation latencies. Not that everyone will agree on what those values should be; in many cases they have to be determined by ear.

As a side note, one thing sample library developers COULD do when they develop these kinds of libraries is intentionally program the shorter articulations with longer latency as well, matching the longest-latency articulation in each instrument. That way, something like CSS would always have a consistent latency of 330 ms for every articulation. Some of them would be almost impossible to play live, though: you’d find yourself hitting a key and waiting 330 ms to hear sound for some articulations, which would be really annoying and would probably cost sales.

So there isn’t really an easy technical fix for this.

Another thing I’d love to see is an improvement in VST3 or VST4, etc., so that the amount of latency per articulation can be communicated from the plugin back to the DAW in some standardized way, letting the DAW handle it as easily as it handles plugin delay compensation. But we are a long way from ever seeing that happen… probably never, at the rate things are going.