ASIO Latency Compensation Problems

No, I disagree, because it’s the other way round. If the MIDI recording were delayed by the input latency to match the audio, both audio and MIDI would be too late. The input latency is a DELAY: the audio is recorded later than you played it by the input latency of the ASIO driver. MIDI is not. The audio therefore has to be moved back to its intended earlier position; the MIDI must not catch up with the already-too-late audio, or both would be too late.
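The direction of that compensation can be sketched with a couple of assumed numbers (the 10 ms figure and all variable names below are illustrative, not taken from the posts):

```python
# Illustrative sketch: why the audio must move back rather than the MIDI forward.
# All values are assumed for the example.
input_latency_ms = 10.0

played_at = 1000.0                                  # moment the musician actually played
midi_recorded_at = played_at                        # MIDI carries no ASIO input delay
audio_recorded_at = played_at + input_latency_ms    # audio passes through the input buffer

# Correct: shift the audio back by the input latency to its intended position.
audio_compensated = audio_recorded_at - input_latency_ms
assert audio_compensated == played_at == midi_recorded_at

# Wrong: delaying the MIDI to match the uncompensated audio leaves BOTH
# events later than the performance.
midi_delayed = midi_recorded_at + input_latency_ms
assert midi_delayed == audio_recorded_at
assert midi_delayed != played_at
```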

Assuming that this is a new project recording only simultaneous audio and MIDI for the first time. Will the audio and MIDI appear in the recording at the same time?

Pretty much. MIDI is not always precise. For instance, when I use MIDI over Windows MIDI ports rather than DirectMusic, the notes are late by about 10ms. With DirectMusic the notes are pretty much in time, with a small tendency to be early by an average of 0.5ms.
This is a problem with the MIDI driver implementation in the OS rather than a Cubase issue. And I consider a difference of 6ms pretty much in time when you think of the musical aspect of it.

It sounds like you may want to upgrade to Windows 10! But that’s kind of beside the point. Since you agree (and have demonstrated) that MIDI recording in the proposed circumstance is delayed by the reported input latency of the audio interface, should the triggering of the associated VSTi also be delayed by the reported input latency of the audio interface? :wink:

Don’t draw conclusions for me or put words in my mouth that I never said. Where did I show or say that? Don’t orchestrate things just to fit your theories. :unamused:

So you think that if both signals arrive at the same time, but the corresponding audio of the MIDI trigger is earlier in the monitoring by the input latency, the instrument has to be delayed by the input latency to match the monitoring of the audio recording?

Anyhow, think what you want. I don’t want to have my words twisted. You have been wrong before because you didn’t understand how things work, and you are wrong here as well. Farewell, then, because this is getting tedious.

Tedious is an understatement! :exclamation: Before you “go”, could you please answer my question about your relationship with Steinberg/Yamaha (assuming you’re allowed to do so)? :question:

In his March 16th post on this thread https://www.steinberg.net/forums/viewtopic.php?f=198&t=111073#p626096 Novikthewise provided test results that nicely demonstrate both the problem that is the subject of this thread and the VSTi triggering jitter problem documented at https://www.steinberg.net/forums/viewtopic.php?f=198&t=115186. His recording with “ASIO Latency Compensation” enabled demonstrated that the VSTi was triggered 49.46ms after the reception and recording of the triggering MIDI by Cubase. That delay between recorded tracks 9 and 12 is 1ms more than the 48.46ms reported output latency of his interface (the delay between tracks 9 and 11).

His recording with “ASIO Latency Compensation” disabled demonstrated that the VSTi was triggered 70.82ms after the reception and recording of the triggering MIDI by Cubase. That delay between recorded tracks 3 and 6 is 22.36ms more than the 48.46ms reported output latency of his interface (the delay between tracks 9 and 11). Had he done more recordings, he likely would have seen some track 3–6 delays as high as 48.34ms (the apparent reported input latency of his audio interface). :wink:
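For reference, the arithmetic behind those two comparisons can be written out; the figures are the ones quoted from the test posts, and only the subtractions are added here:

```python
# Figures quoted from the test posts above; only the subtractions are new.
output_latency_ms = 48.46        # reported output latency of the interface
input_latency_ms = 48.34         # apparent reported input latency

delay_compensated_ms = 49.46     # tracks 9 -> 12, "ASIO Latency Compensation" on
delay_uncompensated_ms = 70.82   # tracks 3 -> 6, compensation off

extra_on = delay_compensated_ms - output_latency_ms     # about 1.00 ms
extra_off = delay_uncompensated_ms - output_latency_ms  # about 22.36 ms

# The hypothesis above: the uncompensated excess should range anywhere
# from 0 up to the reported input latency.
assert 0.0 <= extra_off <= input_latency_ms
```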

The lack of knowledge about how latency compensation has to work, and about how ASIO latency compensation on MIDI works, leads Amack to conduct test setups that are inappropriate for showing the problem he claims exists. In fact, in both his test and mine, he is only right about the jitter of the live audio on instruments, which is not the subject of this thread. It perfectly shows that ASIO latency compensation on MIDI/instruments works exactly the way it should, by moving the recorded note by the output latency of the ASIO driver. I nicely demonstrated in this post why this has to happen and that it is done correctly:
https://www.steinberg.net/forums/viewtopic.php?t=111073#p626103

Amack already nicely proved that he does not understand the very nature of how Cubase works in this thread:
https://www.steinberg.net/forums/viewtopic.php?f=198&t=111022
He wasn’t just wrong about it; he also accused me of making things up.

I really hope this thread gets closed already, because the test Amack conducted here is complete nonsense as proof of his claim.

There appear to be many (continuing?) Cubase users interested in the end result of this discussion (maybe some would even be willing to say/post so). :question: Closing this thread would prevent me from posting the results of the feedback I receive from the software developers. Chris at Steinberg Support told me last Friday (March 7th) that he would inform the software developers of this and the other reported (jitter) problem you mentioned (and apparently now acknowledge). :slight_smile:

There really aren’t that many people contributing to your thread. That’s just a lot of people reading and eating popcorn.

Novikthewise, you were quite successful in demonstrating why system developers don’t put people in the feedback loop when analyzing system performance. But you weren’t very successful in demonstrating the point you were trying to make. See my markups and images embedded in red in the following from your March 16 post https://www.steinberg.net/forums/viewtopic.php?f=198&t=111073#p626103 :wink:


What is your point?
You don’t even understand your own test results, because you introduced an error into your test that you now base your judgment on. You still don’t understand that Cubase doesn’t compensate live-monitored tracks, which is what you used in your test. No wonder the result is wrong. Could you please take a picture of your very first post and manually change the positions to those you say would be correct? Then I will explain to you why the looped-back audio of the instrument will be earlier than that of the first audio track.

IMO it would be better for me to step through my reasoning to enable a determination of points of agreement and disagreement. Any points of disagreement can then be stepped through in a similar manner. Since we’re discussing the operation of an essentially deterministic system, we should ultimately be able to reach agreement. Does that seem reasonable?

Here are the first two steps in my reasoning. Please let me know whether you agree or disagree with each of them:

The assumptions are for recordings in new projects with no plugins except a VSTi on instrument tracks. Timing claims are based on the times at which events are recorded to tracks in Cubase.

  1. MIDI note recording will be delayed by the sum of the reported input latency of the audio interface, the MIDI player’s delay, the MIDI source (keyboard/controller, etc.) + communications delay, and Cubase’s MIDI note detection + recognition + recording delay. Do you agree or disagree? :question:

  2. Audio recording will be delayed by the sum of the audio source delay, the input latency of the audio interface, and Cubase’s audio recording delay. Do you agree or disagree? :question:
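To help pin down any disagreement, the two claimed sums in steps 1 and 2 can be restated as a sketch; every function name and millisecond value below is an illustrative placeholder, not something from the posts:

```python
# Placeholder restatement of steps 1 and 2; all values are invented examples.

def midi_recording_delay(asio_input, midi_player, source_and_comms, cubase_midi):
    # Step 1: claimed total delay of a recorded MIDI note (ms)
    return asio_input + midi_player + source_and_comms + cubase_midi

def audio_recording_delay(source, asio_input, cubase_audio):
    # Step 2: claimed total delay of a recorded audio event (ms)
    return source + asio_input + cubase_audio

# Example with made-up values (ms):
midi_total = midi_recording_delay(10.0, 0.5, 2.0, 0.1)
audio_total = audio_recording_delay(0.2, 10.0, 0.0)
```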

To 1.
I partly disagree. MIDI recording is not delayed by the input latency of the ASIO driver. It gets delayed or imprecise due to MIDI communication latency and wrong timestamps. I am pretty sure the Cubase recognition of MIDI is negligible.
I am not exactly sure what you mean by the player’s delay.

To 2. Agreed, if by Cubase’s audio recording delay you mean the plugins that might be inserted on the inputs.


To your test: let’s have a look at what initially happens before Cubase compensates all the recorded events. As you said earlier, if I had recorded enough tracks, the instrument’s looped-back recording would be earlier than the looped-back microphone recording by the input latency.
The reason this happens is that the microphone’s audio had to go through the input of the interface and was delayed by the input latency. The MIDI note was recorded immediately, and therefore earlier than the audio by the input latency of the interface. The instrument was also triggered immediately, but its audio is then delayed by the output latency only, not also by the input latency like the microphone’s audio was.
The loopback-recorded instrument audio lacks one input stage, which is why it is recorded earlier than the microphone loopback by the input latency.
Now Cubase moves both loopback-recorded audio events by a full RTL, and it moves the MIDI recording by the output latency.
If the instrument’s audio were supposed to be in sync with the microphone’s loopback recording, the audio track carrying the instrument’s audio would have needed to be compensated by only the output latency…
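The positions described above can be sketched with assumed latencies (the 10 ms / 12 ms figures are illustrative, not from any of the tests):

```python
# Bookkeeping sketch of the loopback positions described above.
IL, OL = 10.0, 12.0                 # assumed input / output latency (ms)
RTL = IL + OL                       # round-trip latency

# Raw recorded positions, in ms after the musician's action:
mic_loopback_raw = IL + OL + IL     # mic -> input, monitored out, looped back in
inst_loopback_raw = OL + IL         # instrument triggered at once, out, looped back in

# Cubase shifts every recorded audio event back by a full round trip:
mic_loopback = mic_loopback_raw - RTL     # lands at IL
inst_loopback = inst_loopback_raw - RTL   # lands at 0

# The instrument loopback is earlier by exactly the input latency,
# because its path lacked one input stage:
assert mic_loopback - inst_loopback == IL

# Compensating the instrument track by only the output latency would
# have lined it up with the microphone loopback instead:
assert inst_loopback_raw - OL == mic_loopback
```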

Cubase cannot know that the audio it recorded on the instrument’s audio track was actually the instrument’s audio it triggered, and it couldn’t have known that this audio lacks one input stage. That’s what I meant by using Cubase the wrong way: you expected Cubase to do something it couldn’t have known how to compensate correctly.

I think the problem in your thinking is that you look at it completely from a technical perspective and leave the musician out of it.
What you actually wanted is for the instrument’s triggering to be delayed by the input latency as well, so that the monitoring of the microphone and the instrument is in sync. Purely technically speaking, this makes sense.
But this is not the way to work, as it makes no sense for the musicians and is undesirable. It’s generally not a good idea to use the software monitoring signal as the timing reference, especially not at high latencies.

The correct way to work is to minimise latency altogether by using hardware monitoring on audio, the lowest possible latencies on software monitoring for instruments, and the click in Cubase as the timing reference. Even if you don’t use the click, it’s mandatory that musicians who play together have near-zero-latency monitoring to actually be able to play together. I guess you can imagine that musicians who hear themselves 100ms after they played will not be able to perform well.

So if you play to the click with near-zero latency for all musicians, then compensating the audio by a whole round trip and compensating the recorded MIDI notes by the output latency will ultimately sync MIDI and audio. If you then play back the project, MIDI and audio will only be delayed by the output latency and will be in sync.
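That bookkeeping can be sketched with assumed latencies (the 10 ms / 12 ms values are illustrative):

```python
# Play-to-the-click sketch of the compensation described above.
IL, OL = 10.0, 12.0          # assumed input / output latency (ms)
RTL = IL + OL

# Musician plays at t = 0 against the click, with near-zero-latency monitoring.
audio_recorded = 0.0 + IL    # audio enters through the interface input
midi_recorded = 0.0          # the MIDI note carries no ASIO input delay

# Compensation: audio back by a full round trip, MIDI back by the output latency.
audio_on_track = audio_recorded - RTL    # lands at -OL
midi_on_track = midi_recorded - OL       # lands at -OL

# On playback, the audio track is delayed by the output latency, and the
# recorded note triggers the instrument, whose audio also appears one
# output latency later; both land back at the performance time t = 0.
audio_heard = audio_on_track + OL
midi_heard = midi_on_track + OL
assert audio_heard == midi_heard == 0.0
```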

Here’s a more direct approach. Surely someone there has an oscilloscope (scope). Connect a microphone to one scope channel and trigger the scope on that channel. Set Cubase’s audio interface to its highest buffer size. Put a low/no latency VSTi on a Cubase track and route its output to one of the interface’s audio outputs. Connect that interface audio output to another scope channel. Record the time delays between the two scope channels for multiple trials of the microphone tapping a key on the MIDI keyboard/controller that triggers the VSTi. Is the minimum delay approximately the output latency of the interface (which may differ from the reported latency)? :question: Is the maximum delay approximately the sum of that output latency and the interface’s reported input latency? :question: What should it be? :question:
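Under stated assumptions (made-up latencies, and the assumed model that a MIDI trigger only gets picked up at the next ASIO block boundary), the scope readings asked about above would be:

```python
# Expected scope delays under the hypothesis above; values are invented.
output_latency_ms = 12.0
input_latency_ms = 10.0

# If the MIDI trigger waits anywhere from 0 up to one input buffer for
# the next processing block (assumed model), the audible delay jitters
# between these two bounds:
min_delay_ms = output_latency_ms                      # tap lands just before a block boundary
max_delay_ms = output_latency_ms + input_latency_ms   # tap lands just after one
```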

March 12 Update:
But, as I pointed out in my March 22nd post https://www.steinberg.net/forums/viewtopic.php?f=198&t=111073, any DAW can function as a highly capable oscilloscope.

The audio interface used determines that scope’s number of channels as well as the resolution, distortion, and bandwidth of those channels. Its lower frequency limit is likely ~10Hz and its upper frequency limit is ~1/2 the sample rate of the audio interface. Although I can’t figure out how to trigger DAWs like a regular scope, it’s easy to scroll through very long recordings to find the intended trigger event.

Cubase has a nice (seemingly unique!) feature that allows the recording of its output channels, so the loopbacks that apparently cause some considerable angst aren’t necessary (if the interface’s reported RTL is correct). Although it should be clear that a second DAW (for use as an oscilloscope) isn’t necessary to vet my claims, one could be used if you’re more comfortable with that. Audacity (which is free) could also be used, likely on the same computer as the DAW. Audacity also provides quite capable signal analysis and manipulation capabilities. Try it - you’ll like it! :slight_smile:

I am sorry, but I will hereby decline and close this thread as a bug report. It describes a non-existent bug and a use that differs from what the software was originally intended and designed for.