I thought it would be helpful to have a discussion about Latency. I am a little confused about it, and I assume there are others who don’t really understand it either. Perhaps some experts can help out.
FIRST: Latency associated with the buffer size.
Assuming 44,100 samples per second, that means 44.1 samples per millisecond. Assume a buffer size of 256 samples. That means the buffer adds about 5.8 milliseconds to the Latency (256 / 44.1 ≈ 5.8). Am I right about this?
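The arithmetic is simple enough to sanity-check in a couple of lines of Python (same numbers as above, nothing else assumed):

```python
# Buffer latency = buffer size / sample rate (values from the example above)
sample_rate = 44100   # samples per second
buffer_size = 256     # samples per buffer

latency_ms = buffer_size / sample_rate * 1000
print(f"{latency_ms:.1f} ms")  # prints "5.8 ms"
```

Doubling the buffer to 512 samples doubles this figure to about 11.6 ms, which is why people chase small buffers when tracking live.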
SECOND: Deferred Procedure Calls (DPC) Latency.
This is the average time that it takes the processor to “get around to” performing an instruction or set of instructions requested by the application. Of course, the audio going into or coming out of the audio device must continue at a constant rate with no interruptions (or clicks and dropouts occur). When recording, the processor writes data from the buffer to the disk. When playing, the processor writes data to the buffer for the audio device to convert to sound in the speakers. If the processor gets too busy doing other things, then the buffer may run out (during playback) or overflow (during recording). Am I correct so far?
On the Devices Setup panel, where you can select the ASIO driver, it displays the input and output latency in milliseconds. This number seems to be very close to the Latency associated with the buffer size as I have calculated above. WHY IS IT NOT EXACT? Is there some DPC Latency figured in there as well? Does anyone know how this number is calculated? Is it the result of an actual test of latency performed by the system after changing the buffer size or the driver?
I hope to learn something myself here and hopefully help some others in the process.
Well, I don’t know the answer to your question, but I think you’re not supposed to worry about the DPC latency value unless it spikes, indicating a problem with device driver software. After all, DPC latency is measured in microseconds, not milliseconds, so it’s normally not going to contribute much to the overall latency. I’m not sure we need to know any more than that.
OK. Sounds logical to me. When you say “unless it spikes” do you mean the DPC latency (as measured by one of the software checker utilities) or do you mean the ASIO performance meter?
That brings another question to mind.
Does anyone know exactly what the ASIO performance meter measures?
I will tell you what I think makes sense…(here I go!)
I think it measures how FULL the input buffer is and/or how EMPTY the output buffer is. Assume we are recording for now. If the processor starts to fall behind in writing the audio from the buffer to the disk, then the buffer will start to fill up (and the performance meter goes up). If the meter gets all the way to the right, the ASIO buffer overflows and audio data is lost resulting in a dropout. For playback, we have to think in reverse. The processor writes data to the buffer for the sound card to play. If the processor falls behind, the buffer becomes empty and the sound card has no data to convert to sound resulting in crackles.
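To make my recording-side theory concrete, here is a toy simulation. The numbers are completely made up (this is not how ASIO actually works internally), but it shows the mechanism I mean: if the disk writer drains the buffer even slightly slower than the audio device fills it, the buffer creeps toward full and eventually overflows.

```python
# Toy simulation (made-up numbers, not real ASIO): recording-side buffer
# overflow when the disk writer falls behind the audio device.
buffer_capacity = 256            # samples the buffer can hold
fill = 0                         # samples currently in the buffer
incoming_per_tick = 44           # device delivers ~44 samples per ms (44.1 kHz)
disk_writes_per_tick = 40        # disk only drains 40 samples per ms: too slow

overflowed_at = None
for ms in range(1, 200):
    fill += incoming_per_tick                 # device fills at a constant rate
    fill -= min(fill, disk_writes_per_tick)   # disk drains what it can
    if fill > buffer_capacity:
        overflowed_at = ms                    # samples would be dropped here
        break

print(overflowed_at)  # net +4 samples per ms, so the buffer overflows at ms 65
```

In this picture the “performance meter” would be `fill / buffer_capacity`, climbing steadily toward the right until the dropout.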
Of course, crackles during playback are not a huge problem; just raise the buffer size and try again. Dropouts during recording mean audio data is permanently lost (probably during the best pass of your guitar solo!).
Of course, we don’t need to really know the answers to these questions to use Cubase but I am curious and thought it might make interesting discussion.
I would think the ASIO meter shows output buffer status only, as this is mostly where any problem would occur. The output of that buffer needs to stream at a constant rate, while its input is filled in bursts. The input buffer, by contrast, is filled at a constant rate, and any writing to disk can be cached, so it would be far less likely to fail.
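That “constant drain, bursty fill” idea can be sketched the same way as the recording case. Again, the numbers are invented and this has nothing to do with the real driver; it just shows that the output side fails the moment the app stalls for longer than the buffered audio lasts:

```python
# Toy playback-side sketch (made-up numbers, not the real ASIO driver):
# the sound card drains the output buffer at a constant rate while the
# application refills it in bursts. If the app stalls, the buffer runs dry.
buffer = 256                # samples currently queued for the device
drain_per_ms = 44           # device consumes ~44 samples every ms, no matter what
burst_size = 256            # app refills in 256-sample bursts
                            # (a burst every 5 ms keeps up: 5 * 44 = 220 < 256)

underrun_at = None
for ms in range(1, 100):
    if buffer < drain_per_ms:
        underrun_at = ms    # device wants samples the buffer doesn't have: crackle
        break
    buffer -= drain_per_ms
    # the app normally refills every 5 ms, but is "too busy" from ms 20 to 40
    if ms % 5 == 0 and not (20 <= ms <= 40):
        buffer = min(buffer + burst_size, 2 * burst_size)  # cap queued audio

print(underrun_at)  # the buffer runs dry a few ms into the busy stretch
```

Note the asymmetry with recording: here a stall crackles immediately and recovers, whereas on the recording side a caching disk writer can absorb a stall as long as the average write rate keeps up.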