Would someone else try the CEntrance latency checker?

I have run into something that kind of baffles me: the latency indicated by the ASIO window in C6 for my interface. If I set this driver (Firewire/Mackie) to 256 samples, I show 5.33 ms latency in the C6 software window. (Well, maybe this is the Mackie window superimposed on the C6 window - not clear.) Anyway, when I load this (google it) CEntrance latency checker (used by Tom’s site exclusively), I end up with 1048 samples at 30.88 ms.

OK, two questions about this number: can it be broken down to 256/5.33? Is it simple math? I have no frickin’ idea, sorry.

Well, I would really appreciate it if someone else took their interface and C6 and ran the test. If you have done this already, great, please share your results. I am trying to make sense of what the ‘factory’ window is telling me. Thanks!!

The “factory window” (if you mean the Cubase window) tells you that a buffer size of 256 samples at a 48 kHz sample rate gives you (1/48000) × 256 = 0.00533 s of theoretical latency, caused only by the buffer size. This only considers one way, without the latency of the actual A/D converters or safety buffers, which depend on the connection. The latency checker gives you round-trip latency with several more factors involved.
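To put that arithmetic in code (a quick Python sketch; the function name is mine, not anything from Cubase or CEntrance):

```python
# One-way "theoretical" buffer latency: buffer size divided by sample rate.
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    return buffer_samples / sample_rate_hz * 1000

print(round(buffer_latency_ms(256, 48000), 2))   # 5.33
print(round(buffer_latency_ms(64, 44100), 2))    # 1.45
```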

Ah, ‘theoretical’ latency makes perfect sense. Really, when I think about what I am hearing, 30 ms makes sense, too.

My last ponderment is: why have a window that shows such inaccuracies? It would be like having an average-MPG display in your car that bumped the numbers up by, what, 600%?

Well, this explains many things, including my ping number when using the Ext Efx Bus. Thanks for the explanation, thinkingcap. And basically, if 64 bit can cut this ms number in half - as they say it can - then possibly the very best I can do on my system is 15 ms. Which makes me wonder what an honest measurement on an i7 core running 64 bit could do? (I read that 13 ms is about the average best, BTW. Anybody here claim better?)

You shouldn’t need CEntrance to calculate Round Trip Latency with Cubase as it does that for you already. At least in my case, using the FF400 here, CEntrance and Cubase report the same RTL. Simply add the Input and Output latency figures shown in Cubase to get to the correct number (which includes hidden buffers and converter latency). That’s all you need.

If you still want to confirm what Cubase reports with CEntrance, then make the necessary loopback connections, set the same buffer size and sample rate as in Cubase, and run the test. If you want to learn more about latency, CEntrance made a PDF file with some information on this topic here:

http://www.centrance.com/about/tr/Latency.pdf


HTH

That depends on what type of connection your interface uses. I think 13 ms is the lowest average for USB 2 interfaces. But PCI, PCIe and some FW interfaces (i.e. RME) can do much lower. My FF400 can do ~5 ms of RTL (from Input to Output) when set to 48 samples @ 44.1KHz. This is also confirmed in Cubase, which reports the same figure as CEntrance. A PCI or PCIe interface will have a slightly lower latency than this.

BTW, these are actual figures, not approximates.

Jose - OK, I read what you have written. At a 256-sample buffer, I show 5.33 input and 5.33 output in my C6 window. So that should be a total of 10.66 ms delay at 256 - correct?

But here’s the rub and what I do not understand about your statements. You say the CEntrance test duplicates your Cubase readings - and yet when I do the CEntrance test, the ‘ASIO Driver Parameters’ section of the test shows my Buffer Size/Latency as ‘256 samples (5.33ms)’ and the ‘Measurement results’ window below that reads: 1412 samples/29.42 ms. ??? Understand my confusion? To me it seems like the test has reset my sample rate. What am I doing wrong?

I see. Even Cubase is not reading the correct latency for your interface since there’s no way both Input and Output can be exactly the same. Another way of putting it is that perhaps your interface’s drivers don’t report the correct latency to Cubase. The numbers you see in Cubase are strictly the Input buffer and the Output buffer:

Input 256 samples / 48KHz = 5.33 ms

Output 256 samples / 48KHz = 5.33 ms

The above is missing the AD/DA latency, usually ~1ms each way, and the safety buffers (rarely reported by interface manufacturers). Unfortunately, assuming you did everything correctly, it seems that CEntrance is reporting your true RTL. The numbers are kinda high though, so I would double and triple check that you are not missing anything as far as routing and settings. Could you explain how you are patching the loop cable? Is the cable in working condition? Is the Sample Rate in CEntrance the same as in Cubase? etc.
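To restate that breakdown in code (a sketch; the converter and safety-buffer terms are deliberately left out of the sum, because they vary per interface and are exactly what the loopback test reveals):

```python
# What Cubase shows: just the input and output buffers added together.
SAMPLE_RATE_HZ = 48000

def ms(samples):
    return samples / SAMPLE_RATE_HZ * 1000

input_buffer_ms = ms(256)    # 5.33
output_buffer_ms = ms(256)   # 5.33
cubase_total_ms = input_buffer_ms + output_buffer_ms
print(round(cubase_total_ms, 2))   # 10.67 (10.66 in the thread, from rounding 5.33 twice)
# True RTL = cubase_total_ms + A/D + D/A + hidden/safety buffers
```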

That’s not correct.

Your calculations leave out converter latency and hidden buffers, which is what CEntrance reveals.

Whoa, OK Bredo, are you saying that it is just like a mathematical problem, as in fraction reduction? That 4/4 = 2/2 = 1/1?

If this is the case, where is Jose coming up with his reported numbers in the CEntrance latency tool? Jose, you said you were seeing the exact numbers in the test and in Cubase? Right? As in 1/1 = 1/1, correct?

Jose, my loop connection is: Left Mains Out > channel (3) in, using an Evidence Audio XLR 2ft cable.

mr.roos,

Please read the PDF file I linked above. That explains how latency works (that’s if you’re really interested in this subject). As far as what I said about the correlation between CEntrance and Cubase with my interface:

RME FF400 Round Trip Latency Test set to 64 samples @ 44.1KHz:

CEntrance = 6.15 ms

Cubase 6 Input 2.472 ms + Output 3.628 ms = 6.1 ms


As you can see, Cubase is only fractions of a ms lower than what CEntrance reports. A very insignificant difference, if you ask me. The same was true for all the other buffer sizes at the same sample rate. Notice that the Input latency is different from the Output latency. This is due to hidden buffers, which the Fireface 400 applies on the playback (Output) side only. Other interfaces may have hidden buffers on Input as well, thus increasing latency in order to reduce CPU consumption.

HTH

[EDIT] The other thing to notice is the extra ms added to both the Input and Output. Setting my buffers to 64 samples should give me a little over 1 ms of latency, yet Cubase reports 2.472 ms:

64 Samples / 44.1KHz = 1.45 ms

The extra ~1 ms is from the A/D conversion, remember? So that’s why Cubase reports 2.472 ms instead. If you subtract the Input latency from the Output latency, you get the hidden buffer size. For example:

(Output latency) 3.628 ms - (Input latency) 2.472 = (Hidden Buffer) 1.156 ms

That, 1.156 ms (~64 samples), is roughly what the Fireface manual says this hidden buffer is.
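The subtraction spelled out in code (the figures are the FF400 numbers quoted in this post, not independently measured):

```python
# Hidden (safety) buffer = output latency minus input latency on the FF400.
input_latency_ms = 2.472    # Cubase-reported input @ 64 samples / 44.1KHz
output_latency_ms = 3.628   # Cubase-reported output
hidden_buffer_ms = output_latency_ms - input_latency_ms
print(round(hidden_buffer_ms, 3))   # 1.156
# Cubase's RTL is just the two figures summed:
print(round(input_latency_ms + output_latency_ms, 2))   # 6.1
```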

Of course :slight_smile:

But you were missing some crucial information (namely converter latency and hidden buffers) in your calculations.

I’ve read up on the CEntrance (will download and make a test).

Good, that will show you your real RTL.

Of course you will have additional latency when physically patching an output to an input (converter latency) on your interface. This equals what your Ping function with external hardware shows.

And what do you think happens when you record a sound and it comes back out on your speaker monitors? Doesn’t the signal go through your A/D converter, then into Cubase and comes back out through your D/A converter to be played back by the speakers? This is what CEntrance is measuring, which is what your true recording and playback latencies are.

When it comes to “hidden buffers”, I don’t know…(haven’t checked yet)

Don’t worry. Not many people know about these because audio interface manufacturers don’t want you to find out what the true latencies of their products are. That’s why they are hidden :stuck_out_tongue: The only company I know that does report these is RME. There may be others, but I think they are very few.

I don’t know about PCIe interfaces, but I do know that PCI interfaces can have hidden buffers as well. The only way to know for sure is through CEntrance though. It would be interesting to know the results you get from your PCIe interface.

OK, so Jose, forgetting the idea of hidden buffers for a moment please… Are you in agreement with Bredo’s math:

Buffer size / sample rate = input/output latency.
256 / 44.1 = 5.80 ms (11.60 ms total)
256 / 48 = 5.33 ms (10.66 ms total)

1024 / 44.1 = 23.22 ms


IN OTHER WORDS… Does it not matter that the CEntrance tester sends out 1024 samples when I set my buffer to 256? Is the resultant number just a comparative mathematical equation? I.e.:

1024 @ 23.22 ms = 256 @ 5.80 ms?

Thanks for revising your earlier post, BTW, I will see if I can reproduce something similar. BUT - you are confirming that when you run the CEntrance test, the sample rate you select in the CEntrance ASIO window is duplicated in the test result window? OR ARE YOU DOING THE MATHEMATICAL REDUCTION THAT BREDO IS SUGGESTING?

Alright thanks for the insight and I, too, would love to see the CEntrance test report back from Bredo!!

Also, setting my ASIO driver to 64 samples (@ 44.1KHz) produces 1.45 ms in the CEntrance tester’s ASIO window.

When I execute the test the results are: 414 samples/9.39 ms.

If the math that Bredo suggests is valid, 414/64 = 6.468. If I then divide 9.39/6.468 = 1.45. ?? So this means my actual RTL is 1.45 ms @ 64 samples/44.1KHz?

Have I screwed this up? Is the sample rate/ms latency NOT reducible via a common denominator?

Correct!

When I execute the test the results are: 414 samples/9.39 ms.

This looks correct as well.

If the math that Bredo suggests is valid, 414/64 = 6.468. If I then divide 9.39/6.468 = 1.45. ?? So this means my actual RTL is 1.45 ms @ 64 samples/44.1KHz?

Have I screwed this up? Is the sample rate/ms latency NOT reducible via a common denominator?

Now here’s where you went wrong. The 414 samples (which is equal to 9.39 ms as reported by CEntrance) IS your RTL. There’s no need to go further than this, unless you also want to find your hidden buffer size.

What you did was divide the total reported RTL sample count of 414 by the buffer size of 64 samples, when what you should’ve done was divide 414 by the sample rate (44.1KHz in this case). That’s why you got a lower number (6.468 instead of 9.39 ms). Again, the correct formula to find your RTL is:

(Input Buffer) + (Output Buffer) + (AD/DA latency) + (Hidden Buffers) = (RTL)

For example, if you set your interface to 64 samples of latency @ 44.1KHz you would get the following:

(64 samples) + (64 samples) + (~88.2 samples) + (???) = (RTL in samples)

In your case, this was a total of 414 samples. To get how many milliseconds this is equal to you need to divide your RTL in samples by the sample rate:

414 samples / 44.1KHz = 9.39 ms

If you want to find the Hidden Buffers you can use the following formula:

(RTL) - (Input Buffer) - (Output Buffer) - (AD/DA latency) = (Hidden Buffers)

The only way you can mathematically calculate your latency at different buffer sizes, without running CEntrance at each buffer setting, is by knowing the exact figure of your hidden buffers. Otherwise, you will get lower figures than your real RTL. Is this why you want to find a common formula (to avoid running CEntrance at each buffer size)? Maybe I’m not understanding what you want with this common denominator you’re asking about.
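The formula can be sketched as a small function. The AD/DA figure below is the ~1 ms-per-side estimate from earlier in the thread (~88.2 samples total at 44.1KHz), and the hidden-buffer value is solved backwards from the reported 414-sample total, not taken from any spec:

```python
# RTL in milliseconds from its component sample counts.
def rtl_ms(in_buf, out_buf, adda_samples, hidden_samples, rate_hz):
    total_samples = in_buf + out_buf + adda_samples + hidden_samples
    return total_samples / rate_hz * 1000

# 414 - 64 - 64 - 88.2 = 197.8 hidden samples (~4.5 ms at 44.1 kHz)
print(round(rtl_ms(64, 64, 88.2, 197.8, 44100), 2))  # 9.39
```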

Thanks Jose. I think I deduce from your formula that my hidden buffers produce 4.5 ms of RTL. Which surprises me, actually. Did I do the math correctly? If I have, then this number will be consistent as I run the other buffer sizes, correct?

I do appreciate the time you spent on this, Jose. I pursue this because I want to know what’s going on. You still did not answer my question: My test window shows 414/9.39 ms while testing for 64 samples - your test window shows ???/6.15 ms

I guess, too, I could just subtract your ms number from my ms number, given the same test and find the difference: 3.24 ms. This might mean (?) that my hidden buffers are actually producing 4.74 ms of RTL vs. your 1.5?

That sounds like a high number to me too, but that doesn’t mean it is wrong either. If you were performing these latency tests because you felt your system wasn’t giving you good latency, then those results confirm your suspicion. And, yes, the hidden buffers should remain the same for the rest of the buffer sizes as long as you test with the same sample rate. Higher sample rates will lower your latency at the cost of more CPU consumption.

I do appreciate the time you spent on this, Jose. I pursue this because I want to know what’s going on. You still did not answer my question: My test window shows 414/9.39 ms while testing for 64 samples - your test window shows ???/6.15 ms

You can get the number of samples by reversing the ms formula. Instead of dividing the sample count by the sample rate, you multiply the latency (in milliseconds) by the sample rate. In my case, this looks as follows:

6.15 ms x 44.1 KHz = 271.215 samples

So 271.215 samples is my RTL figure when my latency is set to 64 samples @ 44.1 KHz.
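Both conversions as a sketch, using the figures from this exchange (at 44.1 kHz there are 44.1 samples per millisecond):

```python
# Milliseconds <-> samples, both directions.
def ms_to_samples(latency_ms, rate_khz):
    return latency_ms * rate_khz

def samples_to_ms(samples, rate_khz):
    return samples / rate_khz

print(round(ms_to_samples(6.15, 44.1), 3))   # 271.215 (FF400 RTL)
print(round(samples_to_ms(414, 44.1), 2))    # 9.39 (Mackie RTL)
```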

I guess, too, I could just subtract your ms number from my ms number, given the same test and find the difference: 3.24 ms. This might mean (?) that my hidden buffers are actually producing 4.74 ms of RTL vs. your 1.5?

Correct! And this is why RME drivers are so praised. They were the first interface company to successfully lower the latency of FW, and now USB with the Fireface UC, into the PCI realm. The difference in latency between my FF400 and a PCI interface is less than 1 ms. That’s it. Very few companies have been able to match this, even today. I believe MOTU is another company that’s been successful in getting good low latencies on their FW products. There could be others, but those are the ones I know have it for sure.

Take care!

Oh, I just realized that my numbers are influenced by the 32-bit OS I am running. So I will save this post and do a comparative test when I switch to 64-bit Win7. I think my 9.39 ms result will be reduced, which would be nice, and who knows, the Mackie product might come a little closer to the RME?

I’ll revisit this then, Jose, and thank you again for all your insights and follow-through!!

The only way you will get different results is if somehow Mackie wrote better 64 bit drivers for your interface than the current 32 bit ones. Otherwise, don’t expect much (if any) difference in latency. Do let us know how it goes though.

And you’re welcome! :slight_smile: