Steinberg Loop Back test - how do I use it?

With all the recent talk on the forum about timing, I came across references to the loopback test:

ftp://ftp.steinberg.net/Download/Test_Projects/

I think I understand that this sends audio out of and back into Cubase, so as to measure the “exact” latency … then a correction can be made … is that correct?

If so … I’m having a hard time thinking through the physics of this … what problem is solved by making the correction suggested by the loopback test? In other words, without making that correction, what is going to be wrong?

Thanks -

http://forum.cubase.net/phpbb2/viewtopic.php?p=23991#23991

Thanks, Split. I’m afraid I am in the same position that the moderator Jerome was in:

Hi Cristian!

I must be a bit thick on this one, but I don’t understand what the effect of this different latency time for ins and outs really is. I own an E-mu 1212m, which indeed gives a different latency time for input and output, but when recording audio I don’t notice any significant misplacement of the recorded material.

Does it mean that if I record a guitar note at, for instance, 3.0.0.0, it will effectively be recorded earlier or later by a few samples? Does it mean that I will play in time while recording audio and that, when playing it back, it will be shifted and not in time with the rest of the recorded material?

I’ve had my copy of SX 3 for only a few weeks and haven’t had time to make any “serious” recordings yet, so I have difficulty understanding what the problem really is ;o)

Please, could you explain a bit, so that I can relay this information to the French Cubase mailing list? Not all of our readers there can read English and might not know about the problem. I’d like to be able to explain it clearly ;o))

Thanks a lot … and keep on working like this, it is a good thing indeed.

Cheers.


Jérôme.
http://www.espace-cubase.org
Espace Cubase - Cubase dedicated website.

Also - I’m not sure whether the “ping” test and the “loopback” test are measuring the same thing (i.e., are redundant), in which case making the correction for both would mean compensating twice.

Thanks -

I used to use this in Sonar to correct record timing placement, but it should be the same for any sequencer. It’s a free utility by CEntrance that uses a ping to measure the round-trip latency (RTL) of your system. Here’s the link:

http://centrance.com/downloads/ltu/

Download the utility, read the instructions linked on that page, and test away (it’s really simple). You’ll need a physical loop (a cable going from an Output to an Input on your interface) to perform this test. Once you get the RTL result, compare it to what Cubase reports and adjust accordingly in the ‘Device Setup/VST Audio System’ window in Cubase.
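To make the “compare and adjust” step concrete, here’s a rough sketch of the arithmetic involved; the numbers are hypothetical examples, not actual CEntrance output:

```python
# Hypothetical example: compare a measured RTL against what Cubase reports.
fs = 44100                       # sample rate in Hz

measured_rtl_ms = 13.13          # what a loopback utility might report (hypothetical)
reported_rtl_ms = 6.531 + 6.485  # Cubase input latency + output latency (example values)

diff_ms = measured_rtl_ms - reported_rtl_ms
diff_samples = round(diff_ms / 1000 * fs)
print(f"uncompensated delay: {diff_ms:.3f} ms = {diff_samples} samples")
```

Whatever is left over after Cubase’s own compensation is what you’d dial in manually.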

Actually, this is something I have neglected since I got Cubase last year. I will be performing this test myself :)


HTH


I’m hoping that these days, for normal DAW use (i.e. not using external FX), the problem of wrongly placed audio is over.

I think it used to happen when there was a big difference between the ASIO input and output latency figures?

The loopback test could be of interest when testing the ping value for external FX.

Split has said it all! :D

You guys were right. Cubase (at least with my interface’s drivers) reports the correct latency, so I didn’t have anything to compensate for. Still, I believe it is good to know that your system is performing as it should. I try not to take things for granted ;).

OK, I did the audio loopback test. A square wave of 1 sample duration is placed on an audio track (drawn in with the pencil tool), routed to an output on my interface, which is connected with an audio cable directly to an input feeding a record-armed track. Play the square wave >>> record.
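(As an aside, the same measurement can be scripted outside the DAW. A minimal sketch, assuming the third-party python-sounddevice package and the physical loopback cable described above; note this measures the full round trip, not just the residual that Cubase fails to compensate:)

```python
import numpy as np
import sounddevice as sd  # third-party package, assumed installed

fs = 44100
spike_pos = 1000
signal = np.zeros(fs, dtype="float32")
signal[spike_pos] = 0.5            # a single-sample "spike", like the pencil-drawn one

# Play the spike out and record it back through the loopback cable.
rec = sd.playrec(signal, samplerate=fs, channels=1)
sd.wait()

# Round-trip delay = position of the loudest recorded sample minus the spike position.
delay = int(np.argmax(np.abs(rec[:, 0]))) - spike_pos
print(f"round-trip delay: {delay} samples ({delay / fs * 1000:.2f} ms)")
```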

Some observations:

  1. The recorded audio track - it’s hard to say where the signal starts. There are two or three samples that are low amplitude but definitely bigger than the background noise, then one clearly dominant square-wave-looking sample. I think I should use the beginning of the signal (i.e., the first sample that is clearly above the noise floor, even if not the most prominent). Using that: there is a 5 sample transit time/delay between the output and the recorded audio signal. So I entered “5” into Devices/Device Setup/Advanced/Record Shift, and now they line up perfectly.

  2. Cubase reports: input latency 6.531 ms; output latency 6.485 ms. That works out to a difference of 2.03 samples (quick calculation below).
    But please help me - what does all that mean? The difference is 2.03 samples … the loopback test says my “transit delay” is 5 samples. What is the meaning of this difference?
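(For the record, that 2.03-sample figure is just the reported difference converted to samples:)

```python
fs = 44100
diff_ms = 6.531 - 6.485      # reported input latency minus output latency, in ms
print(diff_ms / 1000 * fs)   # ~2.03 samples
```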

And I still have the same questions that Jerome had 7 years ago, which I don’t see answered either in that thread back then (linked in one of the quotes above) or here:

Hi Cristian!

I must be a bit thick on this one, but I don’t understand what the effect of this different latency time for ins and outs really is. I own an E-mu 1212m, which indeed gives a different latency time for input and output, but when recording audio I don’t notice any significant misplacement of the recorded material.

Does it mean that if I record a guitar note at, for instance, 3.0.0.0, it will effectively be recorded earlier or later by a few samples? Does it mean that I will play in time while recording audio and that, when playing it back, it will be shifted and not in time with the rest of the recorded material?

I’ve had my copy of SX 3 for only a few weeks and haven’t had time to make any “serious” recordings yet, so I have difficulty understanding what the problem really is ;o)

Please, could you explain a bit, so that I can relay this information to the French Cubase mailing list? Not all of our readers there can read English and might not know about the problem. I’d like to be able to explain it clearly ;o))

Thanks a lot … and keep on working like this, it is a good thing indeed.

Cheers.


Jérôme.
http://www.espace-cubase.org
Espace Cubase - Cubase dedicated website.

In other words, if I can paraphrase the late (of this forum) great Jerome - what is the actual effect of this 5 sample delay on recording and playback?

Thanks -

A single-sample “spike” (you don’t say what amplitude) is a very severe test. The analog side of your converters will smear the signal for various reasons; nothing can go from 0 to 1 instantaneously.

A better test signal would be a short audio sample, maybe a cowbell or other click sound.

You can then see just how well it lines up with the original without having to worry (too much) about limited bandwidth, slew rates, etc. (a null test).
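If you’d rather quantify the null than eyeball it, here’s a rough sketch of the idea, assuming numpy plus the third-party soundfile package, mono files, and hypothetical file names:

```python
import numpy as np
import soundfile as sf  # third-party package, assumed installed

# Hypothetical file names: the original click and the looped-back recording (mono).
orig, fs = sf.read("click_original.wav")
rec, _ = sf.read("click_recorded.wav")

# Find the lag that best aligns the recording with the original.
corr = np.correlate(rec, orig, mode="full")
offset = int(np.argmax(np.abs(corr))) - (len(orig) - 1)  # recording should lag, so >= 0

# Null test: subtract the aligned recording from the original and inspect the residual.
n = min(len(orig), len(rec) - offset)
residual = orig[:n] - rec[offset:offset + n]
print(f"offset: {offset} samples; residual peak: {np.max(np.abs(residual)):.6f}")
```

A small residual (after matching levels) means the loop is clean, and the offset is your record-placement error.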

I agree with Split. You can also use the utility program I linked above, which will give you a very accurate reading of your system’s latency (AND it’s free!).

As far as your other question goes, the difference between input and output latency comes from what are known as hidden buffers. I believe most audio interfaces have them on the playback side only (I know my RME FF400 does), which delays the audio signal by a certain number of samples. However, Cubase compensates for this as long as the drivers report this latency to it. You only have a 5 sample difference in your case, which is really nothing (we’re talking 0.11 ms @ 44.1 kHz here). You can still compensate for that in Cubase, but I personally wouldn’t sweat it. You’ll be better off spending your time making music :).

HTH

You’re right of course, it is smeared. However, I just used the loopback test as downloaded from the Steinberg FTP site at the link above.
I was fascinated to notice that when I recorded multiple passes of the single spike, they all looked identical. Must be an optical illusion …?

Thanks, Jose … I’m with you, I can’t see why 5 samples would be so important. However, on the old Cubase forum the issue was debated for many pages over several threads. Unfortunately, despite trying my hardest, I couldn’t figure out from reading many of them what the big deal was.

Why was this issue of 5-10 samples so important that Steinberg put two (three?) test projects on its FTP site for people to test their systems with?

Just wondering!

That’s because, I bet you, back then the offset needed to make sample-accurate recordings in Cubase was way above 5 samples. I remember going through this same deal back in the day with Sonar, and I remember needing an offset of 96 samples at that point. Some people had greater offsets than mine, from around 300 samples (~7 ms) up to 500 samples (~11 ms). Now that is a noticeable difference, and one that could make you think you’re not recording in time, which meant you needed to manually offset your recordings. Thankfully, we don’t have to deal with that anymore ;).

Take care!

Thankfully, we don’t have to deal with that anymore

It’s possible your current system has no problems, but many modern systems still do.

The latency figures vary quite a lot with different systems, soundcard drivers & with different latency settings.

I mix at quite a high buffer setting of 1024 as I use some plugins that seem to need it, & on my main DAW desktop at this buffer setting with a TC Konnekt FireWire interface I still need 109 samples of delay to line up.

This is still only a few ms, so it could be argued I would never notice if I left it, but there’s no reason not to correct it.

I recorded a quick & dirty mobile overdub session on my laptop last year with no real monitoring, & it was only 3 hours in that we noticed the new tracks seemed a bit lazy compared to the players … turned out I’d accidentally enabled wifi & the recording latency had shot up (to over 20 ms). Fortunately I was able to run the test there & shift the overdubs forward by that amount.

So I believe it’s still useful to know how to run the loopback test, to have it available, & to check any new computer or change of soundcard, just in case.

I totally agree with you, which is why I just recently performed the loopback test myself. I had done it a LONG time ago with my older setup (not that the current one is any newer :P) when I was using Sonar and latency compensation was a hot topic, but I hadn’t performed it after switching to Cubase in the latter part of last year (which I should have done).

I prefer using CEntrance as opposed to a ping test through a sequencer because it is more accurate, it runs stand-alone (no need to run it through Cubase, Sonar, etc.), it needs no installation, and it’s simple to set up (connect a cable from any Output to any Input on my audio interface, open CEntrance, select the corresponding driver, click “Measure”, get results). But latency is definitely something that needs to be checked when you change your setup/signal chain.

Just to report here: on my system, the input is 5 samples after the output, at a buffer of 256 as well as 2048 (both at 44.1 kHz). In other words: no apparent sensitivity of the delay to buffer size.

Any idea why the output>>input delay on one system would be so sensitive to buffer size, and another not? Or is this one of those PC things that people have learned to not even worry about?

To my knowledge and from limited experimentation with latency in the past, the difference between input and output readings will depend on the hidden buffers of your audio interface and how it reports this delay to your DAW via its drivers. Some interface companies do a better job than others, and that’s one of the reasons why we are able to customize this in Cubase (because not all interfaces are created equal). The other factor that can affect latency is the sample rate being used. For a given buffer size in samples, the higher the sample rate, the lower your latency in milliseconds (but at a higher CPU cost).

Changing the buffer size at the same sample rate won’t change the offset you need (the RTL itself grows with bigger buffers, but that part is reported and compensated), because the hidden buffers remain constant in samples. In time terms, they only change when you switch your sample rate. Like I said, the higher the sample rate, the lower your latency, but at the cost of more CPU cycles. So, NO, it’s not a PC thing :).
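A quick illustration of that last point: a buffer that is fixed in samples takes less time at a higher sample rate:

```python
hidden = 109  # hidden buffer fixed in samples (Grim's figure, used hypothetically)
for fs in (44100, 96000):
    print(f"{fs} Hz: {hidden / fs * 1000:.2f} ms")
# 44100 Hz: 2.47 ms
# 96000 Hz: 1.14 ms
```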

Hope that answers your question :)

Hi Jose! :D

I hear what you are saying, but I think I’m reading that Grim had a different experience - he was off by over 100 samples at a buffer size of 1024, if I read his post (quoted in mine, two posts above) correctly. It’s that increase from just a few samples at more reasonable buffer sizes that I’m wondering about …?

OK, I admit that I did a really bad job of explaining things in my previous post, and I apologize for that. Instead, this time I’ll give you an example of what I was trying to convey, using Grim’s latency offset figure. The Round Trip Latency (RTL) is calculated roughly in the following way (assuming a sample rate of 44.1 kHz, where 1 ms ≈ 44.1 samples, and a buffer size of 128 samples):

(A/D converter latency) + (Input Buffer) + (Output Buffer) + (Hidden Buffer) + (D/A converter latency) = RTL

This roughly translates to:

(44.1 samples) + (128 samples) + (128 samples) + (109 samples) + (44.1 samples) = 453.2 samples

Alright, let me explain where I got those numbers from (they are merely approximations and mostly hypothetical). Usually, the converter latency will be about 1 ms each way (which we established to be 44.1 samples at 44.1 kHz). However, keep in mind that this value varies from converter to converter, as some work faster than others. Also, the A/D is usually slower than the D/A, so the two values may differ; for the purpose of this example I made them the same. The I/O buffer latency is a given, since that’s what we change in Cubase when we adjust latency via the Device Setup window. The hidden buffer value is simply the number I got from Grim’s offset, and this is usually the value that doesn’t get correctly reported to Cubase by the driver.

So, going by the formula above, if I set my buffer size to 128 samples and need 109 samples of offset in order to make my recordings sample accurate, then I will still need 109 samples of offset at any other buffer setting (whether 64 or 2048 samples), because the converter speed and the I/O buffers are reported and thus correctly compensated by Cubase, while the hidden buffer is not.
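Putting that into numbers, with the same (mostly hypothetical) values as above, the actual RTL grows with the buffer size but the uncompensated part stays put:

```python
fs = 44100
converter = 44.1   # A/D and D/A each, hypothetical ~1 ms per side
hidden = 109       # unreported hidden buffer (Grim's offset figure)

for buf in (64, 128, 1024, 2048):
    rtl = 2 * converter + 2 * buf + hidden   # actual round trip, in samples
    reported = 2 * converter + 2 * buf       # what the driver tells Cubase
    print(f"buffer {buf:>4}: RTL {rtl:6.1f} samples, offset still {rtl - reported:.0f}")
```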

Even audio interfaces from the same manufacturer will have different amounts of hidden buffer and converter latency, so it’s no surprise that Grim may need a different offset amount (even at the same sample rate and buffer size) than you and I need. This is why it is important to check the RTL of your device (which you only have to do once).

Keep in mind that I’m giving you a very simplistic example here; the real process is more complex. But hopefully you get the idea.

Thank you Jose! ^ ^

That was extremely helpful. I believe I had misread Grim’s post as saying he had to make an adjustment of “x” samples at one buffer size and “x+y” samples at a larger one. But when I went back and reread it, he never actually said the 100+ sample adjustment at 1024 samples was different at a different buffer size.

And I think I figured out what the purpose of this loopback test and the resulting offset is. Without it, the audio that is recorded will be placed “late” on the track, relative to the grid and any other tracks already there. The adjustment drags the printed position earlier in time on the track.
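In effect, the Record Shift value just nudges every recorded event earlier by that many samples, something like this sketch with hypothetical numbers:

```python
fs = 44100
record_shift = 5             # samples, from the loopback test
recorded_start = 4.000000    # seconds, where the event actually landed (hypothetical)

corrected_start = recorded_start - record_shift / fs
print(f"event moved from {recorded_start:.6f} s to {corrected_start:.6f} s")
```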

Thanks!

I hear what you are saying, but I think I’m reading that Grim had a different experience - he was off by over 100 samples at a buffer size of 1024, if I read his post (quoted in mine, two posts above) correctly.

Apologies … it seems I may have been incorrect regarding a change with buffer settings. It was something I read & repeated without confirming with my own tests.
Thanks Jose for the correction & your very good explanation.