Latency setting priority


I’m a little confused about input configuration settings. I own a Focusrite 6i6. An article I read about reducing latency gave me the impression that setting the buffer size and setting the latency in milliseconds essentially achieve the same thing. What the article failed to state is which one takes priority.

I would like to know if I have been misled. If I have, can someone please explain the relationship between these two settings. If I haven’t, which setting takes priority?

The topic of latency settings has confused me for some time and I would like to understand it better. Latency problems have put me off using audio software for some time and I would like to put an end to that, or at least have the ability to effectively troubleshoot issues with latency.


In general, the latency and the buffer size are two views of the same thing. If you decrease the buffer size, the latency decreases too, and vice versa. The latency is the time (in ms) the system needs to process the signal. This is the delay between the input and the output (to/from the sound card).

The buffer size is the size of the blocks that are processed at once. If your buffer size is higher (the block is bigger), the time needed to fill it is longer, but then it is processed all at once.
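The relationship can be sketched with a quick calculation. This is a simplified model (it assumes a 44.1 kHz sample rate and ignores the extra driver and converter overhead that a real interface adds on top of the theoretical minimum):

```python
# Rough relationship between buffer size and one-way latency.
# Assumes a 44.1 kHz sample rate; real interfaces add driver and
# converter overhead on top of this theoretical minimum.

def buffer_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """Time needed to fill one buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000

for size in (64, 128, 256, 512, 1024):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):6.2f} ms")
```

So doubling the buffer size doubles this part of the latency, which is why a 1024-sample buffer feels noticeably laggy when monitoring through the DAW, while 64 or 128 samples is usually playable.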

So the common practice is to decrease the Buffer Size during recording, to reduce the latency. Once all tracks are recorded, we increase the Buffer Size, which increases the latency, but during editing we don’t care any more. Thanks to this, we can use more plug-ins without audio drop-outs.

Thanks for the reply. I was confused about the priority (as I assumed they were basically the same thing). I managed to find a PDF which explained that in the version of Cubase I was using, the buffer size setting takes priority on Windows, while on Mac the device control panel setting (latency in ms) takes priority.

I appreciate you taking the time to help :slight_smile:


Could you send a link to the PDF? From my experience, this is exactly the same on Windows and Mac. On both of them it is always mainly the Buffer Size, and the buffer size is what creates the latency, in fact. So from this point of view, the Buffer Size has higher priority (but in fact they are the same thing, two sides of one coin).

There is one more question: what do you mean by “priority”? You cannot set the Buffer Size and the latency on one device at the same time. Or can you?

Sure. It is page 48 where it says this.

I think with the Mac they are saying the adjustment is handled by the audio device menu (device specific) whereas on Windows the setting set in the DAW is the one that is used (presumably with whatever setting is selected in the device menu being ignored).


Thank you. This is the most important sentence for you: “The size of the audio buffers affect both the latency and the audio performance.” So the Buffer Size is the main parameter that affects the latency. Some sound cards let you set the latency instead, because that is closer to the user’s point of view; users don’t care about the buffer size, they want to set the latency and don’t care what is behind it.

It is the same on Mac and on Windows. The only difference is that on Mac you could control the Buffer Size directly from Cubase (by the way, this option is not in current Cubase anymore; you always have to go to the Control Panel of the sound card).

The version of Cubase I have is an ancient one that was bundled free with a guitar pedal. This is installed on my desktop PC. I have a more modern installation of Ableton that came bundled with my audio interface installed on my laptop. I will be using the laptop for recording as 1) My neighbours aren’t very tolerant of noise and 2) My desktop PC is so loud that it would bleed into any recordings I make.

I wanted to understand how to record onto my desktop to try messing around with a few basic things like compression, EQ and reverb on a recording made with a headset mic I had just bought. I also wanted to see how low I could get the latency without having issues.

In this case quality was of no concern. It was just to practice some basic tweaking, so much so that I didn’t even connect my audio interface. I was trying to use my Sound Blaster Omni. I later found out that it only supports ASIO output anyway so I abandoned the idea.


I would recommend recording the signal without any FX. Of course, you can use them as monitoring FX, for a better feel during the recording, and then apply these FX as Inserts on the recorded Audio track. But please don’t apply them directly to the Input Bus (Channel). It probably wasn’t your idea to apply them to the Input Channel, but it’s better to mention this.

The Buffer Size setting doesn’t affect the signal quality at all. If you can hear crackles and pops while monitoring, they are not in the recorded signal.

This is really useful information, thank you. I must admit that problems with pops, crackles and latency have put me off recording with my computer in the past. However, it has become apparent that recording via computer is the future, as dedicated equipment is becoming rarer, plus features such as plugins and VST instruments make computer-based recording very appealing.


I’m sorry, maybe I wasn’t completely clear.

Pops don’t affect the recording if you are recording Instrument tracks (or MIDI in general). If you are recording Audio tracks, these pops might cause drop-outs in the recorded signal. If there are drop-outs in the recorded signal, you should be informed by a dedicated message.