Depends on what you are trying to achieve, but for the most part they are similar enough not to matter, and the most important thing is that "the sound quality of the NI KA 6 is a bit better".
Yes, that is true. I am just curious what those latency figures mean. Is input latency the time between pressing a key and the note registering? I really don't know what they mean in the real world. But you are right, the difference is so minimal I probably could not hear it.
Latency is mainly an issue when you are trying to do something live (in real time). A few scenarios to help illustrate:
VSTi playback - output latency measures the time between, say, a MIDI keyboard note being played and you hearing the sound through your monitors. You get the same thing on a real piano: the time between hitting a key and the hammer striking the string. If this gets too big, everything feels a bit sluggish and you may have difficulty playing faster passages. I find anything 10 ms and under is more than fine for me, and I can indeed play with more if need be.
Moving a Cubase fader, EQ, etc. to record automation - same as the VSTi case: the output latency affects the time between the movement and hearing the resulting audio change. I find I can tolerate much higher output latency here, much higher than in the VSTi case (on the order of 100 ms).
Recording audio (from a vocal mic, guitar or similar) AND using VST effects to monitor that audio as you record (e.g. guitar effects or reverb) - in this case you get the cumulative impact of input + output latency. I find vocals require a lower latency, even lower than the VSTi case, or it all gets a bit weird to sing.
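To make the "cumulative" point concrete, here is a rough sketch of the round-trip monitoring delay. The buffer sizes, sample rate and overhead figure are illustrative assumptions, not measurements from any specific interface:

```python
# Rough sketch: when monitoring through the DAW, the performer hears
# roughly input-buffer delay + output-buffer delay, plus some fixed
# overhead for AD/DA conversion and driver safety buffers.
SAMPLE_RATE = 44_100  # Hz (assumed; 48 kHz is equally common)

def side_latency_ms(buffer_samples: int) -> float:
    """Latency contributed by one buffer: samples / samples-per-second."""
    return buffer_samples / SAMPLE_RATE * 1000.0

def round_trip_ms(in_buf: int, out_buf: int, overhead_ms: float = 1.5) -> float:
    # overhead_ms is a placeholder for converter/driver delay
    return side_latency_ms(in_buf) + side_latency_ms(out_buf) + overhead_ms

print(round_trip_ms(128, 128))  # ~7.3 ms at 44.1 kHz with 128-sample buffers
```

So even a modest 128-sample buffer on each side already puts you above 7 ms before the plugin chain adds anything, which is why vocal monitoring tends to push people toward the smallest buffer their machine can handle.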
Note that the penalty on system resources for very low latency is high. You will find VSTi polyphony is significantly reduced for any VSTi on the realtime path. As such, expect to change your latency setting depending on what you are trying to achieve (say, smaller for recording and larger for the mixing stage).
You can also use external effects, or onboard DSP if the interface has it, for realtime monitoring FX when recording. In my case I have an interface with onboard guitar FX and a vocal chain with compression and reverb. The DAW records the dry signal, but I get to listen to the effects without worrying about the DAW latency setting.
Oh, quick note: whatever the latency is, the DAW automatically compensates for it so everything stays in time.
So in general: when you're recording, lower the buffer size to get lower latency. When you're mixing/exporting, increase the buffer size so you won't get any dropouts.
Also, in general, latencies of around 5 ms and below are not perceivable by most people. It also heavily depends on what sounds you are triggering: it should be obvious that percussive sounds will be more critical than a lush, laid-back string pad.