I’m using a Sequentix Cirklon to sequence a selection of analog and MIDI equipment, all of which is fed into Cubase 9.5 for effects and recording. The Cirklon is synchronized to Cubase using Expert Sleepers Silent Way to generate stable, accurate MIDI clock. It’s a setup I’ve been using for almost 10 years, and it works very well. I’m on a MacBook Air running macOS 10.11 with an RME Fireface UC. The buffer size is fairly large at 256 samples, but since I’m monitoring everything through Cubase the latency isn’t a problem for me.
Recently I decided to try sequencing some VST instruments over the Cirklon’s USB-MIDI interface (it appears as a class-compliant device with 4 ports). However, I’m finding that the amount of timing jitter is too high for my taste: something like +/- 2.5 ms, which is too much for tight drum programming. I understand that some jitter is expected with USB-MIDI due to the fixed polling rate employed by the OS, but I would expect that to be on the order of +/- 1 ms, and I’m seeing more than double that.
“Jitter” isn’t quite the right word for the phenomenon; it’s more like a cyclic drift, or a form of aliasing. See the attached image. Each lane shows one successive hit in a 4/4 pattern of a 909 kick sample triggered by the Cirklon, stacked vertically to make it easier to compare where each hit falls on the grid. Each successive hit drifts off the grid by about 0.5 ms, and there is a periodic “correction” that produces a big timing glitch. The difference between the earliest and latest notes is ~5 ms. Reducing the buffer size to 64 (the lowest setting that works on my under-powered MacBook) reduces the magnitude of the jitter, but doesn’t completely fix it.
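To sanity-check that intuition, here is a toy model (the tempo, sample rate, and buffer size are assumptions for illustration, not measurements): if note-ons are snapped to audio buffer boundaries, the offset drifts by a fixed step per hit and periodically wraps around, which is exactly the sawtooth-with-glitch pattern in the image.

```python
import math

# Toy model of buffer-boundary quantization. All numbers are
# illustrative assumptions: 120 BPM, 44.1 kHz, 256-sample buffers.
SR = 44100
BUF = 256
buf_dur = BUF / SR          # ~5.8 ms per buffer
beat = 0.5                  # seconds per quarter note at 120 BPM

offsets_ms = []
for n in range(16):
    t = n * beat                            # ideal note-on time
    q = math.ceil(t / buf_dur) * buf_dur    # rendered at next buffer start
    offsets_ms.append((q - t) * 1000)       # how late each hit lands, in ms

# The offsets drift by a constant step each beat and wrap around, so the
# earliest/latest spread approaches one full buffer (~5.8 ms at 256).
print([round(o, 2) for o in offsets_ms])
```

Running the same model with a 64-sample buffer caps the spread at ~1.45 ms, which lines up with the improvement (but not cure) I saw when lowering the buffer size.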
But here’s the interesting part: the jitter is only evident in the audio output (in this case, the output of Battery 4). When the same stream of MIDI notes is recorded to a MIDI track, there is far less jitter in the recorded notes (under 1 ms). This is, of course, without any quantization applied to the input or the recorded notes.
I could certainly be wrong about this, but what seems to be happening is that the timestamps attached to incoming events are being ignored during live input monitoring. The timestamps do seem to be used to place recorded events accurately on the grid, but live monitoring doesn’t get the same benefit.
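Here’s a hypothetical sketch of the two paths I’m imagining (the function names and logic are my guesses about the behavior, not actual Cubase internals):

```python
import math

SR = 44100   # assumed sample rate
BUF = 256    # assumed buffer size

def live_monitor_offset(event_time_s):
    """Suspected live path: the event is rendered at the start of the
    next audio buffer, so the driver timestamp is effectively discarded.
    The resulting error is anywhere from 0 up to one buffer (~5.8 ms)."""
    next_buf_sample = math.ceil(event_time_s * SR / BUF) * BUF
    return next_buf_sample / SR - event_time_s

def recorded_offset(event_time_s):
    """Record path: the timestamp stays with the event, so on playback
    it can be placed sample-accurately within the buffer."""
    placed_sample = round(event_time_s * SR)
    return placed_sample / SR - event_time_s
```

The record path’s error is bounded by half a sample; the live path’s error is bounded by a whole buffer, which would explain why the recorded notes look so much tighter than what I hear.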
Perhaps the motivation for ignoring the timestamps is to play live notes as quickly as possible, to minimize latency? If that is indeed the case, I would gladly trade a small amount of extra latency for reduced jitter. It’s easy to correct latency after the fact, but virtually impossible to remove jitter once it has been injected.
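If that’s what’s happening, the fix seems conceptually simple. Again, a sketch under assumed numbers, not a feature I know Cubase to expose: delay every live event by a fixed amount (one buffer here), then place it sample-accurately using its timestamp. The added latency is constant, and the jitter drops to sub-sample level.

```python
SR = 44100   # assumed sample rate
BUF = 256    # assumed buffer size

def schedule_with_timestamp(event_time_s, latency_buffers=1):
    """Trigger the event one buffer into the future, at its exact sample
    position. Constant latency, near-zero jitter."""
    target = event_time_s + latency_buffers * BUF / SR
    return round(target * SR)   # absolute sample index to trigger at

# Every event lands late by exactly one buffer, give or take half a sample:
errors = [schedule_with_timestamp(t) / SR - t - BUF / SR
          for t in (0.1, 0.25, 0.3337, 0.5)]
```

The constant one-buffer delay is exactly the kind of latency that’s trivial to compensate for, which is why I’d happily take this trade.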
Ha, sorry for the essay! Any thoughts?