[BUG] Recording Midi at ANY latency on ANY system (sync)

Please excuse the thread-drift.

I’ve been thinking about what robotpriest said, that Cubase should compensate for the tendency to play early because of VSTi latency delay during recording.

I don’t agree, but it’s an interesting idea which goes something like this.

I’m playing my VSTi along to Cubase at a latency setting of, let’s say, 256 samples. At that setting there’s some delay but it’s still playable. Because I’m listening to the VSTi, I’m playing ahead of the beat so it sounds in time with the track. As we know, this means the MIDI will be ahead of the beat and when I play it back it will sound bad.

Cubase could, I guess, automatically print the MIDI around 6ms later than it should be, so that my ‘early’ playing is compensated for (Cubase knows what the latency setting is, and can use that to set this auto delay) - in theory it will be nicely in sync when I play it back.
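Just to put a number on that “around 6ms” (a back-of-envelope sketch of the arithmetic, not anything Cubase actually does - the function name is mine):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """One buffer's worth of delay, in milliseconds."""
    return buffer_samples / sample_rate * 1000.0

# 256 samples at 44.1 kHz comes out at roughly 5.8 ms -- the "around 6ms"
# a compensating offset would have to add back. (Real round-trip latency
# is higher: input buffer + output buffer + converter delays.)
```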

I don’t believe this would work in practice.

Firstly, it would only compensate for the buffer-related delay - as conman pointed out, Cubase can’t know when you actually HEAR the VSTi note.

But the main thing is that the delay actually messes with your timing in a way that a simple offset would not cure.

I really think the only way is to minimise the latency delay - keep the buffer as low as possible and remember to Constrain Delay Compensation when recording a VSTi.

I don’t understand what you’re saying, there.

Anyway, I see that you didn’t answer my question about whether or not (a) you really do know that it’s not feasible that the early placement of MIDI data could be caused by performers playing early, or (b) you simply believe that to be the case. If it’s ‘(b)’, you might, of course, be wrong. But if it’s ‘(a)’, please explain - please share your knowledge. I’m trying to keep an open mind about whether or not that could be a significant cause of early placement of MIDI data. It seems plausible to me, but if you can prove that it’s not a possibility, please do so - that’s then something that can be eliminated. On the other hand, if you can’t prove that, please stop saying it’s not a possible explanation.

Re the quote above, please elaborate. Did you mean, for instance, that:

1) Cubase doesn’t direct the MIDI note-on to the VSTi until after it logs the note-on in the timeline, and by then it’s too late to alter the placement?

2) During recording, Cubase doesn’t have the information that it does have during playback about how much time passes between the MIDI note-on and the start of the corresponding VSTi note, so even if the programmers wanted to create an option to delay the placement of the MIDI by that much during recording, it wouldn’t be possible?

3) Even by the time the recording ends, Cubase still doesn’t know what the delay is between the MIDI note-on and the start of the VSTi note (i.e. whatever is used during delay compensation in playback), and it would therefore be impossible for the programmers to make an adjustment, after recording, to delay the placement of the MIDI by that amount?

Or am I barking up the wrong tree? Was it something quite different that you meant?

Does that article say that it’s impossible for the early placement of MIDI to be due to the performers playing early, or that, even if that’s possible, Cubase can’t do anything about it?

I always believe you - as I do anyone else on the forum - when it’s clear that what you present as facts are truly facts and not just opinions.

you really do know that it’s not feasible that the early placement of MIDI data could be caused by performers playing early, or

Of course it is. Especially when Cubase is working properly.
The thread’s premise (not mine alone) is that notes are being misplaced, and that is what I am working on. Because I can’t reproduce it on my system, I’m trying to hunt for a settings scenario that would produce it here.

Note misplacements are what the OP is concerned about, and exactly when the notes are played is of no matter.

1) Cubase doesn’t direct the MIDI note-on to the VSTi until after it logs the note-on in the time line, and then it’s too late to alter the placement?

Well, tell me how it could be any different. Anyway, as this involves MIDI note misplacement, it doesn’t matter what sounds, or even whether there’s any sound at all. The OP is saying that he plays his keyboard at point X and the note is then placed somewhere before X. At higher latencies, notes will vary in their timeline placement offsets (more noticeably than at “normal” latencies) because the MIDI comes in packets and the notes are not always in the same place within the packets. You’ll always get variation, but usually it’s not noticeable. You know - like real live playing.

Does that article say that it’s impossible for the early placement of MIDI to be due to the performers playing early, or that, even if that’s possible, Cubase can’t do anything about it?

No. Why would it? And if it was down to the performer’s playing then no, Cubase couldn’t do anything about it as it’s not a Cubase problem.

These questions? Are they your opinions? :mrgreen:

A word on why higher latencies, since it has been mentioned that you need higher latencies for mixing etc.
From Cubase 6, or even 5, the bar on system requirements went up quite a bit. As soon as I saw that, I realised I needed a new system to run things reliably, and it does very well.
I’d say, from my own experience and that of other local Cubase users, who run anything from combo-sized Projects to mid-sized orchestral compositions, that you need to look very carefully at what you need to do the job you want to do.

The more things you want to do with any high-end DAW the more power, and hence money, you need. That or a very good knowledge of system resources.
So if you are running a Quad core when you really need an i7 system for doing the odd orchestra library piece plus FX then you must expect the odd strange problem to crop up.
And if you have to change the latencies often then I’d say look to your system for a possible overclock or upgrade.

Why am I on about latencies and systems? The thread title.

Not to interfere with this entertaining disagreement, but it’s pretty easy to test midi record placement by looping a midi cable and recording quantized notes from a DAW back into the same DAW and comparing them.

What you want to do is repeat that test a few times with buffer settings from very low to very high and see if the record placement moves around when you change buffer settings - it shouldn’t. There may be a tiny offset between the recorded notes and the original notes - not unusual - and IIRC most pro DAWs have a record-placement adjustment specifically for that, so you can adjust for your specific system and align the timing for MIDI and audio to account for things beyond its control.

Run a MIDI loopback test and see what you get. You’ll know for sure what the original note timing is and where it should be. If you want to go further, don’t quantize the original; use note intervals that fall all over the place, and see if the result - even if it’s all offset by the same tiny amount - isn’t shifting around along the way.

A really good stress test is to loop back a track with lots of notes and controller data and patiently go through and compare the timing of it all to the original.
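If you’d rather look at numbers than squint at the grid, the comparison itself is trivial once you have the note-on times out of the two tracks (by whatever means you like). A minimal sketch in plain Python - the function name and structure are just my own illustration:

```python
def timing_report(original_ms, recorded_ms):
    """Compare note-on times (ms) of a quantized source track with its
    loopback-recorded copy. Returns (mean_offset, jitter). A constant
    mean offset is normal and adjustable; the jitter is what should stay
    near zero and should NOT change when you change the buffer setting."""
    offsets = [rec - orig for orig, rec in zip(original_ms, recorded_ms)]
    mean_offset = sum(offsets) / len(offsets)
    jitter = max(offsets) - min(offsets)
    return mean_offset, jitter

# e.g. quantized 8ths at 120 bpm, recorded back with a small fixed delay:
# timing_report([0, 250, 500, 750], [2.1, 252.0, 502.2, 751.9])
# -> mean offset ~2 ms (fine), jitter ~0.3 ms (also fine)
```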

Hope that helps.

Apart from ffg, nobody is reporting that when they record MIDI and audio together they get the expected results.

To bring the thread back on track a little, can anyone try the test and report back? You can even use a virtual keyboard to test this for the Cubase community.

Good morning gentlemen

I just ran some new tests to check yesterday’s results.

On the studio MacPro, which currently runs a Fireface 800, I ran the simultaneous audio and MIDI recording of repeated keystrokes in Cubase 6.04 and Nuendo 5.52, at different latency settings. The results are entirely consistent - the MIDI note is recorded around 10ms later than the sound of the keystroke. The keyboard is an M-Audio Oxygen 25 (absolute pile of carp, since you ask) and it normally runs through a USB hub. For this test I USB’d it directly to one of the 2 front USB ports on the MacPro (which runs Snow Leopard, OS X 10.6.8). Rather surprisingly there was only a small difference depending on whether the USB hub is used - about 1ms less delay when plugged in direct.

I then ran the test on a MacBook Pro running Cubase 6.05, and using the Mac’s internal soundcard, using the MacBook’s internal microphone to record the keystrokes (same keyboard).

This setup produces a much tighter result - the audio-to-MIDI delay is about 5ms. The distance of the mic from the keyboard is about 9" in the MacPro test and about 15" in the MacBook test. Again, no significant difference at different latency settings (32-2048).
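Worth noting that the mic distance itself eats into those numbers - sound travels at roughly 343 m/s in air, so (my own arithmetic sketch, not part of the test itself):

```python
def mic_delay_ms(distance_inches, speed_of_sound=343.0):
    """Acoustic flight time from keyboard to microphone, in milliseconds."""
    metres = distance_inches * 0.0254
    return metres / speed_of_sound * 1000.0

# 9"  -> ~0.7 ms, 15" -> ~1.1 ms: roughly a millisecond of each measured
# audio-to-MIDI delay is just the air gap, not the computer.
```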

Needless to say, at no time did the MIDI appear before the audio.

Audiocave’s loopback suggestion is a good idea. With no Quantise I would expect some discrepancy when looking at the numbers, but in normal usage I’d expect inaccuracies to fall within the normal range of a live musician’s playing.

At higher latencies, player compensations, coupled with the wider note variations of the MIDI stream (which the player has no control over), will be too inaccurate to give a meaningful report on any system anomalies - after he hits the note, anything could be going on in the computer system to alter the note placement. That said, the note placements under normal conditions should give the player a pretty clear idea of his own playing, so he’d know whether any discrepancy was down to his playing or, if it’s very erratic, to the computer and/or software overdoing something - which seems to be the case here.

A loopback would be a better way to check for system anomalies.

Yes, I agree with that.

Human timing (or the perception of your own timing) is too inaccurate to be used as a basis for that kinda test. That kind of testing with a known source removes the human element, the “perception” of personal timing when playing, and adds a definitive timing reference to compare.

Once (if) you prove the system is accurate for taking incoming notes and placing them correctly, and/or you make adjustments so that it is accurate, anything else after that has to be “perception”, not reality.

The alignment functions are (I think) there so you can align what you play with the “cue” (what you actually hear from the output). You should be able to align audio and MIDI record placement so that both are just about dead-on, if they’re not already. You just can’t always assume that the figures being reported to and fro by ASIO devices and the like are accurate - you have to test them to see. The DAW adjusts for what’s being reported; if what’s being reported is a little off, placement will be a little off.

And of course some things aren’t reported to the system at all… like external converters.

This is all based on tape recording. If you patch an output to an input on a tape deck and record the result, it literally records the audio dub at the exact same place, because the signal moves so fast. You can (should) adjust your DAW’s record placement to do the same thing with audio - since the analog output that you hear is the “cue”, what you always use as a timing reference when recording anything. It should - just like tape - align perfectly with itself, the source audio, if you record an analog audio output back to a new track.

I think I had Cubase’s audio recording aligned to - with an audio loopback test and subsequent record placement adjustment - 2 samples at 44.1, close enough. Anyway, unplug one of your speaker cables and record something back into the DAW and check the alignment with the original audio, and adjust it to get it as close to perfect as possible… like tape.
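For anyone who wants a number rather than zooming in on waveforms, the offset of a recorded loopback against its source can be estimated by cross-correlation. A sketch with NumPy (function name mine; feed it the two waveforms as sample arrays):

```python
import numpy as np

def record_offset_samples(original, recorded):
    """Estimate the lag, in samples, of the loopback recording relative
    to the original. Positive means the recording lands late. After a
    correct record-placement adjustment this should sit within a couple
    of samples of zero."""
    # Peak of the full cross-correlation gives the best-matching lag;
    # index (len(original) - 1) corresponds to zero lag.
    corr = np.correlate(recorded, original, mode="full")
    return int(np.argmax(corr)) - (len(original) - 1)
```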

Guyz, PLEASE STOP!!!

Why would you guys start adding more variables to the test when they aren’t necessary. Please re-read the OP.

Don’t post suggestions if you haven’t performed the very simple test.

thanks


/--------- UPDATE ---------\
Cubase 5.5.3
Cubase 6.5
Reaper 4.

EMU 1616m
Roland UM3 (midi controller)
Korg MicroKorg (midi controller)
Virtual Keyboard (midi controller)
Windows 64
Both midi timestamp settings (little difference)

Every one of my test results shows the MIDI far too early, even in Reaper.

My next step is to try a different soundcard.

You really have no idea do you.

Huh? :slight_smile: You obviously don’t understand what I meant (and sorry for the OT).

I use external converters on both ends of my system, inputs and outputs. There’s no way any DAW could adjust for their minimal latency figures unless I tell it what those figures are, if my goal is to align my audio recording as closely as possible. Unlike ASIO which directly reports those things to the software, they cannot do that, since they aren’t on an ASIO driver reporting to the system, they’re external.

That’s exactly what the “record placement” adjustments are there for?

That’s all I meant. And yes, I do have “an idea”. :slight_smile: Again, sorry for the OT, didn’t mean to sidetrack your discussion.

I really don’t want to derail the thread by going through all this theory about RTL or loopback tests. This is an INPUT test problem related to MIDI, not audio.

If anyone would like to help, please run the basic test and report your results and system spec. (no vsti, no monitoring - DO THE TEST BLINDLY)

and please, no more random theory.

How many people have actually done the audio/MIDI test as described?

Said it before… and will again… the 30+ year old MIDI standard has never played back exactly what you play in. It rounds to the nearest PPQN grid position. I have seen no unusual issues with Cubase 4 and 6 in this regard.

I did a test for another forum to prove my point a couple of years back - record the audio of a VSTi and the MIDI simultaneously. The audio represents what the player hears and intended. Then render the MIDI to a second audio track. The timing is obviously changed - and not by an offset amount that Cubase could compensate for…

In fact, the GOOD news was that no matter what kind of latent audio plugs I threw into the project, Cubase continued to render with sample accuracy. Although offline and online rendering yielded two different results, they were consistent with each other each time. This has NOT always been the case with Steinberg or other sequencers.

Moral of the story? Only use MIDI for what you can’t play with your ten fingers - things that truly need to be sequenced and micro-edited: drums, strings, horns… For a normal keyboard part? Just play it and record the audio. Once you get the MIDI where you want it for those other things, render to audio.

What do you mean by online and offline rendering? Real-time vs non real-time? If so I don’t usually find a difference.

Agree.

I just did the test - recorded my keystrokes and the MIDI-data with no instrument attached.

At any buffer setting I didn’t get any recognisable offset. I tried 128 to 4096 with the same result.

So it seems that in my setting (Win XP, M-Audio) Audio and MIDI are not treated differently as far as placement goes.

So to examine the other issue - that things recorded apparently in time with the click are then placed too early - I will do nothing other than record the click onto an audio track. Since on my system audio and MIDI behaviour do not differ, this will be an interesting test. (Tomorrow.)

The click recorded should NOT be ahead of the grid, otherwise we have a problem (as far as I understand this is another point the OP mentioned). If anything it should be too late (because it takes time for the audio to be processed) but not early.

I don’t remember if it was real-time vs non-real-time audio mixdown (I think it was) or if it was setting up a bus and printing the audio “live”. Pretty sure it was real-time vs not on export - I’d have tested, because BFD (1 or 2) requires real-time export for the cymbals to ring right; that’s the same in Sonar, too. So there IS a difference between real time and not - why else would they have the option?

This is what I’m getting with a high buffer setting of 20ms.