Best workflow please - Match MIDI notes onset/offset between two separately recorded tracks

Hi -

I have a MIDI track I’ve recorded while listening to an audio vocal melody - the track is harmony notes that could accompany the audio vocal track.

I’ve then extracted MIDI from the audio, but comparing the two tracks I notice my recorded MIDI notes don’t quite have the same start and stop times as the notes on the audio-to-MIDI track.

I’d like the start and stop times for the notes on each of the two tracks to match exactly.

What’s the best way (as in least labor intensive) to adjust those start/stop times to do that please?

Thank you in advance!

If I’m understanding correctly that you want to match one MIDI track’s timing to the other, using the reference track as a groove for quantizing may be a way to do it. Here is a video that shows this in the context of drum grooves:

But if the parts are close to one another in timing (just not identical), I don’t see why the same approach couldn’t work for melodic parts. (I typically use groove quantizing to tighten certain musical parts against a drum part without making the timing identical, for example tightening a bass against the drums. I think it could also work with audio, but all my instrumental tracks are MIDI.)
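For intuition, here’s a rough sketch in Python of what groove quantizing to a reference part amounts to. This is only an illustration with made-up data and function names, not anything Cubase does internally: each note’s start is pulled toward the nearest onset in the reference part, with a strength setting controlling whether the timing is merely tightened or made identical.

```python
# Hypothetical sketch of groove quantizing against a reference track:
# snap each note's start toward the nearest onset in the reference part,
# keeping the note's duration. Times are in beats; all names are made up.

def groove_quantize(notes, reference_onsets, strength=1.0):
    """Move each (start, duration) note toward the nearest reference onset.

    strength=1.0 snaps fully (timing matches exactly); smaller values
    only tighten the timing toward the reference.
    """
    quantized = []
    for start, duration in notes:
        nearest = min(reference_onsets, key=lambda t: abs(t - start))
        new_start = start + strength * (nearest - start)
        quantized.append((new_start, duration))
    return quantized

# A loosely played part against a straight reference grid:
reference = [0.0, 1.0, 2.0, 3.0]
played = [(0.07, 0.5), (0.96, 0.5), (2.10, 0.5), (2.95, 0.5)]

print(groove_quantize(played, reference))       # fully snapped to reference
print(groove_quantize(played, reference, 0.5))  # 50% tightened
```

At strength 1.0 the note starts match the reference exactly, which is what the original question asks for; note-offs could be matched the same way against the reference notes’ ends.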


This seems like it is going to be wonderful, @rickpaul . This is the 2nd time you’ve come out with an excellent solution for me in a very short period of time … many thanks to you sir! :astronaut: (“rocket science!”) :slight_smile:

(You can skip the rest, it’s just there for documenting:

I tested it on a different kind of melodic part: MIDI pianos. I duplicated the MIDI piano track, created a second PianoTeq instance, and routed the duplicate MIDI part to it, then reversed the phase of the second PianoTeq instance. I heard the expected comb filtering (apparently the two PianoTeq instances don’t generate quite identical piano sounds when fed the same MIDI info).

I then applied a very exaggerated swing to the duplicate part, and as expected heard the comb filtering disappear and the overall loudness increase.

Then I dragged the swingy MIDI piano track to the Quantize panel, selected the original “unswingy” MIDI piano track, hit Q for quantize, and yay, back came the comb filtering … the swing timing changes of the 2nd piano track got applied to the original one!)
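The polarity-inversion test above can be illustrated numerically. Here’s a minimal numpy sketch (sample rate and frequency are arbitrary) showing why the cancellation is a good alignment indicator: a signal summed with a time-aligned, polarity-inverted copy nulls completely, while even a small timing offset leaves a residual (heard as the comb filtering going away and the overall level coming back up).

```python
# Numerical illustration of the polarity-inversion "null test" described
# above: identical, time-aligned signals cancel exactly; a timing offset
# breaks the cancellation. Numbers here are arbitrary stand-ins.
import numpy as np

sr = 1000                                  # sample rate (Hz), for the sketch
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 220 * t)       # stand-in for the piano track

aligned_sum = signal + (-signal)           # duplicate, polarity-inverted
shifted = np.roll(signal, 5)               # same audio, 5 samples late
misaligned_sum = shifted + (-signal)

print(np.max(np.abs(aligned_sum)))         # 0.0 — perfect null when aligned
print(np.max(np.abs(misaligned_sum)))      # > 0 — cancellation breaks down
```

In the real experiment the two PianoTeq renders weren’t bit-identical, so alignment gave partial cancellation (comb filtering) rather than a perfect null, but the principle is the same.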

Next experiment … see if I can drag the audio of the melody into the quantize panel and see if Cubase will apply the timing of that to the MIDI track I recorded of the harmony notes!


Interesting experiment. I’ve really only used this for tightening musical parts against drums. However, in a “past life” (when I was still using SONAR), I did something similar with audio, to tighten Steinberg Virtual Guitarist parts against drums. (That had to be done after rendering, since quantizing the MIDI wouldn’t help: the notes are generated by the virtual instrument from the chords you play, plus keyswitches that tell it which pattern to use.)

On the audio vocal tightening, there will likely be easier/better ways to do something similar in Cubase and/or with modern add-ons. I think Cubase Pro’s VariAudio has a function to do that sort of thing in audio. However, the one time I tried it, it was way off – not even close to my usual method, which generally works well.

I’ve long used a Synchro Arts product called VocAlign (currently VocAlign Ultra, but I’d started when SONAR had a very cut-down version of VocAlign, maybe under a different name, built into the high-end version of that DAW) for vocal tightening, and it is extremely good on that front. It works as an ARA extension in Cubase: you add an instance to any tracks or clips you want to align (and also to the track/clip you are using for reference), then drag the vocal you want to use as the reference (typically a lead vocal, if matching harmonies to the lead’s phrasing) to the reference part of it, and drag any other tracks you want aligned to it to the part for tracks being adjusted. There are various presets for degrees of tightening, and you can also adjust other parameters (e.g. formant shifts). Sometimes (most of the time?) it is just magic how well it does with no further tweaking; other times you have to set some synchronization points between the reference and the instance being tightened to it, but the end results are generally very good.

Hi @rickpaul ,

It’s maybe a bit weird, but here I’m using the audio as a template to correct MIDI timing. The use case is played MIDI notes that tell Waves Harmony which harmony notes to sing. My MIDI harmony note-ons and note-offs didn’t match the vocals very well, so that’s what this was all about.

I have Revoice Pro (“RVP”, same engine as VocAlign Ultra, I believe), and I love it like you do, including its amazing and amazingly simple ability to match timing. I may wind up using it when all is said and done, but at least as of today I was thinking that if I could get by with the harmony voice being handled by only one processor (Waves Harmony, which is needed in any case to generate the vocal harmony) instead of two (adding RVP), it would probably be better. :person_shrugging:


Rubenstein Goldberg, III :smile:

(By the way I’d never even opened the quantize panel before, what a great intro to it, thanks again!)
