I’ve read the Cubase user manual, searched this forum, and otherwise looked high and low for best practices on using Tempo Detection to capture the “feel” of a live performance.
Consider the following scenario. One records freely into Cubase (i.e., not to a click), and the part is performed either (a) with a sound source that lacks clear transients (e.g., a string pad), or (b) with a transient-rich sound source (e.g., a piano) but without regular temporal indicators. As an example of the latter, a freely played piano performance may contain a great deal of rubato and widely varied rhythms, alternating, for instance, phrases of sixteenth-note runs, quarter-note-triplet harmonic progressions, and cadences ending on rubato-stretched whole-note chords. Further, suppose portions of the performance will later be supplemented with tempo-locked phrases (e.g., sequenced or arpeggiated patterns) and/or synchronized effects (e.g., LFO rates used as modulation sources, beat-synced delays).
In my experience, it is very difficult to coax Cubase’s Tempo Detection into capturing accurate temporal reference points in freely played performances. The detected tempo events often land well off the desired “hit points” in the performance, so the detected tempo values are imprecise and the music ends up misaligned to the bar/beat grid.
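To see why rubato-heavy material defeats onset-based detection, here is a minimal sketch. This is not Cubase’s actual Tempo Detection algorithm; it is a naive estimator that treats each gap between consecutive onsets as one beat and converts it to a local BPM. The onset times are hypothetical, in seconds.

```python
def ioi_bpms(onsets):
    """Convert consecutive inter-onset intervals to local BPM estimates,
    assuming one beat per gap."""
    return [60.0 / (b - a) for a, b in zip(onsets, onsets[1:])]

# A steady quarter-note pulse at 100 BPM: every gap is 0.6 s.
steady = [0.0, 0.6, 1.2, 1.8, 2.4]

# The same pulse played with rubato: the gaps stretch and contract.
rubato = [0.0, 0.55, 1.25, 2.05, 2.5]

print(ioi_bpms(steady))  # local estimates all ≈ 100 BPM
print(ioi_bpms(rubato))  # local estimates scatter widely
```

With the steady pulse, every local estimate agrees; with rubato, the estimates scatter from roughly 75 to 133 BPM, and any detector has to guess which of those swings are expressive timing and which are genuine tempo changes.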
I’ve tried a couple of different approaches, and each has an undesirable side effect. In the first, I decomposed the performance: for the most part I played a single-line melody with one hand while “tapping out” a harmonically relevant countermelody (e.g., a bass line) with the other. The goal was to tap the countermelody with, say, quarter-beat regularity, in the hope that Cubase’s Tempo Detection algorithm latches onto its rhythmic pulse. This tended to improve the detection result, although not always; attempts were more successful with a transient-rich sound source. The downside of this approach is that it destroys the performance’s “feel”, which is often an unacceptable compromise.
The second approach is to play the performance freely as described above, with or without a transient-rich sound source. Then, assuming a MIDI recording, copy the recorded track and reduce the copy to a single line. The goal here is not to derive a melody from the performance; it is to extract a series of “hit points” to serve as a guide track for Cubase’s Tempo Detection algorithm. Next, shorten the copy’s note lengths to the smallest reasonable rhythmic value (e.g., sixteenth notes) and assign it a percussive sound source. Lastly, run Tempo Detection on this guide track. (Optionally, render it to audio first.) One would imagine this approach yields far better results, but the outcome was mixed. Its downside is the required setup time: after a few tries it becomes frustrating and drains the creative mood.
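The guide-track preprocessing in this second approach can be sketched as code. This is only an illustration done outside Cubase; the notes are hypothetical (start_beat, pitch, length_in_beats) tuples standing in for the recorded MIDI, and a real workflow would do the same transformation with the Logical Editor or a MIDI library.

```python
SIXTEENTH = 0.25  # one sixteenth note, measured in quarter-note beats

def make_guide_track(notes):
    """Collapse a chordal performance to one hit point per distinct onset,
    truncating every note to a sixteenth so the transients dominate."""
    onsets = {}
    for start, pitch, length in notes:
        # Keep only the lowest pitch at each onset (a bass-like guide line).
        if start not in onsets or pitch < onsets[start]:
            onsets[start] = pitch
    return [(start, onsets[start], SIXTEENTH) for start in sorted(onsets)]

# A freely played progression: stacked chord tones, long ringing lengths.
performance = [
    (0.0, 48, 4.0), (0.0, 64, 4.0), (0.0, 67, 4.0),  # C chord, held
    (3.9, 45, 2.0), (3.9, 60, 2.0),                  # rubato-late bass entry
    (6.1, 50, 1.0),                                  # single passing note
]

print(make_guide_track(performance))
# → [(0.0, 48, 0.25), (3.9, 45, 0.25), (6.1, 50, 0.25)]
```

The result is one short percussive hit per onset, which is exactly the kind of sparse, transient-friendly material the detection algorithm is described as handling best.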
In summary, the goal is to capture a freely played performance in Cubase such that the tempo track accurately reflects the performance’s underlying pulse, aligning the performance to bars/beats so that tempo-synced phrases and/or FX using sound sources locked to the host’s tempo can be applied later.
Any best practices or workflow recommendations on achieving this with Tempo Detection?