page 136: Note that AudioWarp does not...

Hi,
I am trying to learn about Cubase's multitrack time-alignment capabilities. I am working with multi-miked drum tracks.

I have seen a lot of tutorials about quantizing audio with a slice-based process, but that seems old-fashioned. I want to use a time-stretch type of solution because my tracks have a lot of bleed between the mics.

I was reading the "AudioWarp Quantizing Multiple Audio Tracks" section on page 136 of the manual and saw this note: "Note that AudioWarp quantizing does not maintain phase coherence".

Yesterday I used this process on a group of tracks and it sounded pretty good. I was also amazed at how much quicker and more automatic it seemed compared to group-track warp processes in Pro Tools, so now I am very surprised to read that the process isn't phase-aligned. It seems like Pro Tools does this type of process while maintaining the phase relationship of each track in the group. I can't imagine how Cubase could provide the function and not maintain the time relationship across the group. Isn't that a primary goal when working with more than one track?

I am wondering if I am reading this note out of the context of its intended message. Does anyone know more about the details of using AudioWarp on multiple tracks?

Thank You!

It’s hard to see how, if you apply a warping algorithm to several WAV files, they could possibly maintain true phase coherence!
The algorithm is doing what it does to each track according to mathematical rules based on frequency, duration, etc. So, in order to sound good, it will have to do slightly different things to each track.
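To make that concrete, here's a rough sketch of the problem (not Cubase's actual algorithm, just naive linear-interpolation resampling, with two stretch factors that differ slightly to stand in for the per-track decisions a real stretch engine makes). Two mics capture the same hit about 1 ms apart; after independent stretching, that delay is no longer constant across the clip:

```python
# Hedged illustration: why independently time-stretched tracks can
# drift out of phase. Uses naive resampling, NOT any DAW's real engine.
import numpy as np

sr = 48000
t = np.arange(sr) / sr                   # 1 second of audio
kick = np.sin(2 * np.pi * 60 * t)        # "kick mic" signal
overhead = np.roll(kick, 48)             # same hit, ~1 ms bleed delay

def stretch(x, factor):
    """Naive time stretch by linear-interpolation resampling."""
    n_out = int(len(x) * factor)
    src = np.linspace(0, len(x) - 1, n_out)
    return np.interp(src, np.arange(len(x)), x)

a = stretch(kick, 1.10)        # each track effectively gets its own,
b = stretch(overhead, 1.1002)  # very slightly different, stretch factor

# The original 1 ms offset now drifts: by the end of the clip the two
# tracks have slipped about 9 samples (~0.2 ms at 48 kHz) relative to
# each other -- enough to comb-filter cymbals when the mics are summed.
drift_samples = len(b) - len(a)
print(f"length difference after stretching: {drift_samples} samples")
```

Even sub-millisecond slip like this is audible as comb filtering on summed drum mics, which is presumably what the manual's warning is about.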

I doubt that Pro Tools’s, or any other DAW’s, algorithm can promise true phase coherence after stretching. I admit I haven’t read the PT manual. Would love to be proven wrong!

That being said, I’ve had great results using time stretching, as minimally as possible, on drums. Just because you might lose some phase coherence doesn’t mean it’ll necessarily sound bad. :wink:

Thanks for the reply.

This is exactly what I was optimistically thinking. I have seen how time-stretch algorithms do indeed introduce minor timing shifts, and so it occurred to me that as each track is processed there will be very minor differences between them.

It may be the very same with Pro Tools, but in their docs they emphasize that the “Warp Markers” you have chosen are used as timing references by all the tracks in the group, the idea being that this shared reference provides the time alignment.
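The shared-marker idea can be sketched like this (the marker times below are made up for illustration, and this is just a piecewise-linear map, not a claim about how either DAW renders the audio between markers): because every track is warped through the *same* time map, events that were simultaneous land at the same new time, at least at the marker grid.

```python
# Hedged sketch of a shared warp map: one set of
# (source_time, target_time) markers drives every track in the group.
import numpy as np

markers_src = np.array([0.0, 1.0, 2.0])   # original hit times (s), hypothetical
markers_dst = np.array([0.0, 0.9, 2.0])   # quantized hit times (s), hypothetical

def warp_time(t, src, dst):
    """Map a source time t through the shared piecewise-linear warp."""
    return float(np.interp(t, src, dst))

# Two mics capture the same snare hit at t = 1.0 s; both map to 0.9 s:
for mic in ("snare_top", "overhead"):
    print(mic, warp_time(1.0, markers_src, markers_dst))  # both print 0.9
```

Note that what happens *between* markers still depends on how each track's audio is actually rendered by the stretch engine, which may be exactly why Steinberg hedges on phase coherence even if the marker positions themselves are shared.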

I am hoping to learn that Cubase uses a similar process to provide a best case scenario and that Steinberg is simply more exact in their description of the final results when they explain that phase coherence is not maintained.

As I say, I was amazed at how easy and automatic everything has been, but I am finding the documentation a bit confusing. Tutorials describing the slice method of quantizing predominate, and the tutorials I am finding about AudioWarp quantizing are confusing because recent versions of Cubase have slightly different choices and particulars. There seems to be a lack of AudioWarp quantizing info for the specific version I am learning on, Cubase 7.5.

I stumbled through it yesterday using a v6.0 tutorial I found… and it worked really well.

I’d just like to learn more and feel like I have mastered the technique.

Thank you for your help!

My pleasure, Citizen. I’ve found that the manuals have been supplying decreasing amounts of deep information on many subjects, as if features were added after they finished the manual! :wink:
And yeah, I see what you’re saying: if you don’t make slices, are all the tracks within a linked folder like that referencing the exact same points in time? Great question.

For me, I’ve almost always ended up with the slicing method anyway, in order to minimize processing, and going with the “close gaps” method of time stretching. This is, of course, only useful for correcting a drummer, not for changing tempo. Which are you trying to do, BTW?

I have two goals.

To preface, the thing that attracted me to Cubase was my perception that the tempo matching tools are the best around.

Yesterday I was working on a project that started with an acoustic guitar. I used it to make a tempo map that matched the guitarist. I added a chord track and several MIDI parts. That worked great. I played drums to it and used the AudioWarp to tighten up my drumming to the custom tempo grid that was suggested by the acoustic guitar track. The results were just as I had hoped for.

For two decades I have struggled with either working on a steady grid and feeling the music was static, or ignoring the grid and thinking in terms of markers and absolute time. I am excited to see the two approaches finally converging in a way that has seemed like an obvious goal since the beginning of DAWs, and I am very excited to see that Cubase has found a way to make it happen.

I anticipate that I will work with various sources as scratch tracks to define tempo and then move forward from there.

The initial tracks may be drums, and so I’d extract tempo from them, or they may be guitar or keyboard.

I am hoping this type of workflow will allow each song to evolve in the manner I use when I don’t look at a grid, while enjoying all the power and ease of using a grid on freely timed music.

I am impressed to see how Cubase seems to address ideas that help musicians feel comfortable in the production process.

Yeah. There have been major advancements in this area, and, as always, Cubase is an innovator. That being said, for years Cubase really lagged behind on pitch/time stretching. After using Acid or Live to play with loops, I’d want to cry doing the same in Cubase. I still find it a bit clunky and, really, I avoid time stretching whenever possible! LOL. I just don’t trust it. I’ll always slice or use REX files if I can.

But now, Cubase is really coming into its own in this area. Glad to have you in the Cubase camp! :wink: