"Flatten real time processing" NOT WORKING

Hello, it’s been a while! I just wanted to chime in and give my input, in case that helps. Just note that I’m still using C12, on Windows 11, but that doesn’t change anything here.

Image:

Top track is the original.
Second track is quantized (warped, whichever function you use).
Third track is Flattened.

What we see in yellow on the second track is only the stretched waveform as it is displayed while you move warp markers. It does NOT reflect what we actually hear.

When we use Flatten to apply the warp process, the newly displayed waveform finally reflects what we were actually hearing while the warp process was active and running in real time. This is exactly why people get so confused!

The only thing Flatten does is render the audio. That’s why the result is exactly the same when using Render instead of Flatten. The only difference is that Flatten takes the raw file and ignores the channel configuration (mono stays mono… you get it).

When we compare the waveform before and after the render, we can immediately see that the warp process heavily distorts the original audio.
I have annotated the same image that is at the top of my post:

It adds a fifth cycle. Yes.
Then it starts in sync again at the green line (or red arrow) before going wild once more.

But of course there’s more. And that’s the actual root issue about which I made a massive post back in the day. The post in question is this one, which is my third most viewed topic after the audio setup guide and the bug wiki (the latter is now obsolete, I gave up):

Image taken from the user manual:

It says: “plays back in exactly the same way”.

Except that when we play back both the real-time and the rendered audio events simultaneously and invert the phase of one of them, we can instantly notice that there is definitely a difference between the two, since they do not cancel out! Moreover, we can hear the phasing effect discussed in my other topic “Phase drift and glitches”.
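Just to spell the null test out, here is a minimal sketch of how I check it outside the DAW (the file names and the NumPy/soundfile dependency are my own assumptions; any tool that sums one file against a polarity-inverted copy of the other does the same thing):

```python
import numpy as np
import soundfile as sf  # any WAV reader works; soundfile is just what I assume here

# Hypothetical exports: a bounce of the real-time warped playback vs. the flattened event
a, sr = sf.read("realtime_warp_bounce.wav")
b, _ = sf.read("flattened.wav")

n = min(len(a), len(b))
residual = a[:n] - b[:n]  # identical to summing the two with one side polarity-inverted

print("residual RMS:", np.sqrt(np.mean(residual ** 2)))
# ~0 would mean they really "play back in exactly the same way"; anything audible means they don't.
```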

My final guess… drum roll… is that there is something wild in the real-time algorithm that makes the audio drift back and forth, like a slow LFO that very slightly slows down or speeds up the waveform. I don’t know how else to explain it, but this is what causes the phasing effect when trying to do a null test.
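To illustrate what I mean (pure assumption on my part, not Steinberg’s actual code): take a test tone, push it back and forth in time by a fraction of a millisecond with a slow sine “LFO”, and the null test against the dry copy stops cancelling:

```python
import numpy as np

sr = 48000
t = np.arange(sr * 2) / sr                       # 2 seconds
dry = np.sin(2 * np.pi * 220 * t)                # 220 Hz test tone

drift = 0.0004 * np.sin(2 * np.pi * 0.5 * t)     # +/- 0.4 ms wobble at 0.5 Hz (made-up values)
wet = np.interp(t + drift, t, dry)               # same tone, slowly shifted back and forth in time

residual = dry - wet                             # the null test
print("peak residual:", np.abs(residual).max())  # nowhere near zero
# The residual swells and fades as the offset sweeps, which is exactly the phasing sound.
```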

Soooo, since the audio “oscillates”, what happens when we render it and look at its final waveform? Well, it’s all over the place!

Additional examples:

Same audio file, but quantized to different points. Sometimes, but not always, it starts in sync, yet the waveform quickly starts drifting/distorting again.

Honestly I don’t even know how it manages to sound that close to the original, because the waveforms are really not the same at all. Like, in my first screenshot at the top of the post, I get 5 cycles instead of 4 over almost the same length of time, but the pitch doesn’t change!? I understand that it must alter the waveform in order to retain the pitch, but the result is inconsistent.
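For what it’s worth, here is a toy overlap-add stretch (my own naive sketch, definitely not Cubase’s algorithm) that shows how a time-stretcher can spit out an extra cycle at the same pitch: it copies grains of the input and re-spaces them instead of slowing the waveform down, which also smears the shape where the grains overlap:

```python
import numpy as np

sr = 48000
src = np.sin(2 * np.pi * 100 * np.arange(int(sr * 0.04)) / sr)  # 4 cycles of 100 Hz

stretch = 1.25                     # 25% longer, i.e. room for a 5th cycle
grain = 512                        # grain size in samples
hop_out = grain // 2               # write position advances by half a grain
hop_in = int(hop_out / stretch)    # read position advances more slowly than we write

win = np.hanning(grain)
out = np.zeros(int(len(src) * stretch) + grain)
pos_in = pos_out = 0
while pos_in + grain <= len(src):
    out[pos_out:pos_out + grain] += src[pos_in:pos_in + grain] * win
    pos_in += hop_in
    pos_out += hop_out

# 'out' is longer than 'src' yet still oscillates at ~100 Hz: whole chunks of the waveform
# get reused, so you end up with the same pitch, an extra cycle, and a mangled-looking shape.
```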

I forgot to take more screenshots, but the transients can also start late relative to the quantized event. When I say it’s all over the place, it truly is, and it is entirely consistent with the phasing behaviour I’ve discovered.
We’re finally starting to prove these things right here.
That’s not theory, those are facts.

Why not develop a better algorithm, specifically for drums for instance?
It could detect what is a hit and what is silence (with a tweakable threshold) and move each hit as a whole, shifting it between the silences. That way only the silences get stretched and the actual hits remain untouched, without any timing issue whatsoever. Of course this is not possible for sustained and more complex material, but this whole issue is mainly about drums anyway, I guess.
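Just to make the idea concrete, here is a rough sketch of the detection part (the threshold, hold time and function name are all mine, purely illustrative; mono audio assumed):

```python
import numpy as np

def split_hits(audio, sr, threshold=0.05, hold_ms=50):
    """Return (start, end) sample ranges whose absolute level exceeds the threshold."""
    env = np.abs(audio)                  # crude envelope of a mono signal
    hold = int(sr * hold_ms / 1000)      # keep the hit "open" through its decay tail
    hits, start, quiet = [], None, 0
    for i, on in enumerate(env > threshold):
        if on:
            if start is None:
                start = i                # a new hit begins
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet > hold:             # enough silence: the hit is over
                hits.append((start, i - quiet))
                start, quiet = None, 0
    if start is not None:
        hits.append((start, len(audio)))
    return hits
```

Quantizing would then mean copying each (start, end) block verbatim to its new grid position and only padding or trimming the silent gaps in between, so the hits themselves are never time-stretched.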

Farewell

EDIT:
I’ve been following this topic since it was created; yes, I am now a ghost.

  • If you flatten/render the same event multiple times, you’ll get different results each time.
  • I always thought that MPEX algorithms were not supposed to be available for AudioWarp; they are meant for the offline Time Stretch and Pitch Shift processes, so I don’t know how you guys managed to select one in the Flatten dialog. If it is in fact compatible, the conditions under which it becomes available should be documented somewhere, but they aren’t.
    Maybe for very short events, because if you use MPEX on a one-minute event it would take several times longer to process, which is not realistic when quantizing multiple drum tracks at the same time… To save time, just ask the drummer to play better and it would only take a few minutes to get it recorded properly. :joy:
    Anyway, it seems that MPEX gives all-over-the-place results too.
    To me this looks like another Cubase bug, because MPEX with Time Stretch is meant for linear stretches, in one go, not for several markers that stretch your audio in different places by different amounts.