Does Cubase 8 have sample-accurate automation?

So weird that you just bumped this, as I was doing a plethora of wide-ranging automation tests in Logic vs. Cubase vs. Pro Tools just last night, and Cubase's automation timing is horrifying. I can't believe it's this bad. But the worst part is, it's inconsistent.

Just some info… If there is no latency on the track being automated, Cubase still drifts in and out of time randomly, with ASIO-Guard on or off, at any buffer size. But if you put an effect with latency in an insert slot AFTER the effect being automated, the automation will be out of time by the latency of the other effect(s) on that track. This is exactly Logic's problem.

See for yourself… Automate a filter or something, then throw in a linear phase EQ on the next insert.
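To illustrate the mechanism being described (a toy sketch, not any DAW's actual code): if the host applies an automation event at its raw timeline position, the change is heard late by the total latency of the inserts downstream of the automated plugin, so a compensating host would have to deliver the event that much earlier. The `schedule_automation` helper and the 8192-sample figure here are illustrative assumptions.

```python
# Toy model: why automation lands late by the latency of effects placed
# AFTER the automated plugin, and how a host could compensate.

def downstream_latency(chain, automated_index):
    """Total reported latency (samples) of inserts after the automated one."""
    return sum(p["latency"] for p in chain[automated_index + 1:])

def schedule_automation(event_time, chain, automated_index):
    """Stream position at which the host must hand the parameter change to
    the plugin so the listener hears it exactly at event_time."""
    return event_time - downstream_latency(chain, automated_index)

chain = [
    {"name": "filter (automated)", "latency": 0},
    {"name": "linear-phase EQ",    "latency": 8192},  # hypothetical LP EQ
]

# A naive host applies the event at its timeline position, so it is heard
# 8192 samples late; a compensating host applies it 8192 samples early.
print(schedule_automation(44100, chain, automated_index=0))  # 35908
```

This is the same bookkeeping PDC already does for audio streams, just applied to the parameter-event queue as well.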

I actually prefer Logic in this scenario, even though this automation problem was why I left Logic to begin with. At least Logic is rock solid if there are no effects with latency, and if there are, it tells me right in the mixer (when I hover over an effect) what the latency is, so I can shift the automation by what's needed. There is also a second workaround: I can send that channel to its own mirror bus and automate the effects there instead, since Logic for some reason does all bus automation sample-accurately, no matter what combination of effects you use.

Given how much I have always lauded Cubase's PDC, I am really shocked at just how bad the automation timing is. The standard drift doesn't matter much for general automation, but if you are doing grid-based rhythmic automation, it really does.

Pro Tools… just WOW… I automated the gain of the same plugin I tested in Logic and Cubase, creating a volume-rhythm effect on a drum loop.
BANG in time.

I then put a Sonnox SuprEsser in the insert before it, the hi-res one with 279 ms latency: no change. Then I put it after the effect being automated: still BANG on.
Then I added a FabFilter linear-phase EQ: BANG ON. I could take it all the way to Pro Tools' 16384-sample maximum PDC (at 44k; it's 32768 at 88k and up), and I couldn't get any timing missteps. It reminded me why I moved to PT from Logic two years back: I use tons of UA effects on every track in my mix, as well as tons of automation, and Logic had timing issues. As Cubase does.

Cubase is a joy to compose in but to work with audio and automation (as far as timing), PT kills it. I wish Cubase had PT’s automation timing and audio gain structure.

I am not sure what to do at this stage, as serious mixes with automation are just all over the place in Cubase. I hope Steinberg takes this seriously and applies its PDC engine know-how to fixing this… It also affects sidechains, by the way, as I have explained in another topic.

Anyway, I won't hold my breath, but I think I will compose in Cubase, then transfer to PT and do the mix there.

Note: I'm writing this mainly to think through the possible reasons for automation inconsistency; it's more a set of questions and speculations than answers. That said, I hope it stimulates more posts on this topic, especially DAW/OS behavior comparisons and test results with different settings. Perhaps we could break this down further (VST2 vs. VST3; variable-latency vs. fixed-latency plugins; Windows vs. Mac; DAW and OS preference settings; etc.).
I thought Steinberg had made improvements to automation timing a few iterations back, but perhaps it was just hype, or maybe I'm wrong.

Having worked with Cubase since VST5 (2000), I am pretty sure the automation timing has been improved at some point; I remember noticing major issues back in the SX days, usually after a buffer-size change.

Not to excuse Steinberg, but I suppose each DAW has its strengths and weaknesses. Cubase definitely has strengths: unlimited track counts, reasonably intuitive MIDI (unlike Studio One v2.5), and a full-featured package (notation software, VCAs, Control Room, flexible interface, etc.); it also deals with multiple high-latency plugins better than some other platforms. OTOH, it still has surprising weaknesses; e.g. it has one of the worst integrated sample-rate converters of the major DAWs (even Audacity's SRC is better: http://src.infinitewave.ca/).
I am particularly surprised by the inconsistent behavior reported in this thread. That is something that should be fixed, and it should be fixable: a DAW should pass the updated latency to the automation subroutine when it changes, at least when playback is restarted. Which makes me wonder about the 'turn off processing for VST3 plugins when no signal is present' option: does the reported latency change, or is it a fixed value? I've also puzzled over numerous instances of different reported latencies (in the plugin manager list) for the VST2 vs. VST3 versions of the same plugin.

If there is no latency on the track being automated, Cubase still drifts in and out of time randomly, with ASIO-Guard on or off, at any buffer size. But if you put an effect with latency in an insert slot AFTER the effect being automated, the automation will be out of time by the latency of the other effect(s) on that track. This is exactly Logic's problem.
See for yourself… Automate a filter or something, then throw in a linear phase EQ on the next insert.

I haven't noticed this with Cubase 8.5 running on Windows 7 Pro, and I've inserted a high-latency LP EQ after an automated plugin before. As long as the EQ was inserted while Cubase was stopped, it seemed to me that the new latency was reported to the host/automation, but I will certainly check this again.

What problems do exist with automation consistency in Cubase may have to do with its ability to handle a wide variety of plugin topologies with a wide range of latencies. Cubase is quite tolerant of adding high-latency plugins or changing a plugin's look-ahead during playback (though it still crashes sometimes under major latency manipulation during playback, like switching the DMG Equilibrium LP EQ from 65k to 128k). Perhaps there is a trade-off here: making the program tolerant of latency changes on the fly might interfere with implementing consistently tight and stable automation. The more I think about it, the more it makes sense that extreme tolerance of latency (and buffer) changes and consistent automation timing are contradictory, or at least potentially resource-intensive to reconcile, with workarounds that could introduce stability problems: the automation routines would have to update their timing along with the audio streams, send and group streams, insert and FX processing, etc., without excessive dropouts, noise bursts, or other issues users would complain about.
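The trade-off above can be sketched in a few lines (a toy model, not Steinberg's code; the `Channel` class and its fields are my own invention): recomputing one channel's automation offset after a latency change is trivial, but it has to happen for every affected path before the next audio buffer, which is where mid-playback tolerance and tight timing start to fight each other.

```python
# Toy model of the update problem: when an insert reports a new latency,
# the channel's automation offset must be recomputed before the next
# buffer, or automation events drift by the stale amount.

class Channel:
    def __init__(self, latencies):
        self.latencies = list(latencies)          # per-insert latency, samples
        self.automation_offset = sum(self.latencies)

    def set_latency(self, slot, new_latency):
        self.latencies[slot] = new_latency
        # The easy part: recompute this channel's offset.
        self.automation_offset = sum(self.latencies)
        # The hard part (not modeled): doing this mid-playback for every
        # send, group, and FX path that touches this channel, without
        # dropouts or noise bursts.

ch = Channel([0, 8192])
ch.set_latency(1, 16384)          # e.g. an LP EQ switched to a longer FFT
print(ch.automation_offset)       # 16384
```

Multiply that single recomputation across nested groups and sidechains and it becomes plausible that a host which prioritizes surviving latency changes would let automation timing slip.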

If this is so, Steinberg could develop a workaround: once plugins are finalized, the user could switch to a different automation mode ('automation > export mode') where the automation routines lock to a given latency per channel/group/FX/VSTi, etc. The user would have to be made aware of the ramifications, since loading or unloading high-latency plugins in this mode would likely crash the program (or at least skew the automation timing again). Perhaps this is already the case with non-real-time export, though comments here seem to suggest otherwise.
Locking automation timing to the reported latency per audio stream before mix-down seems sensible; if the DAW doesn't already do this, I doubt we will see a change, considering the wide (and ever-expanding) range of variables in play, the need to claim stability in a competitive DAW market, and the ever-growing complexity of the program.