Automation broken?

Very interesting link comparing fades and automation in the three main DAWs - Cubase, PT and Logic.

It’s a test to compare the speed and artefacts of fades and automation.

Though Cubase performs fairly well in the fades test, automation appears to be not only late (influenced by buffer latency) but also inconsistent between bounces. That is pretty worrying!

I haven’t run these tests myself yet (rather busy at the mo), but I wonder whether Nuendo has similar issues; if so, it sounds like something that needs to be fixed…

Cheers,

Joe

From the page:

“DO NOT TRUST ME.”

OK.

Yes, I read that too. Of course it requires more testing - however, he has spent enough time on it that it shouldn’t be so easily dismissed.
I have noticed in my own bounces before (especially when using VIs) that I sometimes need to do multiple exports/bounces to get good timing.
I’ll be running some tests myself this week…

Never test with VSTis.
Whenever there is a modulator or some random process involved, the timing will always be off.
The same goes for compressors, reverbs and some other plugins…

Fredo

Hi Fredo, of course…
But if I put one sample in Kontakt, or even the built-in sampler, and trigger it multiple times without changing anything, it should be sample accurate, right?
And if I were to put automation on it, do a dip from 0.0 dB to -90 dB (or whatever the minimum range is), and bounce it out repeatedly, it should always come out the same?
If it doesn’t, then something is definitely not working as it should…
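As a rough way to verify this outside the DAW, here is a minimal Python sketch (my own illustration, not from the article) that compares two bounces sample-by-sample; the filenames are hypothetical, and it assumes the numpy and soundfile packages:

```python
# Compare two bounces of the same project sample-by-sample.
# "bounce1.wav" / "bounce2.wav" are placeholder filenames.
import numpy as np
import soundfile as sf

a, sr_a = sf.read("bounce1.wav")
b, sr_b = sf.read("bounce2.wav")
assert sr_a == sr_b, "sample rates differ"

n = min(len(a), len(b))
mismatch = np.argwhere(np.abs(a[:n] - b[:n]) > 0)

if mismatch.size == 0 and len(a) == len(b):
    print("Bounces are sample-identical.")
else:
    first = int(mismatch[0][0]) if mismatch.size else n
    print(f"First difference at sample {first} ({first / sr_a * 1000:.2f} ms)")
```

If repeated bounces of an unchanged project are not bit-identical, that alone would confirm the inconsistency the article describes.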

To be clear, I’m not taking the article as gospel; but it does warrant closer inspection.

I think (not sure) that automation is not 100% latency compensated, but it is within reasonable specs.
We are not talking frames or anything like that.
So, for real-life use, there is no problem (that I am aware of).

But, to come back to the original question: I don’t think a VSTi can produce sample-accurate exports.
And when you test, you should start from a solid basis, not from possible variables.
So the first thing to use for testing is a simple test-tone file.
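For what it’s worth, a test-tone file like that is trivial to generate; here is a minimal sketch (assuming Python with numpy and soundfile; the frequency, length and filename are arbitrary choices):

```python
# Generate a deterministic 1 kHz, 1-second test tone as 32-bit float WAV.
# No VSTi, no modulation: a known, repeatable source for export tests.
import numpy as np
import soundfile as sf

sr = 48000                              # match the project sample rate
t = np.arange(sr) / sr                  # one second of time stamps
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
sf.write("testtone_1k.wav", tone.astype(np.float32), sr, subtype="FLOAT")
```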

Fredo

A VSTi should absolutely be able to produce sample-accurate exports, if the DAW is sample accurate.

I appreciate that an analogue-modelled VSTi like Diva, or other ‘synths’, are designed never to produce the same note twice; but a sampler triggering one simple clap sample, or even a one-sample ‘spike’, without any envelopes, modifiers or LFOs applied (on the same note), should reproduce without fail, sample-accurately in both content and position.
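A one-sample ‘spike’ file for such a position test is just as easy to make; a minimal sketch along the same lines as above (Python with numpy and soundfile; filename and length are arbitrary):

```python
# Write a unit impulse: 100 ms of silence with a single full-scale sample
# at the start, to load into a sampler for timing/position tests.
import numpy as np
import soundfile as sf

sr = 48000
spike = np.zeros(sr // 10, dtype=np.float32)  # 100 ms of zeros
spike[0] = 1.0                                # the single-sample 'spike'
sf.write("spike.wav", spike, sr, subtype="FLOAT")
```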

If automation isn’t sample accurate, I guess my first question would be why; but secondly, even if there is a minuscule amount of latency, it should be the same latency on multiple exports, right?
The delay worries me a bit less than the fact that, in the test, multiple exports yielded different results in Cubase - that is rather odd.
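One way to separate “constant latency” from “inconsistent renders” is to estimate the sample offset between two exports; a minimal sketch (again my own illustration, assuming Python with numpy and soundfile, short test clips, and hypothetical filenames):

```python
# Estimate the sample offset between two exports via cross-correlation.
# A constant offset across repeated bounces suggests uncompensated but
# stable latency; a varying offset matches the reported inconsistency.
import numpy as np
import soundfile as sf

ref, sr = sf.read("export_ref.wav")
test, _ = sf.read("export_test.wav")
if ref.ndim > 1: ref = ref[:, 0]      # use the first channel
if test.ndim > 1: test = test[:, 0]

# Full cross-correlation is O(n^2): fine for short test clips only.
corr = np.correlate(test, ref, mode="full")
offset = int(np.argmax(corr)) - (len(ref) - 1)   # positive = test is late
print(f"Estimated offset: {offset} samples ({offset / sr * 1000:.2f} ms)")
```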

IMHO, as a platform that aims to beat PT, Nuendo should be able to match PT’s accuracy.

Just a thought:
For several years now, there has been this “Nuendo plays more audio than the selected portion”-bug when you select a portion of an event with the range tool, and when you order Nuendo to play that range. Although it seems to be connected with the buffer settings and Asio-Guard, I cannot get completely rid of it…
ProTools doesn’t show this behaviour on my systems, so it seems that the way Nuendo handles audio is the culprit.

Maybe the latency we’re discussing here is somehow related to this range-playback-discrepancy?

Please refer to: Play until Selection End - actually goes further - #4 by Niekbeem - Nuendo - Steinberg Forums
And to: Selection <> Playing inconsistency - Nuendo - Steinberg Forums

Again; just a thought.
Niek/ Amsterdam.

The failure to compensate for the variable of buffer latency in automation functions is an oversight/mistake. Nuendo is (and has to be, to function) aware of the latency introduced by buffer-size changes; it relies on that awareness to correctly perform a number of different processes, including recording and playing back sample-accurate audio. The display of I/O latency values on the settings page demonstrates that awareness and the corrections the app has to make to compensate.

So, simply put, Steinberg has elected not to apply (aka failed to apply) the same sort of compensation to automation data that it already applies elsewhere in the DAW. On the same system with a constant buffer size, this is not an issue. In all other cases that oversight introduces a timing error, worst case 15-20 ms. That IS enough to matter in the real world to some people, including me, especially for mutes. On a conceptual level, this is a discussion that should not even be required. I believe this issue is a legacy of the days of slow, single-core processors that could not bear the burden, given other priorities. We are some years past that limitation now.
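To put numbers on that worst case, the uncompensated error would scale directly with buffer size; a quick back-of-the-envelope sketch (my own arithmetic, assuming the error equals one buffer):

```python
# Buffer size to latency in milliseconds: at 48 kHz, a 1024-sample buffer
# is ~21.3 ms, in the same ballpark as the 15-20 ms worst case quoted above.
def buffer_latency_ms(buffer_samples: int, sample_rate: int = 48000) -> float:
    return buffer_samples / sample_rate * 1000

for size in (128, 256, 512, 1024):
    print(f"{size:5d} samples @ 48 kHz -> {buffer_latency_ms(size):6.2f} ms")
```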

This is bad.

The lack of an audio post-record buffer, which can cause crossfade problems during live punch-in recording, is another problem. It does not affect most users, but it has not been corrected, probably because it sits in the audio core engine at a code level that is very difficult to modify (my theory is that some of the original developer knowledge at this level has been lost, making modifications very difficult, time-consuming and expensive, with a risk of introducing many regression bugs).

Nevertheless, there should likewise be no discussion about the need to solve this. I tested Pro Tools here, and there is no such problem, because its recording buffers are correctly implemented to allow session recording with punch-ins/punch-outs and auto-crossfades.

Again, this probably does not affect most users: when the auto-crossfade length is short (as it is at the default setting), most recorded audio material will not produce audible artefacts. The reported lack of latency compensation in automation probably does not affect most users either.

But seen from a professional point of view, these are things that can silently destroy the acceptance and reputation of a piece of software, or at least keep its market share flat, or confine it to a specific market segment when it could grow into others. Nuendo has the potential for that kind of growth if these things were cleaned up. Hopefully I will see it one day; I have seen great Nuendo enhancements since the beginning of its history, and it should now jump over the ditch by definitively solving these hidden but important core problems.

I see the recent large Nuendo price drop (40%) as an attempt to break the market-share barrier, but on its own it is probably not enough in the audio world, especially in the recording industry, where deep trust in the software you use is essential: artists and recording engineers are very sensitive to the integrity of their takes and mixes.

I’ve seen engineers from a mixing-desk manufacturer come into the recording studio during a session over latency-compensation problems on a desk, and deliver a large, invasive software modification a couple of weeks later to solve the problem. Some may see these as small problems, but they are definitely not something that can be swept under the carpet.

I think that if Nuendo had corrected these small core audio-engine problems sooner, along with a couple of other problems mainly in the automation area, it could have improved its market position sooner and more easily, without the need for price drops.