Was Nuendo Ten Premature?

Only they can say for sure, but my short and fairly broad answer is the audio engine. Simply put, I know the effect but I don’t know the exact mechanisms in black and white, only what I’ve pieced together from empirical tests and from similar solutions in other fields of computer technology, mostly 3D graphics and free-roaming world-building. These fields have an impressive amount in common with audio, even though they might appear very different. An awful lot of effective programming is about cheating the hardware as much as humanly possible, and every cheat has a trade-off of varying degrees of acceptability. If you want to see some seriously impressive work in that vein, the Amiga demo scene from the mid 80s to the early 90s was downright bonkers at finding shortcuts to the best effects within the limitations of 500 kB of memory and a single DD disk to distribute on.

I think it’s pretty safe to say that no DAW has ever actually wanted to affect the sound through its engine; it’s simply an unwanted side effect they’ve had to mask as well as possible. Some do it better than others; suffice it to say, I’m not friends with the character of Audition, for instance. Too dull and bassy. Which means it’s great for checking mixes in, which is what I use it for.

The typical error when stressing a DAW too hard is dropouts or glitches caused by a buffer running empty. There is no gradual degradation of the audio to avoid that.

And if you think it’s a static issue driven by CPU demands, then you’re really talking about limiting resolution on one ‘axis’ by reducing dynamic range, or on the other by reducing bandwidth - i.e. you’d have to program the DAW to lower the sample rate or decrease the bit depth to save CPU. That’s a ridiculous suggestion. And why those two items specifically? Because if you’re talking about the basic audio engine, it’s just very, very simple math we’re dealing with. It’s nothing fancy. It really isn’t. Wanna double something? Multiply by 2. Done. Cut it in half? Multiply by 0.5. That’s it.

So how could you make X * 0.5 any less demanding on the CPU? You can’t change the math, because all it is is a multiplication. So the only option is making X a smaller number to begin with - i.e. reducing bit depth. Well, I suppose the other way is doing fewer calculations per second - i.e. lowering the sample rate.
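To make the point concrete, here is a minimal sketch (illustrative only, not any DAW’s actual code) of what a summing engine’s inner loop boils down to: per-sample multiplies and adds, with no knob to make the multiply itself cheaper:

```python
import numpy as np

def mix(tracks, gains):
    """Sum the tracks after applying each track's gain:
    out[n] = sum_i gains[i] * tracks[i][n]."""
    out = np.zeros(len(tracks[0]), dtype=np.float64)
    for track, gain in zip(tracks, gains):
        out += gain * np.asarray(track, dtype=np.float64)
    return out

# Halving a signal really is just multiplication by 0.5:
signal = np.array([0.2, -0.4, 0.8])
print(mix([signal], [0.5]))
```

The only way to spend fewer cycles here is fewer samples per second or cheaper numbers, exactly the two “axes” above.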

If you really think that any of the major companies are doing this then you’re truly misled.

Numerous tests have been done on the audio engines of the major DAWs and guess what - they all null to zero or into the dithered noise floor. I.e. they do the math the same way minus pure noise that we can’t hear anyway.

I find it simply unbelievable that people still entertain this silly notion in 2019…

I was deeply involved with hands on testing of the audio/summing engines of every DAW then available about 15 years ago. It was done in a public forum and subject to scrutiny by pretty much everyone on the internet.

I used my ears. Apparently they work pretty well. I could spot both Pyramix and E-MU Paris 10 out of 10 times in listening tests, but everything else, being fully software-based, sounded the same. The two I could spot were, not so coincidentally, the only two that were built on proprietary DSP, which made my ability to pick them out seem much more logical. Not everyone could hear the difference, but some of us could under controlled testing.

So here is an opportunity for some ear usage. I’m going to print the same mix in both Nuendo 8 and Nuendo 10, after muting any variable effects like reverb, etc. I will mute the vocals, since this was a commercially released project. I will then post a link to both as full WAV files, and you can tell us which one sounds like Nuendo 8 and which like Nuendo 10. It shouldn’t be hard to spot, given the differences in the bass, midrange and high frequencies due to Steinberg apparently making covert changes to the audio engine.

I’ll get to that in the next couple of days, unless Steinberg shuts this down first, which would not surprise me. I hope they don’t, though, because I’m feeling strangely invigorated by getting into my first pencil-head internet audio debate in years. Surprisingly, it’s kind of fun! And I’m not mad at all; I just find this fascinating in a retro sort of way.

Peace.

You might wanna read up on how a CPU deals with mathematics.



That is simply a common lie. No one has ever been able to point to these tests when asked; they just say “google it”.
The most face-palm-inducing claim on this exact subject came from a random forum-dweller who said scientists had proven that all EQs would null out in a test. That was grade A bollocks and it never happened.
Still, if my theory is correct, then as far as I can tell this is mainly an artifact of the realtime playback. To truly test these things one must record the raw output and sync it to pulse waves, and that applies whether the colouration is in the playback engine or deeper.
For me, though, it’s really simple: if the Prism Verifile checker (a hardware part of the output stage of my interfaces) can’t even lock onto the error-detection code embedded by my input converters during playback, then it’s not transparent. It does lock onto the signal passing through Nuendo’s inputs as long as the level is unity and no processing is added, and Wavelab has been 100% transparent for as long as I’ve been able to tell.
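As a sketch of one practical way to do that kind of syncing (my interpretation of the method, not necessarily the poster’s exact procedure): record a sync pulse alongside the program, then find the lag that best aligns the re-recorded capture with the reference via cross-correlation before subtracting:

```python
import numpy as np

def align_lag(reference, capture):
    """Find how many samples late the capture is relative to the
    reference, via the peak of their cross-correlation."""
    corr = np.correlate(capture, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

pulse = np.zeros(100)
pulse[10] = 1.0      # reference sync click
capture = np.zeros(100)
capture[25] = 1.0    # same click, recorded 15 samples late
print(align_lag(pulse, capture))  # prints 15
```

Once the two recordings are sample-aligned, the null test above can be applied to the live output rather than to offline renders.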



I find it simply unbelievable that no one seems to actually listen to what they do.

Personally, I feel I’ve said what I have to say, and the whole discussion has taken far too much focus away from the original question: “Is Nuendo 10 a good upgrade?”
And to the question, my answer is hands down “best upgrade to date and it’s making me very happy, although with some room for improvements”.
I’ll write a reply to the OP strictly regarding his or her original question after this one, and then I feel I’m pretty much done here. No one brings anything but opinions to the table, and I’ve repeated myself enough already. I’ll still keep an eye on the thread, of course, but as it stands it’s not a very meaningful thread for me to participate in further.
I’ve always kind of wondered why developers never join these discussions themselves; they’re the only ones who could give the full facts here. But this thread very much suggests it’s because people are really poor at handling the concept of “acceptable compromises”. That applies to a lot of discussions, regretfully.

But as a reply to your post here: I do this for a living, so I put my ears to the test every day. As mentioned before, I believe this is first and foremost an issue with the realtime playback engine, so to truly test it you have to record the output from the live project and compare from there. The effect seems more obvious the more you have to process the material. A great example from a few years ago: I had a running gig mixing theatrical trailers for two major studios. For the most part everything went very well, with virtually never any real issues, except a few times when the material I got had been ripped from a really uncooperative source. It sounded good in VLC, Media Player and Wavelab, but it was completely impossible to work with in Nuendo 7. It was all mud. It was pretty obvious the sound simply happened to fall too much into the… let’s call it the “dead spot”, for lack of a better word, but it was a very tangible effect of all this.
Here, though, that wasn’t much of a problem. If I remember correctly, the material was already nicely processed, so I just let it sit as-is in the mix and it came out fine on the other end. Which also tells me the issue is not at the lower levels.

Regretfully my time for interwebz is about to run out, but to return to your original question:
Yes, I think this is an amazing upgrade! They have made some decisions I don’t quite agree with (like removing the audio tab from the right-click menu), some decisions I had to figure out but very much appreciate (like moving the colours editor to the top menus instead of a button), and one dropped ball in still not having fixed the old issues with automation trimming. But as a whole it has made my life a whole lot easier, and it contains a lot of really, really nice things.
As to whether it’s as great an upgrade for music as it is for post, I can’t say for sure. I left the music industry completely a decade and a half ago in favour of movies and games, and only do music for fun nowadays (an Industrial Ethno project I’ve been running since the mid 90s), but… yeah, I’d still upgrade. No doubt. I have no clue whether you’d consider it as worthwhile an upgrade as I do; I know nothing about your needs right now, so it’s hard to give actual advice here, but I do heartily recommend it.

I do hope this conversation doesn’t end up going back to the arguments I saw on pro audio forums in the early 2000s, where people would say things like different hard drives having a different ‘sound’, or that even if two DAWs produced a mathematically identical audio file, some people could still tell they sounded different because “there’s more to it than the maths”, or some such nonsense. That shit was really tedious.

Truth. Gold-plated TOSLINK connectors are another good one.

The CPU deals with mathematics the way the CPU deals with mathematics, and that way is common to all DAWs. So for what you say to be happening, Steinberg would have to be able to change “how a CPU deals with mathematics”. I doubt that’s happening.
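A tiny illustration of that point: two differently written programs performing the same IEEE 754 multiplication on the same operands produce bit-identical results, which is why the underlying math itself can’t be a point of “sonic” difference between engines:

```python
def daw_a_gain(samples, g):
    # One codebase: list comprehension
    return [s * g for s in samples]

def daw_b_gain(samples, g):
    # A "different" codebase: explicit loop, operands swapped
    out = []
    for s in samples:
        out.append(g * s)
    return out

sig = [0.1, -0.37, 0.9999]
# Same IEEE 754 multiply either way, so the results are bit-identical:
print(daw_a_gain(sig, 0.5) == daw_b_gain(sig, 0.5))  # True
```

Where engines can legitimately differ is in what they add around the math (pan laws, dither, metering), not in the arithmetic itself.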

Saying that no one has done that probably just means you haven’t seen it. We had this debate a decade or two ago, and Lynn Fuston created a large test of DAW summing, i.e. a test of the audio engine. It included a bunch of different DAWs, and what people claimed to hear didn’t match what they actually heard.

Go get the test files (“Awesome Dawsum”) and check them out yourself if you want. Or recreate the test, it’s not that hard.

What does that have to do with the “audio engine”? I have no idea who this person was or why we should all of a sudden care about that. This isn’t what we were talking about.

What does “sync” “the raw output” “to pulse-waves” mean in practical terms?

I don’t think you understand the details of Verifile. Just reading the manual it clearly states several key issues:

  1. "the computer’s operating system is not designed that way, and (even if optimally configured, which they seldom are) it will, now and again, interrupt audio recording to do something which seems more important at the time. This is especially true when dealing with many channels of high resolution audio, perhaps with low latency, which needs a continuous high data throughput. The result is usually a recorded ‘dropout’ of some kind: anyone who has recorded audio on a computer is familiar with the manifestations: these include repeated or missed samples or entire sections, random clicks, pops - even channel swapping. "

Exactly what I was saying. A slight shift in timbre is not what you’d hear from the errors above. Clicks and pops, yes. “Warmer”, no.

  2. “Verifile is a ‘fragile steganographic’ process which embeds derivative data within the dither of the ADC,”

and therefore

  3. “Verifile is not designed to survive any audio processing or encoding process”

What this means is that you can’t use Verifile to judge whether or not a DAW is sonically transparent, because as soon as you process audio or apply dither it won’t give you answers that show you anything of value in this context.



First of all, who says I or other people with my opinion aren’t listening to what we do?

Secondly, “confirmation bias” is actually a thing. So judging from all the testing I’ve done as an engineer, from all the papers I’ve read, and from all the actual mixing I’ve done, what is most probable is that you’re either hearing differences that aren’t there or attributing real differences to things that aren’t their cause.

I’ve already said all there is to say about transparency and will not comment more on that, but Verifile is a system intended to make sure an audio stream is completely bit-perfect from the encoding down to the output. Every single sample gets its own check code, and the instant (down to a single 1/192000th of a second) the decoder gets wind of it, it begins to monitor it. If an unprocessed file recorded through Prism converters with the Verifile coder active is played in a DAW and the Verifile decoder doesn’t react to it, then the DAW is not bit-transparent.

I’m totally laughing over here, not at anybody, just at the internet in general. I’m not mad about anything, and everyone deserves a right to their own opinion. Even if it’s wrong :slight_smile:.

But what blows my mind is that I didn’t realize the Steinberg forum actually came with a Time Machine option! Somehow this forum has rolled time back 20 years, and I have to say it’s kind of fun.

And now, to pretend to get back on topic, I like Nuendo 10. I think it’s real good.

lol…

And yes, I think Nuendo 10 looks great!

That’s usually how these things go.

I understand that. But you keep missing what the point of that system is and what errors you would get if you tried to use it on an actual mix.

Normally you would dither the output at some point, and that dither will break Verifile. So you can’t use it, assuming you’re actually processing.
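A small sketch of why dither alone defeats a bit-level check (illustrative TPDF dither to 16-bit, not Prism’s actual implementation): the dithered output is no longer bit-identical to a plain quantization of the same audio, even though the difference sits down at the 16-bit noise floor:

```python
import numpy as np

rng = np.random.default_rng(0)

def tpdf_dither_to_16bit(x):
    """Quantize float audio to 16-bit with TPDF dither
    (sum of two uniform random sources, +/-1 LSB peak)."""
    lsb = 1.0 / 32768.0
    dither = (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round((x + dither) * 32767.0).astype(np.int16)

x = rng.uniform(-0.5, 0.5, 1000)
plain = np.round(x * 32767.0).astype(np.int16)
dithered = tpdf_dither_to_16bit(x)

# The dithered render is no longer bit-identical to the plain one...
print(np.array_equal(plain, dithered))  # False: any bit-level check now fails
# ...but the worst-case difference is at most a couple of 16-bit LSBs:
err = np.abs(plain.astype(np.int32) - dithered.astype(np.int32))
print(err.max())
```

So a “failed” Verifile check after processing tells you the stream isn’t bit-exact, which is expected and inaudible; it says nothing about timbre.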

The errors it’s made to detect are typically ones that don’t create the sound you’re talking about.

The fact that you’re not rethinking this just shows that nothing will change your mind, and that’s exactly why I brought up confirmation bias. If you were open-minded and actually cared about the technology and the actual truth, your response wouldn’t have been to double down on what Verifile does.

Just think about it.

PS: I actually don’t care what anyone thinks about it and how anyone deals with it in terms of gear, but I do care about what (dis)information people spread in different forums.

As a service audio engineer who has spent years working on and measuring digital audio hardware, and previously analog hardware (analog tape recorder settings, analog mixing desk controls and repairs), who has talked with a great many people in this field, from end users to senior engineers at some well-known manufacturers and recording studios, and who has read a lot of forum discussions, some of them rather strange, I think I have a good understanding of digital audio from the lowest levels up (my thesis dealt with the quality of the digital-to-analog audio conversion process), of processing, and of how things are perceived differently by audio technicians, audio engineers, and musicians or people with less technical background.

I remember a few discussions with some high-fidelity enthusiasts where it was absolutely impossible to dissuade them from some of their nonsensical, mythological beliefs.

It is quite funny to see that those people with “diamond ears” sometimes do not even hear that one of their speaker drivers is dead :smiley:, or that there is a very big standing-wave problem in their listening room. :laughing:

Interestingly, those “audio religious” people are less present inside recording studios or broadcast facilities, probably because there is a higher level of technological background available there, and because money needs to be spent for a good reason.

My conclusion is this:

It is not necessary to trigger hard conflicts with people who do not speak the same language, who believe in mythological things, or who do not have enough technical background to discuss things at this level. After all, audio is for music too, and music is mostly creativity and less about technique.

It seems that those people, for one reason or another, need to believe in their myths. Something that is a myth for us scientists is sometimes not a myth for them, and it certainly helps them be happy with their work, or even with their existence.

It is not fair to destroy their beliefs, because after all they need them. The same applies to religions.

I stopped fighting them a long time ago; in the end I’m happy for them if they find some answers through what we scientists and technicians might call “happy myths”.

But I do not hesitate to explain to them why I think they are wrong in their beliefs, if they are interested in listening to me.

On the other hand, they are generally fair enough not to tear down the serious work that scientists have done on the technology they are using.

If they spread strange or genuinely corrupt information, especially to influence or destroy the market for a specific product, then that is another story.

Thank you for your responses!
I am a music person, who hopes to grow into post-production eventually.

I shoot videos with my phone and compose music to them in Pro Tools at the moment, and when I heard about Nuendo’s post-pro applications, I bought it. When I did my first such project in Nuendo and discovered the export process, I was sorely disappointed, hence my current stance.

While I have only “touched”, not “scratched”, the surface of Nuendo’s post-pro capabilities, the video export functionality is THE deal-breaker for me. I’m probably being (too) harsh, but that’s where I am now.

De nada, glad to be of help!
It sounds like you are a very creative person who certainly seems to have the ear for it, and I can very much recommend trying this trade out! You have one hell of a ride to look forward to if you do, and I mean that in a good way! I haven’t looked back for a second since abandoning the music industry completely in favour of movies and games 15 years ago, and for my own personal projects and experiments in music, Nuendo still rocks :slight_smile:
I have a sneaking suspicion you might find quite a few things really worthwhile right off the bat in Nuendo 10; I sure did. But I hear what you’re saying about the video rendering function. It is very simple right now, though I must admit it gives me a lot of hope and anticipation for what’s coming! I’ve been using Nuendo since version 2, and several features added over the years started out pretty sparsely featured but grew into really amazing stuff. The video engine is a great example: it actually gave me a few headaches, since my old Matrox gear stopped working when DirectShow support disappeared (and good riddance to the QuickTime engine, full stop), but what I gained from the new engine introduced in… 7? Something like that. more than made up for it. And truth be told, none of this was a problem for long anyway, since I bought Blackmagic cards for the play-out instead, and that card has remained with me since Nuendo 5.
I feel the same way here and now: there are no settings whatsoever, and I’m not a huge fan of H.264, but the promise this hints at for the future will save me a downright monumental amount of time. Even now, in this primitive state, I can easily send WIPs to my clients with a simple press of a button!
Not really sure what it might be that you dislike about the export otherwise; maybe it’s something that just takes a little getting used to?

It’s the whole idea of having to leave Nuendo to do something that I am used to doing entirely inside another DAW.

(Mental block?)

As I grow into post-pro, and learn and appreciate the more advanced processes that Nuendo supports, I may be able to overlook it.

I’m guessing they’re working on extending video export support, and it sure would be nice to have.

DaVinci Resolve, though, does a great job at exporting, and it supports batch exports by loading them into a list of renders to execute. It’s obviously more clunky, but on the bright side you then have access to pretty much all formats, resolutions, bit rates, etc. Yes, it takes more time to line things up, but once you get the hang of it you can go through it relatively quickly.

I agree though that video exports should improve.



I’ll be here … :wink:

DaVinci Resolve, though, does a great job at exporting, and it supports batch exports by loading them into a list of renders to execute.

Wonderful application, if only it would stop crashing. I don’t know if it’s my computer, or what, but Resolve requires lots of resolve to get past the crashes.

… you then have access to pretty much all formats, resolutions, bit rates etc.

Yes.