Was Nuendo Ten Premature?

OMG! The infamous internet forum disease strikes again!

Of course, I am a brainless fanboy or, even worse, I work for Steinberg and earn a ton of money by making positive comments. But happily, your very, very smart comparison between Nuendo and an iPhone is so helpful and so useful; it demonstrates how brilliant and clever you are. :laughing:

I have no time for this. Have a fun life.

Certainly. Every single DAW I’ve ever used has sounded different in one way or another, and so have most full releases of them, be it SAW Studio, Audition, Pro Tools, Pyramix, Quartz, and so on. I can’t really remember how Nuendo 2 and 3 sounded anymore, and at first release I didn’t even know Cubase had gotten a sibling, but the first thing I noticed when I upgraded to Nuendo 4 was how incredibly much better everything sounded, and how much easier it got to get where I wanted. That has been true for every single release since, and this time the level of detail and resolution totally blew me away.
I did however notice right off the bat that things sounded less heavy and brighter when imported into Nuendo 10 than they had before, and not least when comparing to the original files. My first gig on Nuendo 10 was a clean-up mastering gig where I was alternating between Nuendo 10 and iZotope Spectral Repair; both Wavelab LE (I have to prioritize things right now) and (especially) Sound Forge would have complicated things had I done that there.
This worried me that my mixes would suffer from it, but now, a few weeks in as a happy Nuendo 10 user, I can say that it was no problem! I have to deal a bit differently with EQ and filtering in N10 than I did in previous versions, but from what I can tell right now, things produced here seem to work much better on older versions of Nuendo than the other way around, which by the looks of it is a very good thing. The jury’s still out, but right now WIH (What I Hear) when I work seems to be WIG (What I Get) when I render.

:open_mouth: Are you sure about this? Have you tried any null tests? I work a lot between Logic, Nuendo and RX and have never really noticed anything like this. Has anyone else experienced this?

Thank you!

Very comprehensive, and detailed! :slight_smile:

Aahh! Now I get you!

This reminds me of when I moved from Sonar to Sequoia. Sequoia had a different sound. As time went by I realised that it was showing me more of the mid-range detail, which was a good thing.

Thanx for the explanation!

This worried me that my mixes would suffer from it, but now, a few weeks in as a happy Nuendo 10 user, I can say that it was no problem!

Oh Kaayy …

I have to deal a bit differently with EQ and filtering in N10 than I did in previous versions,

…understandably …

but from what I can tell right now, things produced here seem to work much better on older versions of Nuendo than the other way around, which by the looks of it is a very good thing.

This part is a bit confusing, though.

To my knowledge there has been exactly one issue between versions, and it was how the EQ behaved. It was essentially a bug that could come back and ‘bite you’ when moving between versions (bug vs. fixed bug) if you were unaware of it.

However, this talk about DAWs somehow sounding fundamentally different, as if the signal processing (not plugins) were different, I find to be mythological nonsense. Heck, even most plugins act exactly the same way between versions unless clearly specified.

So I don’t believe this talk about a version sounding better than the other. Do a null test on the basic features and see for yourself.
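For anyone unsure what a null test actually involves: a minimal sketch in plain Python, with made-up sample values. You render the same material twice (e.g. in two versions of the DAW), subtract the sample-aligned renders, and inspect the residual; bit-identical engines leave nothing behind.

```python
def null_test(render_a, render_b):
    """Return the peak absolute residual after subtracting one render
    from the other (equivalent to phase-inverting and summing)."""
    assert len(render_a) == len(render_b), "renders must be sample-aligned"
    return max(abs(a - b) for a, b in zip(render_a, render_b))

# Toy example with hypothetical sample values:
take_1 = [0.0, 0.25, -0.5, 0.125]
take_2 = [0.0, 0.25, -0.5, 0.125]   # a bit-identical second render
print(null_test(take_1, take_2))     # 0.0 -> the two renders null completely
```

In practice the renders must be sample-accurately aligned before subtracting, which is why people use a short click or square-wave burst at the head as a sync reference.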

Aye, I actually used that for effect some years ago when making strange noise effects. The best results at the time came from combining my then Nuendo 7 with Cubase SE 3. By using a square-wave clip shorter than a frame for sample-perfect sync before both renderings, and looping back into Wavelab (nothing ever sounded the same after rendering as it had during mix, and I wanted the full effect…), I got a lovely and extreme phasing usable for many weird things.
Here though, I don’t really need to null things to make sure. It’s an extremely tangible effect on its own - as it has been every single time - and jumping back and forth between Nuendo and RX when clearing up these lavs only made that effect so much more obvious. The things I actually did like in the sound - the firm low-mid - simply weren’t there in Nuendo. Things are sure getting closer and closer to true transparency in Nuendo, but it’s not there yet. I remember Pyramix was transparent, but it’s mostly hardware-based, so it makes sense (and that level of fidelity doesn’t come free). Both Wavelab and Sound Forge play back raw files perfectly, which I can actually see visually: everything I record is coded in something called Verifile, which - simply put - is a way to hardware-code error detection directly into every single sample of the sound (transparently, of course), making my Titan cards light up like a Christmas tree if there’s a dropout or something.

I’ve never actually gotten clear answers regarding this, but until someone who actually codes these engines can tell me differently, I’m convinced that this is an artifact of making software-based multi-track DAWs able to deal with downright insane amounts of data - the same way 3D FPS games use (used?) a very simplified and truncating form of calculation, out of total respect for the fact that CPUs simply can’t do some calculations very efficiently, just to be even remotely playable on regular PCs. Which would also explain why Pyramix and Wavelab are totally transparent.
Ten years ago I did a full feature film on my old Core2Duo: 650 tracks a reel, with EQ and compressor on every track, plus several instances of full 5.1 surround Altiverb, automation, and a lot of design effects to boot. At the time it was the heaviest project I’d ever done (pretty tame compared to some stuff I’ve done since, though), but that math doesn’t add up on its own.

Aahh! Now I get you!

This reminds me of when I moved from Sonar to Sequoia. Sequoia had a different sound. As time went by I realised that it was showing me more of the mid-range detail, which was a good thing.

Thanx for the explanation!

but from what I can tell right now, things produced here seem to work much better on older versions of Nuendo than the other way around, which by the looks of it is a very good thing.

This part is a bit confusing, though.

Regarding Sequoia: Totally hear you.

The confusing bits: Seeing as I have absolutely no influence over the development of these programs, and I can’t really change anything from my side, I’ve never actually dug deeper into this. I only try to find the way that best gets me where I want with the least work required, and build on stuff from there. To cut a long story short, I’ve gotten the impression that the colouration of the sound is first and foremost an artifact of the realtime playout. I might be wrong of course, but that kinda makes sense when things I couldn’t hear in Nuendo 8 now sound clear as day. It means that EQ optimized for Nuendo 8 now sounds a wee bit weird in Nuendo 10, but EQ adapted for the much higher-resolution Nuendo 10 only makes an impression comparable to, say, jumping between the PSI speakers and my old Yamahas when loading the very same project one step back. One can reach a whole new level of precision here that easily transfers backward even if you can’t hear the exact mechanics in the old version - the same way a great mix sounds good no matter if it’s played on a $10,000 audiophile system or a $1 transistor radio.


Doing some null tests before making those kinds of remarks is absolutely mandatory.

For me it’s mythological nonsense too.

There are many discussions like this on forums, but I’ve never seen proof from null tests.


Too many people are losing their time trying to reach some kind of mythological sound-quality Nirvana.

That time could be used to learn better mixing techniques and to learn how digital audio behaves at the sampling and processing level.

This is what I would describe as the platinum or diamond ears syndrome.

I think a mix is done better and faster with normal ears than with platinum or diamond ears.

This syndrome was not so widespread until digital audio and computing started to deeply change our work.

It is driving a full market of expensive gear, software and plugins, and is consuming a lot of user time.

There’s so much to say about all this, but I neither have the time nor the patience, so I’ll settle with this:
Bad hearing will lead to bad mixes, and fast doesn’t very often mean good. There is no such thing as “diamond ears”, only more or less well-trained ears. Even people with damaged hearing can often compensate in their monitoring to some degree, reaching great results by knowing what they do and why.
Regarding the quality of gear, though: spending too much time on anything for no good reason isn’t good either. Where I historically spent an insane amount of time and energy working on cheaper, consumer-friendly stuff to reach results that transfer well everywhere, things virtually mix themselves now that I only use a much smaller but much more reliable and well-sounding collection of software and hardware.
And this is why I love Nuendo - it has always been at least one step ahead of the competition, evolving tremendously with each release, making work much easier and cutting tweak-time a lot.
Higher sound quality = less time spent reaching better results than would be physically possible with crappy tools.

And yes - when I started out in the early/mid 90’s there were a lot of discussions among sound people regarding the quality of stuff, pros and cons. At that time you couldn’t even really use a BASF tape on a reel-to-reel machine calibrated for AMPEX without a world of problems, but everyone knew that, because if you didn’t know what tape bias was, no studio would hire you. Actually, most of us tried as far as possible to work only with tapes from the exact same batch, because every batch was slightly but still noticeably different by nature. That’s the analogue way. Much easier to calibrate to a new batch than to a new brand when the old one ran out, though. Myself, it was AMPEX 456 that ran through our reels, both 2" 24-channel and 1/4" 2-channel.

“The good old days.” :slight_smile:

What year is this? These discussions were worn out long ago.

In the early years, some DAWs used to sound slightly different, due either to the implementation of proprietary hardware (Pyramix, Paris, Fairlight) or to poor coding or hardware design practices (mostly early Pro Tools). For instance, Pro Tools “Mix24” was broken from the beginning: 56-bit internal processing linked in hardware by 24-bit interconnect stages that could cascade to exceed 0 dBFS when levels should in theory have remained below clipping. There are other examples of variations in DAW results for various reasons. Google the “Awesome DAWsum” test just for fun.

Those days have passed. There may be some DAW or other that has flaws in it but as a rule, they all sound the same. The ease and intelligence of the workflow design has more to do with results now than any other factor. And one size does not fit all there.

Our great American poet, Michael McDonald said it best long ago: What a fool believes, he sees, no wise man has the power to reason away…

I like Nuendo. Because of the workflow.

Hahaha, yeah :slight_smile:
I was pretty young then and not very experienced yet, but I remember how people were talking about the digital revolution as something that would save us from all the problems of analogue gear. I don’t know who you are and have no clue whether you were doing audio yourself in the 90’s, but suffice it to say that in the studio I was working at we spent essentially as much time fixing the bleeping electronics as recording bands. For instance, I had a lot more fun than my boss did when we changed every single pot of a huge 32-channel D&R console - he had done it before…
It didn’t quite pan out that way, though; we were trading one set of problems for a totally new one.
Still, there’s no denying things are way more painless and efficient now, and the things I do on a daily basis as a movie sound designer and re-dubbing mixer were pure science fiction just a few years ago :slight_smile:

Obviously. There’s a lot of defensiveness about how people want things to be, but we live in a world of limitations, and that means working with those limitations and making the best of them. It’s not flaws or bad programming; it’s a compromise between performance and intended use. Until someone invents quantum computing, you can’t really have both on a large project. This is very, very obvious in the world of games, which can perform much more impressive stuff than the actual computer could ever do with regular algorithms - that’s why we got games like Wolfenstein, Doom, and Quake working on regular PCs in the first place. Same here.
What makes these discussions so painful is that the defensive people refuse to use their ears and only cite sources that agree with them. That is really, really annoying and non-constructive.
Myself, I chose Nuendo when I changed from music to movies because it was hands down the best at what it did, without me having to spend the cash of a new car. And I stayed with Nuendo because it is genuinely a really good program in any price range, and the difference between hardware-based DAWs and pure software is slowly fading. But we’re not there yet.

I can’t imagine what Steinberg would have changed that would affect the tonal quality of unprocessed audio.

Are you sure this isn’t just, say, an update to the EQ plugin or something?

Only they can say for sure, but my short and pretty broad answer is the audio engine. Simply put, I know the effect but I don’t know the exact mechanisms in black and white, only what I’ve pieced together from empirical tests and from similar solutions in other fields of computer technology, mostly 3D graphics and free-roaming world-building. These fields have an impressive lot in common, although they might appear very different. An awful lot of effective programming is about cheating the hardware as much as humanly possible, and every cheat has a trade-off of varying degrees of acceptability. If you wanna see some seriously impressive work there, the Amiga demo scene from the mid 80’s to the early 90’s was downright bonkers at finding shortcuts to the best effects within the limitations of 512 KB of memory and a single DD disk to distribute it on.

I think it’s pretty safe to say that no DAW has ever actually wanted to affect the sound through its engine; it’s simply an unwanted side effect that they’ve had to mask as well as possible. Some do it better than others - suffice it to say I’m not friends with the character of Audition, for instance. Too dull and bassy. Which means it’s great for checking mixes in, which is what I use it for.

The typical error when stressing a DAW too much is dropouts or glitches due to a buffer that empties. There isn’t some sort of gradual degradation of the audio in order to avoid that.

And if you think it’s a static issue due to CPU demands then you’re really talking about limiting the resolution either on one ‘axis’ by limiting dynamic range or the other by limiting bandwidth - i.e. you’d have to program the DAW to lower the sample rate or decrease the bit depth to reduce CPU demands. That’s a ridiculous suggestion. And why those items specifically? Because if you’re talking about the basic audio engine then it’s just very, very simple math we’re dealing with. It’s nothing fancy. It really isn’t. Wanna double something? Multiply by 2. Done. Cut it in half? Multiply by 0.5. That’s it.

So how can you make X * 0.5 any less demanding on the CPU? You can’t change the math, because all it is is multiplication. So the only thing possible is making X a smaller number to begin with - i.e. reducing bit depth. Or, I suppose, doing fewer calculations per second - i.e. lowering the sample rate.
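To make the point concrete, here is a minimal sketch (plain Python, hypothetical sample values): a fader is just a per-sample multiply, and floating-point multiplication is deterministic, so the same inputs produce bit-identical outputs on every run. There is no knob for the engine to quietly turn down.

```python
def apply_gain(samples, gain):
    """A 'fader' in its entirety: multiply every sample by the gain."""
    return [s * gain for s in samples]

samples = [0.8, -0.4, 0.2]           # made-up sample values
half_a = apply_gain(samples, 0.5)    # render the same 'mix' twice...
half_b = apply_gain(samples, 0.5)
print(half_a == half_b)              # True: identical every single time
print(half_a)                        # [0.4, -0.2, 0.1]
```

The only ways to make this cheaper are exactly the ones named above - fewer samples per second, or smaller numbers - and no serious DAW does either behind your back.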

If you really think that any of the major companies are doing this then you’re truly misled.

Numerous tests have been done on the audio engines of the major DAWs, and guess what - they all null to zero or into the dithered noise floor. I.e., they do the math the same way, minus pure noise that we can’t hear anyway.
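For a sense of scale of “nulls into the dithered noise floor”: a sketch (plain Python, illustrative numbers, not measured data) of what a residual the size of one 24-bit quantization step looks like in dBFS. Anything down there is vastly below audibility.

```python
import math

def peak_dbfs(samples):
    """Peak level of a residual signal in dBFS (full scale = 1.0)."""
    peak = max(abs(s) for s in samples)
    return -math.inf if peak == 0.0 else 20 * math.log10(peak)

quantum = 1.0 / (1 << 23)            # one LSB step of 24-bit audio
residual = [quantum, -quantum, 0.0]  # a residual the size of the dither
print(round(peak_dbfs(residual)))    # -138: far below anything audible
```

A perfect null returns negative infinity; a residual around -138 dBFS is just the dither itself, which is the result these engine comparisons keep producing.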

I find it simply unbelievable that people still entertain this silly notion in 2019…

I was deeply involved with hands on testing of the audio/summing engines of every DAW then available about 15 years ago. It was done in a public forum and subject to scrutiny by pretty much everyone on the internet.

I used my ears. Apparently they work pretty well. I could spot both Pyramix and E-MU Paris 10 out of 10 times in listening tests, but everything else, being fully software-based, sounded the same. The two I could spot, coincidentally, were the only two that were proprietary-DSP based, which made my ability to spot them seem much more logical. Not everyone could hear the difference, but some of us could in controlled testing.

So here is an opportunity for your ear usage. I’m going to print the same mix in both Nuendo 8 and Nuendo 10, after muting any variable effects like reverb, etc. I will mute the vocals, since this will have been a commercially released project. I will then post a link to both as full WAV files, and you can tell us which one sounds like Nuendo 8 and which one sounds like Nuendo 10. It shouldn’t be hard to spot, given the difference in the bass, midrange and high frequencies due to Steinberg apparently making covert changes in the audio engine.

I’ll get to that in the next couple of days - unless Steinberg shuts this down first, which would not surprise me. However, I hope they don’t, because I’m feeling strangely invigorated by getting into my first pencil-head internet audio debate in years. Surprisingly, it’s kind of fun! And I’m not mad at all; I just find this fascinating in a retro sort of way.


You might wanna read up on how a CPU deals with mathematics.

That is simply a common lie. No one has ever been able to point to these tests when asked; they just say “google it”.
The most face-palm-inducing claim a random forum dweller ever made on the exact same subject was that scientists had proven that all EQs would null out in a test. That was grade A bollocks and never happened.
Still, if my theory is correct, then this is mainly an artifact of the realtime playback as far as I can tell. To truly and actually test these things, one must record the raw output and sync it to pulse waves, and that applies no matter whether the colouration is in the playback engine or deeper.
For me it’s really simple, though: if the Prism Verifile checker (a hardware part of the output stage of my interfaces) can’t even lock onto the error detection coded by my input converters during playback, then it’s not transparent. It does lock onto the signal passing through the inputs of Nuendo, though, as long as the level is unity and no processing is added, and Wavelab has been 100% transparent for as long as I’ve been able to tell.

I find it simply unbelievable that no one seems to actually listen to what they do.

Myself I feel that I’ve said what I have to say and that the whole discussion has taken far too much focus from the original question “Is Nuendo 10 a good upgrade?”
And to the question, my answer is hands down “best upgrade to date and it’s making me very happy, although with some room for improvements”.
I’ll write a reply to the OP strictly regarding his or her original question after this one, and then I feel I’m pretty much done here. No one brings anything but opinions to the table, and I’ve repeated myself enough already. I’ll still keep an eye on the thread of course, but as it stands it’s not a very meaningful thread for me to participate in further.
I’ve always kinda wondered why developers never join these discussions themselves - they’re the only ones who could truly give the full facts here - but this thread very much suggests it’s because people are really poor at handling the concept of “acceptable compromises”. And that applies to a lot of discussions, regretfully.

But as a reply to your post here: I do this for a living, so I put my ears to the test every day. As mentioned before, I believe this is first and foremost an issue with the realtime playback engine, so to truly test it you’ll have to record the output from the live project and compare from there. The effects seem more obvious the more you have to process the material. A great example was a few years ago, when I had a running gig mixing theatrical trailers for two major studios. For the most part everything went very well, with virtually never any real issues whatsoever, except a few times when the material I got had been ripped from a really uncooperative source. It sounded good in VLC, Media Player, and Wavelab, but it was completely impossible to work with in Nuendo 7. It was all mud. It was pretty obvious that this was just a problem with the sound happening to sit too much in the… let’s call it “dead spot”, for lack of a better word, but it was a very tangible effect of all this.
Here, though, that wasn’t much of a problem. If I remember correctly it was already nicely processed, so I just let it sit as-is in the mix, and it came out just fine on the other end. Which also tells me the issue is not in the lower levels.