To my knowledge there has been exactly one issue between versions, and it was how the EQ behaved. It was essentially a bug that could come back and ‘bite you’ when moving between versions (bug vs. fixed bug) if you were unaware of it.
However, I find this talk about DAWs somehow sounding fundamentally different, as if the basic signal processing (not the plugins) differed, to be mythological nonsense. Heck, even most plugins act exactly the same way between versions unless clearly specified.
So I don’t believe this talk about one version sounding better than another. Do a null test on the basic features and see for yourself.
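For anyone who hasn’t done one: a null test just sums one render with a polarity-inverted copy of the other and checks what’s left. A minimal sketch in Python (numpy assumed; synthetic signals stand in for the two exported renders, and the function name is my own):

```python
import numpy as np

def null_test(a: np.ndarray, b: np.ndarray) -> float:
    """Subtract one render from the other (i.e. polarity-invert and sum)
    and return the peak of the residual in dBFS (-inf means a perfect null)."""
    residual = a - b
    peak = np.max(np.abs(residual))
    return float('-inf') if peak == 0 else 20 * np.log10(peak)

# Two sample-aligned "renders" of the same mix; here simply identical sines.
t = np.arange(48000) / 48000.0
render_a = 0.5 * np.sin(2 * np.pi * 440 * t)
render_b = render_a.copy()

print(null_test(render_a, render_b))  # -inf: identical engines null perfectly
```

If two engines really did the math differently, the residual would sit well above the dither floor instead of nulling out.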
Aye, I actually used that for effect some years ago when making strange noise effects. The best results at the time came from combining my then Nuendo 7 with Cubase SE 3. By using a short square-wave clip, less than a frame long, for sample-perfect sync before both renderings, and looping back into Wavelab (nothing ever sounded the same after rendering as it had during the mix, and I wanted the full effect…), I got a lovely and extreme phasing usable for many weird things.
Here, though, I don’t really need to null things to make sure. It’s an extremely tangible effect on its own - as it has been every single time - and jumping back and forth between Nuendo and RX when cleaning up these lavs only made it that much more obvious. The things I actually did like in the sound - the firm low-mids - simply weren’t there in Nuendo. Things are getting closer and closer to true transparency in Nuendo, but it’s not there yet. I remember Pyramix was transparent, but it’s mostly hardware-based, so that makes sense (and that level of fidelity doesn’t come free). Both Wavelab and Sound Forge play back raw files perfectly, which I can actually verify visually, because everything I record is coded with something called Verifile, which - simply put - is a way to hardware-code error detection directly into every single sample of the sound (transparently, of course), making my Titan cards light up like a Christmas tree if there’s a dropout or something.
I’ve never actually got clear answers regarding this, but until someone who actually codes these engines tells me differently, I’m convinced this is an artifact of making software-based multi-track DAWs able to deal with downright insane amounts of data - the same way 3D FPS games use (used?) a very simplified, truncating form of calculation, with total respect for the fact that CPUs simply can’t do some calculations very effectively, just to be even remotely playable on regular PCs. Which would also explain why Pyramix and Wavelab are totally transparent.
Ten years ago I did a full feature film on my old Core2Duo: 650 tracks a reel, with EQ and a compressor on every track, plus several instances of full 5.1 surround Altiverb, automation, and a lot of design effects to boot. At the time it was the heaviest project I’d ever done (pretty tame compared to some of the stuff I do now, though), and that math doesn’t add up on its own.
This reminds me of when I moved from Sonar to Sequoia. Sequoia had a different sound. As time went by I realised that it was showing me more of the mid-range detail, which was a good thing.
Thanx for the explanation!
but from what I can tell right now, things produced here seem to work much better on older versions of Nuendo than the other way around, which means it’s a very good thing by the looks of it.
This part is a bit confusing, though.
Regarding Sequoia: Totally hear you.
The confusing bits: Seeing as I have absolutely no influence over the development of these programs, and can’t really change anything from my side, I’ve never actually dug deeper into this. I only try to find the way that best gets me where I want with the least work required, and build on things from there. But to cut a long story short, I’ve got the impression that the colouration of the sound is first and foremost an artifact of the realtime playback. I might be wrong, of course, but that kind of makes sense when things I couldn’t hear in Nuendo 8 now sound clear as day. It means that EQ optimized for Nuendo 8 now sounds a wee bit weird in Nuendo 10, while EQ adapted for the much higher-resolution Nuendo 10 only makes an impression comparable to, say, jumping between the PSI speakers and my old Yamahas when loading the very same project one step back. One can reach a whole new level of precision here that easily transfers backward even if you can’t hear the exact mechanics in the old version, the same way a great mix sounds good whether it’s played on a $10,000 audiophile system or a $1 transistor radio.
There’s so much to say about all this, but I neither have the time nor the patience, so I’ll settle with this:
Bad hearing will lead to bad mixes, and fast doesn’t often mean good. There is no such thing as “diamond ears”, only more or less well-trained ears. Even people with damaged hearing can often compensate in their monitoring to some degree, reaching great results by knowing what they do and why.
Regarding the quality of gear, though: spending too much time on anything for no good reason isn’t good either. Where I historically spent an insane amount of time and energy working on cheaper, consumer-friendly gear to reach results that transfer well everywhere, things virtually mix themselves now that I use a much smaller but much more reliable and better-sounding collection of software and hardware.
And this is why I love Nuendo - it has always been at least one step ahead of the competition, evolving tremendously with each release, making work much easier and reducing tweak-time a lot.
Higher sound quality = less time spent working, and better results than are physically possible with crabby tools.
And yes - when I started out in the early/mid 90’s there were a lot of discussions among sound people regarding the quality of gear, pros and cons. At that time you couldn’t really use a BASF tape on a reel-to-reel machine calibrated for AMPEX without a world of problems, but everyone knew that, because if you didn’t know what tape bias was, no studio would hire you. Actually, most of us even tried as far as possible to only work with tapes from the exact same batch, because every batch was slightly but still noticeably different by nature. That’s the analogue way. Much easier to calibrate for a new batch than for a new brand, though, when the old one ran out. Myself, it was AMPEX 456 that ran through our reels, both 2" 24-channel and 1/4" 2-channel.
What year is this? These discussions were worn out long ago.
In the early years, some DAWs did sound slightly different, due either to proprietary hardware implementations (Pyramix, Paris, Fairlight) or to poor coding or hardware design practices (mostly early Pro Tools). For instance, Pro Tools “Mix24” was broken from the beginning: 56-bit internal processing linked in hardware by 24-bit interconnect stages that could cascade to exceed 0 dBFS when levels should, in theory, have remained below clipping. There are other examples of variations in DAW results for various reasons. Google the “Awesome DAWsum” test just for fun.
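The Mix24-style problem is easy to illustrate in principle. A toy model in Python (illustrative numbers only, not the real Mix24 internals): a wide accumulator can carry an over-0 dBFS intermediate sum harmlessly until a later fader brings it back down, but forcing that intermediate through a 24-bit stage clips it first.

```python
# Toy model of a narrow-interconnect summing problem: two ~-3 dBFS signals
# sum fine inside a wide accumulator (attenuate later, no clipping), but
# squeezing the intermediate sum through a 24-bit bus clips it on the way.
# All numbers are illustrative, not the actual Mix24 architecture.

FULL_SCALE_24 = 2**23 - 1  # largest positive 24-bit sample value

def to_24bit_bus(x: int) -> int:
    """Clamp a wide intermediate value onto a 24-bit interconnect."""
    return max(-2**23, min(FULL_SCALE_24, x))

a = int(0.708 * FULL_SCALE_24)  # roughly -3 dBFS
b = int(0.708 * FULL_SCALE_24)

wide_sum = a + b                  # fine inside, say, a 56-bit accumulator
bus_sum = to_24bit_bus(wide_sum)  # clipped by the narrow stage

final_wide = wide_sum // 2        # master fader at -6 dB: stays below 0 dBFS
final_bus = bus_sum // 2          # the clipping already happened upstream

print(wide_sum > FULL_SCALE_24)   # True: the intermediate exceeds full scale
print(final_bus < final_wide)     # True: the damage is baked in
```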
Those days have passed. There may be some DAW or other that has flaws in it but as a rule, they all sound the same. The ease and intelligence of the workflow design has more to do with results now than any other factor. And one size does not fit all there.
Our great American poet, Michael McDonald said it best long ago: What a fool believes, he sees, no wise man has the power to reason away…
I was pretty young then and not very experienced yet, but I remember how people talked about the digital revolution as something that would save us from all the problems of analogue gear. I don’t know who you are, and have no clue whether you were doing audio in the 90’s yourself, but suffice it to say that in the studio I was working at, we spent essentially as much time fixing the beep electronics as recording bands. For instance, I had a lot more fun when we changed every single pot on a huge 32-channel D&R console than my boss did, who had done it before…
It didn’t quite pan out that way, though; we were trading one set of problems for a totally new set.
Still, there’s no denying things are way more painless and efficient now, and the things I do on a daily basis as a movie sound designer and re-dubbing mixer were pure science fiction just a few years ago.
Obviously. There’s a lot of defensiveness about how people want things to be, but we live in a world of limitations, and that means working within them and making the best of it. It’s not flaws or bad programming; it’s a compromise between performance and intended use. Until someone invents quantum computing, you can’t really have both on a large project. This is very, very obvious in the world of games, which can pull off much more impressive stuff than the computer could ever do with straightforward algorithms - that’s why games like Wolfenstein, Doom, and Quake worked on regular PCs in the first place. Same here.
What makes these discussions so painful is that the defensive people refuse to use their ears and only cite sources that agree with them. That is really, really annoying and non-constructive.
Myself, I chose Nuendo when I changed from music to movies because it was hands down the best at what it did, without me having to spend the cash of a new car. And I’ve remained with Nuendo because it is genuinely a really good program regardless of price range, and the difference between hardware-based DAWs and pure software is slowly fading. But we’re not there yet.
Only they can say for sure, but my short and pretty broad answer is: the audio engine. Simply put, I know the effect, but I don’t know the exact mechanisms in black and white, only what I’ve pieced together from empirical tests and from similar solutions in other fields of computer technology, mostly 3D graphics and free-roaming world-building. These things have an impressive lot in common, although they might appear very different. An awful lot of effective programming is about cheating the hardware as much as humanly possible, and every cheat has a trade-off of varying degrees of acceptability. If you wanna see some seriously impressive work there, the Amiga demo scene from the mid 80’s to the early 90’s was downright bonkers at finding shortcuts to the best effects within the limitations of 500 kB of memory and a single DD disc to distribute it on.
I think it’s pretty safe to say that no DAW developer has ever actually wanted the engine to affect the sound; it’s simply an unwanted side-effect that they’ve had to mask as well as possible. Some do it better than others - suffice it to say I’m not friends with the character of Audition, for instance. Too dull and bassy. Which means it’s great for checking mixes in, and that’s what I use it for.
The typical error when stressing a DAW too hard is dropouts or glitches due to a buffer running empty. There isn’t some sort of gradual degradation of the audio in order to avoid that.
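A toy sketch of why that is (hypothetical callback in Python): the playback buffer either holds the exact samples queued or it doesn’t; an overloaded engine gives you silence or a glitch, never a subtly different tone.

```python
from collections import deque

def audio_callback(frames_needed: int, buffer: deque) -> list:
    """Toy audio callback: deliver the queued samples bit-exactly, or emit
    silence (an audible dropout) once the buffer runs empty. There is no
    'slightly degraded' middle ground for the engine to fall back on."""
    return [buffer.popleft() if buffer else 0.0 for _ in range(frames_needed)]

buf = deque([0.1, 0.2, 0.3])   # the engine fell behind: only 3 frames queued
print(audio_callback(5, buf))  # [0.1, 0.2, 0.3, 0.0, 0.0]
```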
And if you think it’s a static issue due to CPU demands, then you’re really talking about limiting the resolution either on one ‘axis’ by limiting dynamic range, or on the other by limiting bandwidth - i.e. you’d have to program the DAW to lower the sample rate or decrease the bit depth to reduce CPU demands. That’s a ridiculous suggestion. And why those items specifically? Because if you’re talking about the basic audio engine, it’s just very, very simple math we’re dealing with. It’s nothing fancy. It really isn’t. Wanna double something? Multiply by 2. Done. Cut it in half? Multiply by 0.5. That’s it.
So how can you make X * 0.5 any less demanding on the CPU? You can’t change the math because all it is is multiplication. So the only thing that’s possible is making X a smaller number to begin with - i.e. reducing bit depth. Oh, well I suppose the other way of doing it is to do fewer calculations per second - i.e. lowering sample rate.
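To make the point concrete, a gain stage really is just this (a sketch; `apply_gain` is my own name, numpy assumed):

```python
import numpy as np

def apply_gain(samples: np.ndarray, gain: float) -> np.ndarray:
    """A DAW fader in its entirety: one multiply per sample."""
    return samples * gain

x = np.array([0.5, -0.25, 1.0])
print(apply_gain(x, 2.0).tolist())  # [1.0, -0.5, 2.0] - "double it"
print(apply_gain(x, 0.5).tolist())  # [0.25, -0.125, 0.5] - "cut it in half"

# There is no cheaper version of this multiply; the only ways to save CPU
# here are fewer samples per second (lower sample rate) or smaller numbers
# per sample (reduced bit depth).
```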
If you really think that any of the major companies are doing this then you’re truly misled.
Numerous tests have been done on the audio engines of the major DAWs, and guess what - they all null to zero or into the dithered noise floor. I.e., they do the math the same way, minus pure noise that we can’t hear anyway.
I find it simply unbelievable that people still entertain this silly notion in 2019…
I was deeply involved with hands on testing of the audio/summing engines of every DAW then available about 15 years ago. It was done in a public forum and subject to scrutiny by pretty much everyone on the internet.
I used my ears. Apparently they work pretty well. I could spot both Pyramix and E-MU Paris 10 times out of 10 in listening tests, but everything else, being fully software-based, sounded the same. The two I could spot, coincidentally, were the only two that were proprietary-DSP based, which made my ability to spot them seem much more logical. Not everyone could hear the difference, but some of us could in controlled testing.
So here is an opportunity for your ear usage. I’m going to print the same mix in both Nuendo 8 and Nuendo 10, after muting any variable effects like reverb, etc. I will mute the vocals, since this will have been a commercially released project. I will then post a link to both as full WAV files, and you can tell us which one sounds like Nuendo 8 and which one sounds like Nuendo 10. It shouldn’t be hard to spot, given the difference in the bass, midrange, and high frequencies due to Steinberg apparently making covert changes to the audio engine.
I’ll get to that in the next couple of days - unless Steinberg shuts this down first, which would not surprise me. However, I hope they don’t, because I’m feeling strangely invigorated by getting into my first pencil-head internet audio debate in years. Surprisingly, it’s kind of fun! And I’m not mad at all; I just find this fascinating in a retro sort of way.
You might wanna read up on how a CPU deals with mathematics.
That is simply a common lie. No one has ever been able to point to these tests when asked about it, just saying “google it”.
The most face-palm-inducing thing a random forum dweller ever claimed on the exact same subject was that scientists had proven that all EQs would null out in a test. That was grade-A bollocks and never happened.
Still, if my theory is correct, then this is mainly an artifact of the realtime playback as far as I can tell. To truly test these things one must record the raw output and sync it to pulse waves, and that applies whether the colouration is in the playback engine or deeper.
For me it’s really simple, though: if the Prism Verifile checker (a hardware part of the output stage of my interfaces) can’t even lock onto the error detection coded by my input converters during playback, then it’s not transparent. It does lock onto the signal passing through the inputs of Nuendo as long as the level is unity and no processing is added, and Wavelab has been 100% transparent for as long as I’ve been able to tell.
I find it simply unbelievable that no one seems to actually listen to what they do.
Myself, I feel that I’ve said what I have to say, and that the whole discussion has taken far too much focus from the original question: “Is Nuendo 10 a good upgrade?”
And to the question, my answer is hands down “best upgrade to date and it’s making me very happy, although with some room for improvements”.
I’ll write a reply to the OP strictly regarding his or her original question after this one, and then I feel I’m pretty much done here. No one brings anything but opinions to the table, and I’ve repeated myself enough already. I’ll still keep an eye on the thread of course, but as it stands it’s not a very meaningful thread for me to participate in further.
I’ve always kind of wondered why developers never join these discussions themselves - they are the only ones who could truly give the full facts here - but this thread very much suggests it’s because people are really poor at handling the concept of “acceptable compromises”. And that applies to a lot of discussions, regretfully.
But as a reply to your post here: I do this for a living, so I put my ears to the test every day. As mentioned before, I believe this is first and foremost an issue with the realtime playback engine, so to truly test it you’d have to record the output from the live project and compare from there. The effects seem more obvious the more you have to process the material. A great example was a few years ago, when I had a running gig mixing theatrical trailers for two major studios. For the most part everything went very well, with virtually never any real issues whatsoever, except a few times when the material I got had been ripped from a really uncooperative source. It sounded good in VLC, Media Player, and Wavelab, but it was completely impossible to work with in Nuendo 7. It was all mud. It was pretty obvious that this was just a problem of the sound happening to sit too much in the… let’s call it “dead spot”, for lack of a better word, but it was a very tangible effect of all this.
Here though that wasn’t much of a problem. If I remember correctly it was already nicely processed so I just let it sit as is in the mix and it came out just fine on the other end. Which also tells me the issue is not in the lower levels.
Regretfully my time for interwebz is about to run out, but to return to your original question:
Yes, I think this is an amazing upgrade! They have made some decisions I don’t quite agree with - like removing the audio tab from the right-click menu - some decisions I had to figure out but very much appreciate - like moving the colours editor to the top menus instead of a button - and one dropped ball in still not having fixed the old issues with automation trimming. But as a whole it has made my life a whole lot easier, and it contains a lot of really, really nice things.
As for whether it’s as great an upgrade for music as it is for post, I can’t say for sure. I left the music industry completely a decade and a half ago in favour of movies and games, and only do music for fun nowadays - an Industrial Ethno project I’ve been running since the mid 90’s - but… yeah, I’d still upgrade. No doubt. I have no clue whether you’d consider it as worth the upgrade as I do; I know nothing about your needs right now, so it’s hard to give actual advice here, but I do heartily recommend it.
I do hope this conversation doesn’t end up going back to the arguments I saw on pro audio forums in the early 2000s, where people would say things like different hard drives having a different ‘sound’, or that even if two DAWs produced a mathematically identical audio file, some people could tell they sounded different because there’s more to it than the maths, or some such nonsense, because that shit was really tedious.
The CPU deals with mathematics the way the CPU deals with mathematics. The way the CPU deals with mathematics is common for all DAWs. Therefore in order for what you say is happening Steinberg would have to be able to change “how a CPU deals with mathematics”. I doubt that’s happening.
Saying that no one has done that probably just means that you haven’t seen it. We had this debate a decade or two ago, and Lynn Fuston created a large test of DAW summing, i.e. a test of the audio engine. It included a bunch of different DAWs, and what people said they heard wasn’t what they actually heard.
Go get the test files (“Awesome Dawsum”) and check them out yourself if you want. Or recreate the test, it’s not that hard.
What does that have to do with the “audio engine”? I have no idea who this person was or why we should all of a sudden care about that. This isn’t what we were talking about.
What does “sync” “the raw output” “to pulse-waves” mean in practical terms?
I don’t think you understand the details of Verifile. Just reading the manual it clearly states several key issues:
"the computer’s operating system is not designed that way, and (even if optimally configured, which they seldom are) it will, now and again, interrupt audio recording to do something which seems more important at the time. This is especially true when dealing with many channels of high resolution audio, perhaps with low latency, which needs a continuous high data throughput. The result is usually a recorded ‘dropout’ of some kind: anyone who has recorded audio on a computer is familiar with the manifestations: these include repeated or missed samples or entire sections, random clicks, pops - even channel swapping."
Exactly what I was saying. Hearing a slight shift in timbre is not what the effect would be from the above errors. Clicks and pops yes. “Warmer”, no.
“Verifile is a ‘fragile steganographic’ process which embeds derivative data within the dither of the ADC,”
“Verifile is not designed to survive any audio processing or encoding process”
What this means is that you can’t use Verifile to judge whether or not a DAW is sonically transparent, because as soon as you process audio or apply dither it won’t give you answers that show you anything of value in this context.
First of all, who says I or other people with my opinion aren’t listening to what we do?
Secondly, “confirmation bias” is actually a thing. So judging from all the testing I’ve done as an engineer, and from all the papers I’ve read, and from all my actual mixing I’ve done, what is most probable is that you’re either hearing differences that aren’t there or are attributing actual differences to things that aren’t their cause.
I’ve already said all there is to say about the transparency and will not comment more on that, but Verifile is a system intended to make sure that an audio stream is completely bit-perfect from the coding down to the output. Every single sample gets its own check code, and the very moment - down to a single 1/192,000th of a second - the decoder gets wind of it, it begins to monitor it. If an unprocessed file recorded through Prism converters with the Verifile coder active is played in a DAW and the Verifile decoder doesn’t react to it, then it is not bit-transparent.
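To illustrate the general idea only - this is emphatically not Prism’s actual, proprietary Verifile algorithm, just a toy of the same family: hide a running check in each sample’s least significant bit (where the dither lives), and an alteration of the stream, such as a dropped sample, breaks the chain for the decoder.

```python
# Toy per-sample stream check - NOT Prism's real Verifile scheme, which is
# proprietary and far more robust. This 1-bit check can miss some errors;
# it only illustrates why the decoder losing lock implies a non-bit-exact path.

def encode(samples):
    """Overwrite each sample's LSB with a running 1-bit check of the stream."""
    out, check = [], 0
    for s in samples:
        check = (check + (s >> 1)) & 1  # running check over the upper bits
        out.append((s & ~1) | check)    # embed it where dither would sit
    return out

def verify(samples):
    """True only if every embedded check matches, i.e. bit-exact playback."""
    check = 0
    for s in samples:
        check = (check + (s >> 1)) & 1
        if (s & 1) != check:
            return False
    return True

stream = encode([1200, 16776861, 87, 4096])  # four 24-bit samples
print(verify(stream))  # True: the path was bit-transparent
stream[2] = 0          # one dropped sample, as from a buffer underrun...
print(verify(stream))  # False: ...and the decoder loses lock
```

This also shows why, as quoted above, such a scheme can’t survive any processing: even a gain change rewrites the bits the check lives in.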