Please share info

Hi there,

I must admit to being somewhat ignorant of all the “modern/fancy” procedures when it comes to audio mixing.
But…coming from 25+ years in the pro analog music engineering world, and doing post for TV exclusively these days, I need to ask this question:

What is the under-the-hood processing that Nuendo applies to my stem mixes when I select normalize & peak limit to a spec?
So far I have been using my ears and calibrated monitoring, and landed in the legal ballpark. My stems were individually peak limited by Nugen ISL.
So…today, for the heck of it…and just to try it out, I used Nuendo’s built-in normalize function that is available in the mixdown menu.
And yes…it was a tad quicker, and analyzing the result showed it hit the delivery spec perfectly.

Please comment …

I would expect it to first analyze the exported file and then simply normalize the file to hit your target. Simple normalization, nothing fancy. And then, if anything goes over, a firm limiter to hit the true peak target. Nothing fancy.
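If that is indeed what it does, the under-the-hood part would amount to something like the sketch below: measure integrated loudness per ITU-R BS.1770, apply one static gain offset to hit the target, then run a safety stage at the true-peak ceiling. This is only my own illustration of that assumption (the pyloudnorm meter, file names and target values are mine), not Steinberg’s actual implementation, and the clip stage is a crude stand-in for a real true-peak limiter.

```python
# Minimal sketch of "measure, offset, then limit" loudness normalization.
# Assumptions: an ITU-R BS.1770 meter (the pyloudnorm package) and a plain
# sample-value clipper standing in for a real oversampling true-peak limiter.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -23.0    # EBU R128 target (a Canadian delivery would use -24 LKFS)
CEILING_DBTP = -1.0    # delivery true-peak ceiling

audio, rate = sf.read("stem.wav")             # float samples, (frames, channels)
meter = pyln.Meter(rate)                      # BS.1770 loudness meter
measured = meter.integrated_loudness(audio)   # e.g. -21.4 LUFS

gain_db = TARGET_LUFS - measured              # one static offset, nothing dynamic
audio = audio * 10 ** (gain_db / 20.0)

# Crude safety stage: clamp anything above the ceiling. A real limiter
# (ISL, or whatever Nuendo uses internally) looks ahead and oversamples
# to catch inter-sample peaks instead of hard-clipping like this.
ceiling = 10 ** (CEILING_DBTP / 20.0)
audio = np.clip(audio, -ceiling, ceiling)

sf.write("stem_normalized.wav", audio, rate)
```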

Andrew,

I think Matthias is right, it’s just a simple loudness measurement and normalization.
AFAIK (I haven’t tried it), this applies to all exported stems. That kind of makes it useless for me unless you export just one file (the main mix). There’s no point in having a NAT or SFX stem normalized to -23, if you ask me.

Also, I find other tools to be much faster. Try Wavelab or Nugen LMB for insane speed. Wavelab does not do surround, though.

Never touch your stems.
If your mix is in the ballpark of -23 LUFS (and only then), you can normalize your final mix so it hits exactly -23 LUFS.
Your stems, however, need to stay untouched, so someone else can hit their loudness target after dubbing or remixing. Anything you do to your stems will thoroughly confuse the people who receive them.
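To put some invented numbers on Fredo’s point: each stem has a different integrated loudness, so per-stem normalization applies a different gain to each one and shifts the balance the mixer set. The stem names and LUFS figures below are made up purely for illustration.

```python
# Illustration only: per-stem normalization applies a different gain to each
# stem, so the relative balance between stems is destroyed.
# The loudness figures below are invented for the example.
TARGET = -23.0
stems = {"DX": -24.0, "MX": -31.0, "FX": -38.0}   # integrated LUFS per stem

for name, lufs in stems.items():
    gain = TARGET - lufs
    print(f"{name}: {gain:+.1f} dB offset")        # DX: +1.0, MX: +8.0, FX: +15.0

# Each stem now measures -23 LUFS on its own, but the dialog-to-FX balance
# has moved by 14 dB compared to the mix that was signed off.
```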


Fredo

That’s what I meant, but that’s what the function in Nuendo does, right?

OK guys, many thanx…I do understand all that. But let me try to explain the process I experimented with this morning. It was just a trial; I have never done it before.

I have set up a couple of stereo stems, each directly fed by various single tracks and groups. These are then batch exported as individual stem masters. I never touch any levels on those stems (all my adjustments are made before the audio is fed directly into them), except that I do have Nugen ISL on each, limiting to my specs.
That is actually my normal workflow.
In addition, I have a “sum master monitor” which receives an always-untouched, post-ISL and post-fader send from the stems, so I can monitor the summed final program.
As my room is well calibrated, and besides my ears I also use a Dorrough loudness meter on that sum, I’m usually bang on [+/- 1 LU] for -24 LKFS (Canada).

So my experiment this morning was to take the ISL off the stems, not monitor my final mix sum, and instead just use the Nuendo normalize and peak limit function in the mixdown menu.
So…I am not dealing with an already existing mix, but batch exporting the bare stems to new files.

After that…I grab these new, processed stems, import them into a new test project, and run Nuendo’s loudness track to quickly analyze this new test mix.


TP was 1.2 dB off and loudness came out to -23.9 LKFS.
Surprisingly, that mixdown, including analyzing and batch exporting the original session’s stems, was faster than what I normally get through my customary method.
Therefore I’m simply asking: why would that be faster (a good thing!), and is my audio badly affected by using just a tiny amount of correction at the batch export stage…? I didn’t have time to listen carefully through the resulting experiment (57 mins).

Wow…I’m hoping you can make sense out of this…sorry guys…English isn’t my first language.

Thanx to all who responded :sunglasses:

Not sure what you mean by this.
I batch-export my finals in two runs.

  1. Final mix (5.1, stereo, xxx) - with the normalizing function flagged (-23 LUFS / minus 1 dBTP).
  2. My stems - with the normalizing function deactivated.

Fredo

Maybe I wasn’t clear enough. I export everything in one go (well, if there is no M&E, that is).
Then I drop the main mix into Wavelab (fastest stereo solution) or Nugen LMB (fastest surround solution).
What I meant is, I don’t think you can export everything in one go and have the normalizing function only active for the main mix.

Oliver

Fredo & Oliver…now I know what you guys mean.
I simply ran an experiment as described in my last post, not using a main mix out, only the final mixed stems. I applied the Nuendo normalize/TP limit process to this batch export, just to check what would happen and how fast it would be.

After that, these new stem exports were imported into a virgin project and mixed down to a stereo export, then the resulting mix file was analyzed. It was right at target, and my export from those stems was much faster.
That is why I asked for your comments in the first place.

@OLIVER >> I don’t have LMB, and my Wavelab is old, on another old PC doing DDP exports and disc burning.

The only thing about processing the stems is that the possible variance would be fairly large. If you have, for example, music, dialog and FX, and they all happen to peak at -1 at the same time, you’ll be clearly over. So you’d need to control the master anyway.
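As a rough worst case with assumed numbers: three stems each limited to -1 dBTP can, if their peaks happen to line up in phase, sum to roughly +8.5 dBTP on the master.

```python
# Worst-case sketch (assumed figures): three stems, each already limited to
# -1 dBTP, whose peaks happen to coincide and add in phase on the master bus.
import math

per_stem_dbtp = -1.0
peak_linear = 10 ** (per_stem_dbtp / 20.0)             # ~0.891
worst_case_sum = 3 * peak_linear                       # music + dialog + FX aligned
print(f"{20 * math.log10(worst_case_sum):+.1f} dBTP")  # about +8.5 dBTP
```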

This will be my last comment on this subject…without wanting to sound ignorant & stupid, I must say that I fully and completely understand what all of you are saying. And yes…I do have some deliveries that need stems and master mixes.
But…let’s just say (and that was the purpose of my experiment and this thread) that client X doesn’t want or need a “final master”…only my stems. They will be creating their own multiple versions from my previously submitted and mastered stems.

So…in my test these “test-only stems” went through Nuendo’s batch export using the normalizing setting.
Then…they were summed, and the resulting mix was analyzed in a new virgin “master” project.

Does that sound clearer? Even though that workflow might not be my usual one…it was simply a “test” to check rendering speeds.

Gosh…so sorry…I’m outta here

You don’t sound ignorant or stupid or anything, and I understood what you were doing. All I was getting at was that for final delivery, by you or anyone else, the final sum of the stems needs to comply with whatever spec is relevant. So, if you export the stems with that functionality in Nuendo, either you or your client still has to process the master file again when it’s created.

Now, since you can already hit levels within a dB (or +/- 1 dB), it seems to me that your old workflow was quite sufficient. I’m basically saying that you don’t really seem to be gaining much by doing this, unless I’m missing something.

I would say you should see if there is still an upgrade path for your version, because WL has taken a huge step recently. I was not very happy with it for a while, but version 9 is a lot better (my main concern was the cluttered GUI, which was just too complicated for me).

I hope it will allow editing multichannel files in the next version; that would be my biggest gripe at the moment. Anyway, Phillippe is one of the cool guys who has been around for a while (a bit like you and me, I guess), and he does a brilliant job with WL.

Just as I was looking for an older topic (the bloody MAAG 4 by PA), I saw this conversation.
I’ve read it top to bottom and see what you guys meant.
I would like to clarify one final time that it was a simple test, checking out Nuendo’s normalizing function for the first time vs. my regular batch exporting route with ISL2.
That was the only difference, and according to my results…the batch export of the stems sped up by at least 35-40%.

:slight_smile: