EVERYBODY gain stages, it’s da koolest!
You guys are still using gain staging…?
I heard free range gain is what all the bigs are using nowadays. Some are even using vegan gain, but that’s too far in my opinion. Nothing wrong with a little light saturation on your gain if you enjoy the taste.
Similar to how sidechaining compressors was a few years back.
Sidechaining has been around since the 1930s. It’s not really a fad. Frankly, I don’t think anything in mixing/mastering is a fad; it’s just a bunch of people who finally have access to technology that 20 years ago nobody would have dreamed of having in their bedroom, and who are trying to sort through all the ins and outs of engineering audio.
Especially if you’re mixing at a 32-bit floating point bit depth or higher. You have a LOT of headroom.
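To make the headroom point concrete, here’s a rough numpy sketch (not anything a DAW actually does internally, just an illustration): in float you can sail way past full scale and pull it back down later, whereas a fixed-point path clips it for good.

```python
import numpy as np

# A 1 kHz test tone peaking about 12 dB over full scale
t = np.arange(48000) / 48000.0
hot = (10 ** (12 / 20)) * np.sin(2 * np.pi * 1000 * t)

# Fixed-point style path: everything above full scale is flattened and lost
clipped = np.clip(hot, -1.0, 1.0)

# Floating-point path: the overs are still intact, just pull the level back down
recovered = hot * 10 ** (-12 / 20)

print(np.max(np.abs(clipped)))    # 1.0 -> squared-off peaks
print(np.max(np.abs(recovered)))  # ~1.0 -> original waveform, only quieter
```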
I stopped doing this since most places take 24-bit, and you would have to convert back down. I don’t know, maybe converting down isn’t so bad?
I just mean that it’s a problem many of us who have been around for a while don’t really run into, because of how we learned engineering in the past.
I suppose, but I’ve been self-teaching for around 15 years, and making sure the levels are good using a VU meter is nothing new. The idea of gaining the wav first instead of touching the faders is, and it helps immensely imo. Now I know that my audio sits at a fairly consistent level throughout; that way I can then begin to add my EQ and compression and end up with a nice, dynamic, smooth mix all around. Just things I learned from industry professionals who make a living doing it, which is where I want to be this time next year.
But then again, maybe my mixes suck and yours are amazing and you have built a career out of it? What I do know is that ever since I learned gain staging, mid/side EQ, and sidechain EQ and compression the proper way (amongst other things), I feel that my mixes have gotten substantially better.
If you are a professional whose career is audio engineering, I would love if you would be willing to lend an ear.
If you haven’t already, I would definitely put some thought into your monitoring situation. If the room is set up well, then I would argue that at that first stage, when you add audio, you would be turning things up or down to hit a reasonable level, because you would adjust to how it feels, and how it feels has been set while calibrating your setup. In other words, when I said earlier that I assumed the setup was good, I was basically saying we’re still “gain staging”; it just isn’t a separate step in my (our) process… not that we aren’t gain staging at that point in the signal chain.
Just a thought.
Thanks, yeah, I am really solid on all of that. And in fact, I can see a situation where “one size fits all” doesn’t work. It’s kind of like: don’t just adjust stuff on a compressor because the little line is jumping down and that’s where you want it, use your ears, boy!
So, though I still appreciate the idea of gain staging, I think I understand where you are coming from as well. For me it’s a mixture of both, because sometimes you can bring, for instance, a hi-hat up to the 0 mark on the VU meter, and OH BOY!
So: gain stage using your ears, not by just hitting the same point on the VU meter every time. That’s what I’m taking away from this.
I recommend you mix in floating point and use dither when you need to render the audio at a lower bit depth.
I recommend you stop watching “Fix the Mix”…
Sure, you can use dither, but why would you want to do that to something you worked so hard on? I mean, I suppose there is no audible difference; it’s a pretty amazing algorithm.
I will try my next mix in floating point.
Mix in float, or mix/export to float? All mixing in a modern DAW is done in floating point, at either 32-bit or 64-bit. Whether the destination bit depth is float or fixed doesn’t matter; the mix itself still happens in floating point.
As for dither, it should probably always be applied when reducing bit depth, and if the output is 24-bit fixed then I’m guessing you can dither or just leave that for the mastering engineer to deal with. It probably makes no difference.
You said to mix in 32-bit for more headroom; the issue is that you have to dither down to 24-bit most of the time, because the majority of streaming platforms don’t take 32-bit. So why work really hard making a nice dynamic mix, just to dither it at the end?
Yeah, probably can’t hear the difference, but it’s more just my anal side knowing it’s there. Would rather just mix in 24-bit, export in 24-bit. Don’t need that much headroom lol. I record to -12 and gain stage most of my stuff to -12 anyway.
It’s more that a 32-bit float signal’s actual precision is the same as a 24-bit fixed file’s, but that precision “floats” across a much wider range, giving you tremendous headroom, and even more so with 64-bit float. So what you’re gaining is the ability to not clip while mixing. When exporting you aren’t really losing anything.
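Rough numbers behind that, if anyone wants them (a quick back-of-the-envelope in Python; the ~24-bit mantissa figure is just standard IEEE-754 single precision):

```python
import numpy as np

# 24-bit fixed: ratio of full scale to the smallest step
print(20 * np.log10(2 ** 24))              # ~144.5 dB of dynamic range

# 32-bit float: still roughly 24 bits of precision (23 stored + 1 implied),
# but the exponent lets that precision window slide across a huge range
f32 = np.finfo(np.float32)
print(20 * np.log10(f32.max / f32.tiny))   # ~1529 dB of representable range
```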
But more importantly, you can’t mix in 24-bit fixed in Cubase or Nuendo. It isn’t possible. You can use files that are recorded that way but once those files are read and used in any way in the audio engine they’re converted to floating point. That was my point. It’s a floating point process and then you end up with fixed point.
As for dither: it’s something that is recommended at the end of the process, if you think the lowest part of the signal might be audible above the noise floor of the listening environment. If that’s the case, then you should dither after all other processing, as part of the final bit-depth truncation.
You can’t mix in 24-bit. It’s a non-issue.
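For anyone curious what “apply dither when reducing bit depth” boils down to, here’s a bare-bones Python sketch (plain TPDF dither with no noise shaping; real mastering-grade dithers do more than this, so treat it as an illustration only):

```python
import numpy as np

def to_fixed_with_dither(x, bits=24, seed=0):
    """Quantize a float signal in [-1, 1] to signed integers, adding TPDF dither first."""
    rng = np.random.default_rng(seed)
    full_scale = 2 ** (bits - 1) - 1
    # TPDF dither: two uniform noises summed, roughly +/- 1 LSB peak
    dither = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    quantized = np.round(x * full_scale + dither)
    return np.clip(quantized, -full_scale - 1, full_scale).astype(np.int32)

# A very quiet tone: undithered quantization would turn it into correlated
# stair-step distortion; with dither the error becomes benign, steady noise.
t = np.arange(48000) / 48000.0
quiet = 1e-4 * np.sin(2 * np.pi * 440 * t)
print(to_fixed_with_dither(quiet)[:8])
```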
Not unreasonable. Personally I use the entire range in fixed point upon delivery (TV). Peaks around -2.5dBFS and average according to whatever standard I need to meet.
In addition to what @MattiasNYC said:
In my mind I try to keep it simple regarding 24-bit integer vs. 32-bit float. As long as I am not delivering a final product for consumption by the general public, I save everything in 32-bit float. Not because 32-bit is larger than 24-bit, but because float is better than integer while you are still working on the audio.
So: 24-bit integer for the final “this is the release” file, 32-bit float for anything that’s still “we’re working on it”.
Every mastering engineer should by now accept 32-bit float files. Hell, they should even be glad to get the file in this format.
Nope: definitely not to somehow auto-set gain staging for all tracks at once, silly. Ditto normalisation, which is a completely different process and outcome.
No shortcuts here: each track needs to be gain-staged separately, for obvious reasons to do with the content of each track. Use a good VU meter like Klanghelm’s VUMT & set its reference level accordingly (I use -18). Then tweak each track with the waveform gain or pre-gain settings. It also helps if the meter’s breakpoint settings correspond.
Possibly not quite such an extreme issue with our own in-house tracking, but I also get a lot of external projects for mixing and mastering & am always stunned by just how far these are slammed … as if those awful DAW faders were reading RMS and not peak. -18 dBFS should equal 0 VU for all intents and purposes.
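If it helps, the arithmetic behind “tweak each track until the VU sits at -18” is basically this (Python sketch; a real VU meter has ballistics and roughly a 300 ms integration time, so plain RMS is only a crude stand-in):

```python
import numpy as np

def gain_to_target(x, target_dbfs=-18.0):
    """Gain in dB needed to move a track's RMS level onto the target."""
    rms = np.sqrt(np.mean(np.square(x)))
    current_dbfs = 20 * np.log10(rms + 1e-12)   # guard against log(0) on silence
    return target_dbfs - current_dbfs

# Example: a track sitting around -10 dBFS RMS needs roughly -8 dB of pre-gain
t = np.arange(48000) / 48000.0
track = (10 ** (-10 / 20)) * np.sqrt(2) * np.sin(2 * np.pi * 220 * t)
print(round(gain_to_target(track), 1))          # ~ -8.0
```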
I also up-sample everything to 96 kHz / 32-bit FP. It adds a lot of zeros and does nothing at first blush, but it’s far more detailed when adding processing, then OTB mastering, then re-recording (Pure 2 A/Ds).
All of which has been written about to death over the last few decades. Maybe read up on Bob Katz, for example.
@raino
Actually, I wanted to do that yesterday. I had a few basic tracks, and I had them on a VCA. I wanted to get all the faders re-adjusted to at or near the zero mark. As they were, if I could have just adjusted all the pre-gains by 5 dB in the same direction, that would have worked for me. I thought that if I selected all the necessary tracks and adjusted one pre-gain, they would all follow, but they didn’t. I was in a hurry, so I just did them individually.
Thanks, I think that’s what I should have done re: Q-Link. I was in the new far-left panel Channel view and it definitely did not link to the other selected channels. I thought the OP was trying to do the same thing as I was, but he’s going after something entirely different.
@TG_Karibou you might be interested in this video… watch until the end.
@philorose your dither concern for 24-bit is over 100 dB down.
Unless you are recreating John Cage’s 4’33" at >100 dB SPL, no one will hear the dither.
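Quick sanity check on the “100 dB down” figure, for anyone who wants the numbers (the 100 dB SPL playback level is just an assumption for the example):

```python
import numpy as np

# Smallest step of a 24-bit file relative to full scale; the dither rides around it
step_dbfs = 20 * np.log10(2.0 ** -23)
print(round(step_dbfs, 1))               # ~ -138.5 dBFS

# Even if 0 dBFS peaks hit 100 dB SPL in your room (assumed, and very loud),
# noise at that depth lands far below any real-world listening-room noise floor
print(round(100 + step_dbfs, 1))         # ~ -38.5 dB SPL
```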
Have you ever heard Widespread Panic’s Earth to America? It was mastered to sound like one long loop: the last song is a very, very, very long fade-out, and the first song has a quick little intro bit that is the same thing. But that’s a 16-bit CD, so if you think you can hear dither in a real-world example from pro-level, hit-making musicians and engineers, that would be a good one to check out.
Man, I didn’t know that! Then, based on what you and Tom_B said:
your dither concern for 24-bit is over 100 dB down.
It doesn’t really matter audibly, and it seems better to just do it.
Nope: definitely not to somehow auto-set gain staging for all tracks at once, silly. Ditto normalization which is a completely different process and outcome.
Yeah, something automated like this just doesn’t make sense. Half the time, when you think you might have found a “workaround” in audio production, you realize, “well, I should probably just spend the time and do this the old-fashioned way”.
Do whatever sounds good. If you have time along the way try to figure out why it sounds good.
That’s basically what most of us do.
That’s basically what it comes down to. I personally think my mixes shine when my levels are gained properly and it makes the EQ and Compression stages that much better, so that’s what I do.
I was thinking it would be nice if there were a one-size-fits-all solution where you slap a button and it gain-stages for you, like StudioOne or Reaper have. In Reaper you can actually normalize based on LUFS, which is pretty cool.
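If you wanted to roll that kind of LUFS-based normalization yourself outside the DAW, a sketch might look like this (assumes the third-party pyloudnorm and soundfile packages; “track.wav” and the -18 LUFS target are just placeholders, and this is not Reaper’s or StudioOne’s actual implementation):

```python
import soundfile as sf        # third-party: reads audio files as float arrays
import pyloudnorm as pyln     # third-party: ITU-R BS.1770 loudness meter

data, rate = sf.read("track.wav")             # placeholder file path
meter = pyln.Meter(rate)                      # K-weighted, gated meter
loudness = meter.integrated_loudness(data)    # integrated loudness in LUFS
gain_db = -18.0 - loudness                    # offset needed to hit -18 LUFS
normalized = data * 10 ** (gain_db / 20)      # apply as plain linear gain
```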
Either way, it sparked a whole thing about gain staging or not gain staging lol. For me it comes down to: what are the guys making a living off of music doing? Gain staging; copy that, it must work.
Coming from cutting timber, I have learned to understand that when the old timers tell you something works, it’s because it works.
To come back to the beginning: what is gain staging? If it refers to finding the sweet-spot level for plugins, that is usually required only for analog emulation plugins. Managing relative loudness levels? That is called mixing, and I usually do it for every sound source one by one (because that is usually how you mix); depending on how many tracks will play on top of each other I might start at -18 dB, -12 dB, or -6 dB for the first track, but this value is just for me and my brain. In a floating-point audio engine it is technically irrelevant, as you can always just turn down the final fader.
In short: as long as you stay in the box, old-school gain staging is only required when working with analog emulation plugins (e.g. compressor emulations).
If you don’t use those kinds of plugins, you can start at 0 dBFS and mix hot without any issue whatsoever. Just turn down the final fader accordingly. People who proclaim the -18 dB dogma for in-the-box mixing are old-school, analog-thinking fossils who will die out eventually.
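The “just turn down the final fader” point is easy to check numerically: with purely linear gain, where you apply the trim doesn’t change the mix at all (rough sketch below; analog-emulation plugins are exactly the nonlinear case where this stops being true):

```python
import numpy as np

rng = np.random.default_rng(1)
tracks = [rng.standard_normal(48000) for _ in range(8)]   # stand-in audio

fader = 0.5                        # per-track gain
master_trim = 10 ** (-18 / 20)     # pulling the final fader down 18 dB

# Gain-staged conservatively on every track, then summed
mix_a = sum(fader * master_trim * tr for tr in tracks)

# Mixed hot first, trimmed only at the master
mix_b = master_trim * sum(fader * tr for tr in tracks)

print(np.allclose(mix_a, mix_b))   # True: linear gain commutes in a float engine
```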