Setting input levels when recording guitars or vocals - do it on the interface, in Cubase or both?

I just thought - if I turn off all the plugins on the guitar and vocal tracks I recorded, perhaps the levels of the pure audio won’t be as high as I thought, so maybe they can be salvaged by turning down the PRE gain. I will give it a try anyway.

First of all, he mentions it’s much faster and more convenient to use a saved template than to have to set up all the tracks, routing etc every time. Now can you clarify - do we use templates only for the mixing stage, or is the template used for recording your instruments in the writing/recording stage? Otherwise I would need 2 projects saved for the same song - 1 for the recording stage, the other for mixing stage - can’t we just do it all in one? I think we’re supposed to modify the templates a bit to suit the instrumentation in our own projects, right?

About the 2 master channels that Chris Selim sets up. You mention a ‘reference track’ - do you mean a song by another artist which you compare your mix to as a reference?

Is this ‘ST OUT’ basically a routing channel for every other channel in the mix, if I understand that right? You say ‘final master’ - this means the Cubase standard ‘Stereo Out’ track right, and not the ‘ST OUT’ that Chris sets up?

Right, so as there are 2 mastering channels, I have to figure out where to put all the mastering plugins. For mastering I generally use light compression, a Maximizer and a Brickwall Limiter (using the last 2 together lets me increase loudness with the Maximizer while making sure the output level never goes over a set ceiling). As I’m using Elements, which doesn’t have a loudness meter, I use an excellent 3rd party plugin called ‘Youlean Loudness Meter 2’.

Now the problem I encountered using only the Stereo Out channel is that the Maximizer and Brickwall Limiter have to go in the Post-Fader slots, but it seems there are only 2 Post-Fader slots in the Stereo Out channel (unless there’s a way to change this?). The loudness meter plugin must go after the other plugins so I can measure loudness (LUFS), but there aren’t enough Post-Fader slots. How do I get round this? In fact, with the 2 master channels, I’m quite confused about where I would put all 4 mastering plugins. Can you help? Thanks.

The free mini course he has on his site is about a mixing template, not a recording one. But on the channel you will find a video showing how to set up a recording template (if you feel you need one).

Yes, it is good practice to have a recording template, and it is not the same as the mix template; they have different requirements and uses. But remember: you do NOT NEED to follow this advice. You can have your own workflow and, when you feel something could be better, you can recall this advice and try whether it helps. Regarding having two projects: yes, why not, one for recording and one for mixing. Sending files from the recording project to the mixing one is very easy. More than that, it is good practice to bounce all your audio as stems, including VST instruments. A stem is an audio file that spans the whole length of the project. You send it to the mixing project, and there you can cut it if needed. But again, you don’t need to do that if you feel comfortable with your own workflow; you can try it, or at least keep it in mind.

And more: you can have another, earlier project where you do pre-production work. That is a project where you record your song without worrying much about having new strings or making playing errors; you record the structure of the song. Later you can listen back and consider whether a synth is needed, or a drum fill, or a cut, whether a part needs two more bars or a tempo reduction, or whether there is one guitar too many, and so on. Once you are finished with that, you record your definitive song, taking all those observations into account. You can work this way or not; I find it helps to keep things in order.

Yes. This audio needs to be absolutely free of processing other than level matching with your mix, and if you put light compression on your master channel, you are affecting it.

Again, you can work however you want, but I myself and many others don’t want to do mastering within a mixing project. Although you can have light compression and EQ on the master channel, that is not mastering but part of the mix work, IMO. You can use a limiter on the master while mixing, but only to pre-hear how it will sound later. When you are absolutely done with your mix, you turn the limiter off (because it is part of the mastering job and you don’t want to generate a mixdown that is too hot), do the mixdown to a stereo file, and in a fully separate project load it and do the mastering work. If you like to do it together with your mix, you are free to, but I wouldn’t work this way. It is always better to have fully separate projects for each kind of work; in my amateur opinion, but I know it is the opinion of a lot of people who work professionally too.


Yes, usually when I start composing a new song I will start with a project where I don’t worry about playing the parts perfectly - it is simply the compositional stage, trying out new ideas, working on chord sequences, riffs, melodies etc. Kind of like making a ‘rough demo’ of the song. Then once I’m happy with what I’ve composed, I then start recording the parts properly, with tight playing, tight vocals etc.

About sending ‘stems’ to the mixing project - do you have to get rid of the gaps between each audio part first, before sending them to the mixing stage? I mean, let’s say you record a vocal track - there will be gaps between each vocal phrase where the mic is still switched on, creating unwanted noise on the track. These bits need to be cut out of course. However this then makes your track not one continuous audio file, but a series of many different audio events with gaps in between. Can a track like this be bounced to a continuous audio file?

As for mastering, I see that Jedi Master Chris did a video about mastering in Cubase 11, which is great:

Now what I would like to know - he opens the stereo mixed file on a track in Cubase, ready to master. He adds his mastering plugins (compressor, limiter etc) as Insert effects on this track. Now where would I add the Youlean Loudness Meter plugin so I can measure loudness? Presumably it would have to go AFTER the other mastering plugins, but would it go as an Insert on the same track, or in the Stereo Output channel? Another thing is that the tutorials I’ve seen tell me that the Maximizer, Limiter etc need to be Post Fader Inserts NOT Pre Fader - I’m not sure where Chris adds them in the video above, but if he’s putting them as Pre Fader then I wonder why?

I forgot to ask - at what Sample Rate and Bit Depth would you export the mix so you can master it without losing quality? I usually record my projects at 44.1 kHz and 32 bit float, so would I just export at the same? For the final master, I’ve been told that 44.1 kHz at 16 bit is fine as that’s CD quality.

Final thing - do you think Wavelab by Steinberg is worth getting for mastering? Maybe later on down the line I might consider buying it, but I think upgrading to Cubase 11 Pro and getting the hang of everything Chris Selim teaches would be best to do first.

Sorry to bombard you with a lot of questions at once, but there’s another thing I need clarification on. I promise these will be the last questions for the moment!

So, I watched Chris Selim’s video on gain staging where he colour codes each meter zone for different level ranges (in dbfs). I went into Cubase and set that up as he suggests, with the same colours. It works like a charm, and it’s pure genius! I turned the plugins off in the guitar and vocal tracks I recorded for my song, and I’m very pleased to say that the input levels are not as high as I thought, so are useable and everything should be salvageable with just a few tweaks here and there to the pre gain and/or plugin output levels.

However what I need clarification on is this: on the channel meters, there are TWO sets of number scales - one in the centre of the meter and the other to the right of the fader (please see the picture below - I’ve circled the two different number scales). Correct me if I’m wrong, but the scale I should be using to check the dBFS reading is the one in the centre, right? (There are also two number readouts at the bottom of each of these scales - I think they just show where the fader or level is set, and perhaps the dBFS average or loudest part.) So I’m wondering, why are there TWO scales, and what is the one next to the fader for, if not dBFS? And when I set the faders at ‘unity level’, that is 0 on the number scale to the right of the fader, NOT on the centre one, right? (I think so, because Ctrl + click moves the fader to this position.)

Also, in the mixer preferences where I set the colours, the meter scale is on the left (circled in red in the pic below) - is this the same meter scale that is in the CENTRE of the mixer channels? I think I’ve set up the colour coding right based on this scale, as you can see from the pic below.

Thanks in advance, Knopf, I really do appreciate all your help!

Yes, you are right that this happens and a lot of events need to be cut, but when you generate the stem you mark the beginning and end of the song, and the new file doesn’t have gaps any more; it is just one event. The silences in between no longer have any noise, they are simply silence. Maybe you don’t NEED to work this way, but some people, myself included, like to, for the sake of order. And more: later, when you import these stems into your mixing project or template, you can cut the events again wherever you notice there is no need for silence, again for the sake of order. But you already have the stems of everything. You may think some operations repeat themselves, but the reason is order. You could be sending the stems to a mixing engineer, or to a partner when you cooperate with someone.

If you visit the good channel One Man and His Songs (by the way, the best tutorial on HALion that I have found on the internet, very complete and deep), he shows his progress once in a while on the composition and mixing of one of his own songs. Apparently he likes to try ideas and do the mixing, re-recording and re-composition all together. He finds this workflow comfortable for him, and it gives him good results, although it is in no way how I would do it; but I can’t tell him how he MUST work.

Although he knows how to master and occasionally does mastering work, it is not his speciality. He himself says he normally sends his mixes to a mastering engineer. If you look, in the whole channel there are just two or three videos about mastering, so what he shows you here is very, very basic stuff, so that you can get started if you don’t yet have the time to learn the depths of mastering.

For mastering I would recommend you visit the Production Advice website from Ian Shepherd. There you can learn the technique (or art?) of mastering in much more depth. This is a whole other world that needs a lot of study. He has a very good masterclass that opens once in a while.

Usually the plugins go on the master channel, but a lot of people like to master several songs in one go, so compression and EQ are put on the individual tracks, for example. The plug-in that is important to put post-fader on the master is the dither, for when you want to generate a final master with a lower bit depth than the mastering project. For example, you master a file at 24 bit and will generate a final master at 16: you must use dither.
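If you like to see the idea in numbers, here is a rough sketch (in Python, with made-up values; not Cubase’s actual algorithm) of what dithering before a 16-bit reduction does: a tiny amount of triangular (TPDF) noise is added before rounding, so the quantization error stops being correlated with the music.

```python
import numpy as np

# Illustrative sketch only: reducing a float signal to 16-bit-sized steps.
# Rounding alone creates distortion correlated with the signal; adding a
# little TPDF noise before rounding turns that error into harmless hiss.

def quantize_16bit(signal, dither=True):
    """Quantize a float signal in [-1.0, 1.0] to 16-bit-sized steps."""
    step = 1.0 / 32768.0                        # one 16-bit quantization step
    if dither:
        # TPDF dither: sum of two uniform noises, peak of +/- one step
        noise = (np.random.uniform(-0.5, 0.5, len(signal)) +
                 np.random.uniform(-0.5, 0.5, len(signal))) * step
        signal = signal + noise
    return np.round(signal / step) * step

# A quiet 440 Hz sine, one second at 44.1 kHz
sine = 0.5 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
dithered = quantize_16bit(sine)
```

The dithered version trades a slightly larger, but constant and signal-independent, error for the absence of distortion on quiet material.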

You can do your mastering work at the same sample rate and bit depth as your mix. Only after finishing the master do you change the bit depth to the desired final value, using dithering. Some people like to generate a second ‘super master’ that keeps the original bit depth too, in case it is needed in the future.

Regarding sample rate, some people like to work at 96 kHz or more. I work just at 44.1 or 48 (48 is the most used in audio for video), don’t feel the need for more, and my ears definitely can’t hear the difference at higher rates. Regarding bits, 24 is a good value for recording and for the audio files in your project. But watch out: the bit depth you used to record your audio (for example 24) and the 32-bit floating point you configured in Cubase for processing are different things. In Cubase you can even configure 64-bit floating point for the processing.
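As a rough rule of thumb, each bit of fixed-point depth buys about 6.02 dB of dynamic range, which is why 24 bit is considered more than enough for recording. A quick sketch of the arithmetic:

```python
import math

# Rule-of-thumb sketch: each bit of fixed-point depth adds roughly
# 6.02 dB of dynamic range (20 * log10(2) per bit).

def dynamic_range_db(bits):
    """Theoretical dynamic range of a fixed-point format with `bits` bits."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3  (CD quality)
print(round(dynamic_range_db(24), 1))   # 144.5 (typical recording depth)
```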

I can’t answer responsibly, as I have never tried it. I just know that a lot of pros use it. I think your idea of going to the Pro version of Cubase first is a lot better. You could start using stock plugins for mastering and, when you feel the need (if that ever happens; maybe never), get specific mastering software. I myself don’t own any mastering-specific software, but I have some specific metering plugins that help.


I think even if your audio were too hot, I see no reason for it to be unusable, provided it has no clipping or playing mistakes. If now or in the future you find a track too loud, simply turn the pre gain down. If you have clipping, that is another story.

Regarding scales:

One is the fader’s scale, located to the right of the fader (your left red circle), and the other is the meter scale (in the centre of the meter, your right circle). What you set up with colours is the latter.

Why are they different? The fader scale shows amplification (or attenuation), not metering, so they are different things. Besides, you can set your fader scale in different ways, but the meter scale ends at 0 dBFS because that is the end of the world. Yet you are able to amplify a signal by +3 dB if you set your fader there. And yes, Ctrl+click puts your fader at 0 dB gain.
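If it helps, the relationship between the dB values and the actual multiplication the fader performs can be sketched like this (illustrative numbers only):

```python
import math

# Sketch of the difference: the meter reads an absolute level in dBFS,
# while the fader applies a gain (a multiplication) measured in dB.

def db_to_gain(db):
    """Convert decibels to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain):
    """Convert a linear amplitude multiplier to decibels."""
    return 20 * math.log10(gain)

unity = db_to_gain(0)   # 1.0: fader at 0 dB leaves the signal untouched
boost = db_to_gain(3)   # ~1.41: +3 dB multiplies the amplitude by ~1.41

# Adding dB values is the same as multiplying gains, so a signal metering
# -18 dBFS with the fader at +3 dB leaves the channel at -15 dBFS.
post_fader_peak = -18.0 + 3.0
```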


Thanks very much for your replies, Knopf. I think I am well on the way to figuring this out and it’s time to finish my song!

Yes I think my tracks are mostly salvageable, apart from maybe 1 or 2 vocal phrases where I was REALLY belting it out, but it’s no big deal to re-do these. In fact, here is a reply I just got from Jedi Master Chris himself:

Glad you like the channel. You said you recorded too hot, but the reality is that if you didn’t peak the signal on the way in, your recording is fine. From this point, the levels can be gain staged in your project with pre gain etc…or with the Output level of a plugin like the Amp Rack. So yes, the Output level of the Amp Rack will preserve the gain coming into the Amp Rack, so your distortion tone will stay the same :)

Regarding your point about Bit Depth: ‘But watch out, the bit depth that you used to record your audio, for example 24 and this 32 bit floating point you configured in Cubase to do the processing are different things’.

I did not know these are different things. All I know is that at the beginning of the project I went to Project > Project Setup and set the Sample Rate to 44.1 kHz and the Bit Depth to 32 bit float (using Wave format, of course). So does this mean the audio is recorded at 32 bit? I have no idea what ‘float’ means; all I know is that it’s a good idea to use that setting for high-quality recordings, according to tutorials I’ve watched. You say ‘they’re different things’, so am I in the right menu for setting the Bit Depth I’m recording at?

In Project/Project Setup/Record File Format you actually configure the recording parameters. I, like a lot of people, normally use 24 bit. I don’t know if your interface supports 32-bit recording; maybe not, check the manual. But 24 is good enough, and most interfaces support it. Check whether your interface needs a separate bit-depth setup too.

The other configuration is under Studio Setup/Audio System. There you will find that you can set the processing precision to 32 or even 64 bit float. That is the bit precision Cubase uses for all its internal calculations.

Float means floating point, as opposed to fixed point. Better to do a search if you want to learn more about these digital-processing concepts.
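A tiny illustration of the practical difference (made-up sample values): a floating-point signal can go over full scale and be pulled back down later, while fixed point clips everything above 0 dBFS and the peaks are lost.

```python
import numpy as np

# Made-up sample values to illustrate the point: floating point keeps
# information above full scale (0 dBFS), fixed point throws it away.

signal = np.array([0.5, 1.5, -2.0])        # the last two exceed full scale

# Fixed point: anything beyond full scale is clipped and lost for good.
fixed = np.clip(signal, -1.0, 1.0)

# Floating point: the over-range values survive, so turning the level
# down by 6 dB (a gain of 0.5) afterwards recovers the waveform shape.
recovered = signal.astype(np.float32) * 0.5

print(fixed.tolist())       # [0.5, 1.0, -1.0]   -> peaks destroyed
print(recovered.tolist())   # [0.25, 0.75, -1.0] -> shape preserved
```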


Here are some other channels I like regarding mixing:

Home Studio Corner with Joe Gilder
He uploads content constantly and has four series called Mix Together, where he shows the whole process of mixing a song from the very beginning: importing tracks, sorting them, testing for phase issues, the static mix and so on. All four series are very enjoyable.

In the Mix

Audio University - not only for mixing, though; audio in general.

Produce Like a Pro with Warren Huart.

Paul Marx, if you know German. I am not German but I speak the language, and I discovered this channel a couple of months ago.

Recording Revolution - maybe the one with the most subscribers. Not my favourite, though, but I have taken some advice from there.

You can also search for some videos by Mixerman, which are spread across different channels.


Alright, I’m using a Tascam US-2x2 USB interface, which is fine for me as I only need the 2 inputs - 1 for a microphone, 1 for guitar - since I record everything else with virtual instruments. I looked in the manual and it says it’s 24 bit/96kHz. Now what I’m wondering is if I set the Record File Format in Cubase to 32 bit float, will it really be 32 bit if my audio interface has a maximum of 24 bit? Will it mess things up to have a setting of 32 bit (float) with only 24 bits coming in?

You mentioned ‘The other configuration is under Studio Setup/Audio System. There you will find you can set up the processing precision at 32 or even 64 bit float.’

I don’t see the 32 bit float option in Studio Setup/Audio System - this seems to be in the Project > Project Setup window, as in the pics below:

I looked in the manual for my interface and there is nothing about it needing a separate setup of bit depth, however on the control panel (see pics below) it says the Sample Rate is 44.1kHz, not 96kHz for whatever reason (maybe because I’ve got a setting of 44.1kHz in Cubase):



I really don’t know what happens if you set a 32-bit float recording format when your card can only do 24. But what I would do is simply set the recording depth to 24 bit, as your card can’t go higher than that. It seems this is enough, and you don’t need to configure it anywhere else.

If you are wondering what format your recorded files are in, go to the Pool with Ctrl+P and check.


No, what you see in the Project > Project Setup window is the recording format, not the processing precision. In your picture, please click where I marked in red:

Here is my window, for example (Sistema de audio is “Audio System”). Look at the right and you will see Precisión de procesado, i.e. “Processing Precision”. You can work with 32-bit float or, if your machine handles it well, you can try 64.

That is surely because you configured 44.1 in Cubase; it is OK.

I’ve been following along but not commenting much, mostly because @Knopf is giving such detailed answers and I also seem to be in a timezone that doesn’t match the back & forth of the conversation.

I think it might be useful to look at this stuff from a wider perspective than the mechanisms used to do gain-staging, mastering, etc. For example, with gain staging none of the advice you see online is a ‘law’ that must be followed. Rather, they are best practices to achieve specific results. For the gain-staging advice, that result is a clean, solid audio signal without distortion, a low noise floor, and a signal level that is not too hot or cold. Great - that’s what most folks want most of the time. But you may discover that overloading and clipping the snare while recording is exactly the sound the song needs. If so, then the right move is to clip the snare’s audio even though it is ‘breaking the gain-staging rules’. The point is that all the info we get from various meters and the like is only there to assist us in understanding what we hear. Those readings are not what we should base our decisions on (well, sometimes…). Ultimately, what you hear should determine what you do. The meters’ purpose isn’t to tell you what is right and wrong. They are there to provide additional, often calibrated, information about what we are hearing so we can make more informed decisions.

There is an old saying: “A man who is his own lawyer has a fool for a client.” Something similar might be said about mastering (although hopefully less harshly). While necessity might cause many of us to self-master, we are kind of fooling ourselves when we do so. Sure, it is possible to become proficient in all the mastering tools of the trade. But that only covers the mechanics of mastering. The true essence of mastering is listening and hearing, and that is where self-mastering has limitations.

First you are using your own ears to both mix and master the song. A key part of mastering is hearing the song new, with fresh ears. This allows the mastering engineer to hear problems & issues that you don’t notice because you are used to how the song sounds. So part of mastering is introducing a new listening perspective.

Second you are using your own ears. Can you listen to something and think the area around 1.3K needs to be lowered by about 2dB? I certainly can’t. But a skilled mastering engineer can hear those types of nuances. They’ve put in a lot of time understanding how to clinically listen to sound. They will hear and correct things that you & I aren’t even aware of. Additionally they will also understand any limitations in their own hearing - they will know if they have any hearing loss in a frequency range and be able to adjust for that. The famous producer George Martin suffered increased hearing loss as he aged and brought in his son Giles to ‘hear’ for him - he knew his own hearing in enough detail to make adjustments based on another person’s description, which is kind of amazing.

Third you are probably mastering in the same room you mixed in. Any problems in your room acoustics will generally be reinforced and not corrected. For example if you get bass buildup in the room (quite common) that will encourage you to mix the bass low because the room is making it sound louder than it really is at the speaker. Then if you master there instead of hearing and correcting for the absence of low end, at best you’ll do nothing and you could quite easily make it worse.

Back just before streaming collapsed the music industry, San Francisco State U. had a program for the music business. This covered a range of classes, from publicity and how to structure and create a record label, to mixing and mastering. Which is how I got to spend a semester learning about mastering from Michael Romanowski, who is big in that world. The class itself was fine, with good info from the instructor, but the lab was a bunch of low-end Pro Tools systems with bulk-purchased, educational-discounted headphones. So not the best environment to hear results. The class was small (six-ish), so he held one class at his studio, where he went over the very high-end equipment there. And then he let each of us schedule a one-hour (or maybe hour-and-a-half) session in his room to master any material we had. You cannot imagine how different the song I brought in sounded, without any tweaking, just because it was in a finely tuned acoustic space.

A top end mastering room is unlike any recording studio. There is a very sparse zen feel. The desk is small with a very limited amount of equipment or really anything in the room. A recording studio feels like a workshop while a mastering room feels like a temple.

My takeaway from the class was a better understanding of the mastering process - but also a realization that to do it properly requires a bunch of esoteric skills. Skills I didn’t really have and also not ones I valued enough to put in the time and effort to gain. So I learned that my best move would be to use a proper mastering engineer and not do it myself. Have I always done that? No, because necessity dictates otherwise and the world isn’t perfect.

Don’t take any of this as discouragement from exploring all of this. But audio engineering is a big, complex world that is as challenging to learn as any musical instrument.


Hi Raino, glad you’ve found this thread enjoyable and useful! Knopf has given amazing answers.

I do agree that we don’t always have to follow every ‘rule’ of mixing and mastering religiously, and that sometimes it might even suit a song better to break the ‘rules’. However for someone learning the ropes of music production, there are basic fundamental principles that need to be learned, and they can be broken later on if someone understands what they’re doing.

As for hiring a mixing or mastering engineer, of course that has its advantages and you get access to their years of experience as well as high end equipment, room acoustics etc. Many big name artists who we look up to as ‘musical gods’ in fact lack any real knowledge of recording, mixing or mastering and usually have someone else doing all those things.

Me personally, I’m aiming to be totally self-sufficient and get to Jedi master level at music production so I can be a complete all round musician as well as producer. I know it’s a steep learning curve, but I apply the same principles as I did to learning and mastering every instrument I play. I trained for years at those, as well as vocal training. Music production is just as complicated, if not more so, than all these and takes a lot of time, dedication and effort to master but I plan on getting to that high level with hard work.

Ok, I did not know I had to click on ‘Audio System’. I tried it and here is a screenshot of the menu that popped up (it appears I have recorded my song with the processing precision at 64 bit float - I did not know until now!):

I also checked the pool, and it seems that my files are actually 32 bit float despite my audio interface being 24 bit:

Ok so now that I have a bit of knowledge about gain staging, I went back through my song and tried gain staging each track by bringing down the PRE gain or the output of the VST instruments. However I’m not sure if I’ve done it right. Here is what I’ve done so far, is this right?

  1. I set all the channel faders to ‘unity gain’, i.e. 0 on the fader scale (I presume I need to do this when gain staging parts which have already been recorded, and not only when setting input levels at the beginning?).

  2. I set the global meter position to ‘Post-Fader’ (I tried ‘Input’ at first, but nothing happened to the meter levels when I moved the faders, the PRE gain control or the output of the VSTs, so can I presume ‘Post-Fader’ is the correct setting?). I did not try the ‘Post-Panner’ setting.

  3. I then went through every track (with the Insert and Send effects switched off) and reduced either the PRE gain or the VST output level until every track peaked at roughly -18 dBFS by eye (using the colour-coded meter settings Chris Selim suggests). Some tracks peaked a little higher than -18 dBFS, maybe around -16 dBFS. Is this ok, or could I go higher, like having peak levels up to -6 dBFS?

  4. I then switched on the Insert and Send effects to check the levels of each track with these on. The highest peak level for any track was around -11.9dbfs with these on.
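To check my understanding of step 3: the pre gain I’m dialling in should just be the difference between the measured peak and the target, in dB. A little sketch with made-up numbers:

```python
# Made-up numbers, just checking the arithmetic of step 3: the pre gain
# needed is simply the target level minus the measured peak, in dB.

def pre_gain_for_target(measured_peak_dbfs, target_dbfs=-18.0):
    """Pre gain (in dB) that moves a track's peak onto the target level."""
    return target_dbfs - measured_peak_dbfs

print(pre_gain_for_target(-6.0))    # -12.0: a hot track gets turned down
print(pre_gain_for_target(-16.0))   # -2.0: this one is already close
```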

However now I’ve noticed a few problems:

  • All the instruments are now out of balance with each other in the mix (the drums, coming from Groove Agent SE, are WAY quieter than everything else since I turned down the output of Groove Agent to get the correct gain level. So presumably when mixing I will need to turn all the other instrument faders down to match the drums. This will result in the channel levels being way lower than -18 dBFS; hopefully that will be ok).

  • When I change the global meter setting back to ‘Input’, the levels on the meters are still the same as before I gain staged them. I thought these were supposed to change? They only change when I have the global meter setting on ‘Post-Fader’.

  • Back in ‘Post-Fader’ mode: when I move the channel faders up or down to sort out the balance between all the instruments, the levels on the channel meters also change. This makes it confusing to know whether I’m still in the correct range, i.e. around -18 dBFS. So what am I changing here? I thought moving the PRE gain is how I adjust the input levels to keep them around -18 dBFS. So when I move the channel faders, am I altering the actual input level or just the output of each channel? In either case, the channel meter levels change.

This is a head spinner to say the least! So am I doing this right, and am I in the right global meter setting mode etc?

I am sorry to keep bombarding you with questions, but I really need to understand what I’m doing here so I can get this right! Thanks.

Remember earlier when @Knopf gave the advice below? You just discovered why that advice is useful.


Ok yes, I am reading Knopf’s advice again. So now I must go to the static mix stage. I noticed that the drums, coming from Groove Agent SE, are WAY quieter than everything else since I turned down the output of Groove Agent to get the correct gain level range. So presumably when I do the static mix I will need to start by turning all the other instrument faders down to match the drums first (I don’t think it will work to turn the other instrument faders up as they’re already at unity gain, so not enough room to move them up much further).

Hello Raino,

thank you for your post, which touches the poetic side of this audio world. It was amazing to read about your experiences too.

I agree with what you say about another pair of ears and dedicated rooms. It happens, though, for some people, me included, that this activity is highly motivating (both the learning and the doing), and most of the time we want to do these things not to save money but to do them ourselves. So maybe our songs will not end up with the best quality, but they are ours. At least at this stage.

May the Power of the Great Mix be with you.
