I’m new to music production and Cubase is my primary DAW. I’d like to understand the best practices before starting a mix.
I currently have a project with around 30 tracks (25 instrument MIDI tracks and 5 audio tracks), and I have a few questions:
1. Gain Staging
Before adjusting faders, I select all tracks and reduce the pre-gain by about -3 to -5 dB. Then, I fine-tune the pre-gain for individual tracks to maintain around 6–8 dB of headroom and avoid clipping. At this stage, I leave the instrument group buses untouched.
Is this the correct approach?
2. Bus and Master Clipping
An observation: after gain staging, the individual tracks do not clip. However, the instrument buses are close to clipping, and the stereo output is definitely clipping.
What is the best way to address this? At this stage, the volume faders are at unity gain.
Should I further reduce the Pre-Gain on individual tracks?
Or should I lower the Pre-Gain on the instrument group buses instead?
3. MIDI vs Audio for Mixing
Is it recommended to bounce MIDI tracks to audio before mixing?
If so, should insert plugins be applied to the bounced audio, or is it better to apply them directly on the instrument (MIDI) tracks?
4. Volume Automation
I’ve created an initial mix using the faders, but once I enable volume automation, it overrides the existing fader balance.
Is there a way to preserve my current fader levels as a reference point (instead of defaulting to unity gain) and apply automation relative to that?
I would prefer to automate the individual instrument tracks (all 25) rather than using group automation via VCA faders.
Before giving my response to your questions, it’s probably worth mentioning a few caveats: First, different people will have different approaches – I’m not sure there’s any right or wrong in this, but more what works for you. Second, I don’t profess to be any expert in this stuff, so my responses will be more just what I do or what works for me (or what I can get away with?).
This is one of those “it depends” things. Personally, I’m more likely to use something like a VU meter or RMS plugin to get a better sense of average levels rather than just peak levels, since, at the start of mixing, some tracks can vary a lot between their softest parts, their loudest parts, and their average (or at least most typical) levels. Also, depending on the types of plugins you are using, you may need a specific general level for optimal performance with the plugins. For example, some analog processor emulations might be looking for something around -18 dBFS.
Thus, while, especially in time-sensitive situations, I might use Cubase’s track meters to guide quick adjustments, if I’ve got more time, I’ll generally use something other than just peak meter readings. And I would definitely use the pre-gain before hitting plugins, if needed. What that setting will be, whether a reduction or increase in level, will depend heavily on what is coming into the track. Different virtual instruments can be wildly different on that front.
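If it helps to see that difference in actual numbers, here’s a rough back-of-the-envelope sketch (plain Python/numpy, nothing Cubase-specific; the test tone and levels are made-up values just to illustrate why peak and RMS readings diverge):

```python
import numpy as np

# Hypothetical one-second 440 Hz test tone at 44.1 kHz, peaking at -6 dBFS
sr = 44100
t = np.arange(sr) / sr
signal = 10 ** (-6 / 20) * np.sin(2 * np.pi * 440 * t)

peak_dbfs = 20 * np.log10(np.max(np.abs(signal)))
rms_dbfs = 20 * np.log10(np.sqrt(np.mean(signal ** 2)))

print(f"Peak: {peak_dbfs:.1f} dBFS")  # about -6.0 dBFS
print(f"RMS:  {rms_dbfs:.1f} dBFS")   # about -9.0 dBFS (a sine's RMS sits 3 dB under its peak)
```

A steady sine only shows a 3 dB gap, but a real instrument track with big dynamic swings can easily show peaks 10 to 20 dB above its average, which is exactly why I don’t trust peak meters alone when aiming for something like a -18 dBFS average.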
There are a few possibilities here, but, again, it’s going to depend a fair bit on your situation, such as what plugins you’ll be using on the group buses, whether those need certain nominal levels, and whether you need to do fader automation on the tracks feeding those buses.
If you’re doing fader automation at the track level, it is probably simpler to reduce the pre-gain at the group bus level (or maybe use a VCA to offset all individual track-level faders). Otherwise, you could set individual track-level faders to result in the level you want coming into the group bus.
One thing I would not do at this level is adjust the pre-gain on the individual tracks as I’m assuming you’ll have insert plugins on those tracks, and adjusting the pre-gain there would affect the processing in those plugin chains. For example, let’s say you have a compressor as an insert on one of the tracks, and you now lower the pre-gain on that track. You’ve just affected the way the compressor behaves, making for less compression because the signal coming into the compressor is crossing the compressor’s threshold less frequently or by less.
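Just to put a rough number on how that pre-gain change alters the compression, here’s a tiny sketch of a static hard-knee compressor gain calculation (the -18 dB threshold and 4:1 ratio are made-up example settings, not anything specific to your plugins):

```python
def gain_reduction_db(input_db, threshold_db=-18.0, ratio=4.0):
    """How many dB a simple hard-knee compressor takes off a given input level."""
    over = input_db - threshold_db
    if over <= 0:
        return 0.0                    # below threshold: no compression at all
    return over - over / ratio        # dB removed from the part above the threshold

print(gain_reduction_db(-12.0))  # 4.5 dB of gain reduction
print(gain_reduction_db(-17.0))  # 0.75 dB - the same peak after a 5 dB pre-gain cut
```

Same material, but after the pre-gain cut the compressor is barely working, so you’ve changed the sound, not just the level.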
I assume you mean Instrument tracks, rather than MIDI tracks (MIDI has no sound; only the instruments responding to the MIDI data do). This question really comes down to a combination of your system’s performance and your personal preferences, not just “in the moment” but also with respect to later project archival.
For example, if your system can’t handle the combined load of the track count, virtual instruments, and other plugins, you’ll have no choice but to either freeze some of the tracks or bounce them to audio to reduce CPU load.
But, with respect to project archival, you may want to have audio for the tracks available in case of future remixes, since the virtual instruments and/or other plugins involved may no longer be available to you at that time.
I would, at minimum, render Instrument tracks to audio for the archival consideration, and I’ve recently started mostly mixing from rendered audio. As to whether to include the plugins, I’d say, if the plugin is an important part of the sound (e.g. amp simulator on a virtual electric guitar), it should be included. If, on the other hand, the plugin is a mixing choice (as opposed to the equivalent of what you’d track to tape in a live instruments/recording studio situation), then I’d be less likely to include it in rendered audio since you may want to change that at remixing time. Of course, you might want both options, and you can always keep some backup versions that are disabled within a project for that purpose.
What I’ve come to do most of the time recently is render instrument tracks to audio with the specific plugins that would be part of the “to tape” sound (including tape emulation as the last plugin in the chain), and then disable the instrument track that had that part of the processing in it. Then I’ll do my mixing-chain plugins (e.g. probably starting with a console channel strip plugin) on the rendered audio tracks.
Unless you’re using some control surface that will be sending absolute values to do your fader automation, volume automation should be starting from the static level you’ve set – just remember to create a node at some point just prior to the first change you want to make in its level. I guess that would be the “reference point” you’re alluding to. Cubase’s automation curves don’t default to any level other than what you write into them.
At least for my uses, the reason you might want to use a VCA fader, once you’ve done the individual track-level automation, is if you now need to add to or trim the level going into the next group bus down the line because the fader automation has changed the overall levels feeding that bus significantly. Then the levels between tracks stay in balance with each other while affecting the summed level feeding that group bus.
Very kind of you to explain all the points in such great detail—thank you
Regarding my fourth point about volume automation, I’d like to reframe my question:
After gain staging, I pull down the faders for all tracks and then gradually bring them up to establish the overall balance. Now, I’d like to add volume automation to each Instrument Track without altering the current fader positions.
How can I achieve this in Cubase?
What I’ve noticed is that when I draw volume automation on an instrument track, the fader starts moving up and down. I end up losing the initial fader position that I had set before adding automation. This ultimately disrupts my mix balance.
I’m not sure if I’m missing something or approaching this incorrectly.
That’s what I meant when I noted to create a node at some point prior to where you want the volume automation to begin. As long as you just add the node at that point, it should start from the level you’d set the fader at (unless you specifically draw a node somewhere else). You may also need to add one afterward, if you want the level to return to the same point after the automation moves.
Just to give an example, let’s say you have the track fader on a given track at -12 dB. When you add a node for automating the volume in the volume automation lane for the track, that first node would be at -12 dB unless you specifically put it somewhere else. If it somehow goes somewhere else, you can also just type in the value you want in the correct field in the info line.
If I want automation changes between two points (but not before or after), I’ll generally create the end points, then draw the start and end point of the area where I’m raising or lowering the level, select both of those, and just type the value in so both of the points get the same level (this is assuming a static change in that area – e.g. raising or lowering the level by 3 dB). Then, if I want to change the curve of the ramp up or down, I can just adjust that part on each end.
Ahh, got it! I’ll apply this to my project now. Thanks
Just a thought—this method works well when there are only a few tracks, but once it goes beyond 50, it can become quite laborious.
I was wondering if Cubase has—or might introduce in the future—a feature that allows you to lock fader positions and then apply automation on top without affecting the existing balance.
I’m not sure I understand. If you’re automating the fader on 50 tracks, you still need to add that automation to each track. This is assuming, of course, that you don’t want identical automation on all 50 tracks. If you do, though, that’s what a VCA fader could do. You just add the VCA fader to all 50 tracks, then automate the VCA. The balance between the original 50 tracks’ faders wouldn’t be disturbed by that.
I’m not aware of an existing feature to lock fader positions, though that doesn’t necessarily mean there isn’t one (maybe someone else knows?). I know you can lock a fader by setting up a single fader automation node at the beginning of the track and leaving automation Read turned on. Even if you move the fader, the automation reading will restore it to the position you had as long as automation read is on.
As for applying automation without affecting the existing balance: again, unless you want that automation to affect all 50 tracks, the balance will necessarily be affected, because the rises and falls in any given fader due to the automation will change the balance. But if you do want the automation to affect all 50 tracks identically, that is where the VCA group comes in. And, if you want the automation to affect only some subset of the tracks, you can always add a VCA to just that subset (even if the subset is a single track). I do that all the time for doing trims on already automated subgroups (e.g. background vocals), where I just want to raise or lower the overall balance of that group in the mix but don’t want to mess with existing automation. Of course, it could also be used to automate a track without changing the original fader position; it will still affect the mix balance, just not the original fader position.
There are only two locations where clipping can occur when using a DAW: at the converters and when storing an audio file with a bit resolution that uses an integer number format.
The converters are in your audio interface and there are two types: the analog to digital converter (ADC) and the digital to analog converter (DAC).
The bit resolution comes in two types: integer and float. If you choose any of the integer types you can have clipping. If you choose the float(ing point) types you won’t have clipping.
The first three options are integer formats and can lead to clipping; the bottom two are floating point and won’t allow any clipping.
There are other reasons why levels within a project should be kept under control, but clipping isn’t one of them.
Also, keeping headroom is an old-school practice. It can be useful, but it is not mandatory.
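If you want to convince yourself of the float behaviour outside of Cubase, here’s a quick illustration (plain Python/numpy with hypothetical sample values): a floating-point signal can go well over full scale and be recovered cleanly, but the moment it’s stored as integer samples the overs are gone:

```python
import numpy as np

# Hypothetical float signal with a couple of samples well over full scale (1.0)
signal = np.array([0.5, 1.5, 2.0, -1.8, 0.25], dtype=np.float32)

# In a 32-bit/64-bit float mix engine the overs are preserved;
# simply lowering the level (here by half, roughly -6 dB) recovers them intact.
recovered = signal * 0.5
print(recovered)        # no damage, just quieter

# Writing the same signal to a 16-bit integer file clips everything above 1.0
clipped = np.clip(signal, -1.0, 1.0)
as_int16 = (clipped * 32767).astype(np.int16)
print(as_int16)         # the 1.5, 2.0 and -1.8 samples are flattened to full scale
```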
I have implemented all the suggestions you shared, and they’ve worked wonders for me.
Currently, I’m using a VCA fader for overall volume automation (fade-ins and fade-outs) across all 20 tracks, along with separate volume automation for each individual track. To manage this, I place automation nodes at the very beginning of each track (the song starts at bar 1, so I place a node at bar 0).
This has led me to another question. I use the Info Bar to input exact values for automation nodes by referencing the fader position. However, I’ve noticed that the fader resolution becomes compressed (e.g., 0, 5, 10, 15, 20, 30—apologies if I’m not using the correct term). As a result, it becomes difficult to identify or input precise values between ranges like 10–15 or 15–20.
Because of this, it seems almost impossible to place a node at the exact value shown on the fader (such as 12, 13, 14, 16, 17, etc.). I’m wondering if I might be over-analyzing this—do these exact values really matter in practice?
My take on this is that it depends. If you are doing work where levels are critical, then obviously it matters. I work in sound-to-picture using Nuendo, and if I get a kickback from a QC department and they’re complaining about levels, then of course adjusting to an exact amount matters.
However, if you’re making music, and especially if you’re using dynamics processing, then in my opinion it’s going to make much less of a difference. For example: let’s say that you want to adjust your background vocals at the track level for all individual BG vocal tracks. They then all go into a background vocals group track on which you have a compressor as an insert. Is a fraction of a dB going to make a difference? Is even 1 dB going to make much of a difference? The extra dB, or fraction of a dB, if it’s above the compression threshold, is just going to be a fraction of a fraction coming out of the compressor.
So in my opinion all level changes when working with music should really just be by ear. Does it sound good? Ok then fine. Can you hear a sub-1dB difference in your signal chain? If “yes” then important. If “no” then don’t worry about it.
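To put rough numbers on that “fraction of a fraction” point (assuming, purely as an example, a 4:1 ratio on the group compressor and a change that sits above its threshold):

```python
ratio = 4.0             # hypothetical ratio on the background-vocal group compressor
track_change_db = 1.0   # you nudge one BG vocal up by 1 dB at the track level

# Only a fraction of that change survives to the group compressor's output
print(track_change_db / ratio)   # 0.25 dB difference after compression
```

So the stray fraction of a dB largely disappears into the downstream processing.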
Generally speaking, I agree that the ears matter most. However, half a dB is definitely noticeable (at least to me) and is sufficient to lift a chorus up from a verse.
One hundredth of a dB (-5.72 vs -5.73), on the other hand, is something I wouldn’t worry about at all @Vardaan_tewari.
Others have already given good answers on the “do these exact values really matter in practice?” part. I’m personally of the “it depends” school, but typically only concerned with 0.5 dB deltas at the finest level, though I might have occasionally gone as fine as 0.25 dB if it felt like half a dB was too much but the original value wasn’t quite right either in some very specific context. (I do generally enter numbers directly rather than using a mouse for making adjustments, and I’m more commonly working in 1.5, 3, 4.5, or 6 dB increments, though going finer as I progress if needed.)
I’m not understanding the part quoted above, though, unless perhaps you mean decimal points (or maybe commas in some countries?), for example “ranges like .10-.15 or .15-.20” instead of 10-15 and 15-20.
But, if you are talking about fractions of dBs, is it possible there is a difference between what is showing in a display (especially in a case with narrow console strips) and the underlying value? If so, I think if you select a specific node, the Info line value should show the exact value. My impression is that Cubase’s volume faders have two digits of precision behind the decimal point, but, in some contexts, only .1 dB, or even whole number, values may be displayed.
Got it, thanks! Through these discussions, I just realized (or rather discovered) that we can input precise values for each volume fader using the small window at the bottom of the fader.
Now I’m able to use automation exactly the way I want it to behave.
I’d like to thank everyone for sharing their knowledge and experience. This was my first post in the forum, and the response from the Cubase community has been overwhelming and incredibly supportive.