Mixer Prioritisation of Individual Track Compression

Hi
I have sussed all the individual track tweaking and mixing settings that I need for now but I’m puzzled as to how Cubase prioritises it all in the final mix. It sounds silly but I’m not sure if I’ve got my head round the mixer logic. I have used Mr Franz’s very helpful book from about 10 years ago and made CDs, but now I’m trying to perfect all the compression and remix. I’m seeing if I can enhance the quiet bits of vocal and acoustic guitar more than I did before to get more of a wall of sound (as I think you experts call it) and to make the guitar fill the vocal pauses better while allowing the vocal to dominate when present. I get a kind of not-too-annoying ducking going on where the guitar backs off to let the words be heard, but it’s not perfect.
I have several tracks with their own individual compression. Each individual track level gets more evened out when I monitor, but the waveform is unprocessed in the display, which is to be expected as I have not actually re-processed the waveform (as when gain adjusting or phase reversing or whatever). These tracks go to the mixer and there’s a final overall compressor in the mixer. My technique thus far is to set the vocals slightly higher in the mixer so they dominate, then the mixer compression brings the guitar forward in the mix as the vocal pauses. I have not used side chains yet but I think they are available in my version.
So…
How does Cubase handle this layering of compression? Does it kind of “bounce down” the tracks behind the scenes before bringing them to the mixer? Does it get confused by compression layering? Am I on the right track with my technique above, or would it be better to use some sort of side chain (which I haven’t investigated yet) to control the guitar level directly? Is it better to somehow bounce down those individual tracks oneself (I’m not sure how to include compression as a reprocess) so they are fixed before going to the mixer? I might also like to apply whatever technique I settle on to 2x vocal and 2x guitar for a duet.

I hope that my thought process seems logical to someone else and that they may kindly help me to get my head round this bit. Please discuss.

Thank you

I’m not sure what your background is but you’re using terms a bit… incorrectly, in my opinion. So I’m wondering if you’ve read the manual as well as basic material on mixing, since that would probably clear up a lot of confusion. But anyway:

From a technical standpoint your computer stores audio files. Those audio files, when eventually played back, give you a signal. So “waveforms” are representations of those eventually reconstructed signals from entire or parts of audio files. So first of all you should only see a change in the waveform if you actually change the data in the file that the event refers to. If you do an offline process you should see a change, for example. If you’re using a plugin in an insert then you should not see a change. So what I’m saying here is that your use of “waveform” isn’t necessarily correct.
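
If it helps to see that distinction spelled out, here is a tiny Python sketch of the idea (purely illustrative, nothing to do with how Cubase actually works internally): an offline process rewrites the stored audio, which is why the waveform display changes, whereas an insert plugin only processes the signal on its way to the output.

```python
import numpy as np

# Purely illustrative sketch -- not Cubase code, just the concept.
stored_audio = np.array([0.1, 0.8, -0.9, 0.05])  # the data the waveform display is drawn from

def gain(signal, db):
    """Simple gain change, standing in for any kind of processing."""
    return signal * 10 ** (db / 20)

# Offline process: the stored data itself is replaced, so the waveform changes.
stored_audio = gain(stored_audio, -6.0)

# Insert plugin: the stored data stays as it is; the processing only happens on playback.
def playback(audio):
    return gain(audio, -6.0)  # heard at the output, never written back to the file

print(stored_audio)            # changed by the offline process
print(playback(stored_audio))  # insert processing applied on the fly
```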

The “mixer” is the entire (virtual) device including audio tracks, groups, outputs etc. So when you say “These tracks go to the mixer” it doesn’t make sense, because they are a part of the mixer. I’m saying this because the way you wrote it makes it hard to understand just what part of the process you’re looking at. So when you’re talking about “a final compressor in the mixer” I’m guessing you’re really talking about your “Main Mix Output”. All of the outputs and groups are capable of summing signals together. A lot of people put processing at those points, which traditionally have been called buses, or summing buses.

From a practical and aesthetic standpoint I would argue that you’re going about this the wrong way, and are thinking about it the wrong way. You could do things more or less the way you are doing them, but judging from the way you express your problems I don’t think it’s the right way.

Traditionally, and commonly, engineers will process each signal independently so that it sounds good. The mixer will allow you to blend the signals you have processed using faders, and it’ll then all end up on your “master”, or “main” bus (output). Now, if you rely on compression to set the balance between instruments then I’d say you’re going about it the wrong way. It’s one thing to even out a performance, like you said you did to “enhance the quiet bits of vocal”. You can do that on a track using a compressor. But setting the balance throughout a song between the vocal and guitar is often done using volume automation. That’s where you can predictably adjust the balance to taste.
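
A rough way to picture the difference, as a toy Python sketch (again nothing Cubase-specific): a compressor derives its gain changes from the signal’s own level, while automation is a gain curve you draw yourself, so you decide exactly where the guitar ducks and by how much.

```python
import numpy as np

def simple_compressor(x, threshold=0.5, ratio=4.0):
    """Crude static compressor: whatever pokes above the threshold gets reduced."""
    over = np.maximum(np.abs(x) - threshold, 0.0)
    g = (threshold + over / ratio) / np.maximum(np.abs(x), 1e-9)
    return x * np.minimum(g, 1.0)

def apply_automation(x, gain_curve):
    """Volume automation: a gain envelope drawn by the engineer, not derived from the signal."""
    return x * gain_curve

guitar = np.array([0.3, 0.7, 0.6, 0.2, 0.8, 0.4])

# e.g. pull the guitar down 6 dB while the vocal is present, back up in the pauses
automation = np.array([1.0, 0.5, 0.5, 1.0, 0.5, 1.0])

print(simple_compressor(guitar))              # balance decided by the signal level
print(apply_automation(guitar, automation))   # balance decided by you
```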

What you need to consider is the order in which things happen based on the routing of the signals. The signal flows from “top to bottom” if you’re looking at the mixer, with what is essentially the audio file feeding the input at the top, and then it flows through inserts 1-6 in order. After #6 you have, by default, your channel fader. After the fader come inserts 7-8. So if you put a compressor or limiter on inserts 7-8, then as you increase the level using the fader it will push that compression/limiting harder, and that may or may not be what you want.
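
As a sketch of that ordering (plain Python, just to illustrate the routing, not how Cubase is implemented): with a limiter sitting post-fader, raising the fader pushes more level into it, so it clamps harder.

```python
import numpy as np

def limiter(x, ceiling=0.8):
    """Crude brickwall limiter standing in for an insert in slots 7-8."""
    return np.clip(x, -ceiling, ceiling)

def channel(audio, fader_db, pre_inserts, post_inserts):
    """One channel, top to bottom: inserts 1-6 -> fader -> inserts 7-8."""
    x = audio
    for fx in pre_inserts:            # inserts 1-6 (pre-fader)
        x = fx(x)
    x = x * 10 ** (fader_db / 20)     # channel fader
    for fx in post_inserts:           # inserts 7-8 (post-fader)
        x = fx(x)
    return x

audio = np.array([0.2, 0.6, -0.5, 0.1])
print(channel(audio, fader_db=0.0,  pre_inserts=[], post_inserts=[limiter]))
print(channel(audio, fader_db=12.0, pre_inserts=[], post_inserts=[limiter]))
# The +12 dB fader move drives the post-fader limiter much harder,
# which may or may not be what you want.
```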

Similarly, all of those audio tracks now feed your main out, and on the main out you have your “overall compressor”. So as you increase the level of a track the compressor works harder. It works “against” you in a sense. In my opinion it’ll take skill and experience to mix that way. I think it’s better to leave the output alone and not put any processing on it at first, and instead focus on setting levels using automation. Then, when everything sounds very good, you add master processing.
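
And the same idea one stage further down, again only as a toy sketch: the tracks are summed on the main out first, then the bus compressor reacts to the combined level, so pushing the vocal fader up makes it pull the whole mix (guitar included) down harder.

```python
import numpy as np

def bus_compressor(x, threshold=0.6, ratio=4.0):
    """Crude static compressor acting on the summed signal."""
    over = np.maximum(np.abs(x) - threshold, 0.0)
    g = (threshold + over / ratio) / np.maximum(np.abs(x), 1e-9)
    return x * np.minimum(g, 1.0)

vocal  = np.array([0.4, 0.5, 0.0, 0.0])   # vocal drops out in the second half
guitar = np.array([0.3, 0.3, 0.3, 0.3])

def main_out(vocal_fader, guitar_fader):
    mix = vocal * vocal_fader + guitar * guitar_fader   # summing on the output bus
    return bus_compressor(mix)

print(main_out(1.0, 1.0))
print(main_out(1.6, 1.0))   # raising the vocal fader drives the bus compressor harder,
                            # so it works "against" the balance you were trying to set
```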

PS: It just occurred to me - is this recording you’re talking about guitar and vocals only?

Hi and thank you for your thorough reply.

I must correct you on an incorrect assumption. I have read the bits of the manual I need to get from A to B without being too over-produced and to make something considerably better than a demo tape on an analogue 4-track. I did much of my learning about 9 or 10 years ago with my previous basic Cubase LE version, which was adequate for my purposes at the time. Now I am using Elements 7, which is quite similar. I do not read the whole manual unless I’m trying out the subject matter of the chapter concerned. My brain doesn’t take it in fully unless I’m trying it out with Cubase at the same time.

I’m sorry for my subjective use of terminology from my point of view as a solo artist or session musician. Some tell me it is slight autism so please bear with me. Without that affliction the patterns I see in a scale would not happen and the bass guitar would never be a lead instrument, so I don’t worry about it. I think the discrepancy is also partly historic. In the old days as a punk guitarist, anything I did to a track before it got blended or mixed up with the other tracks was pre-mix. So that would be things like tuning the guitar, tweaking the settings of my amp, pressing the distortion pedal or switching in the second channel of the Laney. Then it just went to the mixer where the different levels were set. So anything I do to my track now that I’m producer as well is still part of the track as far as I’m concerned, and yes, I guess some of the track effects are now in the “mixer” as Steinberg defines it. So I guess that by mixer I meant the Final Mix Console where all I’m doing is setting the levels after all the individual track stuff is done and dusted. The final overall compressor in the mixer that I refer to is the one on the output bus where all the tracks are mixed down together. I hope that explains where I’m coming from.

Re waveforms, I was trying to explain that the compression I had used thus far was not an Audio > Process option, just to avoid any doubt as to where I was applying the compression. This was just in case any of you out there had such a thing on your fuller versions of Cubase - i.e. in the same menu option where you manage fade outs - I don’t know half of what you have at your disposal.

I have come across language or regional vocabulary differences on other forums, so apologies if English is not your first language; I can try to explain things more precisely. I think language would be a more useful profile field than location.

Yes I am only doing a live solo acoustic guitar/vocal project in this instance. The guitar is two source tracks (Fishman and SE2200A) and the vocal is SE2200A. Each track is duplicated and the duplicated tracks have reverb and perhaps a bit of chorus added. This is my current/old method of balancing the anechoic with the reflected. So I have more than the 2 tracks one might expect.

The added complication that I did not want to muddle my question with (it seems tricky enough as it is) is that there is significant bleed of vocal into the guitar mic and of guitar into the vocal mic (by this I mean that the guitar mic picks up some of the singing and vice versa). The style of playing needs the tracks to be recorded simultaneously, as there is an improvised relationship between the guitar and vocal where lyric placement is not necessarily choreographed or fixed to a particular beat in the bar. That co-dependency would be lost if the tracks were laid down at different times and the project loses something subtly beautiful.

So I have arrived at the question of my original post prompted by a series of events that are not immediately obvious. When applying compression to the vocal track, the quiet bits of guitar in the background of that track are boosted as well as the quieter vocals. I tried to attenuate signal below a certain threshold but the threshold would need to vary to work. I have thought about re-recording the project, but find that any sort of bespoke screening I devised to reduce the bleed simply gets in the way, and thinking about it being there and trying not to clout it detracts from the core energy of playing.

Furthermore, I am unhappy about simply erasing the bits of track during the vocal pauses as I can tell that they have a slight contribution to the guitar sound. This is at a different timbre to the intended guitar track. If I try to compensate by copying the associated portion of real guitar and running that on a parallel track during the pauses, then there is an annoyingly noticeable change of guitar sound which is due to the differences in source. This cannot be overcome by a blend (fade out/in overlap). I guess it might be possible with a lot of faffing about. Who knows, perhaps the general public would not even notice it any more than that bit of misaligned wallpaper pattern unless they know it’s there.

Thank you for your suggestion regarding automation. Perhaps this is the answer to my predicament. This is a thing that I have not used before although I have been aware of its existence. So I shall read the relevant manual section and digest. I’ll probably get the time to try that over the weekend.

Cheers.

So first of all let me just say that I didn’t mean to criticize you for anything, just clarify what and why I thought you were saying. So if it came off a bit harsh I’m sorry.

Now that you’ve explained more in detail what you’re trying to do I think you’re essentially stuck unfortunately. When I mentioned automation instead of compression that would basically give you more control over the changes between loud and soft levels. So rather than the compressor attacking/releasing sometimes unmusically you could level it yourself with automation and that would then follow your musicality - your aesthetics.

But, since what you are doing is something a lot of people struggle with, meaning guitar and voice recorded simultaneously with bleed into both mics, I’m inclined to say you’re stuck. It’s really a different issue. I agree with you that the bleed from the vocal mic will contribute to the guitar sound, so simply cutting out the vocal mic when you’re not singing isn’t going to work since we would hear the change in tone in the guitar. And automation wouldn’t solve that problem, only smooth out the transitions.

If I were you I’d do two things:

  1. Go on Gearslutz.com and look for an answer. I’m sure someone has had the same experience and hopefully someone has given a good solution to it that I can’t think of.

  2. Either live with the bleed and a lot less compression, or re-record :(

Thank you for the link to Gearslutz.com. I found a vocal/acoustic discussion first hit so that’s promising.
Yes, perhaps I will have to embrace the bleed; there are some techniques suggested there. There may be scope for a more optimised mic orientation to re-record with less bleed in this different “studio”. I’ll let my ears rest for a couple of days to cleanse the palate, then listen to it afresh. I sometimes find that listening to something different, like a string quartet by Juan Crisóstomo Arriaga, helps to adjust the brain to a neutral state.
Onwards and upwards.