When mixing the main vocal in each song, I want to go through each vocal phrase and fine-tune the level to make sure that important syllables and words are emphasised appropriately. I prefer to do this rather than rely on compression, which doesn't always work very well.
What are the best options for doing this in Cubase 5?
Say there is a single word in the middle of a vocal phrase that needs to be louder. Previously I have selected the word, split it off as a separate event, and then dragged the blue square at the top of the event up or down to change the volume.
I have tried using the “Event envelope”, which sounds promising, but the problem is that when you click in the event to create an edit point, this attenuates the volume depending on the vertical position of the point. So if you click right at the top of the event, this corresponds to 0dB, but there doesn’t seem to be any way of making the envelope BOOST the volume; i.e. you can’t push the event above the 0dB line like you can for the overall event volume (the blue square described above).
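If it helps to picture why the two controls behave differently, here is a minimal Python sketch of the two gain stages as I understand them (the function names are my own invention, purely illustrative, not anything from Cubase itself):

```python
import numpy as np

# Illustrative only: the event envelope behaves like a gain curve
# clamped at 1.0 (0dB), so it can only attenuate, while the event
# volume ("blue square") is an unclamped multiplier, so it can boost.

def apply_event_envelope(samples: np.ndarray, envelope: np.ndarray) -> np.ndarray:
    """Envelope points act as gains clamped to <= 1.0: attenuation only."""
    return samples * np.minimum(envelope, 1.0)

def apply_event_volume(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Event volume is a plain dB gain, so positive values boost above 0dB."""
    return samples * (10.0 ** (gain_db / 20.0))
```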
How do other experienced users do what I’m trying to do?
I think opening the automation window, setting it to volume, and then using the pencil tool to draw a curve so that it goes above 0dB might help, if I’m understanding your question correctly…?
I often use that method too… making crossfades across the split points (and because, apart from the level, the data is still the same, I can, if necessary, make a very broad crossfade, making the level change less abrupt).
I particularly use that for the “most famous of all vocal performance faults”… the missing last consonant.
That allows me to then use automation for a “musical” riding of the vocal level, rather than a “corrective” one.
But, obviously, there’s no right or wrong… our tools should let us do whatever we want.
If you can’t do a retake, try to figure out what the main problem is; it sounds like the main problem is volume peaks. “Vocal Rider” is a great plugin for that. A good practice is to use a good preamp with a slight bit of compression and limiting. Get the signal in good, less headache later.
@ Alexis: If I’ve understood you correctly, you are suggesting using the channel volume slider automation. The problem with that is that it changes the volume AFTER the audio has been processed with inserts (compression and EQ). I’m trying to level out the performance as though I were “riding the fader during recording”, so that I start with a well-balanced and intelligible vocal event BEFORE subjecting it to processing (compression and EQ etc). In this way I am much less dependent on compression to correct the vocal and thus able to use the compression creatively, use less of it, or not at all!
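To illustrate the gain-staging point, here is a rough Python sketch of the two signal chains (the crude static compressor and the function names are my own, just to show the order of operations):

```python
import numpy as np

def compressor(x, threshold_db=-18.0, ratio=4.0):
    """Crude static compressor: reduce the level of samples above threshold."""
    level_db = 20.0 * np.log10(np.maximum(np.abs(x), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * (10.0 ** (gain_db / 20.0))

def ride_then_compress(x, clip_gain):
    # Clip/event gain is applied BEFORE the insert chain, so the
    # compressor sees an already-evened-out signal.
    return compressor(x * clip_gain)

def compress_then_fader(x, fader_gain):
    # Channel volume automation is applied AFTER the inserts, so the
    # compressor still reacts to the raw peaks.
    return compressor(x) * fader_gain
```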
@ Paul: I take your point, but we are trying to create magic and accentuate emotions, so I’m trying to make things sound as good as I am able. That doesn’t mean as loud and as artificial-sounding as possible. It’s a bit like someone writing a poem with a pen: if you can’t read a critical word because the handwriting is poor or the pen was playing up, the message might not get across. Smoothing the vocal before processing it with compression and EQ means I can probably make the track sound more engaging.
@ Vic: Yes, it’s a great technique, and I agree that the automation is best left for the macrodynamics of the song (a few dB of boost in a chorus, etc.). Is there any way to add edit points to the split-off event, or is one limited to boosting the whole event and using crossfades? Also, how do you set up your crossfades at the beginning and end of the short event, when there isn’t any overlap? I’m trying out Cubase so I’m no expert.
Well, that is indeed what the “event envelopes”, as you call them, are there for. Used in conjunction with splitting the events (so you’d have separate envelopes), that should get around the problem you have been experiencing, if I understand you correctly?
“Also, how do you set up your crossfades at the beginning and end of the short event, when there isn’t any overlap?”
Are you referring to the event that you have just split? If so, then the “overlap” is in fact still there, “virtually”: for example, even after having made the split, you can still extend the event boundaries until you reach the limit of the original clip. That is quite a different situation from having two different clips butted together at their edges, where indeed there is no overlap, therefore nothing to crossfade.
I agree with Paul. If the talent can’t pull off a performance then they’re not talent. Just another sign of the world going to hell in a handbasket, IMHO.
(Even excellent vocal performances can often be enhanced in clarity and comprehension of diction by a little manual tweaking… it is actually far less invasive than slapping certain plugins on it. Just my 2€)
I hear you, Vic, but to me that takes the human out of the equation. On the best songs out there you can’t understand all the lyrics. That’s why they print them on the sleeves. As for clarity, I’ll take a scratched LP over a CD any day.
Even with an excellent performance, it has always been standard procedure to polish a vocal track. There are plenty of techniques that don’t change the human equation or even the vocal texture. That holds in both contemporary and old-school engineering.
“The best songs” is a subjective, personal judgement; it doesn’t always mean the best engineering. Nowadays, it is in one’s best interest to make a song as universally pleasing/articulate as possible in order to sell it, and it’s up to the live performance to try to keep up. This is especially important to lyric freaks: when you’re driving, listening to the song on the radio, the lyrics on a sleeve aren’t going to help you.
It’s one thing to “polish” a track with EQ/reverb/whatever, but when you start changing the original vocal track because the “talent” can’t sing properly, then artistic integrity goes out the window (IMHO) and we get to where we are nowadays.
This is getting seriously off topic from the thread starter so I will refrain from any more comments.
That’s why you need multitrack drum quantizing and drum replacement plugins nowadays. First you quantize the hell out of a drummer’s performance, then you replace all the sounds. That way you can keep the “feel” of the original performance, contrary to programming a drum track…
With all respect - even George Martin and his engineers were known to “fly in” and “fly out” Paul and John’s vocals … not just for musical effect, but even in the song proper, a few words here, a few words there (maybe Paul giggled at the beginning of a line, so they overdubbed) … would you say these kinds of things ruined the artistic integrity of their song?
All of this artificial processing is nothing more than a way to put no-talent, well-endowed dingbats on stage to make someone money. Like I said, the world’s going to hell in a handbasket. Standards have been shot to hedoublehockeysticks. Musicianship is a forgotten term.
Thanks for all the interesting input to this thread.
My take on this is that even if you are taking a purist stance, there are some legitimate steps that can be taken to capture the natural sound source. It may be that the vocalist gave a perfect performance but just swayed a few inches away from the microphone, so the signal level dropped for that brief moment. Remember that with the inverse square law you only have to move a little to have a relatively large effect on the SPL reaching the microphone. There are then three options:
1) You leave the take sounding naff, i.e. you haven’t been able to capture the true performance.
2) You use compression whilst recording or during mixing (this is the sort of thing compressors were designed for).
3) You “correct” the take with judicious gain riding, offering the opportunity to use less compression, or none at all.
If there are any other options that I’ve overlooked, please let me know but as far as I’m aware you have to make one of these three choices.
Just a couple of other points:
A) Remember that your eyes and ears are able to cope with a very wide range of intensities (from candlelight to bright sunshine, and from a pin dropping in a quiet room to a loud rock band). Cameras and microphones don’t have this dynamic range, so some compression of the dynamic range is inevitable to create an intelligible end product. Then consider the environment in which the music will be reproduced: in a quiet room you can use less compression, but if the music is played back in a car, the loud sections may end up too loud and the quieter sections inaudible.
B) All reproduction of a sound source is artificial, and there are many decisions that have to be made even if you are a fanatical purist! Try recording a solo piano, for example, with a view to creating a stereo recording: are you going to record it from the player’s perspective or the audience’s perspective? If from the audience’s perspective, close to the piano or far away? How wide will the stereo image be? Are you going to use a straight stereo technique, or a Mid/Side technique that might play back better in mono? Will you use a high-pass filter to get rid of rumble, etc.?
Modern music production has used signal processors like EQ and compression over many decades and I think most raw recordings sound pretty awful!
Of course, the effect of movement around the mic depends on the starting distance: moving 1" when working at 1" doubles the distance (roughly a 6dB drop), while moving 1" when working at 12" isn’t nearly as bad (under 1dB).
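To put rough numbers on that, here is a tiny Python sketch (assuming an idealised point source in a free field, which a real room and mic won’t match exactly):

```python
import math

def spl_change_db(d_start, d_end):
    """Level change when moving from d_start to d_end from a point source:
    inverse square law, i.e. a 6dB drop per doubling of distance."""
    return 20.0 * math.log10(d_start / d_end)

print(spl_change_db(1, 2))    # 1" -> 2": about -6.0dB
print(spl_change_db(12, 13))  # 12" -> 13": about -0.7dB
```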
Anyway, most real-life vocals would use a variety of techniques combined: compression on the way in, compression on the way out, and gain riding and/or clip tweaking.