I have sung lines at different times and, unintentionally, at different volumes, but I want all of the verses and choruses to sound more consistent; they all need to be the same volume.
Is this possible? Or do I have to go through all of the lines of the song and sync them up?
Well, that is what Compressors are designed to do, among other things. However, if there are big level changes between sections, the Compressor will respond differently to each section, so it is best to get everything roughly the same before compressing. You can do that by using Normalize, which adjusts the level based on the loudest sample. Or you could use Loudness Normalization, which bases the adjustment on the audio's Loudness Level. Most likely Loudness Normalization is a better fit for what you describe. If you haven't really used Normalization, you might want to create a test Project to explore how it works with different settings and audio. Don't Normalize too hot; leave yourself some headroom.
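To illustrate why the two differ: Cubase's actual Loudness Normalization measures LUFS (per EBU R 128), but a rough Python sketch using RMS as a simple stand-in for loudness shows the idea. Nothing here is Cubase's algorithm; the function names and targets are made up for illustration.

```python
import math

def peak_normalize(samples, target_db=-3.0):
    """Scale so the loudest single sample lands at target_db dBFS."""
    peak = max(abs(s) for s in samples)
    gain = 10 ** (target_db / 20) / peak
    return [s * gain for s in samples]

def rms_normalize(samples, target_db=-18.0):
    """Scale based on the average level (RMS), a crude stand-in for
    loudness: this is closer in spirit to Loudness Normalization."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    gain = 10 ** (target_db / 20) / rms
    return [s * gain for s in samples]

# Two takes of the "same" phrase sung at very different levels:
quiet = [0.1 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
loud  = [0.8 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
```

The practical difference: two takes with identical peaks can still sound very different in loudness (one brief loud syllable dominates peak normalization), which is why loudness-based matching usually fits "make the verses sound the same volume" better.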
A lot of folks would Normalize the Audio and then Render the entire Track so it's all in one Audio File.
You can also use clip gain to adjust the volume of the individual takes/clips. Even if you end up using a compressor, evening out clip gain will help the compressor do a better job.
What I would do is ride the automation on the fader on that track.
(Or, more likely, listen to it in sections, and then manually enter automation with the mouse/curve editor.)
That really shouldn't take that long; maybe 30 minutes for a 3-minute track, tops. (A LUFS monitor helps, btw.)
It also doesn't need to be perfect if you buss the clips to a vocal compressor. You could set one up with a long release, so it works more like an automatic gain control than a dynamics shaper, if you want.
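The "long release as automatic gain control" idea can be sketched as a toy Python envelope follower (not how any real plugin is implemented; the coefficients are arbitrary assumptions): a fast attack tracks the signal level, a very slow release means the gain reacts to whole sections rather than individual notes.

```python
import math

def agc(samples, target=0.25, attack=0.2, release=0.99999):
    """Very simplified automatic gain control.
    - Fast attack: the envelope jumps up quickly when the input gets louder.
    - Very slow release: the envelope decays over seconds, so gain changes
      track sections, not syllables (like a compressor with a long release).
    """
    env = 1e-6
    out = []
    for s in samples:
        mag = abs(s)
        if mag > env:
            env = attack * env + (1 - attack) * mag  # fast attack
        else:
            env = release * env                      # slow release
        out.append(s * (target / max(env, 1e-6)))
    return out
```

Feed it a quiet verse followed by a loud chorus and both come out at roughly the same peak level, without the note-by-note pumping a fast-release compressor would add.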
What I find when I do takes in different situations is that the different angle to the microphone leads to different characteristics of the take, and that's much harder to control for. EQ can get you some of the way there, but it's not gonna be perfect.
Then again, I like autotune and OVox and harmonizer and … at that point, those differences are not audible.
An Audio Event in Cubase always refers to a section of an underlying Audio File (that "section" can even be the entire Audio File). When you have multiple Audio Events in a Project, each of those Events could be based on different Audio Files, or they could all be based on a single Audio File, or some combination of both.
When processing Audio, like with Normalizing, Cubase processes the underlying Audio File. The message is telling you that multiple Audio Events are based on this specific Audio File. If you process this file, it will affect all of those Audio Events, which you may or may not want. So it gives you the option to apply the processing to only this single Audio Event. The way it does this is to make a copy (aka a New Version) of the Audio File that only that specific Event will reference, and apply the processing to the new copy, leaving the other Events unmodified.
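In other words, "New Version" is essentially copy-on-write. A hypothetical Python sketch of the concept (the class names are made up for illustration, not Cubase's API):

```python
class AudioFile:
    """A named buffer of samples that Events can share."""
    def __init__(self, name, samples):
        self.name, self.samples = name, samples

class AudioEvent:
    """An Event is just a reference to an underlying AudioFile."""
    def __init__(self, file):
        self.file = file

    def process(self, fn, new_version=False):
        if new_version:
            # "New Version": this Event now points at its own copy,
            # so the shared file (and every other Event) is untouched.
            self.file = AudioFile(self.file.name + " (new version)",
                                  list(self.file.samples))
        self.file.samples[:] = [fn(s) for s in self.file.samples]

shared = AudioFile("vocals.wav", [0.1, 0.2, 0.3])
verse, chorus = AudioEvent(shared), AudioEvent(shared)

# Process only the chorus; the verse still sees the original file:
chorus.process(lambda s: s * 2, new_version=True)
```

Without `new_version=True`, both Events would change together, since they reference the same file.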
Since you most likely want all of the vocals to be Normalized, you can probably skip the "New Version", but it's always worth pausing a moment to consider the decision and its implications.
Can I come crawling back out of my hole and mention that Cubase Elements does not have Loudness Normalization? I know I said it was a Nuendo-only feature, which is wrong, but Elements only has peak normalization.
Still better than nothing.