That’s not too unexpected if everything is loud from the beginning. But there are a lot of things that play a part in this.
If you’ve already recorded your vocals hot, you won’t have much headroom left to boost them from the fader. You can use compression, limiting, or the channel strip’s maximizer for a quick test; you can boost the events’ volume before any of these if the recording wasn’t that hot after all; or you can use sidechain compression so that the rest of the band ducks a little. There are many things you could try.
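To illustrate the sidechain-ducking idea, here’s a rough Python/NumPy sketch, purely my own toy illustration (the function name, time constants, and duck depth are made up, not any plugin’s actual algorithm): an envelope follower on the vocal drives a gain reduction on the backing mix, so the band gets quieter whenever the vocal is present.

```python
import numpy as np

def duck(backing, vocal, sr, depth_db=-4.0, attack_s=0.005, release_s=0.2):
    """Toy sidechain ducker: envelope-follow the vocal, scale the backing down."""
    # One-pole envelope follower on the vocal's absolute value
    env = np.zeros_like(vocal)
    a_att = np.exp(-1.0 / (attack_s * sr))
    a_rel = np.exp(-1.0 / (release_s * sr))
    level = 0.0
    for i, x in enumerate(np.abs(vocal)):
        coeff = a_att if x > level else a_rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Map the envelope (0..1) to a gain between 1.0 and the full duck depth
    depth_lin = 10.0 ** (depth_db / 20.0)
    gain = 1.0 - (1.0 - depth_lin) * np.clip(env, 0.0, 1.0)
    return backing * gain

# Tiny demo: a constant backing signal ducks while the "vocal" burst is on
sr = 1000
backing = np.ones(sr)                       # placeholder backing signal
vocal = np.zeros(sr); vocal[400:600] = 1.0  # vocal present mid-file
out = duck(backing, vocal, sr)
print(out[100], out[550])  # backing near 1.0 before the vocal, reduced during it
```

A real sidechain compressor works on a threshold/ratio law rather than this linear mapping, but the shape of the effect is the same: the vocal pushes everything else down a few dB instead of you pushing the vocal up.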
The difficult thing (at least for me) is to preserve the original color while trying to make it louder.
So, could it be better to turn everything down before recording vocals? I don’t understand this: a signal has a defined dynamic range, and I don’t go very high on the meter when I record the vocals; it’s not close to red. Shouldn’t the dynamic range be the same for instruments and vocals?
Not necessarily. It also depends on what you mean by instruments. Most VSTi defaults I find too loud. For instruments like guitars, basses, etc., it depends on their signal chain: whether they already pass through a multi-effects unit or go directly into the interface, and so on.
Also, most people I know like to hear themselves louder when they’re recording (in the monitor mix, I mean), be it vocals, woodwinds, brass, or anything else. So, in my opinion it’s not really strange to lower everything else when recording. Or create a special cue mix to monitor through, with a balance that favors the performer. Or use the Listen bus and play with the listen dim. As long as I can capture the timbre I want and a good performance, I don’t care.
Mixing comes later, things can be buried or brought to the front.
Things like Hans Zimmer Strings or Epich Choir, VSTs. I still don’t get it; shouldn’t it be possible to get the same loudness from any signal, right up to the point where it distorts? Also, I have a reference track, the Eagles’ “Hotel California”; that’s pure audio, and I can crank it much higher than the vocals without it distorting.
I think I get it now: the vocals are more like a single frequency, while the reference track has many instruments, which gives the impression of more dBs even when the level is the same as the vocals. The vocals are also alone, whereas the VST instruments are several sound sources, so they drown the vocals. Thanks!
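That intuition can actually be checked with numbers: several uncorrelated sources, each at the same individual level, sum to a noticeably higher combined level. A quick NumPy sketch (the “instruments” here are just random noise, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_db(x):
    """RMS level in dB, relative to an RMS of 1.0."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

n = 100_000
solo = rng.standard_normal(n)                          # one "instrument"
band = sum(rng.standard_normal(n) for _ in range(8))   # eight summed "instruments"

print(rms_db(solo))   # about 0 dB for unit-variance noise
print(rms_db(band))   # roughly +9 dB, i.e. 10*log10(8) for uncorrelated sources
```

So a lone vocal meters lower than a full arrangement even when each individual track peaks at the same spot, which is part of why a dense reference mix feels so much louder.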
I had some seriously bad sound on my vocals. My voice is better now, and I use VariAudio instead of Antares Auto-Tune, and it’s OK. But I hear my vocals in both the left and right speakers; shouldn’t vocals normally be in the center? It’s very strange. The input is just “mono” from my UR22; before, I had an SSL 2+ interface and used mono left as the input. I must be doing something wrong. When I compare with commercial recordings, the vocals are (often) in the middle of the sound image. For me they’re not; it sounds like two mono sources coming from the left and right speakers.
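One way to sanity-check the routing, if you export a short clip: a mono mic recorded onto a stereo track often ends up as a “dual mono” file where both channels are identical, which still images dead center, whereas signal in only one channel pulls hard to one side. A small Python sketch using only the standard `wave` module plus NumPy (the helper name is my own, and it assumes a 16-bit stereo WAV):

```python
import io
import wave

import numpy as np

def describe_stereo(path_or_file):
    """Classify a 16-bit stereo WAV as dual mono, one-sided, or true stereo."""
    with wave.open(path_or_file, "rb") as w:
        assert w.getnchannels() == 2, "expected a stereo file"
        frames = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    left = frames[0::2].astype(float)   # interleaved samples: L, R, L, R, ...
    right = frames[1::2].astype(float)
    if not np.any(right):
        return "signal only in the left channel"
    if not np.any(left):
        return "signal only in the right channel"
    if np.allclose(left, right):
        return "dual mono: identical channels, will image center"
    return "true stereo: channels differ"

# Demo: build a tiny "dual mono" WAV in memory and classify it
buf = io.BytesIO()
tone = (np.sin(2 * np.pi * 440 * np.arange(4410) / 44100) * 10000).astype(np.int16)
stereo = np.column_stack([tone, tone]).ravel()  # L and R identical
with wave.open(buf, "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes(stereo.tobytes())
buf.seek(0)
print(describe_stereo(buf))  # reports dual mono
```

If your export comes back “signal only in the left channel”, the mono input is being routed to a stereo track without being split or panned; “dual mono” is generally fine and will sit in the middle like the commercial mixes you’re comparing against.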
Here are some screendumps. Is everything normal?
Well, sort of. I wonder why it says Mono under Speakers in the second picture. Can you upload a picture of the Outputs tab too?
Maybe I am comparing my mix with very advanced stuff that takes many days to do.
The Outputs look good. I don’t know what you’re currently comparing to what, but I suspect that commercial recordings, as easy as they might sound, usually involve a shitload of work and expertise from the best in the field. So it’s good to compare, but there’s no need to despair.
Personally, I’m not a sound engineer so I can’t help you (responsibly) with tips for good vocals. I just keep hacking away at my own recordings, improving one element at a time. (Provided that I can even hear what needs to be done.)
Thanks. Yes, I listen to people on YouTube and they make it seem easy; there are probably hundreds of hours in their recordings/mixes. I trust you know more than me, so…
I am a beginner as a producer/composer/musician/mixer, so I have A LOT to learn. But it sounds good enough to make it fun!