Since I don’t have Vocal Chain, I’m not sure of the best video to learn from using the stock plugins (or maybe there’s an inexpensive alternative to Vocal Chain available out there?).
I have been doing instruments for 40 years, and just started playing with the ACE Studio AI application.
It’s quite good, so I’d like to do some compositions with vocals…
There are soooo many videos on this subject, and at 72 I don’t think I have time to watch all of them, lol.
This is for a home studio and not for professional work, so I just need some mixing/effects help so that my stuff will sound decent enough to play for my friends and family (who dutifully smile when I play my Daddy Music).
I once did a bit and only used reverb and EQ… and maybe compression…
I saw a Cubase social with Dom that showed before-and-after versions of a vocalist using that one plugin, as stated in the question.
I need a recommendation so I can get started quickly and easily.
Mixing vocals is done with plugins like EQ, compressor, de-esser, saturator, delay, reverb, and others, depending on the goal. Then there’s the question of what style you’re mixing: do you want to sidechain the reverb or delays? There are a lot of ways to handle this. Vocal Chain is just an all-in-one of these things.
For me a great vocal starts with level, then compression, then EQ, then compression again, then de-essing, then FX like delay and reverb. Then again, do you have multiple takes, harmonies? Vocals can be as simple as one mic and one take, or 20 versions with multiple panning positions. There will not be one plugin that works with everything.
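To see why the order of that chain matters, here is a toy sketch in Python (purely illustrative, nothing like a real DAW plugin): each stage is a function, and the signal is passed through the stages in sequence, so compressing before EQ gives a different result than EQ before compressing. The `gain` and `compress` stages are simplified, hypothetical versions of the real processors.

```python
import math

def gain(samples, db):
    """Apply a static level change in decibels (the 'level' stage)."""
    g = 10 ** (db / 20)
    return [s * g for s in samples]

def compress(samples, threshold_db=-18.0, ratio=4.0):
    """Naive per-sample compressor: anything above the threshold
    is reduced by the given ratio. Real compressors add attack,
    release, and make-up gain."""
    out = []
    for s in samples:
        level_db = 20 * math.log10(max(abs(s), 1e-9))
        if level_db > threshold_db:
            excess = level_db - threshold_db
            level_db = threshold_db + excess / ratio
        out.append(math.copysign(10 ** (level_db / 20), s))
    return out

def chain(samples, stages):
    """Run the signal through each stage in order -- order matters."""
    for stage in stages:
        samples = stage(samples)
    return samples

# A few fake vocal samples, run through level -> compression.
# EQ, a second compressor, de-esser, and delay/reverb would follow.
vocal = [0.9, 0.02, -0.7, 0.3]
processed = chain(vocal, [lambda s: gain(s, -3.0), compress])
```

An all-in-one plugin like Vocal Chain essentially bundles a fixed set of these stages in a sensible order, which is why it is a convenient starting point.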
I may not need all of that manipulation. I am using an AI voice generator, so maybe things like de-essing won’t be necessary… maybe not even compression. I probably need to concern myself with things that will make the vocals sit better in the whole mix.
I’ll have to lay the vocal track and see what I need.
Maybe I should start with cubase videos first.
You’d be better off experimenting yourself. Experience always trumps YT videos.
A video will just tell you what the controls do, and the sound quality of an average YT video is not the best.
Use extreme settings to get an idea of the controls, then you’ll have a better understanding of how to use them.
Maybe upgrade to Artist so you can use side-chaining, especially on reverb and delay/echo.
Hi,
I agree with what has been said so far.
Answering your initial question, I tend to recommend Chris Selim’s YouTube channel to get started. His vids are well organized and easy to follow because he’s on Cubase.
Have a look
I have zero experience with AI-generated vocals, but I would expect that a lot of the editing and a lot of the plugins that we’d use on human vocals would not be necessary.
That is a really interesting question that I never gave any thought to so far: do AI vocals get generated pre-processed? EQ, compression: is that even necessary, or is it already built in?
I have barely any experience either when it comes to AI-generated vocals. From what I can tell from the examples I’ve heard so far, they will need almost the same treatment as regular vox. Maybe even more, because of unwanted artefacts. Again, this is more or less based on assumptions, not experience.
Yes, I am going to start on my vocals and let you know. I’m ASSUMING that the phoneme sampling is processed in some way, shape, or form.
I would hate to think that the singers are just being mic’d and left alone.
That wouldn’t make sense. If mouth/mic placement changed each time a phoneme was recorded, the resulting lyrical word would be a mess. Not to mention the sampling needed for tremolo, breath, modulation, and such. I should try some ‘s’ words and see if I need de-essing.
Not sure what kind of artifacts you mean. I have found that some phonemes sound weird; if that happens, you can tweak the phonemes. I also found that extending the note a bit fixes the times where the phoneme is missing something, e.g. ‘parting’, where ‘part’ is clear but the ‘ing’ is not pronounced.
Yes, that, and also sometimes artifacts caused by the algorithms themselves (time stretching, pitch correction, formant shifts, etc.). It still seems to be a good idea to stay close to the original voice and not turn a deep male voice into a female soprano.
Anyways, I assume that you have to treat AI generated vocals basically the same way you’d treat regular vocals.
Have fun with Chris’ tutorials, he did a great job!