Cubase would own the market if they developed a true AI vocal booth. Like Unveil and existing de-reverb tools, but even better and built into either SpectralLayers or Cubase. If AI could remove reverberant room impact, echoes, and harsh bouncy frequencies from vocals recorded without the benefit of isolation, giving an isolated, pristine vocal to add effects to, it would be the ultimate AI application.
The feature is already available, but its name is different, it’s called HI.
Use your brain during recording, that is unbeatable.
Choose the right tools for the task. Then you don't need to repair anything.
And it doesn’t need to get developed by a software company.
or you can just record the right way…
That’s a ridiculously insulting response. I am a working audio recording engineer, a co-owner with a successful, money-making career spanning nearly four decades. No matter how well one records drums or vocals in their living room(s), short of a lot of money spent for sound treatment they will not have the sound of a good drum or vocal booth. You simply do not know what you’re talking about, or you are intentionally asinine.
Could you please be more specific? What is HI? I have worked professionally in the industry for almost forty years and such an AI “booth” would certainly be the right tool for many vocals and drum recordings recorded under less than ideal circumstances.
Completely agree. This person’s comments are ridiculous and worth ignoring. The response should be more aptly posted to twitter.
It is evident from your ignorant response that you have never worked in television or film which are two very large markets where recording circumstances often are less than ideal.
I am a professional and even I do not always have that luxury. Such an AI would be a godsend to the millions who have neither the resources nor space for drum and vocal booths.
If I were ignorant, I wouldn’t have responded at all.
AI is not the answer to every problem.
There are already tools available.
I know you were responding to someone else, but as the OP I wonder if you actually read my post, or just hurriedly jumped on to respond to it. I wrote that there are some products that do similar things and I use them now when I face this situation, such as cleaning up a field recording, but AI is exactly right for this task, just like it can separate finished tracks back into stems. Hearing and removing room noise, echo, etc. from a vocal is an ideal use of AI. The fact that you appear not to know this does indeed make you ignorant, at least about this topic. That’s not the same as being stupid. When someone is ignorant, they can learn. Stupid goes all the way to the bone.
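To make concrete what "hearing and removing room noise" means under the hood: the classical technique these restoration tools build on is spectral gating, where a noise profile learned from a room-only segment sets a per-frequency threshold and quieter bins are attenuated (the ML versions essentially learn a far smarter version of this mask). The sketch below is a deliberately crude illustration of that idea, not any vendor's actual algorithm; the function name `spectral_gate` and all parameters are made up for the example.

```python
import numpy as np

def spectral_gate(signal, noise_clip, frame=512, hop=256, factor=2.0):
    """Crude spectral gate: zero out time-frequency bins whose magnitude
    falls below a per-frequency threshold learned from a noise-only clip."""
    win = np.hanning(frame)

    def stft(x):
        n = 1 + (len(x) - frame) // hop
        return np.stack([np.fft.rfft(win * x[i * hop:i * hop + frame])
                         for i in range(n)])

    # Average noise magnitude per frequency bin = the "room profile".
    noise_mag = np.abs(stft(noise_clip)).mean(axis=0)
    spec = stft(signal)
    # Keep only bins that rise clearly above the noise floor.
    mask = (np.abs(spec) > factor * noise_mag).astype(float)
    spec *= mask
    # Overlap-add resynthesis.
    out = np.zeros(len(signal))
    for i, frame_spec in enumerate(spec):
        out[i * hop:i * hop + frame] += win * np.fft.irfft(frame_spec, frame)
    return out
```

A hard binary mask like this produces the "musical noise" artifacts older tools are notorious for; the appeal of a learned model is that it can predict much smoother, content-aware masks.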
Dunno, I’d prefer it if the Cubase devs focused on fixing long-standing bugs (like the Render in Place one) before they start in a field where others have years more experience. iZotope has been using “AI” (to use the proper word: machine learning) in their restoration products for years, and the developer of Unveil has been using neural networks since the nineties. And if neither of their products works perfectly, that imo shows more that some problems cannot be solved perfectly than that they’re doing it wrong.
And even if, against all odds, they managed to develop something that can create “pristine vocals” from a sub-par recording and “rule the market”, it most likely wouldn’t be for long.
Good points, perhaps. Still, like the holy grail, worth pursuing. Adobe has done it with voiceover and it works almost flawlessly, although still in Beta. Adobe is relatively new to AI, but so much of AI development is open source so all a company really needs are fairly deep pockets and the prescience to hire the right talent. You are likely right that others will follow, and perhaps the current players will land on it before Steinberg does, but someone in research at Steinberg should be doing something about this topic, as whoever cracks it first could at least own the market long enough to change the market share.
Or “shouldn’t” rather.
I don’t think anyone said that. But it can be an answer to some problems. This has already been proven true with recent advances in machine learning.
So? We should stick to the tools of yesteryear because… why?
If better tools can be created, why shouldn’t they?
I wouldn’t say “doing it wrong” but new tech can open up new doors to what previously was not possible.
I listened to a podcast just yesterday. It was from a link shared on these forums where the hosts talk to the sound crew that worked on Peter Jackson’s documentary “Get Back”.
In short, they tried all the available tools on the market to clean up the mono recordings made on Nagra machines in 1969, but ended up developing a deep learning system that, in the end, was able to isolate instruments and voices from cacophonies on tape. The results are mind-blowing.
I can’t recommend this enough.
Might be that there is more possible than I can imagine at the moment, dunno.
It is most likely also a matter of money and resources you throw at it, and how you get your ROI. Which, imho, will be much more likely in the post pro market, as shown by your example. That is not a market where Steinberg is really present, though.
So while I would actually welcome the tools proposed by the OP, I am very sceptical that Steinberg has the resources to excel in that market (seeing that they are not really able to fix even the most glaring bugs in Cubase). Maybe the SpectralLayers team, which imho would be the best product for incorporating such tools anyway.
I wasn’t really advocating that Steinberg should embark on high-end, deep learning systems. It was more a general showcase of how deep learning can already be utilized to overcome problems that were impossible just a few years ago.
What they did on Get Back will not be available to the public anytime soon. I think it requires a large array of GPUs as well as profound programming skills to be able to operate it. So yeah, a substantial budget of Peter Jackson proportions. It still blows my mind though.
I’ve got more faith in this company than you do. They are backed by Yamaha, for Pete’s sake. If they can’t do it there’s not a company that can. Avid has remedial tools, for the most part; PreSonus is not going to do it. Ableton - ditto. Cakewalk is robust but largely resting on its laurels. Logic is limited to Apple’s framework and userbase, although if they developed this, and did it well, it could have a lot of musicians and hobbyists rethinking their next hardware purchase. This idea is already close with Adobe’s new podcast enhancement line.
Has anyone tried the new DeRoom, by Accentize?