Hello, I would like to add ReSing to the Extensions function in Cubase. I have installed it to the VST3 folder and can use it via the track Insert feature, but I would like to add it to Extensions so I can use it like Melodyne. Thanks in advance.
I haven’t tried ReSing yet (planning to sometime in the not-too-distant future now that the free version is available to check out), but the FAQs page on it suggests it should work with ARA2 in Cubase 12 and later.
I’m not sure why you mention adding it to the VST3 folder – shouldn’t the installation program put it wherever is needed to enable all its functionality (assuming it is installed on a supported platform)? I’d think Cubase should automatically detect it for the extensions list like it does other ARA2 plugins. (While I haven’t installed ReSing, I do use a number of others, including SpectraLayers, WaveLab, Melodyne, RePitch, VocAlign, and Sync Vx. FWIW, I’m on Windows 11 and Cubase Pro 15.)
You need two things to get ARA happening in Cubase:
- An ARA- or ARA 2-capable plugin/software.
- Cubase Artist or Pro.
With those in place, ReSing should just work as an ARA process simply by being installed.
One other thought (perhaps obvious, but just in case). An ARA extension has to apply to existing audio. Are you using it either on a track with audio already (to use it at the track level) or by selecting one or more clips via a right click on the selected clips (which is what I generally prefer with ARA extensions)?
Of course, the general idea of ReSing is to transform your vocal (or some singer’s vocal) into another singer’s vocal or instrument. But that requires audio to transform. If you were trying to use it with a virtual instrument (e.g. Omnivocal), you’d need to first render the relevant clips to audio and use ReSing as an extension on those.
I have Cubase Artist 12 and the ReSing product installed itself. I have Melodyne and SpectraLayers as extensions which work as intended. I fear it will be visible only as a VST3 plugin and I will be copying and pasting tracks.
ReSing as an ARA2 extension shows up for me.
But to be honest, it doesn't do much other than process the audio, which you NEED to wait for before you can play back the new voice. Both the VST and ARA versions function exactly the same. Any time you make a change to the voice, you need to reprocess the whole file. There really is not much advantage to the ARA version.
My experience with ReSing so far, even without the ability to use it via ARA in Cubase, is that I cannot get it to work. It freezes right after it loads, and that's on the latest version, which supposedly fixed issues related to starting the AI engine (which has to phone home to work, even though all processing is done locally).
Lots of interruptions, but I finally was at a breaking point, or maybe more like a procrastination point, and decided to install ReSing Free and give it a quick try.
The test case I used was one where I'd done a moody vocal on a classic Christmas song. The vocal left a lot to be desired, and this isn't an application I'd likely use ReSing for (I mainly see my key application being to get different voices to layer with my own background vocals, not for lead vocals). However, I suppose this actually made it a more challenging test for the software.
My test was in Cubase Pro 15.0.10, and I used the software as an ARA extension at the track level.
First key point: I initially tried processing without having first run the optimization, and I could tell it was hitting my system hard based on the heavy-duty fan noise and the long time it took to process the audio. I abandoned that maybe halfway through and ran the optimization. After that, there was no overactive fan noise, and the processing was much quicker. FWIW, my display adapter is an NVIDIA GeForce RTX 3060 with 12 GB of VRAM. I did notice the documentation explicitly recommends NVIDIA for the voice modeler component, saying that process could take days instead of hours if you don't have an NVIDIA card with CUDA support. But it at least looks like the video card makes a difference here as well, though perhaps it wouldn't necessarily be specific to NVIDIA cards?
My first test was just to use the male voice with default settings. I hadn't paid enough attention to the needed steps, but eventually figured out I had to actually press the Process button to get it to make the audio change. As someone else mentioned above, that is also necessary pretty much every time you change something meaningful in the parameters.
The voice was pretty raspy, and “okay” – not something I’d likely want to use here, and it also got me wondering how useful it will eventually be for my more typical background vocal uses when I’ll more likely want a clearer voice. There are lots of manipulations that can be done to tailor the sound, but that raspy character seemed to come through in all my tests.
Another test was to try the female vocal, using the transpose feature to pitch it up an octave. That was clearly not an ideal range for the female voice, but the octave I’d sung in was even less ideal. The female voice was clearer in pretty much all the variations I tried, but one thing I noticed was that it was translating some noises (breaths, I think) to notes at times – I had not noticed a similar thing in the male voice.
Just for giggles, or perhaps something that could have been valid, I tried the saxophone model, again with a few different parameter changes. That was decidedly underwhelming for the most part, and there were some weird noises – kind of like some electronic artifacts, again probably where it was processing breaths. I didn’t bother trying the bass guitar since it wouldn’t have been applicable for this use case.
I also played around a bit with some of the presets, mostly with the sax and male voice, and I did note that those can make a difference in results (e.g. the Blues preset on the sax got better results than the default sax preset).
I guess my bottom line for the moment (i.e. after maybe 30-45 minutes playing around) is that I can’t say it wowed me. I will definitely experiment with it for my main use case at some point when I have a project it will fit. Perhaps it will be more useful in that context, and it will definitely be easier to use than OmniVocal in that context since it can just process my voice rather than having to recreate what I’d be singing with MIDI.
A key question in my mind is whether having access to different voices (i.e. from the higher-end models) might have made a significant difference, even in my test cases. My voice can be challenging to process due to breathiness, and this particular case was one where I was really milking that side of things due to the moodiness I was going for. I wouldn’t be doing that in my typical BGVs case. I’ll definitely want to satisfy myself that I can get useful results with the free version before considering a paid upgrade.