Hi All.
This post mainly documents the effectiveness of various SpectraLayers tools in a recent location recording.
I recently undertook a challenging session here in New Orleans. A band wanted to record and video a live performance in a very tiny historic space in the French Quarter, about 10x15 ft. After a little preproduction, and substantial work on mic selection, positioning, quieting the drums, etc., we were underway and getting very impressive results given the challenges.
Instrumentation: drums, keyboardist (playing bass and a Nord for piano/organ), cello, electric guitar, and a vocalist, who also contributed additional keys and acoustic guitar/ukulele. Everyone played and monitored at very low levels. No amplified vocals.
I used a combination of unmixing and hand editing in SpectraLayers to get isolated tracks as much as possible, but also worked hard to make the bleed work. The areas that did not work as well were the cello and acoustic guitar/uke; cymbal decays also tended to suffer. I had high hopes for unmix instruments in this setting, as I did isolated 'training' recordings per song of the cello and guitar/uke, but this did not provide usable results. I also tried debleed and various other unmix modules. I mainly ended up unmixing each song and pulling the bleeding elements (drums, vocals, keys, etc.) out of the respective mics, which worked well enough to keep things punchy. SplitEQ by Eventide is also a useful tool for dealing with muddy sustain tones. I do understand that these tools have limitations with regard to the noise floor of the bleed.
As a failsafe, I also recorded DI tracks of the cello, acoustic guitar, and uke. I very much did not want to use these tracks, but of course, sometimes you have to.
My question/feature request is: is there a way to use the DI track as an additional training element to aid the unmix instruments module? As the only training element, the DI track was not helpful. I also tried the cast/mold module, to poor effect. My thought is that these DI recordings do give precise time-based information about the attack, sustain, decay, and release of the instrument you are trying to unmix; it's just that the frequency content and character of the mic'd instrument are decidedly not present in the direct tracks.
Looking at the spectrogram, you can definitely see that the harmonic structure of the DI and mic'd instrument is basically the same; it's just that the DI track is missing the more natural tone of the mic.
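To illustrate that observation, here is a minimal sketch with synthetic signals (not real DI/mic audio; the fundamental and the harmonic rolloff are arbitrary assumptions): two versions of the same note with identical partial frequencies but different harmonic balance show energy at the same FFT bins, and only the spectral envelope (the timbre) differs.

```python
import numpy as np

# Synthetic stand-in for the DI vs. mic'd observation: two takes of the
# same note share harmonic peak locations and differ mainly in the
# relative level (timbre) of those harmonics. f0 and the 1/k^2 rolloff
# are assumptions for illustration, not measured values.
sr = 44100
n = sr                      # one second of audio -> 1 Hz bin spacing
t = np.arange(n) / sr
f0 = 220.0                  # hypothetical fundamental

di = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 6))          # flat harmonics
mic = sum(np.sin(2 * np.pi * f0 * k * t) / k**2 for k in range(1, 6))  # darker balance

def magnitude_at(x, freq):
    """Windowed FFT magnitude at the bin nearest `freq` (Hz)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return spec[int(round(freq * len(x) / sr))]

# Both signals show strong peaks at the same harmonic frequencies;
# only the relative levels differ.
for k in range(1, 6):
    h = f0 * k
    print(f"{h:6.0f} Hz   DI: {magnitude_at(di, h):9.1f}   mic: {magnitude_at(mic, h):9.1f}")
```

The shared peak locations are why the DI carries precise timing information (attack, decay) for the instrument even though it lacks the mic'd tone.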
It is of course possible that I am missing a process that would somehow achieve the results that I’m after. But if not, this seems like a possible feature within the program.
Thanks for any assistance/advice anyone might have.
Ryan
The key thing to remember is that most listeners won't be as fussy as you about the results - they usually listen to the song itself more than to the intricacies of the mix.
Some quick tips which might be useful:
- Unmixing is not spectral in nature, so you can also use other tools outside of SL to improve results. LaLaL.AI offers good acoustic guitar and strings models, for example, and is also good at removing ambience/bleed from vocals. You can then import those stems into SL if you want.
- Restem by WaveMachineLabs is excellent for unmixing drums. It includes good gates for each drum, and (if you have Drumagog) you can use Gog files to replace drums instead of using gates on the actual drums.
- MAutoAlign from Melda is a great plugin for tweaking each drum mic and can work with phase on a spectral level.
- Get the vocals right first - that's what most listeners focus on. Instruments get second priority. You may need to be persuasive on this with the musicians.
- The so-called "training" in SL is rudimentary at best and often a waste of time. It's nothing like training an AI model.
- Working with varied phase rotation of mix elements can be useful. Get a trial of MAAT's RSPhaseShifter.
Hope that helps - it’s a tough gig, but very rewarding if you get the results you and the band are chasing.
Thanks for taking the time to reply. I do understand that fixating on minor details is not always something the listener engages in; however, the client certainly tends to, and ultimately that is who I'm trying to make happy.
Ultimately, I’m happy with the results that we’ve achieved (vocals came out particularly well) but am also hoping that the developers lurk around and see fit to improve the product in ways that perhaps they may not have thought of. What with forests and trees etc.
My main interest in improving this particular product is in the area of tracking performing musicians in less-than-ideal environments and situations. I ultimately want to 'preserve' the actual performance in a given moment and location that might otherwise be compromised by the limitations or aspirations of the particular endeavor.
I will check out some of your suggestions and again thank you for replying.
Ry
Hi Ryan
It’s great to hear you’re happy with the results you achieved! I’ve worked a lot with acoustic instruments and share your aspirations re preserving performances as accurately as possible.
In terms of SL, unmixing live recordings will always be limited by the training of the models, which isn't done by the SL developer. He uses third-party models which he then tweaks for use in SL.
The core job of SL is spectral editing; however, the app has forked into unmixing because the combination of unmixing with spectral editing can be very useful, particularly with functions such as moving segments from one stem to another.
We are only recently seeing unmixing models become available for acoustic instruments, largely as a result of training work done by third parties, typically involving considerable expense. For optimal results, the model developers need lots of good source material for training, including original multitracks and final mixes/masters for each source recording. The focus has been more towards mainstream pop, but I hope we’ll see more of the roots genres in the coming years.
Fantastic advice IMO, as were all the rest of the suggestions.
One of the things I've yet to find a reliable remedy for in live, on-location music recordings is unwanted overload distortion, which can often crop up even when taking great precautions with the recording setup.
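A common first step before attempting any repair is simply locating the damaged regions. A minimal sketch in Python (the threshold and minimum run length are assumptions to tune per recording, and the demo signal is synthetic):

```python
import numpy as np

# Minimal sketch of flagging likely overload: look for runs of
# consecutive samples pinned near full scale. This only locates
# candidate damage; it does not repair it.
def find_clipped_runs(x, threshold=0.99, min_run=3):
    """Return (start, end) sample ranges where |x| stays >= threshold."""
    pinned = np.abs(x) >= threshold
    edges = np.diff(pinned.astype(int))       # +1 at run starts, -1 at run ends
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if pinned[0]:                             # run starting at sample 0
        starts = np.r_[0, starts]
    if pinned[-1]:                            # run continuing to the end
        ends = np.r_[ends, len(x)]
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_run]

# Synthetic demo: a sine driven past full scale and hard-clipped
sr = 8000
t = np.arange(sr) / sr
hot = np.clip(1.5 * np.sin(2 * np.pi * 5 * t), -1.0, 1.0)
print(len(find_clipped_runs(hot)), "clipped regions")
```

Once flagged, those regions can be auditioned and repaired by hand (spectrally or with a dedicated declip tool) rather than hunting for them by ear.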