SpectraLayers 10 much more accurate

I have been using SL from v6 onwards for one specific goal, and that is removing the lead guitar from specific (guitar) instrumentals. This latest version is a big improvement.
The Guitar layer catches most of the guitars, although manual work is still needed. For instance: if the lead guitar uses a swell pedal, then parts of the sound are moved to the “Other” layer instead of the “Guitar” layer. They can be recognized and removed more easily than in previous versions. The same happens with muted parts.
Still, one big wish is a separate layer for the lead guitar and one for the rhythm guitar… and I did not even mention an acoustic guitar :slight_smile:
Still, I want to compliment the development team on this result; a great job.
Cheers,
Jan

Glad you like it!

I would also like to add that a feature for separating guitars, similar to what has been done with drums (kick, snare, and cymbals), would be really nice.

You mean different types of guitars, like electric guitar and acoustic guitar?

Yes I do: lead guitar (electric, acoustic) and rhythm guitar (electric, acoustic).
Jan

@JohanDeGreyze

But then it becomes a never-ending cycle of datasets/dictionaries/libraries. Someone further down the line is going to request strings, so datasets/dictionaries/libraries would have to be curated for strings, and then there will be requests for specific types of strings (like violins, because violins and cellos are different). Just like lead vocals and background vocals becoming their own datasets/dictionaries/libraries, I don’t believe that is the direction SpectraLayers should go. It is completely unnecessary and inefficient, and there are scenarios (like an orchestral performance of 20-30 opera singers with dozens, if not hundreds, of instruments) where that approach would completely fail.

The most efficient way to do this would be to redefine unmixing and unmix sources as a whole. That way you’re killing two birds with one stone, and you can unmix whatever you want in any type of scenario. For example, unmix a triplet of trombones or trumpets playing in unison with each other on a note basis, and then compile/merge the parts that fit together or sound best together.
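To illustrate what that note-level workflow could look like, here is a minimal sketch in Python. The per-note unmixing result it assumes (a voice assignment, an onset, and the rendered audio per note) is entirely hypothetical; SpectraLayers exposes nothing like it today. “Compiling” a part then just means summing the notes you judged to belong together:

```python
import numpy as np

# Hypothetical shape of a per-note unmixing result: each separated
# note carries the voice it was assigned to, its onset in samples,
# and its rendered audio. This mirrors the proposed workflow, not
# any existing SpectraLayers feature.
notes = [
    {"voice": 0, "onset": 0,    "audio": np.zeros(1000)},
    {"voice": 1, "onset": 0,    "audio": np.zeros(1000)},  # unison note
    {"voice": 0, "onset": 2000, "audio": np.zeros(1500)},
]

def compile_voices(notes, keep_voices, total_len):
    """Merge the note-level parts judged to belong to one player."""
    out = np.zeros(total_len)
    for n in notes:
        if n["voice"] in keep_voices:
            end = n["onset"] + len(n["audio"])
            out[n["onset"]:end] += n["audio"]
    return out

# Keep only the notes assigned to voice 0, e.g. the first trombone.
trombone_1 = compile_voices(notes, keep_voices={0}, total_len=4000)
```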

Well, I was just referring to drums; for me this is not so different. I am aiming at a small band of four to at most seven people. I understand that there are limits to what you intend to do, but that applies to me as a user as well. I unmix existing guitar instrumentals to get a decent backing track, and such a band has at least three guitarists:

  • a bass guitar player (already available),
  • a lead guitar player,
  • a rhythm guitar player.
Sometimes the rhythm and the lead guitarist are panned to different areas of the stereo image, and then extracting one of them is no problem even without SL (see the sketch below). The hard cases are where they are close together or, as in mono, exactly identical.
It is clear to me that you cannot cover all types of guitars, as there are too many, but a melody-playing instrument and a strumming instrument… Or perhaps a way to automatically create a user-defined layer for a user-defined instrument following some specific rules?
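As an aside, here is a minimal sketch of that pan-based extraction, assuming the lead sits mostly right and the rhythm mostly left in the stereo image. The file names and the bleed factor are made-up examples, and soundfile is just one convenient WAV reader:

```python
import numpy as np
import soundfile as sf  # any WAV reader/writer would do

# Load a stereo instrumental (hypothetical file name).
audio, sr = sf.read("instrumental.wav")  # shape: (samples, 2)
left, right = audio[:, 0], audio[:, 1]

# Mid/side view: hard-panned guitars dominate the side signal,
# while centered content (bass, kick, a mono lead) lives in the mid.
mid = 0.5 * (left + right)
side = 0.5 * (left - right)

# If the lead is panned right and the rhythm left, weighted channel
# differences already give rough single-guitar estimates. The 0.8
# bleed factor is a tunable guess, not a derived constant.
lead_estimate = right - 0.8 * left
rhythm_estimate = left - 0.8 * right

sf.write("lead_estimate.wav", lead_estimate, sr)
```

This is exactly why the mono case is hard: when both guitars occupy the same point in the stereo image, there is no channel difference left to exploit, and a model is the only option.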

@JohanDeGreyze

That’s my point. The way these pre-trained models for stems (drums, bass, vocals, guitars) were implemented was the wrong approach to source separation and the wrong direction for SpectraLayers to be headed in.

I agree that a new set of datasets/dictionaries/libraries of pre-trained models should be built from sources rather than stems: a source being fundamental + overtones + harmonics, with models then trained on that definition to unmix.
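To make that definition concrete, here is a minimal sketch of a per-source harmonic template: the set of FFT bins where a source with a given fundamental is expected to put energy, which a mask-based separator could use to assign bins per source instead of per stem. The function name, FFT size, and harmonic count are illustrative assumptions only:

```python
import numpy as np

def harmonic_template(f0, sr=44100, n_harmonics=12, n_fft=4096):
    """Return the FFT-bin indices covered by a source defined as
    fundamental f0 plus its overtone series, up to the Nyquist limit."""
    bin_hz = sr / n_fft
    bins = []
    for h in range(1, n_harmonics + 1):
        freq = h * f0
        if freq >= sr / 2:
            break
        bins.append(int(round(freq / bin_hz)))
    return np.array(bins)

# Two simultaneous guitar notes become two distinct sources: an A3
# (220 Hz) lead note and an E2 (82.41 Hz) rhythm note each get their
# own harmonic template instead of sharing one "guitar" stem.
print(harmonic_template(220.0)[:5])
print(harmonic_template(82.41)[:5])
```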