SL12 Pro for Splitting Lead/Rhythm Guitar / Advice and Sourcing Video Tutorials

Considering buying SL12 Pro (macOS), and I own Logic Pro. My main goal is to separate lead/rhythm guitars and lead/background vocals from each other, which Logic Pro doesn’t do. I haven’t found a stem separator that does a good enough job with these types of splits, so I hoped SL12 Pro would let me fine-tune without too much hassle once I’ve learned the ropes.

Surprised there are no in-depth, from-the-top video tutorials put out by Steinberg for SL12 Pro. Some have pointed to paid third-party video options. It’s not an inexpensive app, and before purchasing I want to understand how well I can achieve my goals; if having to scavenge for bits and pieces of info on how to use this thing is what I’m in for, then no thanks. The manual is surprisingly brief.

If anyone knows of any good free video options or perhaps better, can advise as to whether I’m barking up the wrong tree with SL12 and my needs, please share. Maybe a better option exists elsewhere for my use case? Thanks

AFAIK, SpectraLayers Pro 12 doesn’t even suggest it can separate lead and rhythm guitars. It does separate guitars from other instruments (though I’ve found that, in practice, it may lump other instruments that are processed like guitars, say a distorted synth, in with the guitars layer). Maybe the learn-instrument feature could be used in this context to try to do more than Unmix Song does on its own?

It does nominally separate lead and background vocals, but my limited attempts on that front were pretty unsuccessful, with vocal harmonies ending up in the lead vocal layer. My suspicion is that it mostly relies on the background vocals’ ambience and/or EQ differing from the lead vocal’s to make the distinction, and maybe they were too close to each other in the few cases I attempted.

There is a trial version, though, so it’s best to give that a try to see whether it achieves the results you’re targeting and/or is sufficiently useful in other areas to be worth the purchase.

I have had zero success with this in SLP

Agreed, I have had no success when attempting to separate guitar and/or keys layers…unmixing just isn’t there yet…we’ve talked a lot about regeneration for that on this forum

Heck, I have a lot of material where the drummer is using a Roland Octapad, and the keyboard sounds and drum samples in some sound sets remain unmixed after running Unmix Song/Drums…we end up with an “other” layer and a massive amount of manual unmixing to attempt…emphasis upon attempt because, really, regeneration is the only way I foresee these types of musical layering yielding clean stems.

We will see…

yep, @chase_g, as @rickpaul suggests, put some effort into the trial and investigate

Appreciate your feedback @rickpaul and @ctreitzell, any suggestions on video tutorials for someone who doesn’t have the time to pore through the manual or learn through osmosis? Thx

All due respect, you must put the time in to understand what you are attempting to do.

The only way you’ll be able to separate the lead guitar from other guitars depends on panning. If the mix has the lead guitar in the centre while other guitars are panned left and right, you can use a free VST plugin like Voxengo MSED to separate the mono/centre (sum) signal from the stereo component of the mix (sides/difference). You don’t need SpectraLayers (or another unmixing app) until you have audio files containing the panning-separated guitar parts as described above. Then you can unmix each of those files separately to derive the guitar stems.
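If you want to do this split offline rather than with a plugin, the sum/difference math itself is simple. Here’s a rough Python sketch (just an illustration; the file names are placeholders, and it writes the side signal as a plain mono file rather than the stereo difference file a plugin would give you):

```python
# Rough sketch of a mid/side (sum/difference) split, done offline
# with numpy + soundfile. File names are placeholders.
import numpy as np
import soundfile as sf

audio, sr = sf.read("full_mix.wav")   # shape (samples, 2) for a stereo file
left, right = audio[:, 0], audio[:, 1]

mid = (left + right) / 2.0    # sum: centre-panned content (e.g. lead guitar, vocals, bass)
side = (left - right) / 2.0   # difference: hard-panned content (e.g. rhythm guitars, width fx)

# Two mono files you can then unmix separately.
sf.write("mix_sum_mono.wav", mid, sr)
sf.write("mix_difference.wav", side, sr)
```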

Until 3rd parties spend a lot of time and money developing models which can separate multiple guitars, your needs cannot be met by unmixing models.
Development of an unmixing model requires lots of final mixes plus the full original multitracks used in those mixes. That’s the only way the model can reverse-engineer “before and after”, which is what you want it to do.
SpectraLayers uses imported models made by 3rd parties.

+1000

reading spectral transforms is a learned skill, so hopefully you already know how to do that; if not, I suggest studying up before you activate the trial

…adjusting the spectrogram display (focus/FFT size) so you can see the audio you need to correlate with what you’re hearing takes reasonable effort in and of itself
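…for a feel of that trade-off outside SLP: a bigger FFT gives finer frequency bins but coarser timing, and vice versa. a rough scipy sketch (the numbers are just examples):

```python
# Illustrates the FFT-size trade-off a spectrogram display makes:
# larger windows -> finer frequency resolution, coarser time resolution.
import numpy as np
from scipy.signal import stft

sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)   # one second of A440

for nperseg in (512, 4096):
    f, frames, _ = stft(x, fs=sr, nperseg=nperseg)
    print(f"FFT {nperseg}: {f[1] - f[0]:.1f} Hz per bin, "
          f"{(frames[1] - frames[0]) * 1000:.1f} ms per frame")
```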

beyond that, developing techniques to target selections and do whatever you want with those selections is a puzzle until it’s not…when it is no longer some form of magic is when you have learned…you have to put in the time…or just seeing if running the modules works for you is also an option

without testing a trial, you won’t be able to explain how much you grasp or do not grasp

this SLP manual is frankly rather sparse.

You can watch the Steinberg video series on SL…when doing so, ignore the version-specific video titles…you can learn a lot there

yet don’t be fooled by whatever separations you see in those videos. If you specifically want to split guitar/keys/strings layers from music, SLP is not that capable, as pointed out. You need to test that for yourself; running modules is something any user can do quite easily…you just open some audio and run modules on it. Modules are an “automated” task. Manual separation is a very time-consuming affair which requires understanding the SLP tools and concepts.

@digitaldiggo knows his audio!

I didn’t mean to sound rude or brutal - it’s hard to describe complex tasks without using lots of text, particularly if the user lacks direct experience. SL is not a simple tool, and all of the available spectral editing apps (not just SL) require some knowledge of FFT-based workflows for good results.

Prior to AI-based stem exports, it wasn’t possible to do many of the tasks we now take for granted. This is why I refer to using tried-and-true techniques to help prepare your audio for optimal results when you finally generate the guitar stems.
If your audio is a typical rock guitar-based track, you can use linear-phase EQ in combination with Mid (Sum) / Side (Difference) processing to create two files: a Sum (mono) file and a Difference (stereo) file. The Sum file should contain more of the lead guitar; the Difference file should contain more of the rhythm guitars.

Best to try to unmix the guitars from the mono (sum) export first. You can then mix this unmixed guitar file with the original mix. To increase the centre guitar, add some of the exported file to the mix; to decrease it, flip the phase. Adjust the level relative to the mix to obtain the desired centre guitar level.
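In code terms, that add/flip/level step is just a gain and a sum. A rough sketch (the names are placeholders, and it assumes the unmixed render is sample-aligned with the mix at the same sample rate):

```python
# Rough sketch: adjust the centre guitar by summing the unmixed guitar
# render back into the original mix at a chosen gain. Positive gain
# boosts it; negative gain (a polarity flip) attenuates it.
import soundfile as sf

mix, sr = sf.read("original_mix.wav")                # stereo, shape (samples, 2)
guitar, _ = sf.read("unmixed_centre_guitar.wav")     # mono render from the sum file

n = min(len(mix), len(guitar))
gain = -0.5   # start small; -1.0 attempts full cancellation

out = mix[:n].copy()
out[:, 0] += gain * guitar[:n]   # feed the render equally into both channels
out[:, 1] += gain * guitar[:n]
sf.write("mix_centre_adjusted.wav", out, sr)
```

The same add/flip/level trick applies later when you mix an unmixed side render back against the original track.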

Hopefully that method improves the relative volume between the centre guitar and side guitars.

If you still need to adjust the side guitars after that, you can use the side/difference export, but bear in mind you will also change the original stereo soundstage.
NOTE: SpectraLayers isn’t very good at unmixing stereo Difference files - I get best results with LaLaL.ai.
The unmixed side channel won’t be perfect stereo difference data after unmixing. However, it will give you a pretty good rendering of the side guitars, which you can then mix with the original track (or subtract from it by inverting phase) while adjusting relative volume. Even though the overall stereo field will change, you should be able to adjust the side guitars to some extent (maybe enough to achieve your goal) without badly damaging the mix.
The key message here is to not expect miracles and to count small wins as wins - sometimes a very small adjustment is enough to save the day. Do as little damage as possible to the original file to achieve your result. Minimize harm every step of the way.
This is one reason why experience really helps, but it’s the kind of experience you have to learn the hard way (i.e. by doing it a lot) - that’s why videos about complex workflows like this are rare.

As others have indicated, SpectraLayers is a complex beast, and I certainly don’t pretend to be an expert on it, especially when it comes to any complex editing.

My sense is that some of the others who’ve responded have some good ideas on at least trying to work the lead/rhythm guitars scenario, whether or not SpectraLayers comes into play and to what degree if so.

The Unmix Song thing (i.e. the module that does the first level of stem separation in SpectraLayers Pro) is really pretty straightforward. The Unmix Instrument module, which lets you “train” (to a degree) SpectraLayers to separate out an instrument that isn’t covered by Unmix Song, could be something to try if the lead guitar and rhythm guitar sounds are sufficiently different (perhaps it could help if the rhythm guitars were acoustic and you had a distorted lead guitar???), probably operating on the guitars stem rather than the full mix (if it will be helpful at all for this task). You can find basic videos for both of those on the page with the SpectraLayers 12 new features:

The feature that can (possibly) separate lead and backing vocals is called Unmix Chorus. Once you’ve separated the vocals from the rest of the stems, you run the Unmix Chorus on the vocals layer. Here’s a video that demonstrates that (and it actually works pretty well in the video, unlike in my attempts – the link goes straight to that section):

As for learning more about SpectraLayers in general, Steinberg has a page with some tutorial-type videos at:

I’ve also gone through some courses on Groove3.com (I have their all-access pass, and their videos are often useful for coming up to speed with music-related applications, so, in my book, it’s worth the something like $99 a year for all their video and non-video courses plus access to lots of online Hal Leonard sheet music-type books). The courses that turn up in relation to SpectraLayers (not all of which are relevant) are at:

https://www.groove3.com/search?filter=&keyword=spectralayers

The SpectraLayers Explained and SpectraLayers 12 Update Explained courses would be especially relevant, and the SpectraLayers 11 Update Explained would be the one most likely to go into detail on Unmix Chorus.

Good luck.

I’ve never had any good results unmixing multiple vocals with the Unmix Chorus module…that said, I have not tried Unmix Chorus a lot, just initial testing while trialling SLP 12…same with Unmix Instrument…I was unable to get Unmix Instrument to do anything worthwhile for me

manual selection work is the only thing I’ve felt I can rely upon in SLP…and that is usually after running Unmix Noisy Speech…I do not do much music unmixing compared to performing NR on location recordings…Unmix Noisy Speech as a starting point is a game changer for my film work

This is fascinating stuff, appreciate all the feedback very much! Pointing to videos is especially helpful thanks.

It sounds like the consensus is that other apps are preferred to SL12 Pro for automated unmixing when the focus is multi-guitar and multi-vocal separation where there isn’t much to differentiate the parts tonally. But SL also sounds hard to beat for manual fine-tuning. I’ll still give some of your suggested unmixing algos in SL a shot before manual editing.

One area touched on by @rickpaul is the idea of manually editing, but doing it in a way that trains SL. This perked up my ears:

When you initially define an instrument/vocal for separation by selecting a section, it looks like it’s possible to add/subtract from an instrument’s sonic definition. I haven’t figured this out yet in the trial. How well does this tend to work vs. having to just manually separate? For example, Malcolm and Angus Young have a very similar guitar tone. But if in one song I can manually build a definition of Malcolm’s guitar and apply it to other songs (where adding/subtracting to/from the guitar definition will likely still be needed), it could save a ton of work separating whole albums.

Using your advice, I’ll likely start with another unmixer like lalal.ai or Moises first. Act 1: separate the guitar tracks from the greater mix. Act 2: attempt to split the guitars from each other. Act 3: manual SL editing until the guitars are sufficiently split. Hoping I can avoid summing to mono, but it is what it is. Also hoping the use of different tools doesn’t cause audio degradation along the way.

Just to be clear, while I mentioned the Unmix Instrument capability (based on remembering it from the SpectraLayers 12 new features), I haven’t even tried it to date. (I see, however, that some others here have mentioned experiences with it.)

Also, when I mentioned above that I’m no SpectraLayers expert, that may have been “British understatement” (albeit from an American). :rofl: I’m decidedly a newbie with SpectraLayers, and the whole spectral editing thing is something that is still pretty confusing to me, and I’ve had only very minor successes with it (and not in an unmixing context). But I also haven’t needed to use SpectraLayers all that much for my typical work (I’m mainly recording my own songs in Cubase).

I picked up SpectraLayers Pro 11 on a past annual sale (as a crossgrade from RX Standard, where the combination of the crossgrade price and the sale discount made it as close to an impulse buy as it would ever be likely to get), mainly because I was interested in seeing if I could use it to improve the audio in live piano/vocal performance videos made in a noisy bar environment. In that environment, it was frequently the case that the volume of conversations could overwhelm the actual musical performance in the cellphone camera videos. Other combinations of tools I’d tried (including some of DaVinci Resolve’s noise reduction and RX tools) either didn’t help much or too adversely affected the musical content. However, I think I tried a few things in SpectraLayers, including the Unmix Crowd feature (don’t recall if that was V11 or V12), with limited (or no) success, then I just got involved in various other projects and haven’t gotten back to experimenting on that front.

When SpectraLayers 12 came along, I thought some of the song unmixing enhancements (especially drums and “chorus”) might be useful when I get to trying to do some restoration work on some old song demos (mostly digitized from cassette “masters”). But I haven’t gotten around to that yet, and my few song unmix experiments have been with some well-known artists’ recordings, and really just to try and help me get a better idea of what is going on in the arrangements and sounds at a deeper level than my ears can get from just the full mix. That is where I learned, for example, that the unmix chorus (at least on the 3 recordings I tried) was still keeping the vocal harmonies with the lead vocal, distorted synths were sometimes in the guitar tracks, pianos often weren’t in the piano layer, etc.

But there is good information out there in videos and such to at least get an idea on how to use these features, whether they end up working well in a specific context or not.

Training? That is just using Unmix Instrument…which was new in SLP 12

as I said, I tried it a fair bit; it did not function in my tests…real or synthetic instruments, I couldn’t get Unmix Instrument to learn which instrument was which…I did log that testing here on the forum…not sure where

we’ve talked about the Unmix Multiple Voices module as well, which has been around since I started using SLP with SL10. I have never gotten that module to identify and separate the different contributors’ speech; even manual separation is hard enough when two or more humans are talking simultaneously.

yeah, no, I couldn’t get it to identify different instruments

I have a lot of hours into SLP over the past 2 years…daily use

and I don’t bother with trying to unmix music in SLP

the best music stuff I have done in SLP is on my own personally produced music; I don’t fool around with major release material.

1 - separate (98%) vocals from some tracks: SLP works well (Unmix Song)

2 - separate drums from music: rock music, yo! “real” instruments and no synths. I’m talking vocals (harmonies), 5-pc drum kit, bass, two electric guitars. Unmix Song.

SLP 12 drum separation was better than SLP 11

Vocals separated quite well (I did not attempt to separate the harmonies)

Bass: in this material the fretless bass player is the star, and bits of bass were in all the other layers. Separating out I-IV-V root notes maybe works alright, but the effort to separate out the fretless bass in my tracks would have taken months manually. I just can’t see the point at this time. I am busy working on something other than music; even so, I still think the separation is not there yet for music.

Guitars: they are married; any reverb and fx go to the “other” layer, and bits of electric guitar end up in both the guitar and other layers. Separating them out seems a no-go. I’d rather re-record.

I also tested Unmix Drums…well, again, this works with a vanilla 4- or 5-pc kit to some degree…but really, it’s just not capable yet

So I do not focus upon unmixing music, as I have said on this forum since I got here.

I need SLP for dialog/ ambience on location NR and for that task, SLP is fantastic.

and they were for me as well…until I tried to carry out the claims of those videos. Hey, like @digitaldiggo said, you just can’t know if SLP is going to do what you want until you get your hands dirty…watching “how to” videos is certainly how I started, but really, experience has been the only thing that delivered results for me. I’d start on something and think I could be done in a couple of days. Two or three weeks later, I’d finish. Then I’d come back to those earlier jobs a year later and it was time to start over! Yes, I had to redo many jobs, cuz I learned so much.

But again, I was working every day, all day…it was a lot of teething.

@chase_g MVSEP - Music & Voice Separation has a Lead Guitar/Rhythm Guitar splitting model.

yes, thanks @Sophus. I tried it the other day and was surprised how well it did splitting dual electric guitars. I’m planning to test it against Moises and lalal.ai.

This is not like the training done when creating unmixing models, which requires hundreds or thousands of projects, and even more to do it really well across a variety of genres and time periods. By “projects”, I’m referring to examples of finished mixes plus the original unmixed stems, so the model can learn “before” and “after”, which it then reverse-engineers. The effective quality and range of the model directly depend on the projects used to create it.
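For anyone curious what “learning before and after” means in practice, here is a heavily simplified sketch of the common supervised setup (a generic illustration, not SpectraLayers’ or any specific model’s actual pipeline; the toy model and data are stand-ins):

```python
# Toy illustration of how source-separation models are trained on
# mix/stem pairs: the finished mix goes in, the original stems come out,
# and the loss pushes the model to reverse-engineer the stems.
# NOT a real architecture - real models (Demucs, MDX, etc.) are far larger.
import torch
import torch.nn as nn

N_STEMS = 4  # e.g. vocals, drums, bass, other

class ToySeparator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv1d(2, 2 * N_STEMS, kernel_size=1025, padding=512)

    def forward(self, mix):                       # mix: (batch, 2, samples)
        out = self.net(mix)
        return out.view(mix.shape[0], N_STEMS, 2, -1)

model = ToySeparator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One synthetic "project": stems whose sum is the finished mix.
stems = torch.randn(1, N_STEMS, 2, 44100)         # (batch, stems, channels, samples)
mix = stems.sum(dim=1)                            # the "after" the model sees

pred = model(mix)                                 # predicted "before" (the stems)
loss = nn.functional.l1_loss(pred, stems)
loss.backward()
opt.step()
print(f"loss: {loss.item():.4f}")
```

Scale that single synthetic example up to thousands of real projects across genres and eras and you get the quality/range dependence described above.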

The unmixing models in SL are created by 3rd parties, not by Steinberg. The models are freely available for download, which is how the SL developer obtains the models.

NOTE: Unmixing is not a spectral process and is unrelated to spectral editing. The advantages of using SL for unmixing stems are the bulk unmixing (i.e. multiple stems unmixed in sequence as a single pass) and the layering, which can then be spectrally edited.

@digitaldiggo I was probably misinterpreting the Unmix Instrument module, new for SL12 Pro, which allows you to create custom instruments for unmixing. I thought (apparently wrongly) that in addition to the up-to-10 seconds of source material selected initially, you could continue to add (or perhaps subtract) source material to further define the custom instrument. Now that I look at the manual, it doesn’t look like you can modify the instrument after the initial selection of up to 10 seconds. Maybe it will work this way in the future; that would sure be nice IMHO.

that function is a very, very long way off from resembling a typical unmixing model, which requires a lot of investment in training, using lots of sample projects (as I described in my previous post)

online service…hmmmm

are people feeding their IP into this? or are the majority working on major release music, I wonder?

Do you mean feeding their IP into this for training?

To my understanding, MVSep is part of a community which creates their own stem-splitting models, which can be time-consuming and costly for an individual because it can cost multiple hundreds or even thousands of dollars to rent the servers for training. Otherwise it wouldn’t be possible to do this in a timely manner. The community creates their own training sets as well.

Some models are even offered for free and can be used locally in tools like Ultimate Vocal Remover. Even SpectraLayers uses some of the same models. They are often available on sites like GitHub or Huggingface.

There is also a large document describing the newest developments, available here: Instrumental, vocal & other stems separation & mix/master guide - UVR/MDX/Demucs/GSEP & others - Google Docs