I have Spectralayers Pro 9.0.20. I am using the Layers → Unmix stems function.
The manual says “Each instrument has a Sensitivity parameter to adjust how the artificial intelligence perform the separation. The end result is a Non-destructive unmix if that option is checked. If it is unchecked, the sensitivity parameter affect each instrument without rebalancing the others to compensate.”
I can live with the singular nouns taking plural verbs, but I find this explanation almost meaningless. Could someone please provide a fuller and clearer explanation of what the slider does in both non-destructive and destructive modes? Thanks.
I’m not sure what it says about me, but I think I may actually understand what this is trying to say.
Unmixing is an attempt to separate a combined audio signal into components. Applied strictly, this means that if you decide to include some part of the audio in one of the separated layers, that partial signal should not also appear in any other layer.
However, since audio unmixing is an imperfect science, some layers may sound better if you’re not so “pure” in your unmixing approach.
For example, there may be parts of the kick drum and the bass audio signals which are so effectively intertwined that current algorithms can’t properly take them apart into layers. In those cases, you would want to allow some part of the original audio signal to appear in both unmixed layers, for bass and for drums.
The checkbox allows that “duplication” of some parts of the signal to be enabled or disabled, because sometimes one approach gives you a more natural end result and sometimes the other does. The checkbox gives you some control over that.
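The difference between strict separation and allowing duplication can be sketched numerically. This is purely illustrative, not Steinberg’s actual algorithm; the function, scores, and bin model here are all hypothetical stand-ins for whatever SpectraLayers does internally.

```python
# Hypothetical sketch: one time/frequency "bin" of mixed audio is assigned
# to a bass stem and a kick stem based on per-stem confidence scores.

def unmix_bin(energy, bass_score, kick_score, allow_overlap):
    """Return (bass_energy, kick_energy) for one bin.

    allow_overlap=False -> strict: the whole bin goes to whichever stem
                           scores higher, and the other stem gets nothing.
    allow_overlap=True  -> lenient: ambiguous content may be duplicated
                           into both stems in proportion to its scores.
    """
    if allow_overlap:
        return energy * bass_score, energy * kick_score
    return (energy, 0.0) if bass_score >= kick_score else (0.0, energy)

# An intertwined kick+bass bin: both scores are high.
print(unmix_bin(1.0, 0.8, 0.7, False))  # strict: (1.0, 0.0) -- the kick stem loses it
print(unmix_bin(1.0, 0.8, 0.7, True))   # lenient: (0.8, 0.7) -- both stems keep most of it
```

In the lenient case the two stems together carry more energy than the original bin, which is exactly the “duplication” the checkbox discussion is about.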
In practical terms I would probably start an unmixing project by getting the layers as good as possible with the pure separation approach.
Then I’d revisit each layer in a second listening round to see whether the less pure approach makes it sound even better.
The unmix algorithms have a range of different sounds they will perceive as a drum, a vocal, or whatever. My impression is that they use AI trained on a bunch of bass parts to learn what a bass sounds like. Either way, the algorithm has criteria for a yes/no decision: is this sound a bass or not? The Sensitivity control lets you adjust the dividing point between Yes and No. So if you find that your bass Layer also has some Kick on it, you might reduce the Sensitivity so it doesn’t include the Kick. Or increase the Sensitivity if you want it to categorize additional sound as also being part of the bass Layer.
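That “dividing point” idea can be sketched as a threshold on a confidence score. To be clear, SpectraLayers doesn’t expose its model, so everything below is a hypothetical illustration: the scores, the 0.5 baseline, and the linear mapping from Sensitivity to threshold are all assumptions made up for this example.

```python
# Conceptual sketch only: Sensitivity as a movable Yes/No threshold on
# per-bin "how bass-like is this?" confidence scores.

def assign_to_stem(confidences, sensitivity):
    """Return which bins get admitted into the stem.

    confidences: scores in [0, 1] -- how confident the model is that each
                 bin belongs to this instrument.
    sensitivity: shifts the dividing point; higher values let
                 lower-confidence bins into the stem.
    """
    threshold = 0.5 - sensitivity  # higher sensitivity -> lower bar to clear
    return [c >= threshold for c in confidences]

scores = [0.9, 0.6, 0.45, 0.2]        # e.g. bass hit, bass tail, kick bleed, hi-hat
print(assign_to_stem(scores, 0.0))    # neutral:  [True, True, False, False]
print(assign_to_stem(scores, 0.1))    # raised:   the kick bleed now gets in too
print(assign_to_stem(scores, -0.2))   # lowered:  only the most confident bin survives
```

This matches the bouncer analogy below: the scores are fixed by the model, and Sensitivity only moves the door policy.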
The Sensitivity Control is kind of like a bouncer at a nightclub that decides who gets to go inside.
Your explanation chimes with a Sound On Sound article I have just found, about using SL1 to isolate vocals in Cubase 11. The key part is “Essentially, positive Sensitivity values put more audio into the vocal stem but may include more fragments of other mix elements too, while negative Sensitivity settings are less likely to place other instruments in the vocal stem but can leave more vocal trace elements in the ‘everything else’ stem.”
Presumably that is only true if you have Non-destructive checked. If Non-destructive is unchecked (i.e. destructive mode), then anything removed from a layer (which is adjustable via that layer’s Sensitivity setting) is not assigned to another layer, and is simply removed altogether.
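If that reading is right, the two modes differ in where the rejected audio ends up. Here is a minimal numeric sketch of that interpretation; it is an assumption about SpectraLayers’ behavior inferred from the manual quote, not confirmed behavior, and the function and numbers are invented for illustration.

```python
# Hypothetical sketch: one bin's energy split between a target stem and
# "everything else", under the two checkbox modes.

def split_bin(original, target_share, rebalance):
    """Split one bin's energy.

    original:      energy of the mixed bin
    target_share:  fraction assigned to the target stem (set by Sensitivity)
    rebalance:     True  -> non-destructive: the remainder goes to the other
                            stems, so all stems still sum to the original
                   False -> destructive: the remainder is simply discarded
    """
    target = original * target_share
    others = original - target if rebalance else 0.0
    return target, others

print(split_bin(8.0, 0.75, True))   # (6.0, 2.0) -- stems sum back to 8.0
print(split_bin(8.0, 0.75, False))  # (6.0, 0.0) -- 2.0 of the signal is gone
```

In non-destructive mode the stems always recombine into the original mix, which is consistent with the Sound On Sound description of audio moving between the vocal stem and the “everything else” stem rather than disappearing.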