Unmix feature

Why spend months on expensive machine learning and cutting-edge (mind-blowing!) AI when you can just use somebody else’s open-source work?

Looks like somebody took Puma0382 at their word, literally!

Looks like this is what SpectraLayers has repackaged as its Unmix feature, hence why extra instruments can’t be added, or at least can’t be added by Robin or the SB devs.

Makes answers like the one in this thread a little deceptive, IMO.

I’d rather have Spleeter with a GUI than without one. And I don’t mind paying someone to make the GUI.

Hi dr, let me clarify.
I happen to know the Deezer team personally (we’re both based in Paris). While most of SL’s unmixing is based on their initial training, the implementation in SL goes beyond the public Spleeter package: it actually uses zero lines of that code, removes several of its technical limitations (sample rate, channels, frequency range, length, bit depth…), and provides more optimized results. It was initially implemented in SL7, before January. We’ve reviewed together how the training could be improved (or not), and it became apparent that guitar was difficult to discriminate, for the reasons I mentioned in the other post, as well as the flute.
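For anyone curious what the public Spleeter package being discussed here actually looks like in use: it’s driven from the command line. Here’s a minimal sketch, assuming `pip install spleeter` has been done (the `build_spleeter_cmd` helper and file names are just for illustration; the pretrained models only come in 2-, 4-, and 5-stem variants, which is part of why you can’t simply add extra instruments):

```python
# Sketch of invoking the open-source Spleeter CLI (assumes `pip install spleeter`).
# The 4-stem model separates vocals, drums, bass, and "other".
import shutil
import subprocess

def build_spleeter_cmd(input_file: str, output_dir: str, stems: int = 4) -> list[str]:
    """Build a spleeter CLI invocation for a given pretrained stem count."""
    if stems not in (2, 4, 5):
        raise ValueError("public Spleeter models support only 2, 4, or 5 stems")
    return [
        "spleeter", "separate",
        "-p", f"spleeter:{stems}stems",  # select the pretrained model
        "-o", output_dir,                # stems land in output_dir/<input name>/
        input_file,
    ]

cmd = build_spleeter_cmd("song.wav", "stems_out")
print(" ".join(cmd))

# Only actually run it if spleeter is installed on this machine:
if shutil.which("spleeter"):
    subprocess.run(cmd, check=True)
```

This is exactly the kind of fixed pipeline (preset models, fixed processing sample rate, and so on) that the in-house SL implementation described above reportedly moves beyond.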
As for the other unmixing and restoration processes, they were trained entirely in-house.

Most people probably couldn’t afford to hire Audionamix to do stem separation for them. Getting any kind of stem separation ability on an audio editor that costs only a few hundred dollars seems like a pretty good deal to me.