Hello,
After reading your comment, I ran a few tests this morning, selecting 1 minute, then 2, then 3 minutes, etc.
The application worked perfectly on a 9-minute track, provided I selected the entire track. However, when I tried again without selecting the entire track, it crashed.
It’s quite strange, as in previous versions it wasn’t necessary to use the selection when you wanted to de-mix an entire track.
It could be useful to check CPU and GPU usage on your Mac while running “Unmix Song”. If the process really takes hours, one possible explanation is that SpectraLayers isn’t engaging the GPU/ANE at all and is falling back to CPU-only.
If you’re not sure how to check this:
- Open Activity Monitor (Applications > Utilities).
- In the Window menu, select “GPU History” – a small window will appear showing GPU load.
- Keep Activity Monitor itself open on the CPU tab to see how much of the CPU is being used.
Run “Unmix Song” and watch both graphs. If CPU is pegged close to 100% while GPU remains almost idle, then SpectraLayers is not using the GPU on your system, which would explain the extremely long processing times.
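If you'd rather log the CPU side from a terminal while the process runs, here is a minimal standard-library Python sketch. It only reports CPU load averages, not GPU activity (the standard library cannot see the GPU, so GPU History in Activity Monitor is still needed for that); the function name and sampling defaults are my own choices for illustration:

```python
import os
import time

def log_cpu_load(samples=3, interval=1.0):
    """Print the 1-minute load average a few times while Unmix Song runs.

    A sustained load near os.cpu_count() suggests the process is
    CPU-bound; GPU activity still has to be read from Activity
    Monitor's GPU History window.
    """
    cores = os.cpu_count()
    readings = []
    for _ in range(samples):
        load1, _, _ = os.getloadavg()  # 1-, 5-, 15-minute averages
        readings.append(load1)
        print(f"1-min load: {load1:.2f} (machine has {cores} cores)")
        time.sleep(interval)
    return readings

if __name__ == "__main__":
    log_cpu_load()
```

A load average that hovers near the core count while the GPU History graph stays flat points the same way as the Activity Monitor check above: the work is landing on the CPU.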
I’ve also found a thread worth reading:
My understanding is that the GPU is not used on the Mac; that’s a PC-only thing. I recall Robin blaming Apple for this despite all of the competitive unmixing programs being able to use the GPU.
To me this sounds like the GPU issue on macOS is now solved in SLP 12?
No, the GPU is ONLY used on Windows; the Mac GPU remains UNUSED in SLP 12. The GPU IS used by all the other competing unmix programs…
Read ROBIN'S POST blaming Apple (again, if Apple are at fault, it is puzzling how everyone else can use it).
That was written last year, long before SLP 12 came out.
Why can’t you just have a look at the Activity Monitor - then we know more?
How many times does the same point need to be made for it to register? GPU SUPPORT IS ONLY ON WINDOWS; there is NO GPU support on the MAC… If you used the search function, you would see more posts about this.
I am tuning out here; I don't need that kind of aggressive undertone when I'm trying to help.
If GPU support had been implemented in SL12, I believe Steinberg would have said so when it was released. It would have been somewhat sensational and a very strong marketing point.
So Robin’s statement from November ’24 has, I’m afraid, the same validity now as then.
Your answer has made the most sense so far. Yes, my card has 12 GB, so you're most likely correct: if your card is hitting 17 GB of usage, mine is probably just crashing when it runs out of memory.
I have it set to “auto” in the preferences instead of just CPU or GPU, so I wish it would do what you say and switch to CPU when the GPU gets maxed out.
As I said before, CPU-only speed isn't fantastic, but 7 1/2 minutes isn't the end of the world, either.
I did take a look at the RTX A6000! Man - that looks like a heck of a card! Are you getting really fast numbers when unmixing with that card enabled?
When my GPU hits its 8 GB memory limit, it starts using shared memory from the system. With everything maxed out it uses 8 GB + 13 GB. This could be some kind of memory allocation issue, since my system doesn't crash.
Thank you!
So that settles the question of whether the GPU is currently used in SLP 12.
From what I understand, the difficulty with GPU acceleration on Apple Silicon is less about Apple “blocking” it and more about the maturity of the developer tools. Apple provides Core ML and Metal Performance Shaders, and their own apps (Final Cut, Logic) are well optimized for those.
For third-party developers it seems harder: most AI models are created in PyTorch or TensorFlow, and converting them to Core ML doesn’t always work because not all operations are supported. Even when it does, performance and quality can vary. On Windows, NVIDIA’s CUDA/TensorRT pipeline is much more mature, so developers get GPU acceleration almost “for free”.
So my impression is: apps like Resolve have invested heavily to adapt their models to Apple’s frameworks, while others (like SpectraLayers) may not be able to do that yet. Until the Core ML/Metal toolchain improves, a lot of these processes will stay CPU-bound on the Mac.
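The "not all operations are supported" point can be illustrated with a small hypothetical sketch. `SUPPORTED_OPS`, `check_convertible`, and the op names here are all invented for illustration; real converters such as coremltools report unsupported ops in their own way:

```python
# Hypothetical sketch of why a PyTorch -> Core ML conversion can fail.
# SUPPORTED_OPS is an invented stand-in for a converter's op coverage.
SUPPORTED_OPS = {"conv2d", "relu", "linear", "softmax"}

def check_convertible(model_ops):
    """Return the (sorted) ops a hypothetical converter cannot map."""
    return sorted(set(model_ops) - SUPPORTED_OPS)

# A source-separation model typically mixes common ops with more
# exotic ones (STFTs, complex arithmetic, custom layers):
missing = check_convertible(["conv2d", "relu", "stft", "complex_mul"])
# A single unmappable op can be enough to block the whole model,
# leaving the process CPU-bound.
```

The practical upshot is that even one exotic layer in an otherwise ordinary network can prevent the whole model from running on Apple's accelerated frameworks.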
Thank you, Robert.
this is a downgrade, no problems with SpectraLayers 12.0.01
it took 1:50 to extract a 4 min song, now it crashes…
i want my old version back!!!
windows 11
32 GB DDR5 RAM
intel core ultra 7 265K 20C 20T
GeForce RTX 5060 Ti python III 16 GB
yes, it just crashes and shuts down, my NVIDIA
The bigger question is the de-railing of orderly SL development by the incorporation of unsupported (and continually evolving) third-party unmixing models. Demixing is not spectral editing, which should be the core mission, particularly with regard to stability.
yes same here
The question has been answered several times, yet you chose to argue against what is common and public knowledge. If using the Apple GPU for unmixing (and other tasks) were so difficult, why can every other competitive product, including the free Demucs GUI, use the GPU and complete the process in one twentieth of the time it takes for the same track in SL12?
Robin wasted development time pandering to the handful of Windows users who prefer to run an ARM-based machine. That time would have been better spent addressing the disparity between the Mac and PC versions first. While the Mac installed user base is smaller than the x86 Windows user base, its relative share of the audio and video market is much larger. Windows on ARM is, at best, a folly.
Oh, this I did not know
I am definitely de-mixing multiple voices manually in SL for rebalancing, and spectrogram editing is how I'm doing that. I assumed the unmixing was AI models carrying out the same tasks, just much faster.

