I installed the 12.0.30 update via the SDA. What can I say…?
It’s exactly the same disaster as before the update!
Unmixing the song with all layers, saxophone and brass first, in high quality, crashes after about 1 minute 45 seconds!
Are you serious?
@Cubase-Erklaerbaer What’s your graphics card, how much VRAM does it have, and how much RAM does your computer have?
Can you check the total GPU memory (in GB), and monitor the total GPU memory usage before unmixing and during unmixing, right before it crashes?
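If it helps anyone reading along, one way to log VRAM usage on an NVIDIA card while the unmix runs is to poll `nvidia-smi`. A rough sketch (assuming the standard `--query-gpu` CSV output format; the function names and the one-second interval are my own, not anything from SpectraLayers):

```python
import subprocess
import time

def parse_query(line):
    """Parse one 'used, total' CSV line from nvidia-smi (values in MiB)."""
    used, total = (int(v.strip()) for v in line.strip().split(","))
    return used, total

def gpu_memory_mib():
    """Query used/total dedicated VRAM (MiB) of GPU 0 via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True)
    return parse_query(out.splitlines()[0])

def monitor(interval_s=1.0):
    """Print used/total VRAM once per interval until interrupted (Ctrl+C)."""
    while True:
        used, total = gpu_memory_mib()
        print(f"VRAM: {used} / {total} MiB")
        time.sleep(interval_s)

# Run monitor() in a terminal alongside the unmix to see how close
# the process gets to the dedicated VRAM limit before it crashes.
```

Task Manager’s GPU tab on Windows shows the same dedicated-vs-shared split graphically, if you prefer not to use the command line.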
Yes, I know - running out of RAM causes the problem. It is obvious! Previous versions of SL did not do this. SL should be able to limit the amount of RAM used when nearing the limit. 8 GB of video RAM is supposed to be sufficient…
System Requirements
Windows
Windows 10 (21H2 or higher), Windows 11 (21H2 or higher)
Intel® Core™ (5th Generation or higher), AMD Ryzen™ (or higher), Qualcomm Snapdragon™ X
8 GB RAM (16 GB recommended)
8 GB of free hard disk space (for temporary files)
DirectX 11 compatible graphics card (DirectX 12 with 8 GB VRAM or more recommended for AI processing)
@Cubase-Erklaerbaer You mean the issue of the high VRAM usage? Or the crashing issue when it switches over to CPU rendering due to overspilling? The release notes are a little bit vague.
I just tested the new version myself. Song unmixing with all stems in High Quality takes 17 GB of VRAM for me. A four-stem unmix of Vocals, Bass, Drums and Other should fit into 10 GB of VRAM.
Here is a picture of the release notes with the corresponding detail:
I don’t really care where the problem lies anymore!
I paid for the update to version 12 and I want a working software! I don’t want to waste my time solving the problem, that’s the developers’ job!
I expect an update with a solution.
Yeah, that’s what I read as well. I recommend reducing the number of stems so you stay under 12 GB of VRAM usage, and then running a second unmix pass on the Other stem to get the remaining stems that were left out in the first pass.
@Sophus This isn’t the solution! I don’t expect a workaround. I expect SL to be able to create all stems in high quality in one pass. Nothing more, nothing less. This was the case in previous versions (before 12.0.20)!
@Cubase-Erklaerbaer @Hooby2 Thanks for reporting, indeed it seems your driver pushes the dedicated VRAM to its hard limit. In a normal situation, the driver is supposed to better balance the load between the VRAM and shared RAM. For instance, my RTX 3090 with 24 GB always leaves 6 GB free, preferring to automatically offload to shared RAM before running out of dedicated VRAM:
(graph while unmixing with all stems and quality maxed)
SpectraLayers 12.0.30 considers the total accessible GPU memory (dedicated VRAM + shared RAM) to decide how many AI models to load in memory. However, it seems that some driver versions don’t fully offload dynamically and hit the hard dedicated VRAM limit, so the next build could have a parameter so that SpectraLayers never goes above the dedicated VRAM, instead of the full dedicated + shared space…
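The capping option described above could look roughly like this (a sketch, not SpectraLayers code; the function name, parameters, and per-model memory figure are all made up for illustration):

```python
def model_budget(dedicated_vram_gb, shared_ram_gb,
                 per_model_gb, cap_to_dedicated):
    """How many AI models fit in the available GPU memory budget.

    cap_to_dedicated mirrors the proposed parameter: when True, only
    dedicated VRAM counts toward the budget; when False, dedicated +
    shared memory is treated as one pool (the 12.0.30 behavior, which
    relies on the driver offloading to shared RAM dynamically).
    """
    if cap_to_dedicated:
        budget_gb = dedicated_vram_gb
    else:
        budget_gb = dedicated_vram_gb + shared_ram_gb
    return int(budget_gb // per_model_gb)

# Hypothetical 8 GB card with 16 GB of shared RAM, 2 GB per model:
print(model_budget(8, 16, 2, cap_to_dedicated=True))   # 4 models
print(model_budget(8, 16, 2, cap_to_dedicated=False))  # 12 models
```

With the cap enabled, fewer models load at once, so drivers that never offload to shared RAM can’t be pushed past the hard dedicated VRAM limit.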
@ondre Yes there is, see this point in the release notes: “Running SpectraLayers in the background no longer causes module processing to slow down (macOS only).”
Er, I recall you mentioning that in v12.0.20 too (but it didn’t make any difference). When are you going to implement GPU support? All your competitors manage to use GPUs without an issue…
@ondre As far as I know, RX and Acoustica don’t use GPU acceleration at all, and RipX only supports NVIDIA GPU acceleration on Windows, but not AMD, Intel or Snapdragon GPU acceleration… SpectraLayers is actually ahead of them all in that field, and some of its modules already use Metal acceleration on macOS.
@Robin_Lobel Thanks for the reply. I am using the very latest Nvidia driver 581.29 - and have tried most of the previous versions as well. They all behave the same.
Could you let NVIDIA know what is going on, in the hope they will fix it? I think a technical description/bug report from you would carry much more weight…