Not to send anyone down a rabbit hole, but

This morning, after restarting my PC, I launched my DAW, played back an already completed SL mixdown with all the various instruments added, and my GPU usage peaked at 17%. So it seems that if you switch from GPU to CPU and do not restart your PC, SLPro will keep using your GPU as if you were doing a mixdown, even when simply replaying any part of the mixdown.

I think this behavior needs improvement. SLPro shouldn't be taxing the graphics card after you have completed a mixdown. I can understand 17%, but not (as in my case) 50%+. As a comparison, RX has similar graphics and uses 10% GPU power during playback.

Robin, is there a reason the switch from GPU to CPU is not instantaneous? Can this be changed?

2 Likes

It is instantaneous: as soon as you choose CPU, all modules will use the CPU for processing. However, SL still uses the GPU for fast spectrogram display, which RX doesn't (try moving the spectrogram or zooming in/out in RX; it's pretty slow to refresh the display). In fact, RX's GPU usage should be 0%, since it doesn't use the GPU at all, so some other process on your system might be using it.
Likewise, if you don't move, play, or have a moving selection in SL, you should see usage close to 0%.

1 Like

I also have RX 11, and during playback it uses more GPU than SL 11; when both are stopped, I see almost 0% GPU usage! In comparison, RX 11 is quite laggy and you can see the display being built up, while SL 11 is much smoother but uses more GPU when zooming, etc.

@mr.roos If you’ve had other graphics cards previously in that PC, it might be worth using a utility like Display Driver Uninstaller to completely clean out all remnants of previous drivers before re-installing that latest one for your card.

As an aside, I have found that just having the Steinberg Activation Manager open uses between 15% and 30% GPU!

2 Likes

MrSoundman, yes, I've seen a video about the DDU app. Unfortunately, the link you posted requires a $10 monthly fee, so I'm not into that. Also, if I can download the latest driver from AMD and confirm via Device Manager that it is installed on the video card, and not an older driver, I don't see the concern. Do you disagree with this logic?

1 Like

DDU is freely downloadable (you can optionally make a donation), and you don't even have to install it if you use the portable version. You'd be amazed at the detritus that drivers leave behind.

I did find this, thanks, Mr.Soundman. I also read the 'run' notes. They say to use it if you are installing a different brand of card, which makes sense, and if you are having problems installing the newest drivers for your new card, which also makes sense. Since I am up and running right now, both the old and new video cards are AMD, and I had no problems installing the new driver (which is a very different driver from the old one: version 23.19.12 for the RX570 vs. 30.0.13018.2 for the WX7100 PRO), I really think I am in the clear without using the DDU app. I will keep an eye on it, though.

I also want to add that Robin seems right on target when he says to use a card with at least 6 GB. I say this because while running the Unmix process on 'High', my GPU memory usage is 3.9 GB out of 8 GB. But when I switch to the Extreme setting, oddly enough, the GPU usage shows 3.4 GB out of 8 GB. Does this represent some kind of 'throttle down'? It begs the question: why isn't the full 8 GB being used? What am I missing?
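For what it's worth, not filling all 8 GB is usually by design rather than throttling: GPU applications typically size their working set to fit a VRAM budget with headroom left for the driver, the display, and other apps. This is only a guess at the behavior, but a minimal sketch of that kind of sizing logic (the function, names, and numbers are illustrative assumptions, not SpectraLayers' actual code) could look like:

```python
def pick_batch_size(bytes_per_chunk: int, vram_bytes: int, headroom: float = 0.25) -> int:
    """Pick the largest power-of-two number of chunks that fits in VRAM,
    leaving a safety margin for the driver and display buffers.
    Purely illustrative; not SpectraLayers' actual allocation logic."""
    budget = int(vram_bytes * (1.0 - headroom))
    batch = 1
    # Stop doubling as soon as the next doubling would exceed the budget.
    while batch * 2 * bytes_per_chunk <= budget:
        batch *= 2
    return batch

# Example: an 8 GB card and a hypothetical 500 MB per processing chunk.
vram = 8 * 1024**3
chunk = 500 * 1024**2
batch = pick_batch_size(chunk, vram)
print(batch, round(batch * chunk / 1024**3, 2))  # 8 chunks -> ~3.91 GB used, well under 8 GB
```

Under assumptions like these, an app would plateau around 4 GB on an 8 GB card simply because the next step up would not fit inside its safety margin, which would be consistent with the 3.4–3.9 GB figures above.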

I wish Apple computers were somehow connected to this rabbit hole of graphics cards…

Could one hope that the new Sequoia OS opens up the use of GPU routines for AI-related tasks in SpectraLayers, @Robin_Lobel?

I think last spring I read something here about an upcoming macOS with possibilities in this respect. Are we there now?

1 Like

Apple remained pretty vague the last time I asked in August…
That being said, I have some new technology to explore for SL12 on both the Windows and macOS sides; we'll see how that turns out.

3 Likes

Thanks for answering!
Here’s hoping!

1 Like

To add:

I believe Nvidia at some point will have to bridge their exclusive architecture (CUDA) over to other architectures like ARM. I also believe it's going to take some heavily persuasive, extremely passive-aggressive pressure to force Nvidia to do so. For example, when enough game developers start getting requests to port their games natively to the ARM architecture, they are going to have to do it (or else customers won't support them). If PC buyers stop buying Nvidia cards and start buying ARM chips (especially at a quarter of the price), Nvidia is going to have a very difficult time selling those GPUs.

I believe the intention behind both Apple silicon and ARM (on Windows) is to passive-aggressively push developers to port their apps natively to those platforms. I believe bridging and porting are where things are headed. It's going to take developers persuading companies like Nvidia to comply, telling Nvidia: "Hey, can you start natively porting your architecture over to ARM? Our customers are not buying your GPUs and are opting for ARM chips instead."