I’m curious if there’s such a thing as an “AI accelerator card” that could be used to boost the performance of the AI operations in SpectraLayers (Pro 9)?
I’ve been trying to figure out ways to improve performance when working on large projects. Currently I’m waiting about 45 minutes to an hour for audio to split into layers for a live album I’m working on. If there’s any performance I can gain from some kind of PCIe add-in card, I’d throw my money at it immediately!
I am also interested in this.
The “AI accelerator card” in this case already exists in the form of graphics cards aimed at the gaming industry, particularly those from NVIDIA. This is because open-source machine-learning (ML) frameworks harness the massive computing power of these cards for purposes beyond graphics. SpectraLayers makes use of the same technology to perform separations.
One of the problems of putting a gaming graphics card in a DAW is that the drivers do not prioritise audio, and have been known to cause glitching in playback. For that reason, I’ve avoided discrete graphics altogether, with good results; but now I need performance …
What we both need to know is: are there any particular recommended cards that SpectraLayers can use to speed up processing?
Right now SL9 only takes advantage of the CPU, so what will make a difference is a CPU with a lot of cores. But moving forward, for future SL versions a safe bet is to have an NVIDIA GPU to accelerate AI calculations, or an Apple Silicon Mac.
I have a high-end all-AMD dGPU/CPU system, and I also have a high-end mobile AMD (what is referred to as “Advantage Edition”) dGPU/CPU. If you don’t mind me asking, what advantages (besides ray tracing) does NVIDIA have over AMD, and why are you more willing to support NVIDIA over AMD?
AI calculations depend on frameworks such as PyTorch, TensorFlow, ONNX Runtime, etc., and all of these frameworks have better support for NVIDIA hardware for two reasons:
-The CUDA programming platform, which makes it easier to develop complex and efficient AI calculations
-Tensor Cores, found only on NVIDIA RTX cards, which specifically accelerate AI calculations
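To make the framework angle concrete, here is a minimal sketch (illustrative only, not SpectraLayers code) of how a PyTorch-based application typically decides whether an NVIDIA GPU is usable through CUDA:

```python
# Illustrative sketch only: device selection in a PyTorch-based app.
# The CPU-only build of PyTorch simply reports CUDA as unavailable.

def pick_device():
    """Return "cuda" if a CUDA-capable GPU is usable via PyTorch, else "cpu"."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass  # PyTorch not installed at all
    return "cpu"

print(pick_device())
```

On a machine with an RTX card and a CUDA-enabled PyTorch build this prints `cuda`; everywhere else it falls back to `cpu`.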
That being said, work is being done to bring some level of AMD acceleration to those frameworks as well, so hopefully future versions of SL may also take advantage of AMD hardware.
… but PyTorch can also be installed to use CUDA, correct? Would SL9 be able to take advantage of an Nvidia card by replacing the PyTorch CPU library with the CUDA version?
(Of course it’s a hack, but I seem to remember an early version of SL under Sony required certain graphics capabilities).
I’m assuming that would require a rewrite of SpectraLayers, or would break large parts of the code entirely. Also, I believe the OP/you are confusing AI acceleration with optimization. Yes, SpectraLayers could benefit from AI acceleration, but I believe the real problem here is poor optimization, caused by legacy code; heavy optimization work could improve performance overall.
@MrSoundman Correct, but the CUDA-enabled build is almost 2 GB, which is why the CUDA version is not included with SL. Unfortunately there’s no easy way to hack in GPU support externally (it would also require the AI models to be moved explicitly to the GPU for calculation, which cannot be achieved by a simple DLL swap).
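To illustrate why a DLL swap alone can’t enable GPU processing (a hedged sketch, not SL’s actual code): even with a CUDA-enabled PyTorch build installed, a model still runs on the CPU unless the host application explicitly moves both the model and its input tensors to the GPU:

```python
import torch

def run_model(model, x, device):
    # The host application must explicitly move the model AND the input
    # to the target device; swapping in a CUDA-enabled library alone
    # changes nothing, because the default device remains the CPU.
    model = model.to(device)
    x = x.to(device)
    return model(x)

model = torch.nn.Linear(4, 2)   # stand-in for a real separation model
x = torch.randn(1, 4)

y = run_model(model, x, "cpu")  # default path: everything stays on the CPU
if torch.cuda.is_available():   # only taken on a machine with an NVIDIA GPU
    y = run_model(model, x, "cuda")
```

Those `.to(device)` calls live inside the application’s own code, which is why GPU support has to be built into SL rather than patched in from outside.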
Thanks Robin, it’s good to know, so that we can make informed decisions for hardware upgrades!
I’ve tried to avoid discrete graphics cards in recent years, especially NVIDIA, because of gaming drivers causing audio glitches (which I believe is now largely fixed with the “Studio” drivers), so I’m wondering if I should start saving up for e.g. an Ampere A100 PCIe, a mere $40,000 …
I definitely get some improvement using a dedicated GPU, but I think that’s simply because the rendering of the spectral graphics no longer has to be done by the CPU, so the CPU is freed up for the spectral processing itself.
@Sam_Hocking correct, OpenGL is used for all the spectral rendering routines. Processing is still done on the CPU.