I’m about to invest in a discrete GPU for use with SL10 and other AI-powered apps.
Which hardware components are most valuable to have in a graphics card? CUDA cores, tensor cores or RT cores?
Does SL10 use all of those or some?
Are there memory size concerns?
If you want to play it safe, go with an NVIDIA RTX card; it will easily work with multiple AI apps.
Regarding cores:
- CUDA cores are the most important, as they provide acceleration for all kinds of processes.
- Tensor Cores are dedicated to specific AI computations, but it’s still a bit early for pro apps to take advantage of them; they’re mostly used by data scientists training models, or by video games using DLSS rendering. SL might use them in the future, but right now it mostly relies on CUDA cores when running on an NVIDIA GPU. That being said, any recent NVIDIA RTX will have plenty of Tensor Cores anyway.
- RT Cores are purely for 3D rendering effects (RT stands for RayTracing).
As a guideline, look at which NVIDIA card provides the best CUDA core performance (based on benchmarks, not just the raw number of cores).
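If you want to sanity-check what a given card exposes (including VRAM, which answers the memory question), a quick sketch looks like this. It assumes a Python environment with a CUDA-enabled PyTorch build, which is just a convenient way to query the driver, not necessarily what SpectraLayers uses internally:

```python
# Hedged sketch: querying the GPU through a CUDA-enabled PyTorch install.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    major, minor = torch.cuda.get_device_capability(0)
    # 7.5 or higher means Turing (RTX 20 series) or newer, i.e. Tensor Cores are present
    print(f"Compute capability: {major}.{minor}")
    free, total = torch.cuda.mem_get_info(0)
    print(f"VRAM: {total / 1024**3:.1f} GB total, {free / 1024**3:.1f} GB free")
else:
    print("No CUDA-capable GPU detected")
```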
I’m in more or less the same position, except I have avoided discrete gaming cards until now due to driver issues such as high DPC latencies, although apparently NVIDIA have addressed this with “studio” drivers now.
Any tips on finding such benchmarks, or reviews that can be trusted? I found this for example, but as SpectraLayers users I don’t think we will want to train AI models, just run existing ones on the GPU.
Looking at this benchmark and sorting by “inference score”, the 2080 Ti looks to be the affordable winner.
It looks like most of the inference scores above use the Tensor Cores (FP16 calculations).
For tests that only use CUDA cores, you can check ComputeMark results on https://www.notebookcheck.net/ for instance, although it’s not as easy to navigate.
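If you want to see that FP16-vs-FP32 gap on your own card rather than relying on a benchmark site, a rough sketch with PyTorch (my assumption for illustration, not what the benchmark sites actually run) would be:

```python
# Rough DIY comparison of FP32 (plain CUDA cores) vs FP16 (Tensor Core eligible) throughput.
import time
import torch

def avg_matmul_time(dtype, n=4096, iters=50):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    a @ b                      # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()   # wait for the GPU before stopping the clock
    return (time.perf_counter() - start) / iters

fp32 = avg_matmul_time(torch.float32)   # runs on the regular CUDA cores
fp16 = avg_matmul_time(torch.float16)   # can be routed to the Tensor Cores on RTX cards
print(f"FP32 {fp32 * 1e3:.2f} ms, FP16 {fp16 * 1e3:.2f} ms, ratio {fp32 / fp16:.1f}x")
```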
That being said, yes, a 2080 Ti is nice and affordable.
As a simple recommendation, the x080 cards from NVIDIA are pretty powerful; you can just decide between a 2080, 3080 or 4080 based on your budget.
FWIW, I more or less decided on one of the new 4070 Super cards, a Gigabyte one, which seems to be powerful and efficient, has somewhat future-proof features, and fits my case!
I used to connect my “vintage” A&H ZED R16 mixing board via FireWire as well, until there were no FireWire PCIe cards left that would run in my MoBo/Win 11 combo…
I now connect the board via ADAT; I was also looking at a RayDAT solution back then, but settled on an RME Digiface USB, which is really fantastic!
The latency might be a smidgen higher than going PCIe for the ADAT, but I regularly have drummers play MIDI into Superior Drummer 3 who monitor out over the ADAT, and with a 32-sample buffer at 48 kHz, they never complain!
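For context, the raw buffer latency at those settings is tiny; this back-of-the-envelope figure ignores the extra converter and USB round-trip overhead, which adds a few more milliseconds on top:

```python
# One-way buffer latency at a 32-sample buffer and 48 kHz
buffer_size = 32        # samples
sample_rate = 48_000    # Hz
print(f"{buffer_size / sample_rate * 1000:.2f} ms per buffer")  # ~0.67 ms
```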
Maybe that’s an idea? To go USB as well and save a bunch of PCIe slots?
Anyway, my PSU (and wallet…) can’t handle the Ti version, but the 4070 Super will be great for getting my feet wet with some AI and other acceleration.
Also on my shopping list, if it comes to a new build!
Back on topic, some of the non-NVIDIA cards I’ve considered are:
RX 7600 XT 16GB
Intel Arc A770 16GB
Advantech EAI-3100 with Intel Arc A370M (mobile chip)
The Intel Arc range is interesting because of the commitment Intel have demonstrated to optimising their drivers; the disadvantage of all non-NVIDIA products at the moment is that they require applications to adapt to using them. SpectraLayers leads in this respect.
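To illustrate what “adapting” means in practice, here is a hedged example using ONNX Runtime (my assumption for illustration only, not necessarily what SpectraLayers uses internally): an app sees only the acceleration backends its framework was built with, so a CUDA-only build simply never offers an Arc or Radeon card.

```python
# List the acceleration backends this ONNX Runtime build exposes.
import onnxruntime as ort

print(ort.get_available_providers())
# A CUDA build lists e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'];
# a DirectML build (onnxruntime-directml) lists 'DmlExecutionProvider',
# which is what lets Intel Arc and AMD Radeon cards accelerate inference on Windows.
```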
From my perspective and from what I am seeing, a lot of people are going AMD. The ROG Ally and these OneXPlayer-style mini handhelds are not just popular; I’m actually seeing people buying them and using them.
I can confirm that the ROG Ally is not all hype; people are actually using it as an all-in-one PC. I’m seeing people (college students, for example) skip the traditional route of building a PC and go with something like THIS
There’s definitely a shift happening, and I can tell you people are not building PCs the way they used to 10 years ago. AMD is definitely winning the handheld market. I’m saying this from experience, from the people I’ve seen with laptops and all-in-one systems.
Again, the timing of this is kind of eerie, because when this topic was posted I actually reached out to AMD myself and messaged one of their top engineers to ask why they didn’t have a solution/equivalent to CUDA, and a day later this is in the news (which leads me to believe they had CUDA support all along but kept it secret).
An interesting card to consider is the recently released NVIDIA RTX A2000 Ada, for the following reasons:
- uses only 70W (so no connectors melting!)
- has 16GB GDDR6 RAM
- uses only 2 slots
- low profile
- could soon be within the $500 budget
The downsides are:
- only a 128-bit memory bus
- 4 x Mini-DisplayPorts, but no HDMI
This should be absolutely fine for a DAW workstation with no gaming aspirations, and it should offer good compatibility with SpectraLayers 10+ as well as other current AI-infused audio software that is still bound to CUDA.
I’m not sure this is terribly good value. My card has 2.5x more CUDA cores, resulting in three times the performance according to NVIDIA’s datasheets! I got it for 650 €…
The current chip generation is also extremely energy efficient, and with some undervolting I was able to get my card to draw 170W under full load with maybe 2-3% performance loss!
And from what I’m gathering, the melting issue mostly affects 80- and 90-series cards using adapters or badly seated cables.
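In case it helps anyone wanting to try something similar: the closest scriptable approximation I know of is a power limit via nvidia-smi. That isn’t true undervolting (which is normally done with a voltage/frequency curve in something like MSI Afterburner), and the 170 W below is just my example figure, but the efficiency trade-off is in the same spirit:

```python
# Power-limiting sketch (not true undervolting); nvidia-smi needs admin rights to set a limit.
import subprocess

# Show the current power draw and the configured limit
subprocess.run(["nvidia-smi", "--query-gpu=power.draw,power.limit",
                "--format=csv"], check=True)

# Cap the board at 170 W (value must stay within the range the card supports)
subprocess.run(["nvidia-smi", "-pl", "170"], check=True)
```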
So, yeah, I’m really happy with the machine as is, and when Robin gets around to utilizing the tensor cores as well, things will start to get really interesting…
But thanks for the heads-up anyway!