One last hope/suggestion for the future of Cubase: please, please code the video engine for CUDA. There is so much resource just sitting idle on most new computers’ video cards, waiting to be used. I know it’s very difficult to code for CUDA, but that is like another computer just hanging out inside my computer. I have 1,500 cores sitting there doing nothing, and the speed for audio applications would be unreal. Your buffer sizes could stay super low with huge projects. I assume that would mostly be a plugin-developer issue, but if all of Cubase’s built-in VST3s were CUDA-ready as a reference for other devs trying to code for CUDA, you could host the lowest-latency plugin standard. I’d have to check what UAD boasts in terms of latency, but I’m pretty sure CUDA is faster.

But at the very least, CUDA for the video engine would be fantastic. It tends to get bogged down with large video files in large projects. If the video could be offloaded to the GPU while the CPU handles the audio engine, that would be crazy awesome.


Thank you


Don’t even think about it. SB is a small company; they don’t have the resources to get rid of the bugs completely as it is.

I strongly second that!..

But to be honest, how long it took for Cubase’s windows to be freed from the master frame gives an idea of how long it would take them to care this time around…

They don’t even have ten-finger touch-screen support so far (or at least I can’t get it working)… or even two fingers!

But yeah! Please bring CUDA processing in, so that I stop getting these ridiculous “CPU Overload/Audio drop-out detected” messages in Cubase 8 Pro with my i7-3930K CPU and 64 GB of RAM on Windows 8.1 Pro 64-bit… You know…

Wow, incredible idea! I don’t know the technical details of how hard it is, or whether it’s even possible at all, but working in Adobe apps with CUDA drivers installed, I feel a huge difference in performance. If we could have this in Cubase, that would be just incredible! +1

Here is what reverb creator Sean Costello of Valhalla DSP has to say on the subject:

I think the odds of me porting to CUDA are about zero.

The reason for this: GPUs are geared towards massively parallel processing. This is the sort of thing that you see in images, which are essentially NxM arrays of data. Image processing tends to be feedforward, and not reliant on previous outputs of the filters.

Most of the audio processes I work on are based on delays, feedback processes and filters, that sort of thing. This doesn’t work well in massively parallel systems. I process audio in parallel whenever I can get away with it, for efficiency, but the fact is that delay lines of different lengths are inherently difficult to parallelize. Feedback doesn’t work well with massively parallel systems, unless you have feedback with pretty big blocks, and that tends to lose a lot of the cool aspects of feedback.

Convolution processing (including Nebula) works well with massively parallel processing. Beyond that, I could see GPUs used for other things that work well with lots of parallelism, like modal synthesis. For general audio applications, I don’t think that GPUs will prove that useful. I wouldn’t mind being proven wrong, as there is a LOT of power in the GPU.

So unless it’s convolution-based stuff, probably not. Not that I wouldn’t like more DSP; it’s just that understanding its severe limitations makes it a bit easier to understand why it’s not happening.
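To see Sean’s feedback argument concretely, here is a toy Python sketch (deliberately simplified, nothing like real plugin code): a feedback comb filter’s output at sample n depends on the filter’s own output from a few samples earlier, so the samples must be computed one after another, while a feedforward FIR filter’s outputs depend only on inputs and could all be computed at once on parallel hardware.

```python
import numpy as np

def feedback_comb(x, delay, g):
    """Feedback comb: y[n] = x[n] + g * y[n - delay].
    Each output depends on an earlier *output*, so the loop is
    inherently serial -- you cannot compute all samples at once."""
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def feedforward_fir(x, h):
    """FIR filter: y[n] = sum_k h[k] * x[n - k].
    Every output depends only on *inputs*, so all samples are
    independent -- the kind of work thousands of GPU cores can share."""
    return np.convolve(x, h)[:len(x)]

impulse = np.zeros(8)
impulse[0] = 1.0
print(feedback_comb(impulse, 2, 0.5))
# echoes at samples 0, 2, 4, 6 with gains 1, 0.5, 0.25, 0.125
```

The serial loop in `feedback_comb` is the whole problem in miniature: no matter how many cores you have, sample 6 cannot start until sample 4 is finished.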

I remember reading this, and who am I to argue… But convolution processing, especially Nebula, is the main reason to want CUDA in the first place, at least for me. Even though I have 64 GB of RAM and a relatively fast six-core i7, those convolution EQs and reverbs and whatnot are huge monsters; they devour resources like a black hole swallows planets… So CUDA would be like a blessing, a heavenly boost… :slight_smile:
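For what it’s worth, the reason convolution is the GPU-friendly exception is easy to show in a toy Python sketch (nothing to do with Nebula’s actual implementation): FFT-based convolution turns the whole filter into independent bin-by-bin multiplications, which is exactly the data-parallel shape a GPU is built for.

```python
import numpy as np

def fft_convolve(x, h):
    """Fast convolution via the FFT: transform, multiply bin-by-bin,
    transform back.  Every per-bin product X[k] * H[k] is independent
    of the others, so the work parallelizes across thousands of cores."""
    n = len(x) + len(h) - 1
    size = 1 << (n - 1).bit_length()   # round up to a power of two
    X = np.fft.rfft(x, size)
    H = np.fft.rfft(h, size)
    return np.fft.irfft(X * H, size)[:n]

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # "dry" signal
h = rng.standard_normal(256)    # impulse response
print(np.allclose(fft_convolve(x, h), np.convolve(x, h)))  # True
```

A long impulse-response reverb is just this with a much bigger `h`, which is why convolution plugins are the ones that could plausibly benefit from the GPU.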

But that is down to the Nebula team to code; it has nothing to do with Steinberg.

In fact, when talking about performance, I would assume most of the load in a project comes from plugins, and if those plugins are third-party then it is down to the plugin developers to code for CUDA.

You do have a point there… Code has to be written on both sides… Having said that, I don’t think writing for CUDA would be that heavy a lift for the guys already doing the hula hoop with convolution coding…