ASIO audio dropouts

If “we” start down the road of dedicated accelerator chips then we’re in UAD-2 territory, which people have complained about for years: the DSP (SHARC) doesn’t keep up with general CPU advances. If you integrate it into the CPU you’ll still have the same problem, because that dedicated audio circuitry takes die space away from other processing, so I’d bet advances would be greater for the non-audio portion of the CPU, which means we’d still be complaining about ‘lagging’ audio DSP. And if you don’t integrate it, the problem is that you’re dealing with another party, like UA or whoever.

Also, to my knowledge most integrated A/V processing solutions in CPU packages, whether for desktop or mobile, are built to solve very specific problems: typically fixed-function encode/decode of video formats and perhaps spatial audio. But that’s not what we need, because we need to be able to run everything from a bland compressor to an analog-circuit-modeled compressor to a convolution reverb to a VSTi to a denoiser to a dereverb unit, etc. What we mainly need are great general-purpose processing units.
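To make concrete what a “bland compressor” actually computes (and why it needs nothing more exotic than general-purpose arithmetic), here’s a minimal sketch in plain Python. This is my own illustration, not code from any plugin; a hard-knee, peak-based gain computation with no attack/release smoothing:

```python
import math

def compress_sample(x, threshold_db=-20.0, ratio=4.0):
    """Apply hard-knee downward compression to one sample (peak-based).

    Samples whose level exceeds threshold_db are attenuated so that
    every dB over the threshold comes out as 1/ratio dB over it.
    """
    # Instantaneous level in dBFS (floor avoids log10(0))
    level_db = 20.0 * math.log10(max(abs(x), 1e-9))
    if level_db > threshold_db:
        over = level_db - threshold_db
        # Gain reduction needed to scale the overshoot by 1/ratio
        gain_db = -over * (1.0 - 1.0 / ratio)
    else:
        gain_db = 0.0  # below threshold: pass through unchanged
    return x * (10.0 ** (gain_db / 20.0))
```

A real compressor adds envelope detection and attack/release ballistics, but the per-sample math stays this ordinary, which is the point: it wants a fast general CPU, not fixed-function silicon.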

If anything, the one thing I could see “dethrone” UAD-2 is an open standard for porting plugins to run on a GPU, but I wouldn’t bet money on that happening soon.
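For what it’s worth, convolution reverb is the plugin type that maps most naturally onto GPU-style parallelism, since every output sample can be computed independently. A naive sketch in plain Python (illustrative only, nowhere near a real low-latency partitioned-convolution engine):

```python
def convolve_block(block, impulse):
    """Direct FIR convolution of an audio block with an impulse response.

    Each output sample out[n] is an independent dot product, which is
    why this workload parallelizes trivially across GPU threads.
    """
    out = [0.0] * (len(block) + len(impulse) - 1)
    for n in range(len(out)):
        for k in range(len(impulse)):
            i = n - k
            if 0 <= i < len(block):
                out[n] += impulse[k] * block[i]
    return out
```

Real engines use FFT-based partitioned convolution to keep latency low, but the independence of the output samples is what makes the GPU pitch plausible in the first place.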

What a great in-depth read.
It seems there needs to be some open standard;
video content creation gets all the love, I guess for the YouTubers.
Seems to me something new needs to happen with audio software/hardware:
latency should be a thing of the past,
even with a crazy amount going on. We have the technology but not the will.
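For a sense of scale on the latency point, the floor set by the audio buffer size alone is simple arithmetic (a back-of-envelope sketch; it ignores driver, plugin, and converter latency, which add on top):

```python
def buffer_latency_ms(buffer_size, sample_rate):
    """Latency contributed by one audio buffer, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate

# e.g. a 128-sample buffer at 48 kHz is about 2.67 ms per buffer,
# so a round trip through input and output buffers already doubles that
```

So the “technology” part is real: at modern buffer sizes the buffering itself is only a few milliseconds, and the rest of the perceived latency comes from everything stacked on top of it.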

Maybe AI will sort it out for us lol
