Atmos Rendering and Buffer Size

I keep seeing instructions to set the buffer size to 512 samples when running the Dolby Atmos renderer plugin.

Are there any conditions under which it is possible to lower the buffer size below 512 (if you use a HEDT CPU for instance) or will the renderer simply not work on other settings?

Forgive the possibly annoying question, but why would you want it to be lower?

Lower latency for running an orchestral template direct into Atmos without separating the composition/mock-up and mixing process into two discrete stages.


Wow, I have an i9 10900X CPU, and full orchestral mockups with a little electronics included get me way up there on the CPU meter, and you are tossing Atmos on top of that? What have you got under the hood, man?! You are indeed stuck at 512 for Atmos. My mixes always end up better when I finalize them after composing anyway, but time constraints can of course come into play. I'm still not convinced that, for me personally, composing and mixing all at once saves any time.

I would like to have the complete pipeline unobstructed from inception to post.
The question was mostly a hypothetical however.
I am thinking about possibilities with next gen Threadrippers due out this year (with the caveat that they fix their core latency issues and play nice with pro audio).

I would strongly suggest that Steinberg take steps to allow lower buffer sizes with the internal Atmos renderer.

The fastest RTL reported for an audio interface at 512 is just over 23 ms.

That is just not fast enough for real-time music composition direct to Atmos.
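For anyone wanting to sanity-check that number, here is a rough back-of-the-envelope sketch (a minimal calculation assuming a 48 kHz sample rate, one input and one output buffer of equal size, and an illustrative converter/driver overhead figure; the actual overhead varies per interface):

```python
# Rough round-trip latency estimate for a given buffer size.
SAMPLE_RATE = 48_000   # Hz
BUFFER_SIZE = 512      # samples
OVERHEAD_MS = 2.0      # assumed AD/DA + driver overhead (illustrative guess)

one_way_ms = BUFFER_SIZE / SAMPLE_RATE * 1000     # ~10.7 ms per buffer
round_trip_ms = 2 * one_way_ms + OVERHEAD_MS      # ~23.3 ms
print(f"{round_trip_ms:.1f} ms round trip at {BUFFER_SIZE} samples")
```

Even with a very fast interface, the two 512-sample buffers alone account for roughly 21.3 ms at 48 kHz, which is why the reported RTL figures sit just above that.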

I know this is old but I just came across it and find it interesting…

So, is the orchestra listening on headphones in 25+ individual isolation booths? lol
I'm trying to picture the situation you're doing this in where this all matters. Not mocking. I love orchestral recording, and I'm actually a little sad from time to time that our studio doesn't have the space to build a live/acoustic room as large as the one I was trained in. I'm just intrigued to hear what these circumstances are.

I've never tracked an orchestra that listened through headphones, so latency was never an issue. Maybe you need it to sync to video, but if that's the case, you'd definitely want to do some post work.
The only place I can see it becoming an issue is live broadcasting in Atmos, but even then you'd just delay the video to sync with the audio, which you'd need to do anyway in U.S. TV broadcasting, just in case a cellist randomly decides to drop her bow and flop her nipples out, or messes up and accidentally yells an obscenity on air, or some other random event the network wouldn't want aired.

What did you end up doing? How's the Threadripper working out for you? I've been debating whether to build our next PC here at Deadly Mix Studios with a Threadripper, but I never hear of anyone using one and have been concerned about stability/compatibility issues.

Ah, there is a misunderstanding here.
The Nuendo template referred to here is for doing mock-ups (virtual orchestration) with VSTis, not tracking a live studio orchestral performance. The latency concern is the responsiveness of MIDI controllers: at a buffer of 512 samples, VSTis feel much too sluggish to perform anything with refinement.

512 is also the standard for Pro Tools if you don't have an external renderer. We are also limited to 7.1.4 there.
Cheers.

512 is also the locked buffer size for Logic Pro at 44.1/48 kHz (1024 at 88.2/96 kHz).
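(Presumably that doubling is meant to keep the buffer duration roughly constant: 512/48000 ≈ 10.7 ms and 1024/96000 ≈ 10.7 ms, so the added latency stays about the same across sample rates.)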

If you use the Dolby Atmos Renderer, you can set the buffer to whatever you want.