Now that I feel like I’m FINALLY getting a grip on how ATMOS works (meaning I’m actually getting the expected results from my actions), it seems this whole thing boils down to LIMITED RESOURCE MANAGEMENT, and I’ve got a bunch of ROOKIE QUESTIONS.
Since you have to start with a bed and a 7.1.4 group (22 locked IDs right there) just to fire this thing up, you’re down to 106 remaining objects to work with. Once you factor in all the ridiculously complicated routing strategies required to get the audio into and through the Renderer, you’re left with virtually nothing to actually assign to the audio. So, my question is: how are you managing your resources to accomplish a MUSIC MIX?
Is it better to use 7.1.4 groups with mono/stereo signals assigned, sending them in set blocks to the renderer, or is it more efficient to send multiple mono/stereo-to-7.1.4 channels directly to the renderer? I know the latter would be far easier to work with, but you’d be out of resources in roughly 8 tracks!
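Just to sanity-check the arithmetic behind those numbers, here’s a quick Python sketch. The figures (128 renderer input IDs, a 7.1.2 bed at 10 channels, 12 channels per 7.1.4 group) are the ones implied by this thread, not anything official:

```python
# Assumed numbers from the thread: 128 renderer input IDs total,
# with a 7.1.2 bed (10 channels) and one 7.1.4 group (12 channels)
# locked up front before any music gets routed.
TOTAL_IDS = 128
BED_712 = 10
GROUP_714 = 12

locked = BED_712 + GROUP_714       # 22 IDs gone at startup
remaining = TOTAL_IDS - locked     # 106 IDs left for the actual mix

# Option A: share each 7.1.4 group among many mono/stereo sources,
# so a whole block of tracks costs only 12 IDs.
# Option B: give every track its own 7.1.4 feed straight to the
# renderer, burning 12 IDs per track.
tracks_direct = remaining // GROUP_714

print(f"locked: {locked}, remaining: {remaining}, "
      f"direct 7.1.4 tracks: {tracks_direct}")
```

Which lands right at the “roughly 8 tracks” figure for the direct-routing approach, versus dozens of sources if they share group IDs.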
Whether you use purpose-built ATMOS-capable effects like cinematic reverbs or the VSL rooms, or build your own with group channels populated with multiple 5.1 or stereo FX, you’re still taking a big hit on resources just for FX routing, regardless of whether you go to the bed or out as objects. So, is there a way to conserve those resources?
I’m noticing a big difference in volume between bed signals and objects. I’m not sure why that is, or even whether it’s normal for the format. I only know that I can make a source come through the bed very loud with no effort, while I have to work a lot to pump up the signal volume to get the same result coming through as an object.
Finally, is there ANY way to monitor objects without having the renderer open? Besides it being distracting to look over at your usual LED indicator and see NO SIGNAL while clearly hearing the audio, the renderer takes up a lot of “real estate” on an already crowded screen. I don’t suppose there’s a way I could get that page to show up on a tablet or something, huh? I’m already on a 3-screen array; I’d be hard pressed to find room for a 4th, let alone the video card to support it.
For music mixes (!), I find it much more predictable, more resource-efficient, and more in line with tried-and-tested workflows to skip the “moving objects” part of Atmos altogether and stick with a pure, static 5.1.4 or 7.1.4 output setup with power or ambisonics panning. I use fixed Objects with zero width as virtual speakers and call it a day.
Working in regular surround is infinitely easier, that’s for sure! But I can hear a real improvement with ATMOS panning over regular surround panning. Whatever that difference is, it’s also present with static placement, especially when assigning reflections for ambience.
So, I want to at least see if it’s possible to get a relatively streamlined workflow that’s viable (i.e., one that doesn’t make me think I should be in therapy for doing this!)
As I wrote above: an ADM with fixed, zero-width Objects used as virtual speakers. I just received a Blu-ray release last Friday that contains a concert mix I did using exactly this method for the ATMOS track, and there were no complaints.
Duh, "Reading is FUNDAMENTAL!" So you just leave everything in the bed and use 3D panner plug-ins, which solves 85% of the issues. Brilliant! I don’t know why I didn’t get that point the first time I read this. And even if I do want to use object panning now, your way frees up a lot of resources to do so. That’s an excellent suggestion. Thanks!
Did you notice the volume difference I mentioned between the bed and the objects? I still want to know if this is normal.
If you’re not doing any panning, why not just use the bed? They (the bed speaker positions) get converted to objects anyway…
From Dolby: “Depending upon the position and size metadata applied to an object, objects and bed channels can be sonically identical. For instance, an object placed in the left front with size set to zero will be identical to placing the audio in the Left channel bed.”
The quote was in regard to the question of using bed vs object if you aren’t doing any panning. (no difference to sound or position)
For home theater delivery (including music as per the OP), everything gets converted to a maximum of 16 elements. These include the 10 object “bed channels.”
Also: “In order to maximize efficiency, spatial coding converts bed channels to equivalent objects at predefined canonical locations. Because of this, the best results are generally obtained by configuring spatial coding with 11 to 15 output objects and one bed channel for the LFE. (This budget of audio signals is referred to as the number of elements in both the Dolby Atmos Renderer and the Dolby Media Encoder software application. Both Dolby Atmos Renderer software and Dolby bitstream codecs support choices of 12, 14, or 16 elements.)”
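The element budgets in that quote work out as below. A tiny Python sketch; the 12/14/16 figures and the “objects plus one LFE bed” split come straight from the Dolby text, but the helper name is mine:

```python
# Element budgets per the Dolby quote: the renderer and codecs support
# 12, 14, or 16 elements, and the recommended configuration is (N - 1)
# output objects plus a single bed channel reserved for the LFE.
SUPPORTED_ELEMENTS = (12, 14, 16)

def recommended_split(elements):
    """Return (output_objects, lfe_bed_channels) for an element budget."""
    if elements not in SUPPORTED_ELEMENTS:
        raise ValueError(f"unsupported element count: {elements}")
    return elements - 1, 1

for n in SUPPORTED_ELEMENTS:
    objs, lfe = recommended_split(n)
    print(f"{n} elements -> {objs} output objects + {lfe} LFE bed")
```

So even a “full” 118-object mix gets folded down to at most 15 spatially coded objects plus the LFE for home delivery.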
Detailed info on Dolby’s spatial coding method for home theater can be found here:
So, all in all, if you’re not actively panning audio around, my take is that beds are the way to go, since you can use familiar bus processing and monitoring methods, unlike with objects.
I suppose, though, that if there is an aesthetic advantage to placing some sounds in ‘more’ locations, a ‘more complete’ ADM file would provide options moving forward. In other words: if the primary target is Atmos for the home, then even though the printmaster process reduces the fidelity of localization, that reduction only applies to the printmaster, and a ‘wider’ ADM could still be used for other target systems with more bandwidth. It guards against the “I wish I hadn’t merged all that stuff now that I have a chance to play this back in a movie theater” moment…
Regarding the last question… I assume you have post-panner metering enabled? In that case the object channel meters show nothing.
Input and post-fader meters do show levels here on my system.
(I would appreciate a pre-fader metering implementation from Steinberg, btw…)
For displaying the renderer, you might give “spacedesk” a try: it lets you set up a tablet (or other devices, even via a browser) as an additional screen.
If the primary target is music, then that should be given priority. Although Atmos is billed as a “write once, distribute everywhere” solution*, the reality is that compromises need to be made in order to create a mix that sounds good on multiple platforms. Pick your poison, I suppose.
*“Write once, deliver everywhere” has long been a unicorn in software development. But in the end, reality always gives way to the need for platform-specific approaches and design decisions. It’s hard to believe it will be any different for Atmos/object-based audio production.
Indeed. This applies not just to music but to audio post as well. The problem is that cinemas everywhere are calibrated differently, so it’s almost impossible to do nuanced surround Atmos that translates everywhere. Once all such considerations are taken into account, most of the material usually needs to live in the LCRs, just to be safe. Even expert mixers would caution you about overplaying the surrounds.