WHY is ATMOS routing so complicated?

A primary benefit of assigning a sound or instrument to an object is that, in the Atmos renderer, you get mix control over the binaural sound for headphone mixing. By setting an object’s binaural location to off (true stereo), near, mid, or far, you have a lot of control over the final binaural headphone mix.

When listening on Apple devices with AirPods via Apple Music, Apple uses its own algorithm, called Apple Spatial, and disregards the binaural metadata (another conversation). On Amazon Music, Tidal, and Apple Music with wired headphones, the listener receives the Atmos mix as binaural playback based on the mix engineer’s choices of off, near, mid, and far.

Objects allow the most flexibility in that binaural translation. For example, a drum kit might be set to off to create the in-your-face drum sound we tend to enjoy in many genres, while a synth, organ, or pad part is set to mid or even far to add a sense of depth and dimension, giving the headphone mix a more 3D space. Perhaps you could send the drums to the bed and assign the pad to an object.
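As a sketch of how those per-object choices might look laid out (the instruments and settings here are just illustrative, not a recommendation):

```python
# Hypothetical per-object binaural render modes, as you'd set them in the
# renderer for the headphone fold-down. "off" renders as plain stereo.
binaural_modes = {
    "Kick":      "off",   # most direct, in-your-face
    "Snare":     "off",
    "Bass":      "off",
    "Synth Pad": "mid",   # adds perceived distance/depth
    "Organ":     "far",   # most 3D space in the headphone mix
}

for obj, mode in binaural_modes.items():
    print(f"{obj:9} -> binaural: {mode}")
```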

Another common approach in music mixing is the use of Object Beds, an approach that Steve Genewick of Capitol Studios helped to create. It is a system of creating stereo pairs of objects assigned to “zones”: Front = L, R; Wide = Wl, Wr; Side = Ssl, Ssr; Rear = Rsl, Rsr; Top Front = Tfl, Tfr; Top Rear = Trl, Trr. This allows some unique blends, e.g. adding something to the front and blending in the rear channels to “pull the sound into the room.” An added benefit of this approach is that the object zones can have different binaural settings to enhance the binaural headphone mix. It can also help manage system resources, i.e. fewer objects. See the sketch below.
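If it helps to picture the zones, here is a rough Python sketch of static object pairs parked at those positions (the coordinate values are my illustration, not anything Dolby specifies; x runs left to right, y rear to front, z floor to ceiling):

```python
# Object-bed "zones": stereo pairs of static objects at fixed positions.
# Coordinates are illustrative guesses in a -1..1 room convention.
OBJECT_BED_ZONES = {
    "Front":     {"L":   (-1.0,  1.0, 0.0), "R":   (1.0,  1.0, 0.0)},
    "Wide":      {"Wl":  (-1.0,  0.5, 0.0), "Wr":  (1.0,  0.5, 0.0)},
    "Side":      {"Ssl": (-1.0,  0.0, 0.0), "Ssr": (1.0,  0.0, 0.0)},
    "Rear":      {"Rsl": (-1.0, -1.0, 0.0), "Rsr": (1.0, -1.0, 0.0)},
    "Top Front": {"Tfl": (-0.8,  0.8, 1.0), "Tfr": (0.8,  0.8, 1.0)},
    "Top Rear":  {"Trl": (-0.8, -0.8, 1.0), "Trr": (0.8, -0.8, 1.0)},
}

# Each zone pair can then carry its own binaural setting (off/near/mid/far).
```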


One thing I’ve noticed in the last couple of years is that Atmos advice from the perspective of post-production and film is very different from that of engineers working in music. As such, there’s a lot of confusion in a forum like this, where folks are not necessarily specifying their end goals.


I’ve read this thread five times, then followed everyone’s advice, including allklier’s. So far, I haven’t succeeded in configuring the LFE circuit.

The stated aim would be to process the LFE frequencies via a mono group sent to the LFE output of the main output. That doesn’t work here: any feed to the single LFE channel of the main 7.1.4 output (where the renderer is) gives no sound.

The other goal would be to assign an LFE group as an Object. That doesn’t work here either. I wonder, though (a question in passing), why we would want the LFE stream as an object, since an object is made to be moved, but LFE frequencies (below 120 Hz) are not directional.

I’ll keep looking. I absolutely must configure the LFE, as part of a viable setup, before pursuing Atmos projects.


Last night, after being so frustrated that I was ready to return my whole package and pay the ransom to Pro Tools (I have 10 years’ experience) or just quit any Atmos aspirations, I literally stumbled across a document that was not included with my Atmos Suite (link below).

I’m not totally clear on what your LFE issues are, what processing is involved, or why the signals need to be part of a group, but possibly this document will help.

Hope this document helps…s

Dolby Atmos Renderer guide

I think there was maybe some confusion. The idea isn’t to create a group for LFE and route it to the output on which the Renderer is instantiated; that would of course bypass the renderer. Instead, the idea is to route LFE signals to a dedicated LFE group and route that group to a bed. That bed then goes to the renderer, and since beds have the .1 included, you get the LFE rendered correctly.

Exactly. I think many people agree with that. On top of that, objects are full range, so panning them around in a playback system that is also full range (according to spec) should be no problem. If the rears or top speakers aren’t literally full range, the system should do bass management on its own anyway.

Not really, because it involves the Atmos Production Suite, which I don’t use. I actually understand the whole rendering system (maybe that’s a bit pretentious: I mean, for my purposes), except for the separate LFE.

Okay, thanks Mattias. That’s what I’m trying to do. I’ll get back to you.

Update: OK, I’ve done something.

Example:
A) Stereo track routed to a 7.1.2 bed, which is a bed in the ADM Authoring, plus a Send to B.
B) LFE mono group routed to an LFE mono output (child).
C) LFE group as a bed.

If I place a TestGenerator with a sine at 100 Hz on the audio track (with the Send to B), it renders well in the renderer.

The problem with the LFE group as a bed (C) is that it uses 10 objects that I’d need elsewhere. I can reduce it to 5.1, but it still takes 6 objects, whereas I only need the LFE channel (one object).
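To put numbers on that trade-off, a quick sketch (the 128-slot total is what the renderer shows; bed channels count against it):

```python
# Bed channel counts; each bed channel occupies one of the 128 input slots.
BED_CHANNELS = {"2.0": 2, "5.1": 6, "7.1": 8, "7.1.2": 10, "7.1.4": 12}

def objects_left(beds, total=128):
    return total - sum(BED_CHANNELS[b] for b in beds)

print(objects_left(["7.1.2"]))           # 118: the standard bed alone
print(objects_left(["7.1.2", "7.1.2"]))  # 108: a second 7.1.2 bed costs 10 more
print(objects_left(["7.1.2", "5.1"]))    # 112: reduced to 5.1, still 6 more
```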

Question: since the stereo track is routed at its base to a bed, the 100 Hz sine reaches the front left and right monitors (I should say that my Focal Solo6 Be go down to 40 Hz). If the LFE is capped at 120 Hz, there will be duplication in those frequencies (between 40 Hz and 120 Hz), at least at the studio monitoring level. This puzzles me. The answer, which is very important, would be to use the LFE only in very specific cases of special effects, on dedicated tracks.

Here’s how I was thinking about that in my last mix:

I set up two dedicated 5.1 audio tracks for LFE-specific content (since I do get some 5.1 content with LFE from libraries), but there could also be several other mono tracks as well.

Then I set up two aux buses, ‘LFE Treated’ and ‘LFE Straight’, that I can route to from these or any other applicable audio track that has LFE content.

In the end they all flow into the FX Bed (keep in mind I do film post, so we usually have three beds for DX, FX, and MX). So the routing to the actual LFE output channel in the renderer goes via the FX Bed’s .1 channel.

The LFE Bus Treated splits in two: one path with an HPF at 120 Hz, and a send to the Treatment FX bus. Both of those forward to the LFE Bus Straight, which goes to the FX Bed.

On the LFE Treatment I have the Subgen plugin to generate sub-harmonics that may be missing in the original audio, plus the NUGEN mono filter to deal with phase issues, and then a 120 Hz LPF. This is a stereo FX channel that the plugin folds down to mono, and it then gets routed directly to the LFE sub channel.
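If it helps to see that chain as signal flow, here is a minimal numpy/scipy sketch (the sub-harmonic stage is only a stub standing in for the plugin, and the filter slope is my assumption):

```python
from scipy.signal import butter, sosfilt

FS = 48000  # sample rate assumption

def fold_to_mono(left, right):
    # What the mono-filter stage accomplishes for the low end:
    # sum to mono so L/R phase differences can't cancel in the sub.
    return 0.5 * (left + right)

def sub_generate(x):
    # Stub for the sub-harmonic generator; the real plugin synthesizes
    # content an octave below the input. Pass-through here.
    return x

def lfe_treated(left, right):
    mono = fold_to_mono(left, right)
    sos = butter(4, 120, btype="low", fs=FS, output="sos")  # 120 Hz LPF
    return sosfilt(sos, sub_generate(mono))
```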

Also, as you can see in the panner, you must make sure that the LFE signal is being let through.

All this achieves is to extract, enhance, and then direct the LFE part of the signal into the LFE channel of the FX bed, and from there to the .1 in the renderer.

The LFE Straight bus is more or less just a pass-through, with the panner also set to allow the LFE signal through. So signals that go through the regular bed are full range but don’t go to the LFE channel (keep the LFE mix in the panner off), and anything that goes to the LFE is frequency-split between the main channels and the LFE to avoid duplication.
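In code terms, that frequency split looks roughly like this (a sketch with 4th-order Butterworth filters at a 120 Hz crossover; a real bass-management crossover would typically use matched Linkwitz-Riley slopes):

```python
from scipy.signal import butter, sosfilt

FS = 48000
XOVER = 120  # Hz, crossover assumption

def split_for_lfe(x):
    """Split one full-range signal so the region below the crossover
    isn't reproduced twice (once by the mains, once by the LFE)."""
    hi = butter(4, XOVER, btype="high", fs=FS, output="sos")
    lo = butter(4, XOVER, btype="low",  fs=FS, output="sos")
    to_bed_mains = sosfilt(hi, x)  # bed channels keep everything above
    to_lfe       = sosfilt(lo, x)  # only the low end feeds the .1
    return to_bed_mains, to_lfe
```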

As you can see in the first screenshot, I have two library pieces. One is an urban explosion SFX that has good LFE content itself and just routes to LFE Straight. The other is another explosion that is also 5.1, but its LFE channel is empty; this goes to the treated aux bus, where the Subgen generates a bit of LFE content.

To get signal into the LFE channel from a stereo track, you need to specifically cross-feed it into that channel, either via the surround panner’s LFE knob or the MixConvert V6 LFE slider. If you just route a stereo track to a 5.1 track without properly cross-feeding, your LFE channel will remain empty.
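A minimal sketch of what that cross-feed amounts to (the gain value is arbitrary; in practice you set it with the panner’s LFE knob or the MixConvert slider, not in code):

```python
def crossfeed_to_lfe(left, right, lfe_gain_db=-6.0):
    # Derive an LFE feed from a stereo source; without an explicit
    # cross-feed like this, the .1 channel simply stays empty.
    g = 10 ** (lfe_gain_db / 20)
    return g * 0.5 * (left + right)
```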

Hopefully this answers the frequency-duplication question. Also, this setup does not take up any additional objects or beds in the renderer. It’s all about preparing the LFE signal before it’s routed to a bed that has a .1 output channel (5.1 or above).

[Screenshot: LFE routing and panner settings]


I apologize for still being unclear.

My suggestion is basically taken from surround mixing procedures, and since you noted before that low end is omnidirectional, which makes it questionable to have a single LFE channel as an object, my (unclear) suggestion was to not use LFE as an object.

Instead: route the source track to an LFE mono group track, and route the LFE mono group track to the Atmos bed track’s LFE child bus.

Now, I’ve actually not routed that in Atmos, but I’m guessing it’s possible to create a child bus for the LFE as usual. Doing it this way gives you a dedicated LFE group that you can use to ride the LFE channel level as well as apply processing. It then ends up in your bed track. This is what I’ve done for surround: a dedicated path for LFE content only.

This appears to me to be s.o.p., or best practice, for surround. I’d imagine the same would apply here.

I think you were clear, and I didn’t route the LFE as an Object. I made it a bed. But the bed contains objects that are channels. See the image: the LFE bed has ten objects (out of a possible 128), identified on the right.


I’m looking at it carefully.

Hi cmbourget

My head is still in the ’70s with Star Wars, so I don’t understand why you are routing objects to the LFE bed. Do they move? To me the LFE is there to enhance effects and music with additional bass, e.g. to make an explosion bigger, or to enhance a bass solo.

Below 200 Hz the ear has trouble establishing location and movement.

Please I would like to learn…steve

Right, but you wouldn’t need to use a dedicated bed in Atmos for the LFE (unless you want to pan it, which we agreed wouldn’t make much sense). You could use the mandatory 7.1.2 bed, for example. I think you should be able to create child buses there and route from your group to that default mandatory bed’s “.1” LFE channel.

Unless I’m missing something.

Again, the way I work, the many channels would not be needed, because the LFE deals mostly with non-directional content reserved for the LFE channel, so only the one channel is necessary, and no panning.

Perhaps we’re misunderstanding each other.



Hello Murrysdad,

I understand. It’s that the bed, as I was saying, turns the 10 channels (from the 7.1.2 source) into 10 objects. It’s automatic; it’s not my decision. You can see it in the renderer.

p.s. I’m trying to understand Allklier’s scheme, which seems to use fewer objects.


No, you understood me correctly (or sometimes it’s the translation, I apologize). I was actually using an extra bed. I can, as you say, send the LFE to an LFE child of an existing bed. I’ll look into it, and thank you.

Update: I just saved the 10 objects by routing to an existing bed. Thank you!


Hi, I’m a super-novice when it comes to Atmos setup (music). I set the renderer for 7.1.2 and my first 10 tracks go directly to beds. Any subsequent tracks I send go into assignable objects. I’m curious why things you are sending to beds show up in the renderer as objects…steve

Seems like perhaps you resolved your issue, so this may be redundant but for the sake of conversation…

When you set up an Atmos session in Cubendo or any DAW, you have what is referred to as a Standard Bed, i.e. Objects 1-10. This is standard, based on the Dolby Atmos Renderer which is built into Cubase/Nuendo.

To my knowledge you cannot avoid the standard bed, and it is essential to any Atmos mix. It is a channel-based configuration of 7.1.2, similar to a 2.0 stereo channel-based configuration but with more channels.

Contained within that bed is the LFE (channel 4), which is a part of the bed output.
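For reference, a sketch of the 7.1.2 channel order as I understand it, which is why the LFE lands at channel 4:

```python
# Common 7.1.2 bed channel order, 1-indexed; LFE sits at channel 4.
BED_712 = ["L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs", "Lts", "Rts"]
print(BED_712.index("LFE") + 1)  # -> 4
```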

In my use for music mixing, I create a mono group called LFE AUX and add an EQ low-pass (high-cut) filter somewhere below 140 Hz, depending on the material. That channel is routed to the LFE in the standard bed, i.e. channel 4.

If I want to send info from a bass or kick etc., I add a send to that channel and send some low-frequency content to the LFE to enhance the sub information. My understanding is not to go too far: use it only as an enhancement of a low-frequency signal, blended into the regular signal of that instrument.

In the panner there is an option to send a sound source to the LFE in the standard bed from there. My understanding is to avoid that use of the LFE, because some systems will not process the LFE effectively and will in essence play back the full range of that instrument, missing the low-pass filter. That is why we create a separate group, add a low-pass filter, and send it to the bed: to control the LFE and not rely on consumer-system translation.


Would you mind posting a screenshot of both the renderer plugin and the authoring panel (or equivalent for external renderers)?

On routing, I have to answer that if I do that here (the AUX to output 4 of the standard bed), I never hear the LFE. I need to configure a 7.1.2 child output, and it’s its LFE (child) output that accepts the AUX. And then I hear it. But you also need to go through a mono AUX (which is what gets processed by EQ and the rest).

Here’s an image: TestGenerator at 100 Hz on a stereo track -> Send -> LFE mono -> 7.1.2 bed -> Renderer.

Hello allklier, this configuration is a bit complicated for me, but I just gave mine in another answer, which I copy here. My question for you: the TestGenerator is stereo, so no LFE. I send it via a Send to a mono AUX, which goes to the LFE channel of a child group broken out into mono channels, and from there to a bed. All my tests work. But are you saying it won’t go through the renderer? Do you think I’ve made a mistake in my routing?

Copy of my reply:

On routing, I have to answer that if I do that here (the AUX to output 4 of the standard bed), I never hear the LFE. I need to configure a 7.1.2 child output, and it’s its LFE (child) output that accepts the AUX. And then I hear it. But you also need to go through a mono AUX (which is what gets processed by EQ and the rest).

Here’s an image: TestGenerator at 100 Hz on a stereo track -> Send -> LFE mono -> 7.1.2 bed -> Renderer.