Has anyone successfully solved the riddle of getting a 7.1.4 group into the Atmos Renderer?

That's most certainly a meaningful way to look at it when creating audio for media, but the top layer is used with great success for music recording and production. Using only two channels up there doesn't sound good, and it translates poorly into most other relevant 3D formats, most notably Ambisonics.

@MattiasNYC indeed, sends to Top Quadro aren't possible, but I have created a second Quadro group and assigned it to the Top Quadro output via direct outs instead.

From this, I may have stumbled upon a solution late last night. I'm still investigating and would appreciate comments from you guys. I will post screenshots from my DAW shortly.

@Rajiv_Mudgal you have a good point regarding the render quality of beds vs. objects. I suspected this was the case but have yet to see anything official about it. Because of that suspicion I have generally only used the top 4 for extra mics and reverbs in the music mix. It's a shame it has to be this way.

I read that many engineers keep away from the top 4 entirely for media music and just stay in 7.1 because of downmix artifacts (it gets washy, etc.).

I was checking my downmix to 7.1 as I go, but apparently the downmix formula is a case of whatever suits, within your own sphere anyway. Even though Dolby's in-the-box downmix appears to be consistent, I find discrepancies everywhere. Is it -3 dB for the surrounds or -6 dB? I go by -3 dB because it sounds right; -6 dB always seems to lose level on surround-only instruments. And I don't want to base my mix entirely around the front L/R. I find myself wanting to use the sides as my new L/R, and this seems to be a trend in new material on streaming channels: leaving space in the front L/R for feature instruments or stereo foley/FX in post.
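
For what it's worth, here's a minimal sketch of what that coefficient choice actually does, assuming a plain sum of each top channel into the nearest floor surround. The channel order and the top-to-surround mapping are my assumptions, not MixConvert's documented behaviour:

```python
import numpy as np

def db_to_gain(db):
    return 10 ** (db / 20.0)

def fold_714_to_71(x, top_gain_db=-3.0):
    """Fold a 7.1.4 signal down to 7.1.

    x: float array of shape (frames, 12). Channel order assumed to be
    L R C LFE Ls Rs Lrs Rrs Tfl Tfr Trl Trr -- check your own
    deliverables. Each top sums into the nearest floor surround,
    which is one common mapping, not necessarily MixConvert's.
    """
    g = db_to_gain(top_gain_db)
    out = x[:, :8].copy()
    out[:, 4] += g * x[:, 8]    # Tfl -> Ls
    out[:, 5] += g * x[:, 9]    # Tfr -> Rs
    out[:, 6] += g * x[:, 10]   # Trl -> Lrs
    out[:, 7] += g * x[:, 11]   # Trr -> Rrs
    return out

# -3 dB (~0.71) roughly preserves power for uncorrelated material;
# -6 dB (0.5) preserves amplitude when correlated channels sum. That's
# why surround-only instruments can end up quieter at -6 dB.
print(round(db_to_gain(-3.0), 3), round(db_to_gain(-6.0), 3))  # 0.708 0.501
```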

@Dietz, agreed, the top layer sounds great for music, but also in music for media I would ideally like to use the tops and keep the separation between top front and top rear. In my 7.1.4 room I can really hear the difference. Additionally, as 7.1.4 WAVs it's ready for all the other formats; a good base channel count to mix to. Most studios that have gone to immersive monitoring are 7.1.4 for these reasons.

So by creating a second Top Quadro bus called “top quadro B”

"Top Quadro A" then became available in the Atmos Renderer.

And you can see that I can use sends from my MUSIC 7.1.4 BUS group.

I'm about to test this to make sure the top four are discretely sent to the Atmos Renderer, so it may just be false hope. BTW, I haven't added my 7.1.4 music mix audio tracks yet.

Weird! So initially, as above, creating the second "top quadro B" group with its assignments as shown nudged the Atmos Renderer into seeing "Top Quadro A" as an option for an object.

And then I was able to complete the assignments of the MUSIC 7.1.4 BUS to the correct sends: in the send panning I muted the top 4 channels in the first send and channels 1-8 in the second, and removed the MixConvert insert (it served no purpose, but may have contributed).

I was then able to delete "top quadro B" after changing the direct out of "Top Quadro A" from "top quadro B" to the child output "Top Quadro Out".

It's possible that I could have set up this end result from the beginning, but I'm sure I tried and it wasn't possible. Anyhow, there you have it. I will now test to make sure it all works in context and doesn't fall in a heap.
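
For anyone skimming, here's the end state of the workaround written out as plain data. The bus and group names are the ones from my screenshots, and Nuendo obviously can't execute this; it's just a map of the connections:

```python
# End state of the workaround as a plain routing map (illustrative only;
# bus/group names are from my project, not anything Nuendo can run).
routing = {
    "MUSIC 7.1.4 BUS": {
        "send 1": {"to": "7.1 bed",      "send_panning": "top 4 muted"},
        "send 2": {"to": "Top Quadro A", "send_panning": "channels 1-8 muted"},
    },
    "Top Quadro A": {
        "direct_out": "Top Quadro Out",  # child output, after deleting "top quadro B"
        "renderer":   "now selectable as an object source",
    },
}
```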


Of course! The difference is night and day. Top layers with only two channels are totally unsatisfactory; they disturb the overall spatial impression rather than helping it.

I would like to know what exactly is meant by this?

Dolby, at its current delivery state, seems to process overheads differently. I could hear quite a bit of difference: objects sounded tonally dissimilar to what the floor speakers were reproducing, even though the speakers are calibrated.
When I contacted them, I was told that the object-based processing uses psychoacoustics and advanced intelligent-focus algorithms to reproduce the way humans hear and process sound, but that this is subject to change as technology evolves, since Atmos has all the necessary data to scale things up; that is, the way location, position and motion are phantomed to create an immersive sound bubble.
This was four years ago, so I am sure some of it might have changed.

I forget exactly how it went, but I think there was basically a "grouping" of objects into fewer ones in order to save bandwidth for 'lesser' formats. I believe this takes place during mastering, though, not mixing.

That’s true (and you can toggle this in the DAR with the “spatial coding emulation” setting).
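
For anyone curious what the emulation is approximating: Dolby doesn't publish the algorithm, but the general idea of "clustering" positioned sources into a fixed budget of elements can be sketched with a toy nearest-centroid loop. This is emphatically not Dolby's spatial coding, just the flavour of it:

```python
import numpy as np

def cluster_objects(positions, n_clusters=12, iters=20):
    """Toy k-means-style grouping. positions: (n_objects, 3) array of
    x/y/z coordinates in the unit cube. Returns a label per object."""
    rng = np.random.default_rng(0)
    centroids = positions[rng.choice(len(positions), n_clusters, replace=False)]
    for _ in range(iters):
        # assign every object to its nearest centroid ...
        d = np.linalg.norm(positions[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # ... then move each centroid to the mean of its members
        for k in range(n_clusters):
            if (labels == k).any():
                centroids[k] = positions[labels == k].mean(axis=0)
    return labels

# 40 positioned sources collapse into a 12-element budget: objects that
# share a label would share one transmitted element downstream.
objs = np.random.default_rng(1).random((40, 3))
print(np.bincount(cluster_objects(objs), minlength=12))
```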

I’ve been checking out mixes on Apple Music lately. The newer stuff that sounds good has some very light ambience in the tops and rear surrounds, but not much in the way of steady instrument content. (Just short fills now and then.) Basically, an enhanced “quad” seems to work nicely. And there seems to be quite a bit that ignores the center speaker, too. (Fine by me!)

Have a look at this.

Encoding and Delivering Dolby Atmos Music

And this (from 1:53:47): EIPMA WEBINAR 2 Music - YouTube

And this (from 1:53:52): EIPMA WEBINAR 5 ATMOS Series Recap - YouTube


I see! You speak of “spatial coding”. :slightly_smiling_face:
Unfortunately, Dolby makes a big secret of this and only gives out very general information about the process.
Whether and to what extent object signals are affected depends on several factors: the number of objects used, the positions of the objects, the codec used, the number of clusters, etc.
With TrueHD, for example, objects should be restored losslessly. The situation is different with E-AC-3 JOC.

By the way: Spatial coding affects not only the objects but also the beds. So in my opinion you cannot say that objects are always treated worse than the bed when encoding into a consumer format.

Right, because the beds get converted to objects for lossy streaming. (And again, you get 16 objects total.)

What really makes things confusing (misleading?) is all the YouTube tutorials on mixing for Atmos, where the engineers are clearly mixing with the mindset of Atmos for theatrical, where delivery can accommodate 128 lossless objects. In the real world, where we only have 768 kbps for the music, that's just not going to translate.
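
The size of that gap is easy to put rough numbers on (raw PCM figures, so only indicative; TrueHD's lossless packing lands somewhat below the raw rate):

```python
# Back-of-envelope bitrate comparison, raw PCM at 24-bit/48 kHz.
def pcm_kbps(channels, bits=24, fs=48000):
    return channels * bits * fs / 1000

print(pcm_kbps(12))        # 13824.0 -> a 7.1.4 bed alone is ~14 Mbps raw
print(pcm_kbps(128))       # 147456.0 -> 128 lossless objects is ~147 Mbps raw
print(pcm_kbps(128) / 768) # 192.0 -> ~190x the 768 kbps streaming budget
```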

Apple’s recommendation for mixing for spatial audio is to stick with the bed and use objects sparingly for a “few featured sounds.”

So in my opinion you cannot say that objects are always treated worse than the bed when encoding into a consumer format:

I am sure the beds are compressed and streamed on the fly, but the bed certainly seems to be given priority. At least it appears so.
Coming from post, we have lived with the potential of opening up the rears and overheads as visually aided focus for the LCR, since all the action happens in the front. But lately there are much weirder things going on with the overheads and rears for music than the usual 7.1.4 routing to the speakers. It's one thing to encode and send it to the renderer, and something totally different to decode and fold it back to the speakers using purely the metadata. In the process it makes some intelligent decisions, and obviously that's what codecs are supposed to do, but are there two different algorithms at work, one for music and the other for post? I have my doubts.

This may not be related to the topic in question, but I was at an exhibition, and hearing the Sennheiser AMBEO Atmos home theater system kind of spooked me. As usual I was paying attention to the front, but it felt as if it was playing with how my brain processed the rears and overheads, creating a spooky presence that made me turn my head over my shoulder to look. My first thoughts were: was this mixed into the original, and if so, how? Does it take the listener into consideration? Does it pre-empt how it's going to affect me? Does it have motion sensors? Does it take my position, location and movement in space into account? Does it use sonar or some sort of echolocation to build an image of me in motion, with all those tweeters firing everywhere? Or does it just use AI to tweak how and what the listener hears, and is it therefore doing much more with the metadata than what you and I did in our rooms?

So, back on topic. If you are delivered 7.1.4 music as multichannel BWAVs for a film, such that you need at least two sets of tracks where the WAVs overlap, how would you deal with it for an Atmos post mix at the re-recording stage? How are you splitting it off to bed and objects in order to keep the top array intact?

@Rajiv_Mudgal I get that you suggest duplicating the audio as you described above. I gather, then, that you tried using a 7.1.4 group and couldn't? I have tried and haven't been successful as yet.

And to the wider group: is your answer to ask for the music to be delivered differently, e.g. 7.1 WAVs only, or to just fold down from the 7.1.4 group to the 7.1.2 bed, losing front/rear top separation?
Would that approach be different in a big-budget, no-holds-barred cinema film mix versus a Netflix streaming feature scenario?
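
Just to pin down what that fold-down actually loses, here's the arithmetic as I understand it; the -3 dB gain is my assumption, not a verified MixConvert coefficient:

```python
# Folding 7.1.4 tops into a 7.1.2 bed: the four tops sum pairwise into
# one top pair, and front/rear top separation is gone for good.
G = 10 ** (-3 / 20)  # assumed -3 dB; check what your fold-down really uses

def tops_4_to_2(tfl, tfr, trl, trr):
    tl = G * (tfl + trl)  # top left  = top front left + top rear left
    tr = G * (tfr + trr)  # top right = top front right + top rear right
    return tl, tr
```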

@Dietz's point, that it matters in Atmos music to have separation between front and rear tops, also resonates with me. And in a no-holds-barred film mix scenario I'd be asking the music mixer to keep that separation so it can be retained at the re-recording stage. It also makes sense for the purpose of a separate soundtrack release.

How, then, is music delivered to the dubbing stage in 2023 if it was freshly mixed in a 7.1.4 speaker room with an entirely Nuendo pipeline to produce Atmos deliverables?

You will laugh, but I actually deliver plain 7.(1.)4 mixes and stems (if required)… actually it's more like 5.(1.)4 for more TV-oriented productions. The "(1.)" just means that I don't use the LFE in most cases.

Sometimes the composer, the mixing person from the dubbing stage and I will talk to each other and decide on a handful of individual elements that are delivered separately. These (musical) elements are taken care of during the film mix. Otherwise, everyone has been happy with my purely channel-based 3D scoring mixes so far, as far as I can tell. :slight_smile:


This is exactly what I have: 7.1.4 WAVs. My guess is that the majority of deliveries are or will be like this, because we often don't need to pan instrument objects around the place.

So, the other part of my question: how do they deal with the top layer of your 7.1.4 WAVs in the final mix at the dub stage? How is it bussed to the music bed and objects? Are they just folding it down to 7.1.2 and losing front/rear top separation for Atmos?

I assume many of them are using Pro Tools at the dub stage. Any ideas how it would be done in Nuendo?

Hi… technically, a 7.1.4 wave/bed does not exist in Atmos. The 7.1.2 bed's overheads are phantomed into a dome-like experience, whereas 7.1.4 exists in a box with metadata.
That's just the nature of the beast, and even though there is no one-shot solution, there are several ways to skin the cat, which I am sure you have already tried.
What I showed above was that you can split off the floor and send it to the 7.1 bed, and send the overheads as mono objects, which can be fixed or panned; this does not increase your object count.
You can then use a reverb plugin on just the overheads by muting the rest. Both methods work.
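
For what it's worth, the same split can also be done offline before the session is even built. A sketch with the Python soundfile library; the filenames and the channel order here are placeholders, so verify them against your actual deliverables:

```python
import soundfile as sf

# Split a 12-channel 7.1.4 stem into a 7.1 file for the bed plus four
# mono files for the corner objects. Channel order assumed to be
# L R C LFE Ls Rs Lrs Rrs Tfl Tfr Trl Trr.
data, fs = sf.read("stem_714.wav")            # data shape: (frames, 12)
sf.write("stem_71_bed.wav", data[:, :8], fs)  # floor channels -> bed
for i, name in enumerate(["Tfl", "Tfr", "Trl", "Trr"]):
    sf.write(f"stem_top_{name}.wav", data[:, 8 + i], fs)
```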



Thanks for the reply, @Rajiv_Mudgal. I do appreciate your time, and what I am saying below is certainly not intended to be negative (in case it looks that way).

Yes, I do know that a 7.1.4 bed doesn't exist (a senseless limitation from Dolby; it should be 7.1.4!) and that the 7.1.2 bed overheads are phantomed. Hence my questions in the previous posts.

So, in the second photo showing the PSP plugin, we can't see what you've done with your test audio track. I assume you have duplicated it, including the audio event (as we previously discussed)? Please show what's behind the PSP plugin; surely it has to be a duplicate of your test audio? Also note there are 7 unused object channels wasting bandwidth. Is this avoidable?

And, as I have previously said, this seems like a cumbersome solution. Is it the only way you could get it to work? How come what I'm asking isn't clear? We seem to be going in circles. Is it my wording? Besides, the topic title says it all, although the difference is that the OP was content to fold down 7.1.4 to 7.1.2 and stick strictly to beds. I will try again.

I am trying to retain front/rear separation, which means the top layer needs to be sent as 4 objects panned discretely into the corners. And I am trying to do it from a 7.1.4 group, because there are too many 7.1.4 stems to be doing it individually.

Consider my and Dietz's scenario, where we have, say, eight 7.1.4 stem WAVs per cue. In some instances a cue's tail will overlap with the next cue, so they need to be staggered across at least two sets of tracks. This means at least 16 tracks.

Are you saying the only way is to duplicate all these tracks (in my example 16 duplicates, giving 32 tracks) and individually assign them to beds and objects?

Surely it would make sense to assign all the stems to one main direct out, a 7.1.4 group, and then deal with that. I have also tried sending each audio track to two main 7.1.4 groups (direct outs can't be manipulated/muted the way sends can) and using muting via an EQ plugin to make one group feed 7.1 and the other the top 4, avoiding unused object channels, thus using a Quadro top group. But this is where I'm not getting success. So:

a) In the case of one main group: how can I break off the 4 top channels from the group and send them as objects?

Or, if that isn't possible due to Nuendo's bussing limitations (for instance, Nuendo's send-panning MixConvert annoyingly doesn't have top front/rear separation):

b) Send all 7.1.4 music audio tracks to two main groups, where one feeds the 7.1 bed and the other sends the top layer to 4 objects. This is what I can't get to work, and I don't know if Nuendo can do it. Ideas, anyone?

Hi,
No, it's not a duplicate.

test.npr (316.7 KB)