Has anyone tried the AirPods Max for testing Atmos mix playback? I’m debating whether to get a pair, but they are extremely expensive, and Bluetooth-only connectivity means latency problems that rule out using them for real-time recording/mixing. They would just be another source to check mixes after the fact, basically. That is something I want, since I only have one Atmos reference system and would love to hear a mix through another legit Atmos-decoded source - but getting them for that and only that seems like a whole lotta money to do just one thing, and maybe not even do it that well. Anyone have experience with them?
It doesn’t work, as far as I can see. My AirPods Max set the buffer size to 32 (the max is 384 or so), and Atmos requires 512. Unbelievable, really. With Logic Pro you can have separate input/output devices; not with Nuendo. You have to make an aggregate device, but the buffer-size problem remains.
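For context, a buffer size translates directly into monitoring latency added by that stage of the chain. A quick back-of-the-envelope calculation (assuming a 48 kHz project rate, which is typical for Atmos work; the numbers are purely illustrative):

```python
# One-way latency contributed by a single audio buffer: samples / sample_rate.
# Assumes a 48 kHz sample rate; real round-trip latency will be higher
# because several buffers are in flight through the chain.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int = 48_000) -> float:
    """Latency in milliseconds for one buffer's worth of samples."""
    return buffer_samples / sample_rate_hz * 1000.0

for size in (32, 384, 512):
    print(f"{size:4d} samples -> {buffer_latency_ms(size):.2f} ms")
```

So the 512-sample buffer that Atmos requires is only on the order of 10 ms per buffer; the forced 32-sample setting isn't about latency at all, it's simply an incompatible device constraint.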
My workaround is to use SoundFlower as the output from Nuendo and route it into MainStage, where I output to my AirPods. Works fine - although with high latency. That way you can still set Nuendo to a 512 buffer size.
I am stumped right now: how are you guys routing the Atmos signal out of Nuendo so that you can then listen to it with the AirPods Max?
I’m guessing it’s not an actual Atmos signal at that point… I could be wrong of course…
SoundFlower, BlackHole, etc. are pseudo devices: you attach a program to each end of the channel to route audio between programs.
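Conceptually, these loopback drivers are just a shared FIFO between two programs: one app writes its output into the "device", and another app reads the same data back as an input. A minimal, purely hypothetical pure-Python sketch of that idea (ignoring real-time constraints, sample clocks, and driver APIs):

```python
# Toy model of a loopback ("pseudo") audio device: a bounded FIFO of audio
# blocks. The writer is the sending app's output (e.g. Nuendo); the reader
# is the app that sees the device as an input (e.g. MainStage). Names are
# illustrative only, not any real driver's API.
from queue import Queue

class LoopbackDevice:
    def __init__(self, max_blocks: int = 8):
        # Bounded queue: the number of buffered blocks is what sets the
        # extra latency this kind of routing adds.
        self._fifo: Queue = Queue(maxsize=max_blocks)

    def write_block(self, samples: list[float]) -> None:
        """Output side: the sending app pushes one buffer of audio."""
        self._fifo.put(samples)

    def read_block(self) -> list[float]:
        """Input side: the receiving app pulls the oldest buffer."""
        return self._fifo.get()

dev = LoopbackDevice()
dev.write_block([0.0, 0.5, -0.5])   # the "Nuendo" end sends a block
print(dev.read_block())             # the "MainStage" end receives it unchanged
```

This is also why the SoundFlower-into-MainStage workaround above works but adds latency: every buffered block in the FIFO is extra delay on top of both apps' own buffers.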
It should be OK; it is a binaural signal that has already been decoded.
Right, but what would be the particular advantage of that?
If you are decoding to binaural before the Airpods, aren’t you just sending it a standard, head-locked 2 channel output which you could send to any headphones?
I would think the OP is trying to use the Airpods for decoding, in which case what kind of stream can you send it for that purpose? DD+JOC? AC-4?
Nuendo doesn’t output files in those formats, let alone stream them.
Nuendo, Cubase, and Logic don’t stream the unencoded output, but you can downmix into any speaker or binaural format to hear what it would sound like. I check my mixes on headphones and on a 5.1.2 system, which is all I have at the moment. Since most people will be listening on pods or headphones, those (IMO) are the most important downmixes to listen to.
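For anyone unfamiliar with what a channel-based downmix actually does, here is a sketch of the standard ITU-style 5.1-to-stereo fold-down (coefficients per ITU-R BS.775, with the usual -3 dB on the centre and surrounds; the LFE is commonly dropped). This is a generic textbook matrix, not Nuendo's actual internal renderer:

```python
import math

# ITU-R BS.775-style 5.1 -> stereo fold-down, one sample frame at a time.
# Centre and surrounds are attenuated by -3 dB (1/sqrt(2)); LFE is dropped,
# as is common practice. Generic example only, not Nuendo's exact matrix.
ATT = 1.0 / math.sqrt(2.0)

def downmix_51_to_stereo(fl, fr, c, lfe, ls, rs):
    left = fl + ATT * c + ATT * ls
    right = fr + ATT * c + ATT * rs
    return left, right

# A centre-only signal lands equally (attenuated) in both output channels:
print(downmix_51_to_stereo(0.0, 0.0, 1.0, 0.0, 0.0, 0.0))
```

The key point of the thread follows from this: a matrix like the above is fixed and channel-based, whereas an object-based Atmos render is computed per playback device from the object metadata.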
Again, I think the point here is to look for ways to monitor the Atmos content to test the way consumer devices render the Atmos. When you are using a Nuendo downmix, you are only hearing the internal renderer’s (Nuendo’s) interpretation of the Atmos metadata. The primary feature of Atmos, and indeed object based audio, is that the rendering adapts to the type of device that you are using and is decoded differently for different circumstances based on factors such as channel count, speaker placement, and head tracking. Using only a channel-based downmix defeats the entire purpose of the Atmos format.
The Atmos renderer in Nuendo itself offers settings for the downmix. (This downmix is based on the Atmos project and is therefore, unlike the downmix of, e.g., Dolby TrueHD with Atmos by an Atmos-compatible AV receiver, not channel-based; unless, of course, the project itself is CBI. But the Atmos renderer is unfortunately a black box, so you cannot really say what is going on in the background.)
The problem is rather that encoding into a consumer format massively changes the Atmos signal (keyword: clusters). You can only test this if you really encode a mix into a consumer format (TrueHD with Atmos, AC-3 JOC or AC-4).
I already knew that.
My thought exactly. That’s also why I asked my question.
Since the original question mentioned the AirPods Max in the same breath as Dolby Atmos, I thought it was specifically about this “Apple Atmos thing”.
But when I read the answers here, it’s just about using the AirPods as a monitor. Well, since Apple’s Spatial Audio isn’t the same as Dolby Atmos, I’m not quite sure how beneficial that is. To listen to binaural Atmos, you can use any other headphones. You don’t have to buy the expensive AirPods for that.
But if someone gets the AirPods as a gift for Christmas, they are of course welcome to use them with Nuendo.
To check how “Spatial Audio” sounds, it’s not enough to have Nuendo output a binaural Atmos signal. In addition to the Apple hardware and software, you also need a consumer encoder (DEE, DME, etc.).
Thanks for all the responses so far. It seems obvious to me that this isn’t a workflow that will work. I can always render out a mix in Atmos and play it back later through Atmos-enabled phones, but that isn’t the same as mix monitoring, as has been pointed out here. So really, there is no way to monitor an Atmos mix in any headphone environment accurately - it would always involve a translation to binaural two channel output, and not in real time. So in practice, there is no way to accurately hear an Atmos mix anywhere but in a proper Atmos studio with at least 7.1.4 monitoring, as Dolby requires from a mix room. Am I wrong about this?
Correct. And to go one step further, even with a 7.1.4 system, there is no way to hear how an Atmos mix accurately sounds on Apple AirPods (that is, Apple’s spatial audio rendition of Atmos) unless you use Dolby Atmos Renderer to spit out an mp4 which you send to your iOS device for QC-ing.
What we really need is for Apple to allow 3rd-party DAWs to utilize the same plug-in Logic has in order to monitor spatial audio in real time. Very foolish of Apple if they are purposely dragging their feet on this…
So Logic has a plugin that renders an Atmos mix into an Atmos consumer format and then decodes that for Atmos binaural?
Yep. [insert jealous emoji here]
I think my question was probably unclear. What that link says you can do in Logic has to do with the Dolby renderer, not with Logic or Apple. In the renderer you can set it to output Atmos binaural. But that’s a slightly different proposition than what was discussed earlier.
In addition to that we can already do the same in Nuendo’s Dolby renderer if I’m not mistaken.
Yeah, being a music producer, I sometimes forget this forum is mainly post pro users. Since the OP mentioned AirPods Max, I assumed he was concerned about monitoring for Atmos playback on Apple Music - which requires a specific workflow to accommodate Apple’s spatial audio. At any rate…
Atmos binaural is very different from Apple Spatial Audio binaural. This is because Apple uses its own renderer for Atmos/multichannel mixes to produce Spatial Audio.
So Apple’s Spatial Audio Monitoring plug-in is the only way you can monitor in real time what your mix will sound like on Apple Music (the de facto standard for Atmos music playback at the moment). In the Logic plug-in, you can select the Apple Renderer to hear Apple’s Spatial Audio binaural mix, or you can select the Dolby Renderer to hear the Atmos binaural mix. In Nuendo, we can only monitor the Dolby binaural render, which is all but useless if you want to release an Atmos mix for Apple Music.
The way you’re writing this is confusing (me).
Is it Apple’s own renderer for Atmos or for non-Atmos multichannel? It looks like it’s not for Dolby Atmos, which is what the thread is about.
Another way is to buy Logic and import your ADM file into it, then remix it with the AirPods Pro and monitor as described with the Apple Renderer. That’s what I would do, at least.
Can someone explain what Logic’s plug-in does that Nuendo’s Atmos binaural renderer option doesn’t do? If I’m not mistaken you can listen to a binaural output of an Atmos mix from the Nuendo renderer - but that is still not the same as listening to a full Atmos-encoded mix that is decoded by the playback device (with the playback device having Atmos playback capability).
While I’m glad I haven’t spent $600 on a pair of Apple AirPods Max, they are supposed to allow listening to an Atmos signal, decoded by the supplied software, which also specifies “Atmos”, not “Spatial Audio”. It doesn’t matter, since what I was originally asking about was the ability to use those phones to monitor mixes in real time, which we have deduced is not possible.
So basically all the video I’m seeing from Dolby/Nuendo of people mixing through phones is really NOT Atmos mixing at all. It’s an Ambisonics-based facsimile of an Atmos mix environment that outputs a binaural signal to the phones - no actual rendering or decoding of the Atmos mix, because it was never encoded from an actual Atmos stream. The Atmos mix was just placed into the Ambisonics “template”, if you will, and output to the phones as binaural two-channel. Am I correct about this?
Man, this is all really, really convoluted. On top of that, I’ve never been very impressed with Ambisonics binaural listening. Putting that into the monitoring chain gives me no joy. Spatial audio through binaural means has always left me wanting. Anything panned to the rear or moving in a specific pattern seems to get wishy-washy and does not convince. In my experience you can get an overall sense of 3D sound from Ambisonics, but try to get really specific with movement or positioning of sound and it fails (at least this has been my experience, I’d love to hear other people’s take).