Does N12 have AmbiX SN3D as standard output?

Hi,
Quick question: does N12 output Ambisonics as AmbiX SN3D? Or do I need to convert it?

Long question:
I’m diving into Ambisonics again after quite a while, and am running into some export ambiguity in N12.
From the manual I learned that everything in N12 is AmbiX, but there is no mention of SN3D.

I have all the regular routing set up to create assets for Unity (via the Steam Audio plugin) and am monitoring through Waves Nx to get head tracking of the Ambisonics (in the Control Room). This all works great during the design phase.

I’ve created a test file with five positions (L C R Ls Rs) that is exported as 4-channel audio for testing.
However, I’m not sure whether it really works ‘universally’, as I don’t have a Unity + VR headset here, so I can’t load it into Unity and test.
I can reimport the 4-channel file to a 4-channel track and it plays back fine as Ambisonics, but that only proves that N12 exports great for use in N12!

As mentioned earlier: the Zoom Ambisonics Player does not recognise it as AmbiX. Is this perhaps a metadata issue? In Soundminer I can see that Zoom H3-VR recordings carry metadata, but N12 does not add that.

Any help much appreciated!

ambiX is SN3D (for channel gains) and ACN (for channel sequence).
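To make the conventions concrete, here is a rough, illustrative Python sketch (nothing Nuendo-specific) of what ACN ordering and SN3D gains mean for a first-order signal, including the classic FuMa-to-AmbiX conversion:

```python
import numpy as np

# ACN channel order for 1st order is W, Y, Z, X; SN3D means each degree m is
# scaled by 1/sqrt(2m + 1) relative to N3D.

def fuma_to_ambix_foa(b_format):
    """Convert a 1st-order FuMa block (W, X, Y, Z) to AmbiX (ACN order, SN3D gains).

    b_format: numpy array of shape (4, num_samples) in FuMa order,
    assuming the classic -3 dB factor on FuMa W.
    """
    w, x, y, z = b_format
    w = w * np.sqrt(2.0)            # undo FuMa's -3 dB on W -> SN3D gain
    return np.stack([w, y, z, x])   # reorder to ACN: W, Y, Z, X

# N3D -> SN3D scaling per degree m (degrees 0..3):
sn3d_from_n3d = [1.0 / np.sqrt(2 * m + 1) for m in range(4)]
```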

The Zoom Ambisonics Player needs the metadata from the Zoom H3-VR. You can use an app like BWF MetaEdit to copy/paste the relevant metadata from an H3-VR audio file so that the player will recognize the file.
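If you would rather script that step than do it by hand, here is a rough Python sketch that copies the bext (and iXML, if present) chunks from one WAV into another. It is my assumption that these are the chunks the Zoom player keys on; checking a real H3-VR file in BWF MetaEdit first is the safer route.

```python
import struct

def read_chunks(path):
    """Return (chunk_id, raw_bytes_including_header) pairs from a RIFF/WAVE file."""
    chunks = []
    with open(path, "rb") as f:
        riff, _, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            cid, csize = struct.unpack("<4sI", header)
            data = f.read(csize + (csize & 1))  # chunks are word-aligned
            chunks.append((cid, header + data))
    return chunks

def copy_metadata(src_path, dst_path, out_path, ids=(b"bext", b"iXML")):
    """Append the given metadata chunks from src into a copy of dst."""
    src = dict(read_chunks(src_path))
    dst = read_chunks(dst_path)
    keep = [raw for cid, raw in dst if cid not in ids]
    extra = [src[cid] for cid in ids if cid in src]
    body = b"".join(keep) + b"".join(extra)
    with open(out_path, "wb") as f:
        f.write(b"RIFF" + struct.pack("<I", 4 + len(body)) + b"WAVE" + body)
```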

That being said, you can do better for A to B-Format conversion and B-Format decoding than the Zoom Ambisonics Player. I like the SPARTA Array2SH plug-in for A to B conversion, and IEM AllRADecoder for decoding. Harpex is also great for both functions.


“ambiX is SN3D (for channel gains) and ACN (for channel sequence).”

Ah, I did not know that (that SN3D is part of AmbiX)!

I just used the Zoom player for a test (as I have some recordings that were made with the H3-VR).

Thanks for the tips!

@Kewl thanks for your info again.
I’m now getting feedback from the developer that the spatialisation of the test file is ‘weak’. In N12 I clearly hear separation between the L, C and R positions, but I have doubts about the rear sounds.
Could this be an encoding issue? I would love to compare against some other test files, but don’t really have any: simple front-to-back first-order ambisonic files with vocals announcing their position. I’ll do a search, but any help is appreciated.

The “weak” “test file” you’re talking about, is it something you delivered to the developer? If so, what are you delivering? An AmbiX file? A decoded (speaker-ready) file? If AmbiX, what order? If you’re delivering 1st order, of course the spatial resolution is not very high. In any case, describe what you’re supposed to deliver and what is likely to be done to the files you’re delivering.

Hi and thanks for the very quick response.
The file I delivered was a test file, to make sure the headset plays it back correctly (Unity on Vive + Steam Audio plugin for Ambisonics). So decoding would be done in the engine in real time.

I’ve sent the file as AmbiX (first order), undecoded, for implementation in Unity.
It’s a simple vocal playing back and spatialised at the L, C, R, Ls and Rs positions. No top/bottom info.
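For reference, this is roughly the math behind such a test file: a first-order AmbiX (ACN/SN3D) encode of a mono voice per direction. The angles in the sketch are just nominal 5.1 directions I’m assuming, not values read from the N12 panner.

```python
import numpy as np

def encode_foa_ambix(mono, azimuth_deg, elevation_deg=0.0):
    """Encode a mono signal to 1st-order AmbiX (ACN order W, Y, Z, X; SN3D gains).

    Azimuth is counter-clockwise from the front (left = +90 deg); elevation up is positive.
    """
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono                              # ACN 0: W (SN3D gain 1)
    y = mono * np.sin(az) * np.cos(el)    # ACN 1: Y
    z = mono * np.sin(el)                 # ACN 2: Z
    x = mono * np.cos(az) * np.cos(el)    # ACN 3: X
    return np.stack([w, y, z, x])

# Nominal 5.1-style directions assumed for the five voices (degrees):
positions = {"L": 30, "C": 0, "R": -30, "Ls": 110, "Rs": -110}
```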

I’m a bit surprised, because when I play back the delivered file in N12 it decodes/sounds the same way as it did on the N12 timeline.

I will test some more but find it a bit ambiguous at the moment.

Check whether the developer (and the software/hardware used) can accept 3rd order. If so, modify your N12 project to bring it to 3rd order. If you have 1st-order recordings (from the Zoom H3-VR or other 1st-order microphones), you can augment their spatial resolution by using COMPASS Upmixer (free) or Harpex.
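For context, the practical constraint on the engine side is usually the channel count, which grows quickly with order:

```python
def ambisonic_channels(order):
    """Number of channels in a full-sphere ambisonic stream of the given order."""
    return (order + 1) ** 2

print({order: ambisonic_channels(order) for order in (1, 2, 3)})  # {1: 4, 2: 9, 3: 16}
```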


And the success of binaural audio always depends on how close the binaural model is to your own binaural system, in particular the shape of the pinna, which is responsible for median-plane (front/back, up/down) perception.


I don’t think 3rd order will be possible. I’ll check, but am not very hopeful… although sound is important, it is not the biggest agenda point in this production. Also, they don’t want to be tied to headset plugins… so no Vive DSP plugin.
Good reminder about binaural models; this is the weakest point in the chain.

Little update:
I’ve demoed the ambisonic files I created in N12 in the Vive headset and the effect was not very convincing. The spatial representation in the headset was just not very good compared to listening in N12 (via Waves Nx). It was ambisonic and did give the spherical effect, but the sphere felt smaller than in Waves Nx. It also seemed like the directionality of sources was blurrier.
I’m not sure what the reason for this difference is and am scanning the web and articles, but cannot really find any reference to this ‘issue’. I thought maybe the Steam Audio plugin is using a mediocre SOFA file, or Waves Nx has a very, very good one.
It’s also frankly quite hard to search for something that is not easy to put into words (spatial quality).

For the record, I am mixing everything in 3OA and ‘downmixing’ to 1OA with AmbiDecoder, and this sounds good (not magical, just good enough for this project/scope). I have upmixed the H3-VR recordings, and this does sound a bit better indeed; thanks for the tip @Kewl!
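A handy property of ACN ordering in that step, by the way: the first-order version of an AmbiX stream is literally just its first four channels, so the order reduction itself (which is what I assume AmbiDecoder is effectively doing) can be sketched as:

```python
def truncate_order(ambix, target_order):
    """Drop higher-order components from an AmbiX (ACN/SN3D) block.

    ambix: array-like of shape (channels, samples), where channels is (N+1)**2.
    """
    keep = (target_order + 1) ** 2
    return ambix[:keep]

# 3rd order (16 channels) down to 1st order (4 channels):
# foa = truncate_order(toa, target_order=1)
```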
I’ve also double-checked my ambisonic files with SPARTA and IEM, and although there are differences in sound, they are not as apparent as the difference between Waves Nx and Steam Audio.

Are there other things that could be the source of the spatial difference? Is this maybe just Unity making it hard? It only allows 1st-order files, but again: in N12 these sound ‘good’, in the headset only ‘okay’.
Are there secret settings I should be using? Or should I import a custom SOFA file? I found KU100 SOFA files, but I’m not sure whether that would make a difference.

Any advice is much appreciated

Probably the Waves binaural model is closer to your own (pinna, head width, etc.) than Steam’s, so Waves gives a more “natural” rendering.