Dorico to Dolby Atmos in real time

I’d like to describe my approach to working with Dorico, a DAW and Dolby Atmos simultaneously. This workflow has several advantages, which I’ll share after describing my implementation steps. My hope is that this helps anyone considering a similar setup. This is one more example of how I push Dorico to accomplish tasks that may not be obvious, extending its utility into areas that are probably peripheral for most users but are extremely satisfying for me. Playback with Dolby Atmos is icing on a very nice cake.

OPERATING SYSTEM: Create Virtual MIDI ports

Set up the virtual MIDI ports that you will use in Dorico. macOS comes with a built-in utility called Audio MIDI Setup to do this; on Windows, third-party utilities are available. Since I’m on a MacBook, my screenshots focus on that approach. For easy identification later, I find it best to create new virtual MIDI ports in the IAC Driver that map to the orchestral instruments I will use in Dorico. In my proof-of-concept example, I am using the second movement of Mozart’s Piano Concerto No. 21, so I have virtual MIDI ports for Flute, Oboe I, Oboe II, Bassoon I, Bassoon II, Horn in F I, Horn in F II, Piano (treble clef), Piano (bass clef), Violins I, Violins II, Violas, Violoncelli and Contrabasses. I also add a virtual MIDI port for the timecode that syncs Dorico with my DAW, and I rename the IAC Driver device to Instruments. I don’t use more than one channel within each port, but that is an option. Here is a screenshot of the virtual MIDI device properties:

DORICO: Check Preferences

Open Dorico. In Preferences / Play, deselect the virtual MIDI ports you have created as inputs. This avoids MIDI feedback loops. Screenshot:

DORICO: Check Playback Template

I prefer to load the Silence playback template because all sound will be generated outside of Dorico.

DORICO: Setup and Write Modes

As you normally would, create the Players in Setup mode and enter your notation in Write mode. I also create additional staves as needed to change articulations via key switches. I use Manual Staff Visibility to hide the key switch staves, or I mask them with a lighter-colored graphical element. I use 64th notes to ensure the key switch is sent before the playing note. Screenshot of Violin I and the con sord. key switch in its unhidden key switch staff:
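To make the ordering concrete, here is a minimal Python sketch of the raw MIDI stream that the 64th-note trick produces: the key switch note-on arrives just before the musical note-on. The key numbers and channel below are illustrative placeholders, not values taken from Dorico or from any particular sample library.

```python
def note_on(channel, key, velocity):
    """Build a raw 3-byte MIDI note-on message (status 0x90 | channel)."""
    return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

def note_off(channel, key):
    """Build a raw 3-byte MIDI note-off message (status 0x80 | channel)."""
    return bytes([0x80 | (channel & 0x0F), key & 0x7F, 0])

KEYSWITCH_CON_SORD = 24   # hypothetical key switch note (C1), below violin range
VIOLIN_NOTE = 67          # G4, the note that is actually heard

# The order matters: the 64th-note key switch fires first, so the
# sampler has already changed articulation when the real note sounds.
stream = [
    note_on(0, KEYSWITCH_CON_SORD, 100),  # key switch (the hidden 64th note)
    note_on(0, VIOLIN_NOTE, 80),          # the playing note
    note_off(0, VIOLIN_NOTE),
    note_off(0, KEYSWITCH_CON_SORD),
]
```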


DORICO: Play Mode

In Play mode, on the VST and MIDI tab, add all the virtual MIDI ports you created. Also add a VST Instrument to load the TXL Timecode plug-in (if you want to sync Dorico’s transport controls with your DAW). Select and activate your virtual MIDI device, and make sure the frame rate matches the setting in your DAW.
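Why the frame rate must match on both ends: MIDI Time Code itself encodes the rate. As a hedged sketch (the nibble layout below follows my reading of the MTC quarter-frame portion of the MIDI spec, not anything specific to the TXL plug-in), the final quarter-frame message (piece 7, sent with status byte 0xF1) carries a 2-bit frame-rate code alongside the high bit of the hours:

```python
# MTC quarter-frame piece 7: data nibble layout is 0 r r h, where
# "rr" is the frame-rate code and "h" is the high bit of the hours.
RATE_CODES = {
    0b00: "24 fps",
    0b01: "25 fps",
    0b10: "29.97 fps (drop frame)",
    0b11: "30 fps",
}

def decode_piece7(data_nibble):
    """Extract the frame-rate label from an MTC piece-7 data nibble."""
    return RATE_CODES[(data_nibble >> 1) & 0b11]
```

If Dorico announces one rate and the DAW expects another, the transports will drift or refuse to chase, which is why the plug-in setting and the DAW setting must agree.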

Then on the Track Inspector tab, complete the Routing to each MIDI instrument. I use independent voice playback for Piano so I can have greater control over dynamics and mixing, which is why I created separate MIDI ports for the piano’s treble and bass clefs. I use the MIDI Monitor app to confirm MIDI routing out of Dorico before setting up my DAW.


DAW AND DOLBY ATMOS: Setup Steps

Note: many DAWs have internal Dolby renderers. In 2023 a third party released a plug-in that enables Dolby Atmos music creation in just about any DAW, available for both Windows and macOS. In my case, I am using Nuendo as my DAW and the Dolby Atmos External Renderer 5. Since the DAW and Atmos components of a workflow can vary widely, I’ll give abbreviated steps here. If you want more details, let me know.

  • Open the Dolby Atmos External Renderer before opening Nuendo
  • In Nuendo, activate MTC sync if desired in Transport / Project Synchronization Setup
  • Select Dolby Audio Bridge (DAB) as your virtual audio interface in Studio / Studio Setup. Set the buffer size to 512 samples and the sample rate to 48 kHz.
  • Select External Renderer in Project / ADM Authoring for Dolby Atmos
  • Create Instrument tracks using your preferred sample libraries, aligned with the tracks in Dorico.
  • In the Inspector, choose the input routing to the corresponding virtual MIDI devices you are using in Dorico
  • For each instrument, insert the Dolby Atmos Music Panner plugin. Other plugins can be used if desired.
  • Add a mono audio track and insert the Dolby LTC and Binaural Setup plug-ins
  • Create a Bus for an Atmos 7.1.2 bed in Studio / Audio Connections / Outputs / Add Bus.
  • Create stereo or mono buses for as many Dolby Atmos objects as you wish to create, up to 118.
  • Create a mono Bus for LTC sync
  • Make sure the Dolby Audio Bridge channels are connected as desired. Connect LTC to channel 129 of DAB.
  • Route your Instruments to either the Atmos Bed or Object tracks
  • Add automation lanes to control the X, Y, Z and Size parameters of the Dolby Atmos Music Panner if desired.
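Two of the numbers in the steps above can be sanity-checked with quick arithmetic. This is just a back-of-envelope sketch, not output from any Dolby tool: the per-buffer latency implied by 512 samples at 48 kHz, and the object count left over once a 7.1.2 bed has claimed its channels out of the 128 available in an Atmos mix.

```python
# Per-buffer latency at the settings recommended above.
BUFFER_SAMPLES = 512
SAMPLE_RATE = 48_000
buffer_latency_ms = BUFFER_SAMPLES / SAMPLE_RATE * 1000  # roughly 10.7 ms

# Object budget: 128 total channels minus the 10 used by a 7.1.2 bed.
ATMOS_CHANNELS = 128
BED_7_1_2 = 7 + 1 + 2
objects_available = ATMOS_CHANNELS - BED_7_1_2  # 118, matching the step above
```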


THE ADVANTAGES

  • Hitting play in Dorico immediately generates the sounds of the sample libraries in your DAW in Dolby Atmos 3D space.
  • You can generate audio files in many ways: use the Dolby Renderer to create an Atmos master file, or export to MP4. You can record audio in real time in Nuendo by routing the Instrument tracks to a Group track, or record the MIDI and render in place.
  • Starting Dorico is very fast because no sample libraries are loaded.
  • No need to use expression maps in Dorico
  • Dynamics are passed with the MIDI and articulation changes managed with key switches
  • BIG ONE FOR ME: You can quickly make a change to the notation in Dorico and hear it in real time in your DAW and Atmos, without repeatedly exporting audio stems to your DAW
  • You can downmix Atmos to binaural for listening with any headphones
  • Latency is minimal. By selecting any staff in Dorico, I can play notes on my keyboard and hear the chosen instrument sound.
  • This was not at all complicated to set up and use, but it might appear so. If I can do it, anyone can.
  • This is a stable and efficient workflow and I look forward to extending it to pieces that will really benefit from Dolby Atmos. Sorry Mozart!



Excellent, thank you for sharing that!

And at the end it even makes your coffee :grinning:

Why make it so complicated when you can make it simple? This solution is going to kill your CPU!
Is Dorico sending articulation changes to the DAW?
How are you going to convert articulations from Dorico to your player in your DAW?

My approach with Dorico is to work in Logic with VSL instruments and MIR PRO 3D…

When your music is finished, mixed and for sale, you import the file into Dorico only if you need the score to be played by a real orchestra.
The score in Logic is good enough to be sent to the SACEM, so it is protected.

So if my score is played by a real orchestra, I will hire a Dorico specialist who works seven days a week to do the job.

Dorico articulations are basic MIDI commands (key switches, Program Change or CC change) - they are easily transmitted via virtual MIDI cables.
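For the record, all three of those message types are tiny. Here is a minimal Python sketch building them as raw bytes; the channel, key, program and controller numbers are placeholders, not tied to any particular library or instrument.

```python
def key_switch(channel, key, velocity=100):
    """Note-on used as a key switch: status 0x9n."""
    return bytes([0x90 | channel, key, velocity])

def program_change(channel, program):
    """Program Change: status 0xCn, followed by a single data byte."""
    return bytes([0xC0 | channel, program])

def control_change(channel, controller, value):
    """Control Change: status 0xBn, controller number plus value."""
    return bytes([0xB0 | channel, controller, value])
```

Any virtual MIDI cable carries these unchanged, which is why the articulation data survives the trip from notation program to DAW.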

The advantage of loading virtual instruments into the DAW instead of Dorico is a whole other level of flexibility and tweakability that becomes available, as well as recording and editing audio.

In Cubase specifically, there are various approaches to managing instruments: a Cubase project can be used like a more advanced VEPro server with individual instrument tracks frozen or purged until needed; or individual instruments (with FX, etc) can be recalled from track presets (similar to Dorico’s endpoints) or imported from track archives.

And on top of all this, a DAW allows embedding multiple audio tracks directly into the project file to be used as a mix reference, and it offers the ability to render a Dorico project as MIDI (for further editing) and/or audio (for mixing) - simultaneously if needed.

I’m glad you have a workflow that works for you. Mine is almost literally the opposite - I start with the notes and I want to see the notes in traditional notation, therefore Dorico is my starting point. After the notation is progressing, I am next concerned about the sound. So I am so happy I can leverage Nuendo and Atmos to achieve excellent sound without cutting the link to tweaking the notation in Dorico. My little ecosystem works well for me, and is efficient in terms of CPU and memory usage. In my last screenshot, you can see the Renderer is showing a CPU hit for audio processing of 28%. I have not yet experienced any situation close to killing my CPU. But thanks for your concern. Cheers!


It is easy in Logic to insert articulations, and 90% of the time I insert my notes using the score because I am very bad at playing keyboards.
I do not use VE Pro anymore; my orchestra is built with VI directly, because VE Pro adds CPU overhead and there are core problems: the load is not spread across all the cores of my Mac.
It’s better to have all the instruments IN Logic.
I do not need to bother purging instruments; they are loaded with all their articulations.
Nor do I need to freeze tracks.
Some tracks, like my drum tracks, go to a bus before going to MIR PRO 3D, so the drum kit is spread from left to right.
I do not need to load both Dorico and Logic, which would use a lot of memory, and it is a pain to constantly switch from one app to the other.
When I bought Dorico I tried AdagioLearner’s approach; after a few days of work I gave up and put Dorico to one side.
Version after version I have tried to use Dorico instead of Logic, and each time I have moved back to Logic. My songs have between 80 and 100 tracks, and it is very difficult to handle all those tracks in Dorico.
In Logic you can “stack” tracks by kind (Brass, Winds, …) and work on just that group of tracks.
The editing in Dorico is very poor compared to Logic, and very cumbersome.
The Help is very bad in Dorico; in Logic, the Help takes you to the function in question.

How are you going to record an audio track without delays and error messages saying the CPU is too busy, even with pre-load buffers at their maximum?
The Mac has M.2 SSDs, which helps a lot; they are five times faster than a regular SSD.

Interesting approach. I’m also using ATMOS, but for the final stemming in Nuendo. To be honest I don’t see the benefit of this approach as it’s rather complicated, and for working on the music there’s no utility for 3D. The opposite actually, I’d say it’s preferable to work in a kind of clean room, meaning stereo, so you can ensure that your orchestration is balanced as you want.

Music these days goes out into all sorts of formats, most commonly now some kind of headphone and especially earbuds, but you have to start from a reference, which is two-channel stereo. Pushing the output format up front like this creates a mess IMHO; you’re conflating two tasks which should be separate: music making and output rendering. Worse, you’re conflating the first task (music writing) and the last task (rendering), making the workflow a mashup. And it’s way more complicated than just doing the Nuendo path I mention.

So glad it works for you but it looks like a sub-optimal approach to me, and I doubt any real world production environment would do anything like this.


Hi AdagioLearner, thanks for sharing, this is brilliant! Just the workflow I’ve been needing. One question: you are routing MIDI to Nuendo. Are you loading expression maps for your VIs in Nuendo or in Dorico?

I use playback templates in Dorico currently, and being able to have the articulation automatically applied is a big advantage. Is this retained with your workflow, or do you need to go into the MIDI in Nuendo and map it to the correct articulation (e.g. pizz., flz., etc.)?


Hi Eve, I’m glad you found the tips helpful. I don’t use expression maps on the Dorico MIDI tracks. Instead I create hidden instrument staves that contain key switches below the usable instrument range. Dorico passes the key switches to change the articulations for the virtual instruments loaded in Nuendo, such as pizz., con sord., etc. Arming the tracks to record in Nuendo will record the MIDI there. Mixing and mastering can be done afterwards in Nuendo if desired. I like to use Berlin Studio reverb in Nuendo to place the orchestral instruments on the stage, which I can monitor and adjust in real time. I have also created automation lanes in Nuendo to change Dolby Atmos object parameters (size and location). I record in the renderer to generate the Atmos files. The renderer can also export to MP4. I hope this helps. Have fun!
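The "below the usable range" idea can be sketched as a simple check: pick key-switch notes lower than the instrument's lowest playable pitch, so they can never collide with real notes. The range values below are illustrative placeholders, not authoritative data.

```python
# Lowest playable MIDI note numbers (illustrative): violin open G
# string is G3 = 55; a flute with a B foot reaches B3 = 59.
LOWEST_PLAYABLE = {"violin": 55, "flute": 59}

def is_safe_keyswitch(instrument, key):
    """True if the key lies strictly below the instrument's playable range."""
    return key < LOWEST_PLAYABLE[instrument]
```

Most sample libraries place their key switches in the bottom octaves (C0–B1 or so) for exactly this reason, so a hidden staff full of very low notes stays out of the way of the music.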

You might check out MIDI regions for working that way -

Love it - thanks for sharing.

That’s really nice! Very interesting and powerful techniques.