Workstation Power when mixing feature films

Dear Community,
over the last couple of years, we have continuously grown our business toward bigger and bigger projects, from small TV documentaries to now medium-sized cinema features. The bigger the projects get, the slower the workstation becomes (obviously). I may have hit a peak now, counting about 450 tracks in total, with all the groups and FX, in up to 5.1 or 7.1. In my current session I count 16 reverbs, some delays, noise reduction plugins on the dialog tracks, and so on. The session gets so slow that I can barely move a fader.
I know for certain that even bigger projects are coming, so I’d like to be prepared.

Is it normal for the system to crawl in sessions like this? Do others experience the same?
When is it time to split sessions across a two-DAW setup running simultaneously?

The setup is:
Nuage controller (2x Fader, 1x Master)
Yamaha AIC-128 Dante sound card
Windows 10
Intel i9-10980XE
2x Nvidia T1000
64 GB RAM

We used to work off a 10 Gbit/s storage server (NAS-like). Performance is noticeably better when running the session off an internal SSD, but not workably so.

The system was put together by audio-PC system professionals, not by myself.

@Fredo I know you have a similar setup, but with RME cards and PCs from Xi-Machines, right? What is your experience on that matter? I would love to know.

Thanks to everyone who has a thought on this for me.

Hi Hendrik,
You have a pretty powerful CPU, so switching to a more powerful one will probably only solve the problem for a while, or might not solve it at all.
I would check the system’s behaviour at various latency buffer settings, from 512 to 2048; one setting may be better than another, and not necessarily the highest buffer.
You may also check whether “disable VST3 process when no audio on track” is on; it saves a lot of CPU power.
Use ProRes or DNxHD videos; H.264 and others tend to use a lot of CPU.

Hi Kamil,
thanks for your reply. I already have video running on a different machine, and “disable VST3 process when no audio on track” is checked.
So far I have tried 512 and 2048, but not 1024 yet. I’m going to do that, for what it’s worth.


We never, or rarely, exceed 250 tracks.
As standard, we have our deliverables configured as Output tracks.
We never, or rarely, use more than 6 Group tracks.
As standard, we have 2 to 4 “pre-group” tracks.
As standard, we use 3 x 8 FX tracks (DX, DX Opt and FX).

We manage our jobs by splitting the projects up.

We start with a Dialog Assembly project which only has the guide video, guide audio, the mix tracks from location sound and, if provided, the AAF tracks from the video editor.
Using the Field Recorder tool, we import the needed ISO tracks.

After cleaning, this results in a Dialog Edit project ranging from 8 to 24 tracks.
We use offline processing for cleaning, denoising, EQ-ing and other things, so the DIA tracks become consistent.

We cut all of the PFX into the Dialog Edit project.

We export a temp mix of DIA and PFX for use in the following projects:

We create an AMBIANCE project for laying out the BGs (between 6 and 24 tracks).
We create an SFX project for cutting the hard FX (between 12 and 48 tracks).
We create an ADR project with only what is needed for ADR (between 4 and 12 tracks).
We create a Foley project (between 24 and 96 tracks, depending on what kind of series/movie it is).

We import all of the above into our mix project.
Since all of the above subprojects are recorded/cut against the DIA and PFX premix, much of the material is already in the ballpark, balance-wise.
The only things we need to do are the final balance, EQ and reverbs.

Our groups have very few plugins running.
SA2, Accentize Voice Gate, and Spectral balance on the DX Group. (Sometimes a C4)
Brickwall limiter on all delivery busses.

This is more or less (it changes a bit depending on the project) our standard procedure.

We haven’t maxed out our machines (yet).



Hi Fredo,
thank you for your detailed answer.
Our workflow looks quite similar. We also split up dialog, FX, ADR and Foley beforehand and bring it all together for mixing.
It looks like our channel count just got a little out of hand. Other than that, there aren’t many differences. OK, because of the channel count I have way more pre-groups and VCAs in order to work efficiently, and this in turn results in even more channels to be processed, I guess.

Do we have people here handling similar channel counts, or with experience in splitting sessions across two DAWs? I always see big studios running Pro Tools with 2 or 3 playback DAWs and one recording DAW. Is this really the way to go?
If possible, I would like to stay away from splitting the session. The less complex it gets, the better.
I would love more insight into the workflows other studios use.


We use pre-groups to avoid VCAs.
With VCAs you are writing automation on all of the linked channels, which is (in my opinion) unnecessary.

Our signal flow is: DX tracks => DX PreGroup => DX Group => Deliverables.
Each DX track has individual sends, but we only use them in specific places.
Most of the DX reverbs are sent from the DX pre-group, so you are only spending one send’s automation. Any final tweaks or specials can still be done from the individual tracks.

Same for FX & Foley …
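Purely as an illustration, that group structure can be sketched as a tiny parent map, where each track feeds exactly one destination. The track names below are made up, and this is of course only a sketch of the routing idea, not how Nuendo represents it internally:

```python
# Hypothetical sketch of the routing described above:
# DX tracks => DX PreGroup => DX Group => Deliverables.
# Track names are illustrative, not from a real session.
ROUTING = {
    "DX 01": "DX PreGroup",
    "DX 02": "DX PreGroup",
    "DX PreGroup": "DX Group",
    "DX Group": "Deliverables",
}

def signal_path(track, routing=ROUTING):
    """Follow a track through the group structure to the final output."""
    path = [track]
    while path[-1] in routing:
        path.append(routing[path[-1]])
    return path

print(signal_path("DX 01"))
# ['DX 01', 'DX PreGroup', 'DX Group', 'Deliverables']
```

The point of the structure is visible in the map: one reverb send automated at "DX PreGroup" covers every DX track upstream of it.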

I think that dates from earlier times, when the channel count in Pro Tools wasn’t that high.
Also, in the big studios, tasks are split up between rooms, studios and people, who all bring “their” stuff to the mix stage. During mixing, they stay(ed) at their station, to make changes to what they brought, or to toggle between alternatives.

I don’t think this still happens today.
The second DAW is mostly used to print the stems and deliverables during mixing.

That being said, I am no specialist when it comes to other people’s workflows or setups.
I have always created my own personal workflow.



That certainly does sound reasonable, yet a little bit crippling. For example, I almost always use aligned boom and clip mics simultaneously, but apply more reverb (or all of it) to the clip in most cases. In the age of multicam shooting, most booms I get sound way too reverberant already and I don’t want to add even more, but the clips get some for the surrounds, etc.

I am going to keep this in mind when I set up my next project and start mixing. For this one I will try to limit the channels as much as I can and hope for the best.

I am also going to try switching to an internal high-speed M.2 drive instead of a standard SSD and see if that makes any difference.


I do the same thing, but I balance boom and lav to create depth and distance. So when a character is in close-up, the boom/lav balance is about 40/60 or more. In a long shot, the balance is 70/30 or less. That way I create the feeling of closer/further away, and I only have to add a tad of reverb to the combined signal on the DX pre-group.

I simply can’t get used to dialog that always sounds “in your face”, no matter if the person is in close-up at 1 meter distance, or is walking at the beach at 20 meters from the camera.
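Purely as an illustration, those ratios can be expressed as a simple interpolation between a close-up and a long shot. The distances and the linear mapping here are my own assumptions for the sketch, not a description of how anyone actually rides the faders:

```python
def boom_lav_balance(distance_m, near=1.0, far=20.0,
                     boom_near=0.40, boom_far=0.70):
    """Interpolate the boom share of the boom/lav balance between a
    close-up (boom/lav ~ 40/60) and a long shot (boom/lav ~ 70/30).

    The 1 m / 20 m anchor distances and the linear mapping are
    illustrative assumptions; real balances are set by ear.
    """
    # Clamp the interpolation factor to [0, 1] outside the anchor range.
    t = min(max((distance_m - near) / (far - near), 0.0), 1.0)
    boom = boom_near + t * (boom_far - boom_near)
    return boom, 1.0 - boom

print(boom_lav_balance(1.0))   # close-up: roughly (0.4, 0.6)
print(boom_lav_balance(20.0))  # long shot: roughly (0.7, 0.3)
```

Beyond 20 m the sketch simply clamps to the long-shot ratio, which matches the "70/30 or less" wording above only loosely.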

Same for the Foley: we play the microphone while recording, so the sound is in context.
So a person walking away is already premixed during recording.
We only have to bring up the reverb on that individual channel to create the feeling of a person walking away into the distance.
Foley that is not recorded in context needs much, much more work at the mixing stage.
I find it extremely hard to re-create the feeling of having “air” between mic and source in the mix.



Brilliant thread …

Fredo, thanks for all the details, I find it very interesting! You should write a book in your spare time, ha ha…

Please explain “Spare Time”



lol… :rofl:


“Please explain “Spare Time””

yeah I get it …

On the upside, it’s good to have a life where none of your time is just ‘spare’ !


I work on no/low-budget productions, so I am not doing any of this for a lot of money, but I am a project/product manager by profession, so I apply the same level of discipline to each production.

I make a folder for each production where I keep shared resources (video from the editor, production audio, sound design, etc.), and inside that folder I have a Projects folder where I keep the Nuendo projects. In there I have separate Nuendo projects for Foley, dialog, SFX, score and mixdown. Each project references the same shared resources, and the audio specific to each project is stored in the default Nuendo project structure. I use SoundQ for all of the SFX and Foley I don’t capture myself, and I have a shared resource folder for any sound design I’ve done with a VST.

I start with the Foley project, then the dialog project, then the SFX project, and finally the score. For each project I export stems that I import into the mixdown project. The mixdown project is what I end up delivering to the video editor. Again, I don’t work on Hollywood features, so I typically do all of the audio myself, often even booming the production audio.
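As a sketch, a per-production folder layout along those lines could be scripted. The exact folder names below are my guesses at the structure described, not a prescribed convention:

```python
from pathlib import Path

# Hypothetical sketch of the per-production folder layout described
# above; all folder names are illustrative guesses.
SHARED = ["Video", "ProductionAudio", "SoundDesign"]
PROJECTS = ["Foley", "Dialog", "SFX", "Score", "Mixdown"]

def make_production(root, name):
    """Create one production folder: shared resources plus one
    Nuendo project folder per stage."""
    base = Path(root) / name
    for sub in SHARED:
        (base / "SharedResources" / sub).mkdir(parents=True, exist_ok=True)
    for proj in PROJECTS:
        (base / "Projects" / proj).mkdir(parents=True, exist_ok=True)
    return base

# Example: make_production("D:/Productions", "MyShortFilm")
```

The value of a fixed layout like this is that every project can reference the same SharedResources paths, so moving a production between machines only breaks one root path.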

I have two workstations with nearly the same specs:

Composition PC:
Nvidia 3090 FE
Blackmagic DeckLink Mini Monitor 4K
Separate OS, project and audio NVMe drives

Mixdown PC with Nuage:
Same as the composition PC, but with an Nvidia Quadro RTX 4000 GPU and an AIC-128 for Dante.

I have three Netgear 2120 NAS units behind a Windows server that aggregates them into a single storage pool, which I use to shuttle files around the network and for backups. I also have a VEP server on the network that I use to share VEP projects and to offload VSTs if I need to.

On my composition PC I use a Focusrite 6i6 Gen 2 and typically work with Neumann NDH 20 headphones to create each project. I then move to the mixdown PC to mix. I have a 7.1.4 Dolby Atmos array with JBL 305P MkIIs and Nuage NIO500 I/O for Atmos mixes. I also use a Dynaudio 2.1 system with LYD 48s and an 18S sub for stereo mixes, along with a pair of Mixcubes for mono mixes.

I went from a 9th-generation i9 to a 13th-generation i9 with DDR5. My configuration is similar: AIC-128, Nio16A converters, and a dual 10 Gbit network card connected to a QNAP NAS with RAID 5 and an SSD cache. However fast the 10 Gbit link is, I find better results by moving the jobs to a secondary WD NVMe SSD with a transfer rate close to 7000 MB/s. Still, I start to have problems with this quantity of tracks and plugins as soon as I activate the Atmos renderer… In my opinion I will necessarily have to switch to an external renderer. It also improves a lot when I offload the processing of external plugins, either with a dedicated Waves server or, as I’m currently trying, with VST Rack over Dante, having the plugin calculations carried out by another external machine.

In my experience you can go pretty big without a supercomputer, but the thing that will hit the ceiling is automation! There seems to be a limit to how many automation points Nuendo can handle, and this is reached regardless of hardware.

We did a huge feature project some years ago, with close to 1000 tracks for some reels, and as the automation grew it became almost impossible to work, with something like two screen updates a second during playback. Graphics, meters, movements… everything responded at most twice a second.

What were the computer specs in that case? Especially wondering about RAM and CPU.

Hi, this happened to everyone on the project: old and new Mac Pros, a PC with 32 GB of RAM and a 12-core processor, etc.

The processing meter on the computer hardly moved, apart from the disk cache, yet it still acted like it had the RAM of a Casio watch.

OK, I was interested in knowing more specifics about the system, just for the sake of understanding where the bottlenecks could have been. I know that on my older computer the bottleneck ended up being the calculation of spectral displays in some plugins once the project got big enough. So the first thing that came to my mind was whether the automation played back correctly and you ‘just’ had visual problems, which could be a CPU (or maybe graphics) issue. The other thing was whether you didn’t just add automation but also added more processing, in which case the problem could be the processing and not the automation.

If it really is a problem with the automation, though, I suppose you could test it by deleting the audio sources but leaving the automation (?) to see if the project is still sluggish. Intuitively, I’d think it’s either too little memory, or memory/CPU that is too slow.

1000-ish tracks is a lot though.

I was the one responsible for cleaning out unnecessary automation to help smooth out the project, so there’s no doubt that was the issue. :wink:

I’m still curious what the core hardware bottleneck was. No chance of you accessing the project again? Maybe try playback with no audio, or other ways of lowering the load on specific components? Just for troubleshooting purposes.