The main problem I’m currently struggling with is poor track synchronisation in an audio mixdown. The sample rates and bit depths are the same in the mixdown, the project settings, and the studio setup. I’ve been searching for a solution for quite a while, and this is what I’ve found.
I know that if two audio tracks are exactly identical (including their alignment in time) and the phase of one of them is inverted, playing both together cancels them out and we hear nothing.
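That null test can be sketched in a few lines of Python with numpy (my own illustration, not anything Cubase-specific; the sine wave is just a stand-in for a guitar signal):

```python
import numpy as np

sample_rate = 44100  # Hz, assumed project sample rate
t = np.arange(sample_rate) / sample_rate

# Stand-in for a raw guitar signal; any waveform works for the test.
track_a = np.sin(2 * np.pi * 440 * t)

# Identical, sample-aligned copy with its polarity (phase) inverted.
track_b = -track_a

# Summing the two yields exact silence.
print(np.max(np.abs(track_a + track_b)))          # 0.0

# Even a one-sample offset breaks the cancellation.
print(np.max(np.abs(track_a + np.roll(track_b, 1))) > 0)  # True
```

This is why the test is so sensitive: any latency introduced anywhere in the signal path shows up immediately as a failure to null.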
I rendered several guitar tracks that are routed to a group track. Then, with the phase inverted, I played the original group of tracks against the rendered track and listened for silence, and there wasn’t any! I had to move the rendered track almost 22 milliseconds later to make the tracks cancel.
I removed everything from the group track to exclude any influence of plugins, equalisation, etc. Then I duplicated one mono raw guitar track and routed one copy to the empty group track and the other directly to the output. When both are routed to the same output, inverting the phase of one makes them silent. When they are routed to different destinations (one to the empty group track, the other directly to the output), inverting the phase does not produce silence unless one of the tracks is moved, and in my case the offset between their starting points again has to be ~22 ms.
During playback in Cubase everything sounds fine, but when rendering or exporting, group tracks seem to introduce a time shift in the tracks routed to them.
The end result is poor synchronisation between tracks when exporting an audio mixdown. I tried real-time export, different formats and bit depths, and switching options in Studio Setup (such as adjusting for record latency), with no luck. The only workaround I have found is to render these group tracks and then move them 22 ms later, but that is ugly. What is the reason for this behaviour?
There is one more thing I have found.
The offset needed to make the tracks cancel with inverted phase seems to depend on the output latency. In the example above it is ~21.9 ms. When I increase the buffer size in the control panel so that, for example, the output latency is 33 ms, the offset needed for silence is also around 33 ms. This may be evidence that group tracks add a delay equal to the output latency.
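The relationship between buffer size, latency, and the observed offset is easy to sanity-check numerically. A small sketch (my own helpers, assuming a 44.1 kHz project; note that reported output latency usually includes driver and safety buffers on top of the ASIO buffer itself):

```python
sample_rate = 44100  # Hz, assumed project sample rate

def ms_to_samples(ms, sr=sample_rate):
    """Convert a time offset in milliseconds to whole samples."""
    return round(ms * sr / 1000)

def buffer_latency_ms(buffer_samples, sr=sample_rate):
    """Latency contribution of one audio buffer, in milliseconds."""
    return buffer_samples / sr * 1000

print(ms_to_samples(21.9))      # ~966 samples at 44.1 kHz
print(buffer_latency_ms(1024))  # ~23.2 ms for a 1024-sample buffer
```

So a ~21.9 ms null-test offset corresponds to roughly 966 samples, which is in the same ballpark as a 1024-sample buffer, consistent with the idea that the group-track path is being delayed by about one output-latency period.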
Does disabling ASIO Guard do anything?
How about changing the audio engine processing precision from 32-bit float to 64-bit double precision, any change?
I’m a bit stumped, hopefully someone else has more ideas.
I’m finding this is an issue with macOS 14.1 on an M2 MacBook Pro, including with:
Render in place
Offline bounce
Instrument Freeze
Things just don’t line up and don’t cancel out, and the waveforms are completely different from one method to the next, even using Render in Place with dry settings.
E.g.
If I freeze an instrument:
Bounce that instrument from its track offline
Real time bounce
Compare the two - they are not the same.
But realtime bounce doesn’t always solve the issue.
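For checking printed tracks against each other, the offset between two bounces can be estimated automatically with cross-correlation. This is a hypothetical helper of my own, not a Cubase feature; in practice you would load the two bounce files into arrays first (e.g. with a library like soundfile), and here I just simulate a delayed copy:

```python
import numpy as np

def find_offset(a, b):
    """Estimate how many samples b lags a, via cross-correlation."""
    n = min(len(a), len(b))
    corr = np.correlate(a[:n], b[:n], mode="full")
    # Index n-1 of the full correlation is zero lag; a peak to its
    # left means b is a delayed copy of a.
    return (n - 1) - int(np.argmax(corr))

# Simulate two bounces: the second delayed by ~21.9 ms at 44.1 kHz.
rng = np.random.default_rng(0)
signal = rng.standard_normal(8000)   # stand-in for bounce A
delay = 966                          # ~21.9 ms at 44.1 kHz
delayed = np.concatenate([np.zeros(delay), signal])[:len(signal)]

print(find_offset(signal, delayed))  # 966
```

Running this over a frozen track versus an offline bounce would at least tell you whether the mismatch is a constant shift (which you could compensate) or something worse, like genuinely different waveforms.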
This is very hard, as I’m working professionally and it is costing me days to print and check drum and synth tracks; frankly, I’d like everything to stay the way it was recorded or performed unless I change it.
One thing I’m thinking is, could this be due to the “drift correction” feature in the macOS Audio MIDI Setup “aggregate device” configuration?