Constrain Delay Compensation (CDC) can reduce the latency of the whole project to zero, but it also disables every effect that relies on plugin delay compensation (PDC). This can change the balance of the tracks and cause unwanted differences in the sound. To avoid this, I currently have to adjust a different set of parameters whenever I turn CDC on.
Here is my idea: introduce CDC state informing to VST4. The workflow would be: when the host (Nuendo/Cubase/Reaper, etc.) sends a "CDC on" signal to the plugin, the plugin switches to a zero-latency mode and tries to get as close as possible to the effect it produced with its PDC look-ahead. For example, many FabFilter plugins can set their look-ahead to zero and still function properly. If a plugin cannot perform its basic duty at zero latency, it should at least keep the output level as close as possible to its non-zero-latency state. And when CDC is turned off again, the plugin restores its original settings and behavior.
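To make the proposal concrete, here is a minimal sketch of what such a host-to-plugin notification could look like. This is purely hypothetical (no such call exists in any current VST SDK); the class and method names are illustrative, and the "effect" is reduced to a look-ahead buffer whose length determines the reported PDC latency:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: a plugin that reports its latency from a look-ahead
// buffer and can be told by the host to switch to a zero-latency mode when
// CDC is engaged, then restore its original behavior when CDC is released.
class LookAheadPlugin {
public:
    explicit LookAheadPlugin(std::size_t lookAheadSamples)
        : lookAhead_(lookAheadSamples), cdcActive_(false) {}

    // Latency the host must compensate (PDC). Zero while CDC is active.
    std::size_t reportedLatency() const {
        return cdcActive_ ? 0 : lookAhead_;
    }

    // Hypothetical host -> plugin notification: on "CDC on", drop the
    // look-ahead but keep processing; on "CDC off", restore it.
    void setCdcActive(bool active) { cdcActive_ = active; }

private:
    std::size_t lookAhead_;  // original look-ahead, preserved for restore
    bool cdcActive_;
};
```

The point is that the plugin itself keeps its original settings in memory, so toggling CDC off brings back exactly the state the user had before.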
I am not sure if this is a good idea, but I cannot think of any other way to solve the problem that CDC on creates.
If this requires too much effort for all VST3 plugins to evolve, maybe Nuendo/Cubase could support a “Shadow Slot” instead. Here is how it would work:
Each plugin slot has a parallel slot for when CDC is on. The user can set it up for the purpose described above. Nuendo switches to the Shadow Slot when CDC is turned on, so the user can fine-tune the plugin there to meet his/her needs.
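The Shadow Slot idea could be sketched as follows. This is an assumption about how a host might model it, not how Nuendo actually stores insert slots; the struct and field names are made up for illustration:

```cpp
#include <cassert>
#include <string>

// Hypothetical "Shadow Slot": each insert slot keeps two user-saved
// presets, one for normal playback (full PDC) and one tuned for CDC mode.
// The host swaps the active preset whenever CDC is toggled.
struct InsertSlot {
    std::string normalPreset;  // used when full delay compensation is on
    std::string shadowPreset;  // user-tuned zero-latency alternative
    bool cdcOn = false;

    const std::string& activePreset() const {
        return cdcOn ? shadowPreset : normalPreset;
    }
    void setCdc(bool on) { cdcOn = on; }
};
```

The advantage over the per-plugin proposal is that this needs no change to the VST standard: the host alone decides which of the two user-prepared settings is active.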
This is why Pro Tools TDM was such a great success: it tasked a dedicated set of DSP processors with figuring out, super fast, how to process realtime incoming signals.
I have not used that for a while; now, using a Dante-enabled interface with around 4 ms of round-trip latency, I have gotten used to a time-honored tradition in audio engineering:
GET THE SOUND AT THE SOURCE. Like we used to when using tape machines.
Fortunately, using Nuendo, I can get very close to that, because input tracks can be monitored with plugins placed as inserts on them. I normally use only “color / character” plugins, like EQ and some compression, and never time-based effects like delays and reverbs; for those, we can use FX channels and sends to them. But I still try hard to get the sound right before it even hits the microphone or DI box.
I have not really had any issues with overdubbing so far.
I do try to get ALL material recorded before moving on to editing and then mixing a song, instead of having to overdub at the final mix stage, where the number of plugins is at its maximum for the song and can cause trouble with latency.
Proper organization in pre-production and production helps avoid these kinds of issues.
Thank you for your reply.
I agree with your advice that good preparation saves a lot of trouble.
However, I think software and tools exist to reduce the human work needed to produce the same result.
Especially when a host supports CDC, it means the user needs to reduce latency for some reason. And there is also a reason why some categories of plug-ins, like (multiband) compressors, need PDC for better quality.
So I still hope the software can solve the side effects that CDC causes.
You can set an amount of latency that is acceptable to you, and all tracks and plugins with lower latency are not turned off during recording or when CDC is on.
The FabFilter EQ, for example, has a zero-latency mode, and some stock plugins have a live mode or no significant latency at all. They stay on if the latency threshold is set to 1 ms.
And, if a Steinberg plugin is capable of a live mode (the MB Compressor, for example), it switches to it automatically.
But, it is absolutely up to the vendor of the plugin to implement this. The mechanics to use it are already there.
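The host-side rule described above can be sketched as a simple decision function. This is an assumed model of the behavior, not the actual Cubase/Nuendo implementation; the names and the 48 kHz example are illustrative:

```cpp
#include <cassert>
#include <cstddef>

// When CDC is on: a plugin within the user's latency threshold stays
// active, one that offers a live mode is switched to it, and anything
// else is bypassed. (Assumed model, not Steinberg's actual code.)
enum class CdcAction { KeepActive, SwitchToLiveMode, Bypass };

CdcAction decideCdcAction(std::size_t latencySamples,
                          std::size_t thresholdSamples,
                          bool hasLiveMode) {
    if (latencySamples <= thresholdSamples) return CdcAction::KeepActive;
    if (hasLiveMode) return CdcAction::SwitchToLiveMode;
    return CdcAction::Bypass;
}
```

At 48 kHz, a 1 ms threshold corresponds to 48 samples, so a zero-latency EQ stays active, a plugin with a live mode switches to it, and a long linear-phase EQ gets bypassed.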
What is this?
This often switches plugins from linear phase mode to no latency, and that changes the sound dramatically on some plugins.
It’s up to the plugin vendor to solve this. The software (the host) can only support that.