Hi,
I’m currently implementing a VST3 plugin host (using the bindings from the vst3-rs Rust crate), and I’m at the point where I want to figure out parameter interaction.
Now, I’ve read the documentation on parameters, but either I’m missing something or the information seems contradictory.
In the documentation it says the host is responsible for sharing parameters between edit controller and audio processor, but I didn’t implement anything of that sort, and it … just works? Like, with both ProcessData::inputParameterChanges and ProcessData::outputParameterChanges being nullptr, moving the knobs has an audible result. Feeding parameters into ProcessData::inputParameterChanges moves the knob in the UI without me having implemented any interaction.
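For context, this is roughly what my host does around each process call. It’s a minimal sketch using the C++ SDK types for clarity (my real code goes through the vst3-rs bindings); ParameterChanges is the reference implementation from public.sdk/source/vst/hosting/parameterchanges.h:

```cpp
#include "public.sdk/source/vst/hosting/parameterchanges.h"
#include "pluginterfaces/vst/ivstaudioprocessor.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

// Sketch: push one normalized value for one parameter into the block,
// then run the processor. Buffer and setup handling are omitted.
void processBlock (IAudioProcessor* processor, ProcessData& data,
                   ParamID paramId, ParamValue normalizedValue)
{
    ParameterChanges inputChanges;  // host-owned queue container
    ParameterChanges outputChanges; // the plug-in may write into this one

    int32 queueIndex = 0;
    if (IParamValueQueue* queue = inputChanges.addParameterData (paramId, queueIndex))
    {
        int32 pointIndex = 0;
        queue->addPoint (0 /*sampleOffset*/, normalizedValue, pointIndex);
    }

    data.inputParameterChanges = &inputChanges;
    data.outputParameterChanges = &outputChanges;

    processor->process (data); // audio thread
}
```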
I’m mostly testing with JC303, a JUCE-based plugin …
Is that due to the specific plugin implementation, so it can’t be generalized? Or am I misunderstanding the documentation?
Best,
N
JUCE-based plug-ins use the same in-memory data for both the edit controller and the audio processor. This was state of the art in the late 1990s and early 2000s, where the roots of JUCE lie. Modern plug-ins have a separate data section for the data/parameters in their processors and for the data/parameters in their controllers. So you should implement the input and output parameter changes mechanism as outlined in the documentation. And depending on the latency of the audio graph, you can delay the parameter changes sent back from the audio processor to the edit controller, so that the UI of the edit controller stays in sync with the audio.
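A rough sketch of that sync-back step, assuming your host has some way to defer a call to the UI thread (scheduleOnMainThread below is a placeholder for that; IEditController::setParamNormalized must be called from the UI thread):

```cpp
#include "pluginterfaces/vst/ivsteditcontroller.h"
#include "pluginterfaces/vst/ivstaudioprocessor.h"
#include <functional>

using namespace Steinberg;
using namespace Steinberg::Vst;

// Hypothetical host helper: runs 'fn' on the UI thread after a delay.
void scheduleOnMainThread (double delaySeconds, std::function<void ()> fn);

// After each process () call, forward the processor's output parameter
// changes to the edit controller, delayed by the graph latency so the
// UI matches what is audible.
void syncOutputChanges (IEditController* controller,
                        IParameterChanges* outputChanges,
                        double graphLatencySeconds)
{
    if (!outputChanges)
        return;
    for (int32 i = 0; i < outputChanges->getParameterCount (); ++i)
    {
        IParamValueQueue* queue = outputChanges->getParameterData (i);
        if (!queue || queue->getPointCount () == 0)
            continue;
        int32 sampleOffset = 0;
        ParamValue value = 0.;
        // the last point of the queue is enough for updating the UI
        queue->getPoint (queue->getPointCount () - 1, sampleOffset, value);
        ParamID id = queue->getParameterId ();
        scheduleOnMainThread (graphLatencySeconds, [controller, id, value] () {
            controller->setParamNormalized (id, value);
        });
    }
}
```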
I hope this helps.
Cheers,
Arne
@Arne_Scheffler could you please expand on this a little more? Is this something related to the SingleComponentEffect class that exists in the SDK?
When using JUCE and handling parameters, there is a lot of talk about thread safety and the atomics you need to use for parameters in a JUCE plugin…, but for a VST plugin created with the VST SDK, it seems that parameter updates are simply passed as an argument to the process call, and your plugin doesn’t need to handle any thread synchronization for that. Am I on the right track here?
I’m not sure what you want to hear. But if you design your plug-in so that the audio processor is independent of the edit controller, then you have two sets of parameters: one in the audio processor and one in the edit controller. As the host is responsible for syncing both, the plug-in does not need to take care of thread safety by itself, since VST3 has a clearly defined threading architecture.
If you look at JUCE, they don’t have this separation, and thus they need to handle thread safety on their own. I don’t even know if JUCE itself takes care of this, or if the plug-in developer using JUCE needs to take care of it. (I hope it’s JUCE.)
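To make the separation concrete, here is a minimal sketch of the processor side; MyProcessor, the gain member, and kMyGainId are placeholders. The processor keeps its own copy of the value and only ever touches it on the audio thread, inside process ():

```cpp
#include "public.sdk/source/vst/vstaudioeffect.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

static const ParamID kMyGainId = 100; // hypothetical parameter id

class MyProcessor : public AudioEffect
{
public:
    tresult PLUGIN_API process (ProcessData& data) SMTG_OVERRIDE;
private:
    double gain = 1.; // plain member: only touched on the audio thread
};

tresult PLUGIN_API MyProcessor::process (ProcessData& data)
{
    if (data.inputParameterChanges)
    {
        int32 count = data.inputParameterChanges->getParameterCount ();
        for (int32 i = 0; i < count; ++i)
        {
            IParamValueQueue* queue = data.inputParameterChanges->getParameterData (i);
            if (!queue || queue->getParameterId () != kMyGainId)
                continue;
            int32 sampleOffset = 0;
            ParamValue value = 0.;
            int32 points = queue->getPointCount ();
            if (points > 0 && queue->getPoint (points - 1, sampleOffset, value) == kResultTrue)
                gain = value; // update the processor's own copy, no atomics needed
        }
    }
    // ... render audio using 'gain' ...
    return kResultTrue;
}
```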
Thank you!
But wouldn’t thread safety be easy to take care of, since the parameter is only a float, by using std::atomic if you don’t have the separation? Is it really necessary to have separate data sections for parameters when we can use std::atomic to make the thread synchronization easy? I might be misunderstanding this and there may be some other crucial thing that really needs the separation?
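Something like this is what I have in mind, just a sketch of the shared-state approach (SharedParams and the names are made up):

```cpp
#include <atomic>

// One float parameter shared between the UI thread and the audio thread.
struct SharedParams
{
    std::atomic<float> gain { 1.f };
};

// UI thread: the knob writes the value.
void onKnobMoved (SharedParams& params, float newValue)
{
    params.gain.store (newValue, std::memory_order_relaxed);
}

// Audio thread: the processor reads the value each block.
float currentGain (const SharedParams& params)
{
    return params.gain.load (std::memory_order_relaxed);
}
```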
Personally, I have developed plugins using both the vst3sdk and JUCE, and I must say I much prefer the way parameters are used and worked with in the VST3 SDK. JUCE parameter handling feels very clunky to me.
There are two issues with this approach:
1.) Using an atomic is not free in terms of performance, as the CPU has to make sure that its value is synchronized. Just think of a modern CPU with 20 or more cores where only two cores share a second-level cache, and your atomic value is accessed on two cores which don’t share that cache.
2.) In VST3, parameters are synced to the display. If you write an output parameter change in your audio processor, the controller will get this change synchronized to the audio. For example, say you have a peak parameter that shows the output peak of your plug-in. When the DAW calls your plug-in’s processor to produce a block of audio, you send a parameter change carrying the peak output value of your processing. If the DAW then outputs this block one second later to the audio hardware because of plug-in delay compensation, the peak parameter change is also sent one second later to the edit controller of the plug-in, so if you show a peak meter in your UI, it is synchronized with the audio automatically.
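Sketched in code (kMyPeakId stands for a hypothetical read-only parameter that the edit controller would expose; call this from inside process ()):

```cpp
#include "pluginterfaces/vst/ivstaudioprocessor.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

static const ParamID kMyPeakId = 200; // hypothetical read-only parameter

// Report the block's peak to the host; the host delivers it to the edit
// controller in sync with the audio (including any delay compensation).
void writePeak (ProcessData& data, ParamValue normalizedPeak)
{
    if (!data.outputParameterChanges)
        return;
    int32 queueIndex = 0;
    if (IParamValueQueue* queue =
            data.outputParameterChanges->addParameterData (kMyPeakId, queueIndex))
    {
        int32 pointIndex = 0;
        queue->addPoint (0 /*sampleOffset*/, normalizedPeak, pointIndex);
    }
}
```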