Dithering is only useful when intentionally lowering the bit depth of an audio file, for example when converting your mastered file from 96 kHz/24-bit to 44.1 kHz/16-bit (the dither addresses the bit-depth reduction, not the sample-rate change).
If you do that without dithering, every sample abruptly snaps to the nearest 16-bit value.
That effectively alters the audio and changes the waveform shape, introducing unwanted quantization distortion and artifacts.
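To make the snapping concrete, here is a minimal sketch in Python. The 32767 scaling and the TPDF (triangular) dither noise are standard textbook choices, not anything Cubase-specific, and the 440 Hz test tone is just an illustration:

```python
import math
import random

LSB = 1 / 32767  # one 16-bit quantization step, for full scale at +/-1.0

def truncate_to_16bit(sample):
    """Snap a float sample in [-1.0, 1.0] to the nearest 16-bit level (no dither)."""
    return round(sample * 32767) / 32767

def dither_to_16bit(sample, rng):
    """Add TPDF dither (difference of two uniforms, +/-1 LSB peak) before rounding."""
    noise = (rng.random() - rng.random()) * LSB
    return round((sample + noise) * 32767) / 32767

rng = random.Random(0)

# A sine wave far below one 16-bit LSB: rounding without dither erases it
# entirely, while dithering keeps it alive inside the noise floor.
quiet = [0.3 * LSB * math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]

truncated = [truncate_to_16bit(s) for s in quiet]
dithered = [dither_to_16bit(s, rng) for s in quiet]

print(max(abs(s) for s in truncated))      # 0.0 -- the signal is gone
print(max(abs(s) for s in dithered) > 0)   # True -- it survives as dithered noise
```

The point of the sketch: undithered rounding correlates the error with the signal (here it deletes a quiet signal outright), whereas dither converts that error into benign, signal-independent noise.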
There’s actually no need to dither when going from 32-bit to 24-bit. The resolution is already so high at 24-bit that the quantization distortion is exceedingly low and completely negligible.
However it is highly recommended when going from 24-bit to 16-bit, because 24-bit has 16,777,216 possible values while 16-bit only has 65,536, which is 256 times fewer.
It means the quantization step at 16-bit is 256 times larger, so going 24 to 16 without dither produces a quantization error roughly 256 times higher than going 32 to 24 without dither.
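A quick sanity check on those figures (any language would do; Python shown here):

```python
# Counts of representable levels at each integer bit depth.
levels_24 = 2 ** 24
levels_16 = 2 ** 16
print(levels_24)               # 16777216
print(levels_16)               # 65536
print(levels_24 // levels_16)  # 256 -- the ratio between the two depths
```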
Cubase’s (or a plugin’s) internal resolution (32- or 64-bit float) only serves the internal calculations while the audio is being processed: gain faders, panners, effects, anything.
Let’s say your recording is in 16 bit, and Cubase is in 64 bit float.
The 16 bit audio goes through an EQ.
When you tweak the EQ bands, all the internal mathematics happens in the 64-bit float domain, and the result is only rounded back down to the nearest output-depth value at the end of the chain.
This causes no audible distortion and needs no dithering, because it only concerns the internal processing precision, not the actual resolution of the audio file.
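A tiny sketch of that idea, with Python’s native 64-bit floats standing in for the DAW’s float engine (the sample value and the 0.5 gain are arbitrary illustration numbers, not Cubase internals):

```python
# A 16-bit integer sample runs through a gain stage computed in 64-bit float.
sample_16 = 12346                        # integer sample, range [-32768, 32767]
as_float = sample_16 / 32768.0           # converted to float on the way in
processed = as_float * 0.5               # gain fader math, exact in 64-bit float
back_to_16 = round(processed * 32768.0)  # rounded to a 16-bit level on the way out
print(back_to_16)  # 6173
```

Every intermediate step lives comfortably inside the float precision; the only rounding happens once, at the output.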
It is a total myth that plugins increase the actual bit depth of the audio on input and then lower it back to the original on output.
Audio Processing and audio Conversion are two completely different things.
To answer the OP’s original question:
There is no need to dither when sending the audio through external FX.
If your interface is set to 24-bit and your audio was also recorded in 24-bit, it stays in 24-bit the whole way.