Is adding the same effect to every track the same as adding it to the stereo out channel?

Is it the same to add a fixed EQ or console emulation plugin to every track as it is to add it just to the stereo out track?
In terms of CPU performance it is not the same, but I think that theoretically, from a sound point of view, it should be the same:
track1 + track2 + track3 = stereo out, therefore
x·track1 + x·track2 + x·track3 = x·(stereo out)
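The distributive argument above can be sketched numerically (a minimal sketch using NumPy; the "effect" here is just a fixed gain, which is a linear operation, and the random signals stand in for real tracks):

```python
import numpy as np

rng = np.random.default_rng(0)
tracks = rng.standard_normal((3, 48000))  # three mono tracks, 1 s at 48 kHz

gain = 0.5  # a fixed gain is a linear effect

per_track = (gain * tracks).sum(axis=0)  # effect on every track, then sum
on_master = gain * tracks.sum(axis=0)    # sum first, effect on the "stereo out"

# for a linear effect the two configurations match (up to float rounding)
print(np.allclose(per_track, on_master))  # True
```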

Theoretically it could be if the signal chain is kept to zero gain throughout (and a whole bunch of other potential variables don’t exist). Which realistically isn’t going to be the case as you’d normally set different levels for various Tracks in a mix. In practical use it shouldn’t really matter as long as you are making adjustments based on what you hear. I think it’s pretty unlikely you’d ever end up with the exact same results using the different configurations.


I don't think it's related to track levels, because if I raise a track's level it will sound louder in the stereo out, and the added effect will increase proportionally. Taking this thinking to the extreme: if I raise a track enough so that it's the only one I hear, then adding an effect to that track will surely be the same as adding it to the stereo out.
I tend to think that, as you say, it shouldn't really matter in practical use, but it does matter for CPU load, so that effect should be put on the stereo out, or on a group before the stereo out, instead of as separate instances on every track. So it wouldn't make sense for all those people who say they use a console emulation or tape emulation or whatever plugin on every track to simulate an analog recording, EXCEPT if that plugin includes some randomness in each instance, in which case there would be subtly different results.
I mean, a 50 Hz low-cut filter on every track should be the same as applying it to the stereo out, shouldn't it?
Is the same reverb applied to every track the same as applying it to the SO bus? This is the digital world, so maths rule; in the analog world of course it would be different.
I really want to know
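For a fixed, linear filter like that 50 Hz low-cut, the maths do agree: filtering each track and then summing gives the same result as summing and then filtering once (a sketch with SciPy; the filter order and the random test signals are just illustrative):

```python
import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(1)
fs = 48000
tracks = rng.standard_normal((3, fs))  # three mono tracks, 1 s

# a 50 Hz low-cut is a high-pass filter; 2nd order chosen arbitrarily
b, a = butter(2, 50, btype="highpass", fs=fs)

per_track = sum(lfilter(b, a, t) for t in tracks)  # filter each track, then sum
on_master = lfilter(b, a, tracks.sum(axis=0))      # sum, then filter once

# a linear filter distributes over the sum (up to float rounding)
print(np.allclose(per_track, on_master))  # True
```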

I think you have to take into account if there is dynamics involved in the plugin. If you have a compressor that has a non-linear response and is pretty clean at nominal operating level, say -18dBFS average for example, and you place one of those on each of 30 tracks then you’re going to have little distortion of the sound on each track. But if you instead place the plugin on only the master and level compensate so you’re still hitting nominal level you should have less total distortion, right? Because it’s different to have say 5% distortion on one master channel compared to 5% distortion on 30 instances.

And if you don’t level match and just have a much louder signal on the output bus going into that one instance the compressor could work a lot harder and distort a lot more because the amount of distortion doesn’t increase proportionally to input signal.

So I really don’t think it’s the same thing.
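The nonlinearity point above can be shown with even the simplest nonlinear stage (a sketch using `tanh` soft saturation as a stand-in for any console/tape-style nonlinearity; real plugins are more complex, but the principle is the same):

```python
import numpy as np

rng = np.random.default_rng(2)
tracks = rng.standard_normal((3, 48000)) * 0.3  # three tracks at modest level

saturate = np.tanh  # memoryless nonlinearity, standing in for console/tape drive

per_track = saturate(tracks).sum(axis=0)  # one instance per track
on_master = saturate(tracks.sum(axis=0))  # one instance on the stereo out

# unlike the linear case, the two configurations no longer match
diff = np.max(np.abs(per_track - on_master))
print(diff > 0.01)  # True
```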

Fortunately you can sort of test this. Just set up a test and export out your two alternate examples as mixdowns and then import both into a clean project, play them back at the same time, flip the phase on one. If you have complete silence they’re identical. If you don’t then they’re different. The only question then is what is different and to what degree is it different…
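That null test is easy to sketch in code as well (assuming the two mixdowns are loaded as arrays; here they are synthesized with a linear gain so the example nulls, and is self-contained):

```python
import numpy as np

rng = np.random.default_rng(3)
tracks = rng.standard_normal((3, 48000)) * 0.3

# "mixdown A": a linear gain on every track; "mixdown B": the same gain on the master
mix_a = (0.5 * tracks).sum(axis=0)
mix_b = 0.5 * tracks.sum(axis=0)

# play both together with one polarity-flipped: silence means they are identical
null = mix_a + (-1.0) * mix_b
print(np.max(np.abs(null)) < 1e-9)  # True: complete silence, the mixes null
```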


I think it depends on the plugins. An EQ with interacting bands is going to affect the summed signal differently than it would the individual tracks, and a compressor would most certainly behave differently in the two scenarios, and not just with respect to distortion, as @MattiasNYC has pointed out above.
Heavy bass frequencies (for example) in program material can cause pumping or ducking of the higher frequencies. This is not going to be the same behaviour as compressing the individual tracks.
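That pumping effect can be sketched with a toy compressor (an assumed, simplified design: static gain from a smoothed level estimate; the threshold, ratio, and test tones are illustrative). A loud bass tone pulls the gain down on a quiet high-frequency tone only when they share one instance on the bus:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
bass = 0.8 * np.sin(2 * np.pi * 60 * t)     # loud low-frequency content
hihat = 0.1 * np.sin(2 * np.pi * 8000 * t)  # quiet high-frequency content

def compress(x, threshold=0.5, ratio=4.0):
    # toy compressor: gain computed from a ~10 ms moving-average level estimate
    level = np.convolve(np.abs(x), np.ones(480) / 480, mode="same")
    over = np.maximum(level / threshold, 1.0)
    gain = over ** (1.0 / ratio - 1.0)  # reduce gain above threshold
    return x * gain

separate = compress(bass) + compress(hihat)  # one instance per track
together = compress(bass + hihat)            # one instance on the bus

# on the bus, the loud bass ducks the hi-hat too, so the results differ
print(np.allclose(separate, together))  # False
```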


I guess you are right in the case of dynamics, first because it wouldn’t really be “applying the SAME” effect to every track, so the comparison would make no sense in this case.