One major limitation of VST plugins is that they cannot really “foresee” what’s coming next in an audio track. They process the audio as it arrives, sample by sample (or buffer by buffer), without knowing anything about the samples that follow. The usual workaround is to introduce some latency, which gives the plugin a head start to analyze a few samples in advance before it has to output anything.
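To make that workaround concrete, here is a minimal, self-contained sketch of the lookahead idea, not tied to any particular plugin SDK (the class and parameter names like `LookaheadLimiter`, `lookahead`, and `ceiling` are made up for illustration). The output is delayed by a fixed number of samples, so by the time a sample is emitted the plugin has already “seen” what comes right after it and can react early. In a real VST you would also report this added delay to the host (e.g. via JUCE’s `AudioProcessor::setLatencySamples`, or the latency reported by the VST3 `IAudioProcessor` interface) so the track stays time-aligned.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Sketch of a lookahead limiter: output lags the input by `lookahead` samples,
// so peaks are visible in the buffer before they have to be played back.
class LookaheadLimiter {
public:
    explicit LookaheadLimiter(std::size_t lookahead, float ceiling = 0.9f)
        : delayLine(lookahead, 0.0f), ceiling(ceiling) {}

    // Feed one input sample, get one (delayed) output sample back.
    float process(float in) {
        delayLine.push_back(in);

        // The sample we output now entered `lookahead` samples ago;
        // everything still in the delay line is "the future".
        float out = delayLine.front();
        delayLine.erase(delayLine.begin());   // fine for a sketch; a real plugin would use a ring buffer

        // Peak over the current sample plus the buffered future samples.
        float peak = std::fabs(out);
        for (float s : delayLine)
            peak = std::max(peak, std::fabs(s));

        // Instantaneous gain reduction (a real limiter would smooth this).
        float gain = (peak > ceiling) ? ceiling / peak : 1.0f;
        return out * gain;
    }

private:
    std::vector<float> delayLine;  // not-yet-output samples = lookahead window
    float ceiling;                 // output level we refuse to exceed
};

int main() {
    LookaheadLimiter limiter(64);  // 64 samples of lookahead
    for (int i = 0; i < 256; ++i) {
        float x = (i == 128) ? 1.5f : 0.2f;  // one loud transient in otherwise quiet audio
        float y = limiter.process(x);
        if (i % 32 == 0)
            std::printf("%3d: in=%.2f out=%.2f\n", i, x, y);
    }
}
```

The point of the sketch is that the gain reduction starts as soon as the loud transient enters the delay line, i.e. 64 samples before it is actually output, which is exactly the “head start” the added latency buys you.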
That said, wouldn’t it be great if VST plugins could, by default, “scan” an existing audio track as a whole and then adjust their parameters over time based on that data? Compressors, reverbs, or limiters, for example, could be much more accurate if the plugin knew exactly what’s coming next, right?