Vocal Rider Automation

I would love a simple way to use the amplitude of the vocal track to generate an automation curve that I could apply to the vocal track's fader (or, via copy/paste, to a different fader in the vocal chain).

I think the key would be to compare the current loudness of the wave file against the file's average volume and use the difference to generate the automation curve. Perhaps a ratio option would let the user choose how much volume variation the automation produces.

This would be a big time saver for generating a base vocal “rider” that may be more accurate than the third-party plugins available today.
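Something like this rough offline sketch (in Python) is what I have in mind; the block size and `ratio` parameter are just placeholders, not taken from any existing plugin:

```python
# Sketch: build a gain-automation curve from per-block RMS vs. the file's
# average RMS. Assumes `samples` is a mono float array.
import numpy as np

def rider_curve(samples, sample_rate, block_size=1024, ratio=1.0):
    """Return (times, gain_db): gain that pushes each block toward the
    file's average RMS level, scaled by `ratio`."""
    eps = 1e-12
    n_blocks = len(samples) // block_size
    blocks = samples[:n_blocks * block_size].reshape(n_blocks, block_size)

    block_rms_db = 20 * np.log10(np.sqrt(np.mean(blocks ** 2, axis=1)) + eps)
    average_db = 20 * np.log10(np.sqrt(np.mean(samples ** 2)) + eps)

    # Positive gain where the vocal dips below average, negative where it peaks.
    gain_db = ratio * (average_db - block_rms_db)
    times = np.arange(n_blocks) * block_size / sample_rate
    return times, gain_db
```

The (time, gain) pairs would then become automation points on the fader.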

I like this idea. It’s basically an envelope follower. Once you have the envelope, you could use it as a vocal rider, but you could also invert it for ducking, or use it to modulate another instrument. There are many possible uses for that kind of curve.
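For what it's worth, a minimal envelope follower of the kind I mean might look like the sketch below (the attack/release times are just assumptions); invert the result and you have a ducking curve instead of a rider.

```python
# Sketch: one-pole envelope follower with separate attack/release smoothing.
# Assumes `samples` is a mono float array.
import numpy as np

def envelope_follow(samples, sample_rate, attack_ms=10.0, release_ms=200.0):
    a_att = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.empty(len(samples))
    level = 0.0
    for i, x in enumerate(np.abs(samples)):
        coeff = a_att if x > level else a_rel  # fast rise, slow fall
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

# Ducking variant: scale and negate the envelope so a loud vocal
# pulls the other track's fader down.
# duck_db = -depth_db * env / env.max()
```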

I once tried to do this with two side-chained compressors. A constant-level, constant-pitch signal was sent through the first compressor, with a variable signal feeding its side-chain input. The result should be a tone that varies only in volume, capturing the inverse of the volume level of the variable side-chain signal. That compressor's output was then sent to the side-chain input of a second compressor, whose normal input was a VSTi. The volume envelope of the part feeding the first compressor's side-chain is thus imposed on the signal passing through the second compressor. In my case, the first compressor's side-chain was an electric guitar and the second compressor's VSTi was a saxophone. The intended effect was the same as a MIDI guitar controller driving a sax patch, but done with audio instead of MIDI.

Project dssc (the dual side-chained compressors above) failed because the envelope following was too sloppy. Transients went missing, and each compressor introduced its own timing errors. Even with zero delay and the fastest attack, the tracking was far too loose; it was a spectacular mess. It naturally occurred to me afterwards that compressors are not designed for this sort of thing, a fact I should have appreciated before attempting it.

Anyway, it brings up the problem of how to make a volume envelope that's responsive enough to be useful. Clearly the guys at Waves have pondered the issue; their Vocal Rider sometimes shakes quickly and violently. It seems like a problem best solved as a batch process (not in real time), and you'd want control over its sensitivity and smoothness. Designing something of general use seems like a challenging problem to me. Still, it would be nice.
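For the sensitivity and smoothness controls, one naive offline approach (the parameter names here are made up) would be to take a per-block gain curve like the earlier sketch produces, smooth it, and discard corrections below a floor:

```python
# Sketch: shape a raw gain curve with "smoothness" and "sensitivity" knobs.
import numpy as np

def shape_curve(gain_db, smooth_blocks=8, sensitivity_db=1.0):
    # Moving-average smoothing keeps the curve from shaking violently.
    kernel = np.ones(smooth_blocks) / smooth_blocks
    smoothed = np.convolve(gain_db, kernel, mode="same")
    # Ignore corrections too small to matter, so the fader doesn't jitter.
    smoothed[np.abs(smoothed) < sensitivity_db] = 0.0
    return smoothed
```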

Exactly my thinking, Colin. It seems to me that you could take the mean or median signal strength of the waveform, determine the difference in dB, and use that to plot an automation curve. You could also let the user pick how many beats or bars to use as the averaging window, so the curve follows a smaller or larger area accordingly. In theory, this would give you a very accurate curve for smoothing out a signal's volume before it hits the compressor. The number-one complaint about Vocal Rider is that its automation isn't accurate enough (too laggy); being an offline process rather than a real-time one, this would fix that issue.
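A rough sketch of that per-bar version might look like this (tempo and beats-per-window are assumed inputs, and the median level is used as the reference):

```python
# Sketch: one automation point per musical window, correcting each window's
# median level toward the file's overall median. Assumes mono float samples.
import numpy as np

def bar_rider(samples, sample_rate, bpm=120.0, beats_per_window=4, ratio=1.0):
    eps = 1e-12
    window = int(sample_rate * 60.0 / bpm * beats_per_window)
    n = len(samples) // window
    chunks = samples[:n * window].reshape(n, window)

    chunk_db = 20 * np.log10(np.median(np.abs(chunks), axis=1) + eps)
    target_db = np.median(chunk_db)

    gain_db = ratio * (target_db - chunk_db)  # one point per window
    times = np.arange(n) * window / sample_rate
    return times, gain_db
```

A smaller window tracks phrase-to-phrase swings; a larger one only corrects section-level differences.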

+1