I don’t have BBCSO, but the reverb I most often use for an algorithmic tail is Cinematic Rooms Pro, where I’ve been using the Alvin Wee Invisible Hall preset. I have it on an aux bus in VE Pro and send to it from my other channels. Not too much - just enough to smooth over a few of the rough bits.
I would use a different preset if I needed to add more hall to something (say, to make a recording from a smaller hall match AIR Studios), in which case even the default Amethyst Hall in CRP is fine.
I have two other tail reverbs as well, VSS3 and HD Cart, and I add a little bit of those depending on the instrument.
I find Cinematic Rooms Pro powerful and use it a lot. It produces very nice reverb and helps tame passages that would otherwise sound harsh or too loud. I even use it to reinforce bass sounds and add tails to pedalled piano notes.
If Cinematic Rooms Pro is used with NotePerformer’s stock sounds, you may find the stock sounds actually don’t sound bad.
I am still learning how to use Cinematic Rooms Pro to produce more realistic reverb. It doesn’t have a concept of positioning instruments in the space. I am looking for tutorials or courses on using it with orchestral music. That said, I like the reverb it produces more than (realistic) convolution reverb, which sounds boring to me.
By the way, are we going too far off-topic?
There’s always a place for both reverbs. Algorithmic reverbs are able to modulate a bit to add some life, rather than being a “fixed still sonic photo of a room” like convolution. Convolution reverbs also often have resonance issues: they pick up certain room modes and amplify those frequencies just as the original room would, so you often have to do some corrective EQ to deal with them (a rough sketch of that idea follows below).
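Just to make the room-mode point concrete, here’s a rough Python sketch (nothing to do with CRP or any specific plugin - in practice you’d use an EQ on the reverb return). The 48 kHz rate, the 120 Hz resonance, and the Q are all made-up numbers for illustration:

```python
import numpy as np
from scipy import signal

fs = 48_000        # sample rate (assumed)
mode_hz = 120.0    # hypothetical room-mode frequency the IR over-emphasises
q = 8.0            # fairly narrow notch so the rest of the low end is untouched

# Design a notch (band-reject) biquad at the resonant frequency.
b, a = signal.iirnotch(mode_hz, q, fs=fs)

# Apply it to the reverb return, not the dry signal.
reverb_return = np.random.randn(fs)            # placeholder: 1 second of noise
corrected = signal.lfilter(b, a, reverb_return)
```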
At the same time, convolution reverbs are indispensable when taking a dry source and placing it in a room, with an algorithmic tail added afterwards. Even though their “still photo” of the room doesn’t modulate, it is still quite complex, potentially more complex than algorithmic reverbs can manage, because of the odd shapes of things in real-world halls that algorithms may not perfectly match: a mathematical model (algorithmic reverb) may not have the oddities and flaws of a real room that add character and realism. With recordings made in ambient halls (like most libraries) a convolution component is less critical, since you already have the baked-in “real room” it was recorded in, and in many cases I only use an algorithmic tail for ambient libraries.
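For anyone wondering what the “still photo” literally is: a convolution reverb is, at its core, just the dry signal convolved with a recorded impulse response of the room. A toy sketch, with a synthetic IR and an arbitrary wet level standing in for real files and plugin settings:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000
dry = np.random.randn(fs)                     # stand-in for a dry instrument recording
# Toy 2-second "hall" IR: exponentially decaying noise (a real one would be measured).
ir = np.exp(-np.linspace(0, 8, 2 * fs)) * np.random.randn(2 * fs)

wet = fftconvolve(dry, ir)                    # place the dry source "in the room"
wet /= np.max(np.abs(wet)) + 1e-12            # normalise to avoid clipping

mix = 0.3                                     # illustrative wet level; an algorithmic
out = np.zeros(len(wet))                      # tail would then be layered on top
out[:len(dry)] += (1 - mix) * dry
out += mix * wet
```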
I do have a fourth tail, a convolution of the Synchron Stage, that I sometimes add to instruments recorded in other halls to give them a touch of the VSL Synchron sound (since most of my instruments are from Synchron).