So thanks to help here, I’ve been able to get VST Live up and running in a Dante system. Latency is about 5ms, so it should work well for FOH and live monitoring on all channels except possibly vocals/IEMs.

I’m trying to understand the best way to accomplish my goal of using VST Live to process vocals, guitar, and drums. I used to use Cubase for this purpose with good success: I could use automation curves to change effects, levels, etc., and could basically mix a track and then send live-monitored stems out to FOH to make minor adjustments based on the room.

I see there are Stacks that can carry VST effects. Let’s say I wanted a song with a vocal verse without much verb/effects and then a chorus with doubler, harmony effects, etc. Having several sets of these Stacks would use many more VST instances than the computer could handle at 64 samples. It occurred to me that you could also use a set of global channels loaded up with the basics (primary source expander, channel strip, comps, EQ) and then have separate effects applicable to different parts, but that doesn’t allow you to adjust volume.

If anyone here is using VST Live for similar purposes (I want to live-process 3 vox tracks, 2 guitar tracks, a bass track, and 11 drum mics) and has recommendations based on their experience, I’d love to hear them. I understand how I could handle MIDI instruments in multiple parts, but I’m more interested in using Stacks rather than Layers, and I just can’t wrap my head around how I would automate the effects for live use. Thanks in advance for your thoughts!
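For context on where that ~5ms figure sits relative to a 64-sample buffer, here is a back-of-envelope check. This is only a sketch: it assumes a 48 kHz sample rate (the usual Dante rate), and the real total also includes AD/DA conversion and Dante network transit, which vary by interface and setup.

```python
# Rough latency math for a 64-sample ASIO buffer at 48 kHz.
# Assumption: 48 kHz sample rate (common for Dante); real round-trip
# latency adds converter and network time on top of the buffer time.

SAMPLE_RATE_HZ = 48_000
BUFFER_SAMPLES = 64

buffer_ms = BUFFER_SAMPLES / SAMPLE_RATE_HZ * 1000
print(f"One {BUFFER_SAMPLES}-sample buffer = {buffer_ms:.2f} ms")

# A round trip needs at least one input and one output buffer, so the
# buffer-only floor is twice that; converters and network transit are
# how the total can still end up near 5 ms.
round_trip_floor_ms = 2 * buffer_ms
print(f"Buffer-only round-trip floor = {round_trip_floor_ms:.2f} ms")
```

So the buffer itself accounts for well under half of the ~5ms; the rest is conversion and transport overhead.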
Only active plugins ever process audio. Layers and Stacks - except the global variants - only process when their Part is active, and Track inserts are only activated for the active Song. So you don’t lose any processing power for plugins, channels, etc. that are not activated, only memory.
While automation is still in the making, you can use Virtual MIDI Ports to control targets: place MIDI events on a MIDI track, send them via one of the Virtual MIDI Ports, and program Actions (see “Devices/Actions and Shortcuts”) that receive those controllers on the same Port number, for whatever you want to automate, such as Stacks channel volume, mute, FX enable, or quick controls for their plugins.
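To make the Virtual MIDI Port workaround concrete, here is a small sketch of what the MIDI events on such an automation track are at the byte level. The CC numbers below are purely hypothetical examples: VST Live does not predefine them; you choose the mapping yourself when you program the Action in Devices/Actions and Shortcuts.

```python
# A MIDI Control Change message is three bytes:
#   status (0xB0 | channel), controller number, value.
# CC 21 as "FX enable" is an ARBITRARY example mapping, not anything
# built into VST Live - you assign it yourself via Actions.

def control_change(channel: int, controller: int, value: int) -> bytes:
    """Build a 3-byte MIDI CC message (channel 0-15, data bytes 0-127)."""
    assert 0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127
    return bytes([0xB0 | channel, controller, value])

verse_dry  = control_change(0, 21, 0)    # verse: hypothetical "FX off"
chorus_wet = control_change(0, 21, 127)  # chorus: hypothetical "FX on"
print(verse_dry.hex(), chorus_wet.hex())
```

Placing events like these at the verse/chorus boundaries of the track, routed out a Virtual MIDI Port and picked up by an Action on the same port, is what substitutes for the automation curves you had in Cubase.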
hi @h20fam ,
(I’m also not interested in using MIDI instruments.)
I think you did a great job with your setup.
I would say it may be better to keep it simpler: send signals to Groups with inserts, and set the SEND volume or bypass inserts as needed… as we normally do in the studio when there’s no need to complicate things.
I don’t think it’s absolutely necessary to automate FX for live production, as you can quickly get lost. With Groups that the signal is sent to, you can keep track of what is going on and adjust the group volume or FX params (e.g. shorten the reverb in a church, etc.) while still using the same setup. Also, by setting the corresponding song TEMPO/SIGNATURE tracks, your time-sensitive effects will remain in tempo, so there’s no need to adjust them song by song…
Do you see what I mean?
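The tempo point above can be sketched numerically. This assumes a simple tempo-synced delay (the formula is standard: a quarter note lasts 60,000/BPM milliseconds); the numbers are just illustrative.

```python
# Why song TEMPO tracks keep time-based FX musical: a tempo-synced
# delay derives its repeat time from BPM, so setting the tempo per
# song means no manual retuning of delay times.

def delay_ms(bpm: float, note_fraction: float = 0.25) -> float:
    """Delay time in ms for a note value (0.25 = quarter note)."""
    return 60_000 / bpm * (note_fraction / 0.25)

print(delay_ms(120))         # quarter note at 120 BPM -> 500.0 ms
print(delay_ms(120, 0.125))  # eighth note at 120 BPM  -> 250.0 ms
```

A hard-coded 500 ms delay that sits perfectly on a 120 BPM song would smear a 132 BPM song; a synced delay simply follows the tempo track.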
Thanks for the thoughts. I like the idea of sending to Groups, but doesn’t that combine the tracks into a single output bus? Except potentially for the drums, I need to send each signal back to my Yamaha mixer with its Dante card. I then process IEM aux mixes and can control FOH from an iPad (I drum, but I want the system capable of supporting an engineer if we’d like). I use the iPad to virtual-soundcheck each song before the show.
I do like the idea of keeping it simple though.
Thanks. Would you recommend just creating multiple instances of a vocal Stack, or trying to create a global part and then somehow sending that part to different Stacks to process effects? I’m concerned about having multiple instances of the vocal channel strip: if I change the EQ or something, I then have to make that change in multiple places in one song. I suppose I could save the effects chain and hope that it updates in all parts.
Hi! That sounds awesome!
Theoretically you can route group channels to other Groups (if 4 SENDs aren’t enough) and SEND the pre/post signal on to a further GRP or OUTPUT.
So in theory you can send every “pre-fader, but FX-processed” channel signal to a different OUT via channel SENDs.
If that makes sense! To be honest, such an advanced config is the kind of thing we’d normally work out together over a coffee.
Do you want to make big changes between songs/parts/inputs?
If not, I would also think about what if:
create a Global Stack per input (vocal1, vocal2, bass, keys, (kick, snare, toms…))
and route them wherever you want; furthermore, you can route the signal from an AudioTrack to a global Stack if that gives you more flexibility.