Using resource-hungry templates is becoming increasingly necessary for composers. For that reason, many composers have begun using Vienna Ensemble Pro (VEP) in combination with a slave computer. As it stands, using plugins with the MIDI track is time-consuming and difficult, because putting a plugin directly on the MIDI track affects all of the instruments on that VEP instance. On top of that, plugins cannot be put directly onto the instrument in VEP because my dongles are divided: all of my FX plugins are on my master dongle for mixing, and my VST licenses are on my slave dongle for slaving.
What I am requesting is a MIDI track that functions like an Instrument Track: one that has MIDI capability as well as its own audio routing, separate from the other tracks on the VEP instance.
Here is a step-by-step walkthrough of how the current multiple-track setup disrupts workflow. I hope it will show why it is so important that the MIDI track have both MIDI capability and its own audio output.
Let's say you have a cello track routed in from VEP, using its own MIDI track for MIDI and an audio output for its routing. Let's put distortion on the cello. To do that, we have to (1) follow the signal chain to the Mix Console, (2) find the audio output (not an easy feat in my 700+ track template), and (3) put a distortion plugin on the audio track. Now we want to gently raise the amount of distortion. We then (4) enable write automation on the audio track, and (5) play the track while tweaking the distortion in real time. It was close, but not perfect. Now we want to go in and make some subtle adjustments, so we have to (6) go back to the Project window and track down where that audio routing is located (also not easy), and (7) tweak it. But now I'm inspired: I'm hearing how the distortion interacts with the cello, and I'd like to adjust the MIDI again. So I have to (8) track down my MIDI track and tweak it. That's sounding great! But I'd like to change the distortion amount one more time, so it's (9) back to the audio track!
Another, smaller example: let's say we're using a synth plugin and want to hear how it sounds with a delay plugin. We (1) find the audio output, (2) put on a delay plugin, and (3) tweak it. If we play our MIDI keyboard, it makes no sound, because the audio track is the selected track. So we have to (4) find the MIDI track and (5) play it. Sounds good, but we need to tweak the effect: (6) back to the audio track!
If the MIDI track had its own audio routing, we could skip half of those steps, and we wouldn't even need to go to the mixing window, because we could just use the Inspector in the Project window. Having grown up with Instrument Tracks and their integrated routing, I find it frustrating and time-consuming to have to add these extra steps.
Eliminating these steps is extremely important, because they force us to use both parts of the brain at once, the creative and the technical, which wears us down more quickly. I take care at the beginning to set up my projects so that all of the technical work is out of the way by the time I reach the mixing phase.
I am not a programmer and know almost nothing about the underlying architecture of either program, but I have come up with a few possible solutions:
1. Coordinate with VSL so that changes in one program are communicated to the other. This could mean creating a separate bridge program.
2. Use the architecture of the Instrument Track as a starting point, and have whatever instrument the MIDI is sent to return its audio into the track itself.
3. Create a VEP-type program that would allow us to use the resources of a second computer.
Whatever the solution may be, I’m sure the Steinberg team can come up with some amazing results. Thanks so much!