Tracks with MIDI and Audio

As it is, if you want to use a virtual amp, or many other VSTs, you have to route a MIDI track and an audio track to the track with the VST separately. It would be really nice to have all of this as one track. Under the covers it can be separate track objects, but I really only want to see one track in the view. This way, too, you could use MIDI Learn for more than just 8 CCs without having to map them to Quick Controls.

While you are at it, let inserts use separate CPU cores, and make bypassing each insert independently assignable to a CC.

And allow CC info to be saved as CC data, not just as automation.

Example:

Folder
Audio Performance Track
MIDI Note Performance Track
MIDI CC Performance Track
Instrument Track (or primary FX track for an amp modeler)
FX 1 Track
FX 2 Track
FX 3 Track
Audio Track Result

This could all be one track, with regions for the performance and the result, and the inserts from all of the FX tracks appearing simply as inserts on that track.

Isn’t this what an Instrument Track does? How does what you are proposing differ from that?

The real issue is the other way around. A Neural DSP amp sim is not an instrument (it is in Logic, but that doesn’t make a difference); it is an insert, and it needs MIDI to control the pedals. The same goes for any sim.

You have to make a separate MIDI track and send it to the insert on the audio track. Normally, though, you want the recording of the guitar to be clean, so you send the performance audio track directly to a group, which then sends to an FX track with the sim, maybe to more than one FX track in parallel, and maybe to a chain of FX tracks, because CPU performance is surprisingly better when you put heavy inserts on their own FX track. You route those to the audio track you are recording on. (I call that the “Tape” because it is an analogy that is easy to think about.) So now you have MIDI and audio all going to the same place at the same time.

An instrument track gives you the ability to record MIDI on the same track as the instrument. But that doesn’t result in audio unless you send the audio from that track to a group, or bus, and then to an audio track. If you are interested in the audio file, not the MIDI file (so you can turn off the instrument once the audio is created), then you have to do it like that. This is how a lot of composers work, and how you get massive templates with everything you need in them: you move everything to audio as you go. Maybe record in place, or maybe render in place, either way. Recording as you go is more efficient and simplifies the scripting (macro/PLE) in the template. Wouldn’t it be easier if this were all one thing?

The performance requires CC input (and usually notes for expressions, though you can use CCs with expression maps), and that isn’t always sent to the instrument track. Sometimes it goes to an FX track that the instrument sends to. So again, you have audio and MIDI always taking the same path.

I almost always have MIDI and audio going to the same place at the same time. There are only ever cases when you don’t need the audio. There is never a case when you don’t need MIDI. Well, maybe there is for you, so I should have said there is never a case when -I- don’t need MIDI. I don’t mic drums, they are MIDI (except when they aren’t, but then they are still MIDI too). I don’t mic amps, they are virtual. I don’t mic anything but vocals, and then you also need MIDI. I guess there are also various acoustic sources, but generally those are for making sample instruments, and even then I need MIDI. In a fully “in the box” setup, that is the way it works. You always need MIDI.

The traditional solution is Quick Controls, but with guitars you tend to need more than 8. Not for a single song, but the setup is different for every one. One needs to be able to have it as simple as a pedalboard, and that means something like 10 on/off switches and two variable controls, so Quick Controls are not enough. Likewise, it is good to have similar controls in similar places. The effect you are controlling on a vox kit needs to mimic the one on the guitar, otherwise you are fumbling around trying to figure out which one is which all the time. You don’t want to be in the middle of a take and have to read, or have just switched from guitar to vox and do the wrong thing.

With composing, you want the same expression on the same notes (or CCs), and expression maps solve this nicely, but you still need all of that MIDI coming in over a different channel than the actual notes so they never clash, so again, you are routing MIDI. If you are playing strings flautando on the left hand and reeds legato on the right, then you need the routing to represent the difference, otherwise you get it wrong. That can be frustrating. So you can’t get away from all of the routing concerns altogether. But if you could treat the non-note input differently than the note/aftertouch etc., it would help, and expression maps allow you to do this. But imagine if you could tell the track where the notes were coming from separately from where the expression was coming from; then you could set up each expression map the same way without concern for which hardware setup was going to be used. You could tell the track instead. It wouldn’t have to be built in and locked into which hardware was being used, set up for which channel.

What this means is that it would be nice for every track to have expression, note, and audio as separate settings, and to route them all together.
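To make the note/expression split concrete, here is a minimal sketch in Python (my own illustration, not anything Cubase actually does) of classifying raw MIDI status bytes so note data and expression data could be routed to separate destinations:

```python
# Sketch: split an incoming MIDI stream into a "note" lane and an
# "expression" lane by status byte, so each lane could be routed
# independently. Illustration only; not how any DAW implements it.

NOTE_KINDS = {0x80, 0x90, 0xA0, 0xD0}  # note off, note on, poly and channel aftertouch
EXPRESSION_KINDS = {0xB0, 0xE0}        # control change (CCs), pitch bend

def classify(status_byte: int) -> str:
    """Return which lane a MIDI message belongs to, from its status byte."""
    kind = status_byte & 0xF0  # upper nibble is the message type, lower nibble the channel
    if kind in NOTE_KINDS:
        return "note"
    if kind in EXPRESSION_KINDS:
        return "expression"
    return "other"  # program change, sysex, clock, etc.

print(classify(0x90))  # note on, channel 1 -> "note"
print(classify(0xB2))  # CC, channel 3     -> "expression"
```

The point is that the split is cheap and unambiguous at the message level, so a track could accept each lane from a different source.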

Maybe instead of a long general description of all the different things that you want to regularly automate, just using a short simple example that demonstrates specifically what you think should be implemented and why would be more useful. After reading this I’m more confused than ever as to what you are looking for.

You can use the Generic Remote to map directly to the amp sim, or the like. But when the slot order changes it all falls apart, sadly.

I get your problem too, as I have a MIDI floorboard and I like to route it through to Guitar Rig and be able to stomp pedals in and out, and use the wah pedal as real hardware.

It’s a little cumbersome, and I just tend to map what I need on the fly now… otherwise I spend so much time tweaking and setting gear up that I forget to do what I love (music!!).

If you look at the images I posted in the other thread, you can see that nearly every “channel”, or “Kit”, has 3 performance tracks: MIDI, CC, and Audio. The Kit is all wired up and ready to go, and exported as a Track Archive. That way I don’t have to do it again. The macro/PLE model will turn FX on and off (as well as setting record modes), and I have a Launchpad with 5 buttons to do that per “channel”. When I change out the presets etc., I save the new “Kit” with a new name.

Say I want to use Tim’s NDSP, which can take MIDI notes; then I can record the guitar without it, switch to “MIDI Dub” mode, and play in the chords in the next cycle… or the next, as it goes. The audio from the clean “performance” track keeps running, so all I am adding is the MIDI notes to the FX track, and therefore to the “Tape” track.

People keep saying that we don’t need non-linear in Cubase, that if you want that, “go use Live”. But try the above in Live, with only 12 “groups” to work with. Live is limited in ways that Cubase isn’t. Max is great, but solving this problem in Live is thankless. It can be done in Logic, only the macro/PLE technique isn’t there… the last time I looked… (Mac hard drive crashed and Apple wouldn’t let me install any valid operating system on a perfectly good machine :rage:)

Besides, I don’t always want to work this way, Cubase is the best. Seriously the absolute best, even without grooving non-linear tools built in. I still managed to get a half baked version that works for me… somewhat. And for composing it just can’t be beat.

If I can define tracks, a naming system, some macros and PLEs and get this MIDI/AUDIO thing solved, then they could provide that without the hassle.

If I can define the same, and do some MIDI wrangling and build a somewhat usable groove tool, then they can provide it fully baked in to Cubase.

Neither of these is really a big request, then. It’s mostly UI, because the objects to get it done are already in their codebase. How awesome would it be if you had one track for guitars, put your inserts on it, said which MIDI inputs and “Expression” inputs you wanted and which audio input you wanted, and that was it? Imagine that it used the processor’s cores intelligently.

Imagine that you could “Freeze” (maybe call it something else) that track in real time and still have access to the audio, because it had already been recorded!!! No need for the system to freeze up while it disables or writes audio; just unload the instrument or effect or whatever from RAM (amp sims should really be instruments, btw).

Everything would be so much easier and so much smoother that way.

My setup does everything but unloading the instrument from RAM, and if the engine knew that the audio in question was right there in one of the 3 regions of the track, it wouldn’t have to worry about the playback; it could just keep going and do the unload in the background!

Raino,

Here you go:

(1) Make this all one track.
(2) Provide three inputs and outputs to the track: MIDI Note, MIDI Expression, and Audio.
(3) Allow all of the inserts to be on the track’s inserts, with separate routing.
(4) Allow those inserts to be activated or deactivated separately, and be assignable to key commands.
(5) Properly use the multiple cores of the processor for the different inserts.
(6) Allow the MIDI portion, instruments, and inserts to be disabled with no delay, as the audio is always recorded along with the MIDI and expression.
(7) Allow access to the MIDI and audio when “disabled”.
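The seven points above could be sketched as a data model; here is a rough illustration in Python (all names hypothetical, just to show the shape of the proposed track, not any actual Cubase design):

```python
# Sketch of the proposed unified track: one visible track object that
# bundles separate note, expression, and audio inputs, plus inserts
# that can be toggled individually. All names are hypothetical.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Insert:
    name: str
    active: bool = True
    key_command: Optional[str] = None  # point (4): a key/CC assigned to toggle it

    def toggle(self) -> None:
        self.active = not self.active

@dataclass
class UnifiedTrack:
    name: str
    note_input: str        # point (2): where MIDI notes come from
    expression_input: str  # point (2): where CCs/expression come from (can differ)
    audio_input: str       # point (2): the audio source
    inserts: list = field(default_factory=list)  # point (3)
    midi_disabled: bool = False

    def disable_midi(self) -> None:
        # Point (6): audio was always recorded alongside the MIDI,
        # so disabling the MIDI side needs no rendering and no delay.
        self.midi_disabled = True

track = UnifiedTrack(
    name="Guitar Kit",
    note_input="Keyboard ch1",
    expression_input="Floorboard ch2",
    audio_input="Input 1 (clean DI)",
    inserts=[Insert("Amp Sim", key_command="F5"), Insert("Delay", key_command="F6")],
)
track.inserts[1].toggle()       # bypass the delay independently
print(track.inserts[1].active)  # False
```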

Take this:
[image: Guitar kit example simple]

and make it this:
[image: Guitar kit example simple 2]

I agree with a major portion of what OP has on his wish list. Meanwhile I have a one word solution.

Bidule, and it sounds like you’d also like the 64-bit audio discrete processing compile.

You could get every bit of that routed into a single instance of Bidule, and set up any VST params you want, or go all direct CC if you’d rather do it that way. Load up one of the multi-output/input variants as a VSTi. Enable side-chain. Boom, you can route any audio you need into the instance. Host all sorts of effects (non-Cubase stuff only… you’d have to run the stuff that comes with Cubase in mixer inserts, but if you go with third-party plugins for the FX, yep, you could get it all in one instance and never even touch a mixer slot)… parallel, serial, sidechains, whatever ya need, whatever you want. Remote control it all a dozen different ways.

On processing, if you use multiple outputs you can get a better handle on multi-threading out of the one instance. There are also discrete processing bidules for a lot of operations in there that you’d like to do independent of the audio clock. 64bit compatible too.

Bidule allows you to register a VST param for whatever is hosted in it that provides them. I.e., you open a list that shows everything registered in Bidule, and link it up to what is shown in the host (you can name it too). It also has CC-to-param and param-to-CC bidules if that makes your life easier.
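The CC-to-param idea boils down to a linear mapping between MIDI’s 7-bit range (0–127) and a normalized VST parameter (0.0–1.0). A sketch of that mapping (my illustration, not Bidule’s actual code):

```python
# Sketch of CC <-> normalized VST parameter conversion, the idea behind
# CC-to-param / param-to-CC utilities. Not any plugin's real implementation.

def cc_to_param(cc_value: int) -> float:
    """Map a 7-bit CC value (0-127) to a normalized parameter (0.0-1.0)."""
    return max(0, min(127, cc_value)) / 127.0

def param_to_cc(param: float) -> int:
    """Map a normalized parameter back to the nearest 7-bit CC value."""
    return round(max(0.0, min(1.0, param)) * 127)

print(cc_to_param(127))   # 1.0
print(param_to_cc(0.5))   # 64
```

Both directions clamp their input, since a remote controller can send out-of-range values when mappings drift.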

Need different instances to cross talk and exchange some data? OSC server/client provided, or run some virtual ports inside when you don’t feel like messing with OSC.

For this kind of routing you should check out Reaper…