External Instruments: make it so they can share an input from the Inputs section rather than steal it

One of the reasons I use traditional MIDI tracks (in and out) instead of External Instruments is that External Instruments steal their assigned audio inputs from the Inputs section.

I only have 16 AD/DA channels, and what they are assigned to is fluid; e.g., sometimes they’re recording a 16-channel drum session.

The way I think this should be visualized: in the Connections window, an Input that is forwarded to something else (say, an External Instrument) would have its text colour change to orange, or something similar.

The same really goes for Inputs and External FX: there needs to be a way to simply share inputs and let the user decide / use discretion in how this is managed, to avoid unintentionally sending the wrong signal to the wrong place.


This would allow setting up an external multi-timbral synth as multiple devices, one device per channel/part, and using it on multiple instrument tracks.


I have to disagree. If an A/D pair is used for an External Instrument, you don’t want to connect it to an input, and vice versa. The same goes for External FX; this is all to avoid sending the wrong signals to the wrong inputs, and the current implementation of the audio setup works.

What’s the point of having multiple returns when there is only one stereo A/D connected as a shared return, and/or the synth has only one stereo output?
Having that on multiple tracks in a project window doesn’t improve anything and will only cause trouble.

More flexibility in setting up routing and connections.

For instance, one could have an external synth set up as 16 MIDI devices, each receiving over a different MIDI channel and sharing the same pair of returns. One could then create 16 instrument tracks in the project and switch between them with the added convenience of rendering MIDI parts to audio in place, also keeping automation in place.

It will break PDC; that’s the primary reason the function works the way it does.

How would the setup I’ve just described break PDC? The PDC value would be the same for all the MIDI devices and the shared external instrument they’re associated with.

What if you inserted a big buffered plugin on one of the 16 instruments?

Because many multi-timbral synths have fewer audio outs than internal MIDI channels, including those made by Yamaha (MODX, MOXF, etc.).

The advantage of sharing the same output is that it makes the process of rendering to audio much easier. Otherwise you have to manually render to a single audio track and then drag that down for each of the 16 channels.

In Studio One, you can select all 16 MIDI tracks and bounce, and it will give you 16 audio stems as it solos each track and real-time bounces.

OK, I understand the benefit. Then I think it is better to have the ability to set MIDI tracks going to an External Instrument as targets of Render in Place, instead of having shared inputs. You should only listen to one of the shared returns at a time anyway, because all the shared returns carry the same audio signal. And think about what happens when you insert a big buffered plugin on only one of them: PDC will be impossible in that case.
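To make the “big buffered plugin” concern concrete, here is a toy Python sketch. The round-trip figure, plugin latency, and track names are invented for illustration; this is not how Cubase’s engine is written. The point it illustrates: the engine would need two different delays on what is physically one shared return stream.

```python
EXTERNAL_ROUND_TRIP = 256  # samples; assumed round trip through the hardware synth

# 16 instrument tracks sharing the same physical return; only track 3
# carries a big look-ahead plugin (4096 samples of latency, assumed).
insert_latency = {f"track{n}": 0 for n in range(1, 17)}
insert_latency["track3"] = 4096

# A DAW aligns paths by delaying faster ones to match the slowest.
total = {t: EXTERNAL_ROUND_TRIP + lat for t, lat in insert_latency.items()}
slowest = max(total.values())
predelay = {t: slowest - tot for t, tot in total.items()}

assert predelay["track3"] == 0      # the slow path needs no extra delay
assert predelay["track1"] == 4096   # every other path must wait 4096 samples

# But all 16 logical returns read the SAME hardware input stream, which can
# only be delayed by one amount; two different required delays on one stream
# is exactly the conflict described above.
assert len(set(predelay.values())) > 1
```

Whether this is truly unsolvable (e.g. by delaying per logical return channel rather than at the shared input) is what the rest of the thread argues about.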

I really think the priority is to have solid PDC; the current implementation is great in this regard, including multiple outs from devices and the MIDI clock sent to external instruments.
And if you want to compare with S1: that DAW cannot align the timing of external sequencers, but Cubase can when they are used as External Instruments. You can even insert big plugins on a separate out of a drum machine driven by MIDI clock, and the timing will still be rock solid.

You can do multiple exports by hand; yes, it’s sometimes painful. But doing latency compensation, including MIDI clock, by hand is not just ‘hard’, it’s impossible in a lot of cases.

I just select 16 MIDI tracks and press Ctrl+B in Studio One, and it’s done. No fuss, no workarounds: it renders out 16 audio tracks using the single output on my Yamaha synths.

With what you’ve written above there would only be the one return track, which would be the External Instrument target, so I can’t see how it would leave you with 16 separate audio tracks.

And therein lies the issue.

PDC is not even a consideration, as it’s one MIDI port with a single audio return. The audio return would not be subject to different real-time inserts or the like.

For playback, you’d set the latency parameter in the external instrument plug-in. For recording, in addition to the latency parameter, you’d adjust the position of the recorded file manually.
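As a rough illustration of that manual adjustment, here is a hedged Python sketch. The 10 ms round-trip figure and the function name are assumptions for illustration, not Cubase values or API:

```python
SAMPLE_RATE = 48000
ROUND_TRIP = 480  # samples; assumed 10 ms round trip (D/A -> synth -> A/D)

def compensated_start(recorded_start, round_trip=ROUND_TRIP):
    """Move a recorded clip earlier by the round-trip latency so it lines
    up with the MIDI part that triggered it."""
    return recorded_start - round_trip

# A clip that landed at sample 96000 really belongs 480 samples earlier.
assert compensated_start(96000) == 95520
```

In practice you would measure the round trip (e.g. with a loopback ping) rather than assume it.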

I also brought up automation. Currently, automation for all the channels of an external multi-timbral instrument resides under the instrument track. One cannot automate under the MIDI track one’s working on. One has to reach for the instrument track instead. It’s quite a hassle in a busy project.

The purpose of my suggestion was simply so that I can use External Instrument inputs without having to lose my regular inputs and constantly switch them back and forth.

I think PDC could still work as it currently does, couldn’t it? Why wouldn’t it?

- External Instrument Track → out to hardware synth
- Input 1(a): Normal (goes to audio tracks)
   ↑ thruput carries to ↓
- Input 1(b): Ext. Inst return
The user would just need to use discretion and avoid “crosstalk” or unintended summing.
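A toy model of that “thruput” idea, with entirely hypothetical names (this is not Cubase’s API), might look like one physical input port fanned out to several logical consumers instead of being exclusively claimed:

```python
class InputPort:
    """One physical audio input that is shared, not stolen."""

    def __init__(self, name):
        self.name = name
        self.consumers = []

    def attach(self, consumer):
        self.consumers.append(consumer)

    def push(self, buffer):
        """Forward the same audio buffer to every attached consumer."""
        return {c: buffer for c in self.consumers}

port = InputPort("Input 1")
port.attach("Normal input bus")
port.attach("Ext. Instrument return")

routed = port.push([0.1, 0.2])
assert routed["Normal input bus"] == [0.1, 0.2]
assert routed["Ext. Instrument return"] == [0.1, 0.2]
```

Avoiding unintended summing would then be the user’s responsibility, as the post suggests.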

I agree!
Concerning the Yamaha instruments: the Yamaha driver allows multiple audio channels to be transferred to Cubase at once. The problem here is that in Cubase we cannot use more than one ASIO device at the same time. I would consider it extremely helpful either to allow EASY switching of audio interfaces within an open Cubase project (note: it would not be necessary to allow multiple ASIO drivers at the same time, only to make them easily switchable in the same open project!) or, as a minimum, to have a total preset for ALL audio connections (and not only separate ones for inputs, outputs, Control Room, etc.).


I don’t totally understand what you guys are talking about, but if there are multiple needs for this FR then great!

Hi All - this FR is badly needed, in my opinion. Not sure if I should expand and start a new request… this thread got quite derailed by someone failing to understand the intent and usefulness of the request :frowning:

A few key points:
Letting users freely assign their inputs and outputs as THEY see fit makes perfect sense. The current system is old, and was based on avoiding “an issue” at the expense of creating long, time-consuming workarounds for many other use cases. That’s fine; External Instruments were pretty cool (but still restrictive) when they came out… it’s time to move on now, though…

PDC is simply a delay… there is zero reason why an assigned channel in an External Instrument wouldn’t be able to match timing for a MIDI/input combo, even if that same input were assigned to a standard audio track elsewhere. The PDC isn’t happening at the ASIO driver; it’s happening to the audio inside Cubase. Just apply PDC to any External Instrument’s audio, and you’re done. Should you use the normal input while you’re using the External Instrument? I mean, no… it has an external instrument plugged into it on the audio interface… but using it as a mic channel is as simple as patching in a mic amp and record-arming an audio track, not patching in a mic, connecting/disconnecting audio connections in Cubase, and reversing the procedure when complete… which is the current method.
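A minimal sketch of the “PDC is simply a delay” point, with invented numbers: to align two paths inside the DAW, the engine just prepends silence to whichever path arrives early, regardless of what else the physical input feeds.

```python
def apply_pdc(signal, samples):
    """The whole of what PDC does to one path: prepend a delay."""
    return [0.0] * samples + signal

ext_round_trip = 300           # assumed latency of the external-instrument path
other_path = [1.0, 0.5, 0.25]  # toy audio on a plain track using the same hardware input

# The plain track has no extra latency, so it is delayed to match.
aligned = apply_pdc(other_path, ext_round_trip)
assert aligned[:300] == [0.0] * 300
assert aligned[300:] == other_path
```

(For MIDI-triggered playback the equivalent trick is sending the MIDI early by the same amount, but the compensation itself is still just a fixed offset.)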

The current system seems to operate out of a “fear of someone trying to use the input from an external instrument for something else”… like in ANY studio, it’s pretty obvious when an input is connected differently from what’s expected… “hey… why is there a drum machine on the vocal input?” “oh yeah, it’s patched into input 9/10”. It is no different from using Cubase audio channels with various input configurations, which you currently can do (assigning the same input to a stereo and a mono input simultaneously, for example)… Let’s just give users that same flexibility in the way they use their external hardware.

e.g. imagine if Cubase restricted you to choosing whether Channel 1 was mic only. To make it line level you had to manually switch it in Audio Connections EVERY TIME. This is effectively how it feels to use External Instruments/FX if you ever want to change them around or use a different setup from last time…

What’s the benefit of this request to me (and many others)?

I don’t want to manually click around to reassign inputs and outputs every time I change a device I’m using or how I’m using it. I want my 16 i/o always connected and available for audio recording; and multiple ext instruments and configurations available by simply inserting an ext instr. track I’ve already set up (with all connections already in place).

e.g. Using a multi-out HW synth - say a multi-out drum module: I may typically use it with stereo outs, but occasionally want to patch separate outs to my interface. I don’t want to digitally reconnect everything temporarily just for the task - I want to simply activate the extra inputs in the VST Instr. (which are already wired up to correct / standard ins via audio connections).

e.g. In a track I’m writing, I’m using most inputs and outputs… I want to reserve i/o 1&2 for external synths. I use a stereo synth on a part… I like it and render the part. I then realise that I want to use one of my mono synths for another part, so I plug it in, but it’s on a stereo input :frowning: . Why do I need to break my audio i/o connections to do this properly? Why can’t we be flexible with what we want to use and when?

e.g. Yes - some people have one or two external instruments that remain patched the whole time. This is what the current system is geared towards. What about other people who have a lot of instruments they want to swap around? Why not just let us create and connect ALL of them in Audio Connections, sharing inputs… then we simply connect them to their inputs on the audio interface as required… no need to click around disconnecting and connecting all the assignments.

e.g. Why not let users decide when and how to use the audio connections? I might usually want audio I/O for my music on 8 mic channels, but then realise I want to quickly throw a poly synth on channels 1/2 and have it all nice and in sync… then I want to go back to my 8-channel mic setup. Disconnect audio 1/2; reconnect audio 1/2… do this once and you’ll live; do it a hundred times (because it’s your workflow) and you start banging your head against your console.

So, in summary, the current system works… but it’s solving a problem for a small number of users while creating another for many others. I’d love to see a system that is not restrictive like that… one that lets users tailor the system to their needs.

Either please let us use freely assignable i/o, or please give us proper save/load of full configs for audio connections (incl. External Instruments/FX) from an easy-to-access location, so we can preconfigure all possible setups and just activate them… I’d MUCH rather have the first option (or a complete overhaul of how this all works so that it’s not restrictive like the current iteration).

Wouldn’t the preset and favorites system that is currently available solve a lot of your clicks already?
I understand the preset system could be better, but I don’t disconnect/connect anything manually anymore; I just load another preset.

It’s another “close, but not quite”… Once you have an External Instrument loaded in a project, it won’t let you reload it to update input routing, and loading a second instrument won’t override audio routing to External Instruments…

i.e. If I load an 8-input drum module into a new project from favorites, it will ask me if I want to disconnect other AUDIO CHANNEL routings… “YES”… so far so good…

But I then load a new favorite that shares those inputs… Nothing… the inputs are blank because the EXTERNAL INSTRUMENT (not an audio channel) is using them.

So it’s broken.

Also - the current system, even if that worked, won’t allow you to reload an external instrument to refresh its audio connections… it only does this the first time it’s added… so if anything breaks those connections, you have to manually reinstate them.

My preference remains “please just give us full control of audio routings” / “don’t decide for me what I would like available and when”. My second preference (a far distant second) is “allow FULL audio connections presets… not just audio in/out but all settings you can configure in there”. That distant second preference would still suck (I’d have to make a million combinations of possible gear I’m going to use in a session to recall as presets), but it would still be better than the current solution.