Those of us who do Analog Summing need a matrix based on the same engine as the External FX.
If you create an External FX (mono/stereo out, mono/stereo return), Cubase will ping the connection, measure the latency, if any, and the effect can then be inserted on any track with that latency fully compensated.
Now, an analog summing unit will usually need 4, 6, or 8 stereo outs and 1 stereo in. Simply routing the outputs and inputs works, but monitoring then depends entirely on the buffer size, which basically means a lot of latency that Cubase does not compensate at all.
It would be amazing if Steinberg added an Analog Summing section to the Studio->Audio Connections menu, where we could add, say, 8 stereo output busses that go into the summing device and 1 stereo input bus back from it, with Cubase measuring and compensating the latency between the outputs and the input, monitoring enabled automatically, and signal always passing. I’m talking about the exact system already implemented in the External FX, but with more outs and with dedicated busses for the purpose.
How about it? All of us using a summing mixer are having these problems, Steinberg, please help us out!
Hmm… I would say analog gear does not have any really measurable latency compared to digital gear. Just as you cannot monitor through the DAW while recording from a mic (let’s say) when your audio buffer is set to higher values, the same applies to this round-trip connection (recorded track(s) out to the external analog gear and back into Cubase). Rendering/recording will be in perfect sync (Cubase handles this automatically), but you cannot use software monitoring because of the audio buffer size. I do not think any software component can exist that is independent of the audio buffer settings.
I’m sorry, Pavlii, but you are wrong. Here’s a video that proves it: Use Analogue Gear Like Plugins In Cubase! All about External Effects! - YouTube
No, I am not wrong. “External FX” does not mean “external HW FX without latency”; there is always input and output latency (based on the audio buffer settings), but the system (Cubase) compensates for it automatically.
Do you understand the meaning of Latency Compensation? Do you understand that the point here is to have the visuals synced with the audio coming out of the speakers? Do you understand that the point of this whole idea is to be able to compensate latency between the send busses and the return bus and have Cubase compensate for that latency by starting a bit later so we can edit the automation? Do you understand that you’re not being productive at all by trying to argue with me on 2 different platforms without even reading what I just wrote in the feature request?
Wow, some real misunderstandings… or at least an inability to express one’s knowledge in English
I know that can be hard sometimes…
I will try to clarify some things…
some know already how to do…
you can send track outs to hardware outputs already… and if you need to, you can do this with groups as well… there are still people out there mixing with real mixers… hardware you know
and you should use input monitoring to listen to the returned signal
summing needs no inserts, and the measured delay is applicable to this case as well, but normally Cubase compensates the delays introduced when re-recording the summed outputs… it’s the same as recording an instrument in sync with the playback
It doesn’t for external insert processing, only for the re-recording; not sure what you were referring to
That’s why you can measure the delay introduced by the analog insert (mainly the D-A-D conversion and the time it takes to transmit the audio to the converters and the devices involved); after that, Cubase takes the measured delay into account, but only if something was measured, and only for each created insert…
Insert processing is part of the playback system, so this is compensated before you send something to the external summing (if measured)
with this system you can compensate for different inserts; Dante or Waves SoundGrid come to mind…
that’s not the point…
I guess you didn’t understand the need for latency compensation…
You are wrong, dude. As soon as you hit the track’s monitoring button you get the buffer-size delay (latency), which in my case is around 43 ms, because we’re mixing, not recording. That means that if I want to ride the faders I’m always 43 ms late. And no, ASIO Direct Monitoring doesn’t work on macOS, because that would have been your answer. If the visual playback starts 43 ms later, I will be able to automate properly, no? Is that not latency compensation?
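For readers wondering where a figure like 43 ms comes from: it is roughly the ASIO buffer passed through twice (once on input, once on output) at the project sample rate. A quick sketch of the arithmetic, with illustrative values (the 1024-sample buffer and 48 kHz rate are assumptions; real drivers add converter latency on top):

```python
def monitoring_latency_ms(buffer_samples: int, sample_rate_hz: int,
                          passes: int = 2) -> float:
    """Approximate monitoring latency in ms for `passes` trips
    through an audio buffer of `buffer_samples` samples."""
    return passes * buffer_samples * 1000.0 / sample_rate_hz

# A 1024-sample buffer at 48 kHz, in and out, lands near 43 ms:
print(monitoring_latency_ms(1024, 48000))  # ≈ 42.7
```

Halving the buffer halves the figure, which is why monitoring through the DAW is tolerable only at small buffer sizes.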
Furthermore, I already use the setup you’re talking about (with outputs and an input), and no, Cubase doesn’t know that my ADAT roundtrip has a 0.35 ms latency, and it doesn’t know that the outputs I use to send the signal into the summing mixer are connected to the summing mixer’s return input in the DAW, so it can’t measure any latency. The External FX method literally pings from the send to the return and measures it.
Should I add that every single time we go this route, we first have to print the summed return bus instead of being able to export it directly through the Stereo Out?
And I can monitor directly through the summing mixer, but that would skip my control room and my room correction plugin so there’s no point.
have a look…
Your brain can interpret roughly 25 pictures per second… so your latency introduces a delay of just one picture…
Ask Pavlii, I already showed him in a video. And, again, thanks for the irrelevant “ASIO Direct Monitoring” topic; you’re really just here to show me how smart you are, aren’t you? Maybe read the part of my reply where I told you that IT DOESN’T WORK ON MACOS! But wait, what happens if I dare put some plugins on the print track? Oh, yes, more latency between the tracks and the summed print track, right? Instead of telling Cubase that we want all of them synced, it’s better to leave it like that, right? And, again, I presume you like the workflow in which you have to record the print track before exporting, right?
I know, I’m full of bad ideas and all I want here is something that won’t save anyone time or patience for workarounds and stuff like that.
This is simply wrong… it depends on the interface used, but you never mentioned your hardware… so maybe it’s not possible in YOUR setup
and are you trying to show how stupid you are?
again… if you re-record the external summing, Cubase already compensates the latency…
we, @pavlii and I, are trying to say that your idea leads to wrong results…
the problem is that you want to cascade outputs with two different latency settings, which Cubase cannot compensate… not even with the External FX settings…
It would probably work much better if you actually gave step-by-step INSTRUCTIONS for doing it the way you understand it. From there, the proof is in the pudding, so to speak, and you could avoid all this bickering BS. Teach a man to fish, if you are correct here.
The problem, as I understand it, is that he thinks the delay compensation is for visuals… to sync the visuals (waveforms) with what you hear…
What should I say other than that it is not made for this, it’s made for keeping all audio signals in sync…
to allow phase coherent summing
As far as I understand one rant, he wants to export an analog summed mix with real-time fader movements and wants to be able to monitor in Cubase at the same time, without audible delays
and he wants some automated delay compensation without Cubase needing to measure the signal paths…
His problem is not actually clear, maybe it turns out that he needs a better monitoring solution for his outboard summing since his interface doesn’t support direct monitoring…
Remember that ASIO only works with one device. If you are hooking up your audio device to external converters, the reported delay will be wrong. In my case, my RME cards report a delay of 32 samples in and 32 samples out to Cubase. If I do a loopback test through the internal converters, the recording is sample accurate. If I bounce through my external converters (ADAT lightpipe), the recorded signal lands exactly 64 samples early. I use a plugin (Voxengo Sample Delay) before sending to my external converters to compensate.
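The arithmetic behind that 64-sample error is worth spelling out: Cubase shifts recordings by the driver-reported figure, so when the external path doesn’t actually pass through the reported converters, the recording lands early by the difference. A sketch, using the figures from the post (the assumption that the external ADAT path adds ~0 extra samples is for illustration only):

```python
def overcompensation(reported_samples: int, actual_samples: int) -> int:
    """Samples the recording lands early when the host compensates by
    the driver-reported latency but the real external path is shorter.
    A negative result would mean the recording lands late instead."""
    return reported_samples - actual_samples

# Driver reports 32 in + 32 out = 64 samples; if the external ADAT
# loopback bypasses the internal converters and adds ~0 samples,
# the recording arrives 64 samples early:
print(overcompensation(32 + 32, 0))  # 64
```

That 64-sample result is exactly what a sample-delay plugin placed before the external send has to absorb.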
So, let me explain again why and what it is for.
First of all, @MixterRader is right, ADAT does generate a certain amount of latency. I’ll give you an example right here:
1. Blue event = initial kick track
2. Red event = printed kick track after it went through the summing mixer (routed to the first stereo channel of the summing mixer, master out of the summing mixer routed back into Cubase).
Phase shift aside, you can clearly see that there is a difference in timing, a delay on the printed track.
So what I’m proposing here is an output-input matrix with the ability to measure latency (by pinging) between the output(s) and the input, made especially for summing. Basically what the External FX inserts do, but in the form of a BUS, so we can route more tracks to an output, plus the ability to add more than one output bus with only 1 stereo in. As an example from my own configuration: 8 stereo output busses (Drums, Bass, Guitars, Keys, Backing Vocals, Lead Vocals, FX, and Misc, for samples that don’t fit into my established categories) and one stereo input bus (Summed Mix). I use an RME Fireface 802 as my main interface, with 16 ADAT i/o to my Allen & Heath Zed R16 mixer. The busses currently go out through the ADAT pipeline and back in through a stereo input on my Fireface. If I could tell Cubase “hey, these go in and out of the same device for summing, so ping for latency and compensate for it”, it would be amazing.
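The ping idea itself is simple to sketch: play a known impulse out of the send, record the return, and the offset at which the impulse reappears is the roundtrip latency in samples. A minimal illustration in pure Python (the buffer contents, threshold, and function name are invented for the example; this is not a claim about how Cubase implements its measurement internally):

```python
def measure_roundtrip(sent_at: int, returned: list[float],
                      threshold: float = 0.5) -> int:
    """Roundtrip latency in samples: index of the first returned sample
    whose magnitude exceeds `threshold`, minus the sample index at
    which the impulse was sent."""
    for i, sample in enumerate(returned):
        if abs(sample) >= threshold:
            return i - sent_at
    raise RuntimeError("no return signal detected")

# Impulse sent at sample 0; the recorded return buffer shows it
# arriving at index 17, so the roundtrip is 17 samples:
returned = [0.0] * 17 + [0.9] + [0.0] * 10
print(measure_roundtrip(0, returned))  # 17
```

Once measured, that per-bus figure is exactly what the proposed matrix would feed into the plugin delay compensation engine.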
The second thing: I would like to skip the “print” workaround of the summing process and not use the classic new stereo audio track with monitoring on to hear the mix. Under the current normal workflow we have to record the summed track before exporting. On macOS, ASIO Direct Monitoring in Cubase is not available for most interfaces, which means that every single time I listen to the summed mix before printing I get the audio buffer latency (in my case 43 ms). Of course, some will say that I can monitor directly from RME’s TotalMix software, and it’s true, but then I can’t use the Cubase Control Room, where I have a room correction VST insert, and I can’t use Cubase’s master meter or the other Control Room meters (loudness, phase correlation, etc.).
Basically this means that, if I’m monitoring from Cubase with the monitor button on the print track, I see the kick track’s audio event hit earlier than when it comes back into the Summed Stereo print track in Cubase’s MixConsole, and the playback cursor in the project window runs ahead of the sound I hear in my monitors. This makes accurate automation fader moves impossible.
The solution here is: Cubase takes the output-input latency of the summing matrix into account (in my case 43 ms) and delays the graphics accordingly, making the playback cursor wait until the sound has actually reached the monitors, so the delay is compensated.
And lastly, if the Summed Stereo track were a BUS rather than a regular audio track, we wouldn’t need to print it before exporting the Stereo Out file, and it would monitor the incoming signal by default.
Hope this gives you guys a better idea of what I’m proposing here. If there’s anything not clear, please do not hesitate to ask.
Hi CeZar - I think you are misunderstanding the entire concept of external summing. I read your post about having this latency-compensated send/return bus matrix for external summing. Having run my own hybrid setup for close to 20 years, I would like to invite you to think about your ‘source’ DAW just like a tape machine: you play back each track, essentially mix through an ‘analog console’, and arrive at the 2-bus mix, still in the analog domain, which then needs to be printed. You could print it onto tape, or digitally using a hard disk recorder, an SD card recorder, or indeed a computer/DAW.
The point being that your printing computer/DAW is NOT the same as the playback tape machine/engine, e.g. the DAW sending the individual busses into the analog domain to be mixed on the board. For room correction etc. you should get another computer that runs a dedicated VST engine (like a DAW, or Vienna Instruments Pro, or the like) that then transmits the 2-bus coming from the analog board to your monitors for you to listen to. No need to compensate for any latency, as there need not be any correlation between the source DAW and your monitors.
If you move the digital faders in your ‘source DAW’, any automation is latency compensated of course, and if you move any faders on your analog board, well, there is no latency. Any latency generated by your digital monitor chain has no bearing on the printed two-bus, i.e. the final mix. The point here is to mult the analog board’s 2-bus into the ‘master bus’, which you record, and the ‘monitor bus’, which you monitor through (and might have to put through an additional AD/DA to incorporate digital room correction).
I have run this process successfully for over 2 decades.
I see the theory in what you propose, but in practice using a DAW both as a multitrack tape playback machine and as a stereo recorder is circular and wouldn’t be practical.
Just my 2c.
@CeZar Just wanted to say nobody is understanding your ask here and it would be really great if Cubase could do this. I’m really astounded at the negativity as well in this thread.
In my scenario, I want to process vocals outside of the summer as I do not want them hitting the instrument bus compressor after my summing mixer. So vocals stay in a digital bus, and instruments go to the summing mixer.
With zero plugins on, there is very little noticeable time delay between the instrumental bus and the vocals. As I add plugins, Cubase starts delay-compensating. This causes my vocal bus to get more and more “ahead” of my summing mixer. I may be wrong about why this happens, but it does happen. I know because one of my earlier attempts at fixing it was to introduce a delay plugin on the vocal bus to keep it “in time”, after calculating my round-trip latency. As I added plugins, it would drift off again and I had to recalculate.
All we’re asking for is something like an External FX that accepts sends and offers one stereo return bus, on its own track type, using Cubase’s delay compensation. The compensation that already exists in External FX. All of the pieces are already there.
I know, Nick, I seem to be barking at the walls here; most of the guys who replied either considered me stupid or just dismissed the idea. There are MANY ways in which this could help, and, for the record, something like this would enable ACTUAL hybrid mixing, not today’s definition of “hybrid” mixing (aka DAW = tape machine). What you want to do is a perfect example of a situation where you don’t want everything to go into the analog domain, but want to sum just these 3, 4, or 10 busses and leave the vocals digital. Another simple application: using the DAW as an FX processor, with busses from your analog console sending signal to a reverb/delay/whatever in Cubase and then back out into the analog domain. There are many applications for this; it would literally mean integrating the DAW with analog consoles far more than has been done until now. Sadly, we’re the only ones noticing the potential here, and yes, it’s already in there in the External FX feature. They just need to turn it into a new construction with more outputs and one latency-compensated input.
Thanks for confirming I’m not an idiot, @Nick_rage .
I have had the same issue and setup, and I compensated for it with a somewhat “cheap” trick: I bought another summing desk (the SSL SiX), where I bring the music, after the summing mixer and bus compressor, together with the vocals.
Since the bus compressor is outboard, there is no delay happening anymore, and I get a little “polishing” through the SSL summing.
But indeed I feel a little lost with this delay compensation in Cubase when I use External FX as summing outs. I had assigned these outs to my busses (which included plugins) and the delay war began… the vocals were out of sync… and the keys… and so on, by a different amount each time (you can see it in the mixer)… and I was hoping for a solution. Now I create another 6 groups, to which I route the original groups, so that they don’t include any plugins, in the hope that this works. The vocals are better now, but I will keep testing.
Look, I really hate to bring this thing back up again, but 1: it is a shame this doesn’t get upvoted, and 2: I’ll bet you that whichever DAW developers see this topic will try to add it to their own product. Will this be revolutionary somehow? Probably so. Would Steinberg market the crap out of this if it actually implemented it? Of course. Will it mean more integration for hybrid mixing as we know it? Again, I presume so.
What I find astonishing here is that mixing engineers don’t understand what I’m asking, even if I cared about the visual latency compensation part more than the audio. The last two responses are literally proof that it’s 2023 and a simple task like sending part of your mix into the analog world while keeping other parts digital is still hardly achievable, because literally nobody has thought about doing it without delay plugins and other manual work that shouldn’t be necessary. Implementing this matrix idea would make the whole process flexible.
The way I mix nowadays on my hybrid setup is to send all my channels, after applying channel strips in Cubase, to 32 channels on my D&R Triton inline console and get 8 groups back in. I literally start my session going through summing before I apply any plugins, because it affects the way I hear things. Can I do accurate automation while doing it? No, but apparently this doesn’t matter to the few here who told me that I don’t know what I’m talking about. Is it REALLY necessary in 2023 to waste another few minutes printing those 8 groups before exporting? It shouldn’t be; a realtime export should be enough, shouldn’t it? Would a feature like this help setups other than mine? Yeah, it probably would, and the proof is in the pudding judging by the last two replies. Is this feature request getting upvoted? Unfortunately not.
Mark my words here: ANOTHER DAW COMPANY WILL IMPLEMENT THIS, and that’s when y’all will start asking for it, because it will be revolutionary as far as hybrid mixing setups are concerned, and more and more people are going down this road.
Will I get any credit just for the idea and the easy implementation, given that it’s already there, looking at us through the External FX feature? Of course not, but I’ll bet you that if I start making phone calls to Avid, PreSonus, or other companies and explain what I have in mind, at least one will say “great idea, it needs implementing; I definitely see sales potential as far as hybrid mixers are concerned”.
The way automation works today in hybrid setups with external consoles is with monitors connected directly to the console (so that’s a second set, after your computer’s) and super expensive motorized faders, because that’s the only way to move what you hear in the moment without any buffer delay. This feature would have been a lot cheaper to develop and would solve a big problem.
In conclusion, please get more people to understand and upvote this feature, maybe we’ll see it implemented some day.
I still have the same problem today. The only solution I have found so far is to add latency to the output busses that go to the summing mixer with a plugin like Voxengo Latency Delay (it’s free, by the way), so that Cubase compensates the whole project every time you press play and everything is in perfect sync. Note that you can stack as many plugins as you like once a single plugin’s maximum latency has been reached. In my case I need 229.5 ms, so I had to stack three on every output bus. Check the screenshot.
The obvious downside of this solution is that the more delay you add, the longer the playhead takes to start, but the sound and the graphics will be in perfect sync.
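The stacking arithmetic from the workaround above can be sketched as follows (the 80 ms per-instance ceiling is a made-up figure for illustration; check your plugin’s actual maximum reported latency at your sample rate before relying on it):

```python
import math

def instances_needed(target_ms: float, per_instance_max_ms: float) -> int:
    """How many latency-reporting delay plugins must be stacked so
    their combined reported latency covers the summing roundtrip."""
    return math.ceil(target_ms / per_instance_max_ms)

# A 229.5 ms roundtrip with a hypothetical 80 ms cap per instance
# needs three stacked plugins, matching the post:
print(instances_needed(229.5, 80.0))  # 3
```

Each instance should then be set so the total equals the measured roundtrip exactly; any rounding error reintroduces a small offset.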
Steinberg has to fix this; since I started using the Orbit 5750, my mixes have improved like 10 times.