One thing I dislike, which seems to happen a lot on this forum and definitely a few times in this thread, is users telling us what most "blah blah" use today. In this example I know many professionals who don't. But I'm sure many do. Like everyone else here, I don't know.
Of course it is - e.g. look at Bitwig.
There are two different ways to do PDC:
- Always - which is what Cubase does, hence the gap
- Only when no playback active (no gap, no live PDC)
Maybe there's some way this can change - like giving users an additional buffer setting, say 300ms, so that plugins operating within that delay period can be added without having to interrupt PDC and cause an audio gap… but…
As it stands, I’m not sure any DAW has this. Last I checked, and what people don’t realize, is that other DAWs don’t do PDC during live playback, only after playback has stopped. Meaning things will be out of sync until you stop playback.
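To make the mechanics concrete, here's a minimal sketch (not Steinberg's actual implementation - the track names and the helper function are illustrative) of the core idea behind PDC: every track is delayed by the difference between the most latent signal path and its own, so all tracks stay sample-aligned.

```python
def pdc_delays(track_latencies):
    """Given each track's total plugin latency in samples,
    return the extra delay to apply to each track so all align."""
    max_latency = max(track_latencies.values(), default=0)
    return {name: max_latency - lat for name, lat in track_latencies.items()}

# Before adding a heavy plugin: vocals run 128 samples behind the drums.
tracks = {"drums": 0, "vocals": 128}
print(pdc_delays(tracks))   # {'drums': 128, 'vocals': 0}

# Adding a 14400-sample (300 ms @ 48 kHz) plugin forces every other
# delay line to grow - the realignment that can cause an audible gap.
tracks["limiter"] = 14400
print(pdc_delays(tracks))   # {'drums': 14400, 'vocals': 14272, 'limiter': 0}
```

The point of the sketch: whether that recalculation happens immediately (Cubase, hence the gap) or only on the next playback start (other DAWs, hence the temporary desync) is the design choice the thread is arguing about.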
What is even the real world application of this? How important is it to people to be able to live-add a distortion plugin without a short interruption? In a linear-compositional program that isn’t meant/intended/designed for live use?
Considering there have been complaints about how Cubase doesn't seem to know how to handle high core-count/thread processors, that rewrite will have to happen anyway, because CPU development will not stop just because Steinberg decided their audio engine only works correctly if you have 4 cores/threads.
Gapless audio engine and CPU core handling are two completely different issues. Proper CPU handling is more important than gapless audio.
For me, changing Cubase to be a live performance tool is not worth the audio engine rewriting and risks that it may bring (new bugs, issues, etc.).
My point is the audio engine will have to be redone at some stage because of those issues, not that these are the same issue.
I myself come down on the side of wanting Cubase to keep its philosophical gestalt. Not so much because I might be unable to ignore a new feature I don’t need – but as others have said – the design and development team balance the effort made, based on criteria more enduring than user requests.
There was a period when I checked out other DAWs, and sometimes I would fall in love with some feature one of them had. But later I would miss something in Cubase; even though it wasn't perfect (e.g., the PLE and LE for me), it was still a better all-round tool than the other DAWs.
And what is it, exactly?
No one asks for Cubase to be turned into Ableton Live or Bitwig, but it is certain that some features they have - clip launchers, modulation, patterns - are important to how people want to create music nowadays.
Has adding patterns and modulators changed Cubase’s “philosophical gestalt”? If you don’t want to use them, you may pretend they aren’t there and never see them!
Same with clip launcher. It’s super useful for trying out combinations and sequences of ideas without committing to them on a timeline - and no, it’s not the same as arranger track/feature. It’s much, MUCH easier, especially with a pad controller like Launchpad.
Also, gapless audio is how people like to work now - music is still playing (often in the loop, or from clip launcher) in the background while you add tracks, audio, MIDI, FX and instruments. It’s just another way of producing, arranging, sound design… It has nothing to do with live performance, and thus it doesn’t have to threaten your “philosophical gestalt” of Cubase being somehow above it all, as if it’s an inferior form of art.
Actually, the suggestion of both gapless audio and clip launching, in combination, is asking for Cubase to be turned into Ableton/Bitwig. You, and others, are asking Cubase to become a looper-based live-performance piece of software - that is simply not what Cubase is designed to be. It would take SIGNIFICANT changes, from the audio engine up to the GUI.
Cubase is intended to be a linear composition/production/audio engineer tool. It focuses on uniform studio sync - not desynchronized multi-track playback. That’s just not what Cubase does. At all.
Actually these entities already existed in the form of MIDI plugins, which I have been using for over a decade.
The people on the forum asking for this I don’t think really understand what they are asking for, and what it is that would be changing… Cubase syncs PDC/Latency during playback and other DAWs don’t (as far as I know…) It’s not a bug, it’s a feature.
Cubase is a very sync-focused program (PDC, Automation, internal MIDI, External MIDI Timecode, External FX, etc)
And perhaps, if you are getting lots of clients for which this is a production workflow problem for them - ie, they write/produce in Ableton/Bitwig, then you should probably learn the software your clients use, and then Edit/Mix in Cubase.
But the reality, as far as I know (I don't continuously research this), is that other DAWs only do PDC/latency sync on playback start - not during. Cubase does things differently in that it always syncs PDC/latency… and some of us prefer this…
I much prefer having a 1-second interruption to having one track out of sync with the rest, which would require me to stop and restart playback anyway to get an accurate analysis of it… Having one track out of sync is going to mislead your perception of its timing/sound in relation to all the other tracks… i.e., if you shift a track by 15ms, its time-frequency content has shifted, changing various psychoacoustic phenomena (such as masking) in relation to other tracks…
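For a sense of scale, the arithmetic behind that 15 ms figure: at common sample rates it displaces transients by hundreds of samples, easily enough to change masking and perceived timing between tracks.

```python
def ms_to_samples(ms, sample_rate):
    """Convert a delay in milliseconds to a sample offset."""
    return round(ms / 1000 * sample_rate)

print(ms_to_samples(15, 44100))  # 662 samples
print(ms_to_samples(15, 48000))  # 720 samples
print(ms_to_samples(15, 96000))  # 1440 samples
```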
Now, when someone is on acid, live jamming loops in Ableton and everything is “hitting the red”…. 15ms of transient displacement because they haven’t hit stop and play again to reset sync…that’s probably not going to matter to them in that moment…
If you go research "latency issues" or "delay compensation issues" for Ableton and Bitwig, you will find all sorts of problems people are having… because you can't add a 300ms-latent plugin in 0ms of time…
Except Cubase can already do it and do it in multiple different ways as well. It has loop recording, pattern tracks, you can load samples and loops on Sampler Track and Groove Agent, the Arranger Track offers an alternative way of looping as well… It just doesn’t have an explicit clip launcher.
Cubase might have been designed to be that 34 years ago, but claiming it to be linear is funny considering how many composers don't start their music from the very beginning. If Cubase were linear, you couldn't start a recording on bar 16 without filling the first 15 bars first, for example.
Every DAW has some form of clock sync. You couldn’t even have latency compensation without it. How would you even measure latency without a timing reference? You can’t even derive sample rates without it.
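To illustrate why a timing reference matters here, a hypothetical sketch (the click signal and helper name are invented for illustration) of how round-trip latency can be measured: play a known click, record the return, and find the offset of the peak cross-correlation. Without a shared sample clock, that offset would be meaningless.

```python
import numpy as np

def measure_latency(sent, received):
    """Return the delay (in samples) of `received` relative to `sent`,
    found as the lag of the peak cross-correlation."""
    corr = np.correlate(received, sent, mode="full")
    return int(np.argmax(corr)) - (len(sent) - 1)

sent = np.zeros(1000)
sent[0] = 1.0                          # a single-sample click
received = np.roll(sent, 256) * 0.5    # comes back 256 samples late, quieter
print(measure_latency(sent, received))  # 256
```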
I'm confused then - and you are correct - why would Steinberg need to do anything here if we already have what we need? But also no, you're sort of leaving out a key aspect: looper-based live-performance software. Yes, Cubase obviously can do some stuff live/"real-time".
No, you’re mixing up context and leaving out specific things I’ve stated like ‘not desynchronized multi-track playback’. Linear in this case simply means all playback occurs at the same time for all tracks.
By your "definition", it would be like saying a multi-track tape recorder isn't linear because you can punch in…
I’m not sure what you’re disagreeing with in my quoted text, or what you’re asking me? The reason Cubase causes an audio playback gap - is because it does the sync during playback. Other DAWs don’t, which causes things to fall out of time until the user stops playback and starts again.
Bitwig and Ableton have had all sorts of sync issues, Ableton finally just solved some of them last year.
Some people
I don't care about "gapless audio".
Excellent!
Thanks for your post.
I wasn't aware how deep Cubase's PDC/latency compensation goes (if what you're describing is true, and I've no reason to believe otherwise).
Regardless, I don't really need "gapless audio" to be super spot-on, perfect and never produce any glitch. Perhaps some kind of gradation would be enough - for example, disabling/minimizing ASIO Guard, etc., so Cubase doesn't stutter whenever I move a clip, add a track or change the order of FX. I don't mind small glitches, but a half-second one does throw me off, especially if during that time things play out of sync.
I notice this becomes much less of a problem now with Apple Silicon CPUs, so perhaps it will get easier to solve as computers get more powerful.
- this is true about Live, but not so much about Bitwig which is considered much, much better in this regard (to the point many artists have switched because of it),
- no one adds a plugin with 300ms+ of latency to a project when playing live - you only play clips "out of order" or loop them "unexpectedly"; during production, people are aware that e.g. Ozone will cause this. It's about different things - moving tracks or clips, changing FX order, etc. - where Cubase runs everything that isn't armed in a secondary, much bigger buffer (a kind of pre-rendering), allowing for more plugins, but requiring recalculation in real time when something on those tracks does change after all.
This wasn't really called for, although it is emblematic of how a lot of people here think about others who use more performance-oriented DAWs, like Live or Bitwig.
We get it - you're all Chris Lord-Alge mixed with Hans Zimmer. Carry on!
What makes you think Chris Lord-Alge and Hans Zimmer haven't done acid?
There's a lot going on under the hood of Cubendo that allows it to operate the way it does, which many people have become accustomed to without realizing they actually depend on it working that way. That includes adding tracks - it's a computer process, and that track is being added into a bigger operational picture, one Cubase wants to keep sample/latency/clock accurate across multiple variables… like the ones I listed, or others such as video sync - something Live and Bitwig aren't really focused on.
The key to using Cubase in a seamless/live way is to really hunker down with a journal and pen and plan out some clever templates, with all your tracks and routings to groups and FX pre-configured, chosen plugin chains, etc. For example, in all my templates I have, as a standard, a dedicated recording-tracks folder, with audio tracks all connected and ready to go, that I use just for recording… so I don't have to add audio tracks mid-session. If, say, I'm working on a production and the singer randomly starts singing something, there's already an audio track ready to record.
A bit of pre-production session development and planning goes a long way - figuring out which plugins are likely suitable for a client/project type before they are in the door, having all that routed and ready…
Very few gaps or process interruptions on my end… mind you, I'm not a plugin hoarder flicking through 1000 plugins and picking a different flavour every other minute… so it's not difficult for me to choose which plugins get pre-added to my templates.
Again… What people, all people, most people, some people, a few people?
Well, then fix both issues. What's all the fuss about? Just fix it and make the customers happy.
It is possible in principle, so it can be done in a programming language. That’s not rocket science.
Yes, it is possible in principle. But you have to give up delay compensation. Cubase is music production software, not a live performance tool. In principle they could change Cubase to be a word processor too - so what is your point?
People. Those that don’t use it, I don’t consider “people”
Yes - “some”, probably “many” even considering how popular Ableton Live (+Bitwig) is.
No, delay compensation does not have to be sacrificed for gapless audio.
It's just a matter of the audio engine handling this.
It's not the customers' fault that the audio engine of Cubase might be decades old.
The problem of delay compensation and gapless audio is known for years and they have not done anything about fixing this.
Now, the time has come to do it or to wither away from the market in some years. It needs to be fixed asap and by all means necessary!
It might be necessary to run several data streams in parallel to get it done. But this should be of no concern, considering the power of modern computers.
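For what the parallel-streams idea could mean in practice, here's a minimal sketch (purely illustrative, not how any shipping DAW necessarily does it): keep the old processing graph running, start the new one alongside it, and crossfade between their outputs over a short window instead of stopping the stream.

```python
import numpy as np

def crossfade(old_block, new_block):
    """Equal-power crossfade from the old graph's output to the new one's,
    over the length of one audio block."""
    n = len(old_block)
    t = np.linspace(0.0, 1.0, n)
    fade_out = np.cos(t * np.pi / 2)   # old graph fades out
    fade_in = np.sin(t * np.pi / 2)    # new graph fades in
    return old_block * fade_out + new_block * fade_in

# During the transition block, both graphs render in parallel and their
# outputs are blended - no hard mute, no audible gap.
```

The cost of this approach is running two graphs at once for the transition, which is exactly where "the power of modern computers" argument comes in.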
As a customer, I do not care about how they do it, but when and finally how good they are doing it.
Just fix it! No matter what!