Cubase....The Inspector...we need to talk

Yes, you can quantize the regions themselves in Logic.

Sorry, my post was not very clear. The more important question was: where would one find that?
I am trying to learn about the user interaction architecture of that part of Logic.

It’s the same thing. When you click on a (MIDI) region you are selecting the events (MIDI notes) automatically. So when you go to the inspector and set a quantize, velocity, gain, etc. parameter, it affects all the events (MIDI notes) in the region. You can often change values by dragging the mouse up and down over a parameter’s numeric value in the inspector, so there are often no extra mouse clicks after you select a region to see or change the parameter you want.

The state of all parameters in every region is saved and DISPLAYED RETROACTIVELY, i.e. click on any region you previously worked on and you see what you last set there. Cubase cannot do this for MIDI quantization. This is a huge timesaver if you have 100+ MIDI tracks with 1000+ regions. The same inspector model applies to a region with audio; in other words, the inspector is context sensitive and changes what it displays based on the type of object you click on in the arrange window. There are other inspectors besides the region inspector. This video might make it clearer:
Logic Pro Complete Tutorial - 08 Region Inspector - YouTube
or maybe this one
Region Inspector Float in Logic Pro X - YouTube
I want to be able to recall all the work I did to hundreds of MIDI regions in an orchestral score, and Cubase not recalling quantize (etc.) info is an obstacle and a time sink.
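If it helps to see the idea in pseudo-code form, here is a minimal sketch (purely illustrative, not Logic’s actual data model): each region carries its own non-destructive parameter set, and the inspector is simply a view of whatever is stored on the currently selected region.

```python
# Illustrative model only: every region keeps its own non-destructive playback
# parameters, and the "inspector" just reads them back whenever you reselect it.
from dataclasses import dataclass, field

@dataclass
class RegionParameters:
    quantize: str = "off"        # e.g. "1/16", "1/16 Triplet"
    velocity_offset: int = 0     # added to every note's velocity on playback
    gain_db: float = 0.0

@dataclass
class Region:
    name: str
    params: RegionParameters = field(default_factory=RegionParameters)

def inspector_view(selected: Region) -> dict:
    # The inspector is just a view of the selected region's stored parameters.
    return vars(selected.params)

strings = Region("Violins I", RegionParameters(quantize="1/16 Triplet", gain_db=-3.0))
brass = Region("Horns")

# Reselecting a region months later still shows what was last set there.
print(inspector_view(strings))  # {'quantize': '1/16 Triplet', 'velocity_offset': 0, 'gain_db': -3.0}
print(inspector_view(brass))    # {'quantize': 'off', 'velocity_offset': 0, 'gain_db': 0.0}
```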

1 Like

I understand that Logic uses a model where quantisation is a property of a region, and that property has a set of parameters attached to it.
Furthermore, Logic has another layer of quantisation which can work on an event level, e.g. inside the key editor.

What I don’t know yet is whether there can be conflicts if you give different instructions to the two layers. E.g. you quantize two (out of several) note events in a MIDI region to a 1/32T grid, and afterwards you apply a 1/2 quantisation to the whole region. How does that affect those two note events?

And where in the project could I find the quantisation settings of the regions?

The Inspector panels for Inserts and Sends should operate on a “Used Inserts/Sends + 1 blank” basis. I want to keep them open, but just one of these panels uses more than a third of the vertical real estate - it’s just not efficient and restricts the usefulness of the Inspector.

Indeed: check the post I made here about it… It’s on Steinberg’s “to do” list, but I remember that @Matthias_Quellmann stated somewhere in the forum that the inspector tabs behavior will NOT be improved in C12…

1 Like

Learning the hotkeys and binding them so that you only need your left hand to trigger a shortcut makes things pretty easy. For example, Ctrl+Q opens/closes the fader module for me.

Instrument Tracks are a bit annoying and convoluted, though.

What I don’t know yet is whether there can be conflicts if you give different instructions to the two layers. E.g. you quantize two (out of several) note events in a MIDI region to a 1/32T grid, and afterwards you apply a 1/2 quantisation to the whole region. How does that affect those two note events?

Whichever of the two levels of quantization you apply last wins, i.e. is in effect. But the region (or main) inspector has GLOBAL scope within the region and overwrites all events of that type, so it will override any LOCAL changes you made in the piano roll (Cubase’s key editor). Say you start out with 1/16-triplet quantization via the region inspector. Then you go into the piano roll, select 3 events and set them to 1/32 + a swing value. Now only those 3 events differ from the GLOBAL 1/16-triplet quantization. Then you go back to the region inspector and re-select the same 1/16 triplet (or pick a different setting). That GLOBAL change undoes all previous event quantization (no matter the value) and resets everything to what you just selected in the region inspector. The 3 events you edited to 1/32 + swing in the piano roll are set to 1/16 triplet (again).
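Here is a small sketch of that behaviour as I understand it (not Logic’s code, just a model of “a global region write overrides local event edits, last write wins”):

```python
# Illustrative model: region-level quantize stamps every event, event-level
# quantize stamps only the selected ones, and whichever write happens last wins.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int
    quantize: str = "off"

class Region:
    def __init__(self, events):
        self.events = events
        self.region_quantize = "off"

    def set_region_quantize(self, value: str):
        """GLOBAL scope: overwrites the quantize of every event in the region."""
        self.region_quantize = value
        for e in self.events:
            e.quantize = value

    def set_event_quantize(self, indices, value: str):
        """LOCAL scope: only touches the selected events (piano roll / key editor)."""
        for i in indices:
            self.events[i].quantize = value

region = Region([NoteEvent(60), NoteEvent(64), NoteEvent(67), NoteEvent(72)])
region.set_region_quantize("1/16 Triplet")           # all four events -> 1/16 Triplet
region.set_event_quantize([0, 1, 2], "1/32 swing")   # three events now differ
region.set_region_quantize("1/16 Triplet")           # global write resets ALL events again
print([e.quantize for e in region.events])           # ['1/16 Triplet'] * 4
```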

2 Likes

Thank you for taking the time to explain to me how quantisation works in Logic.
As I mentioned before, I am looking at the architectural design of that functionality in order to judge whether the UI design is good and is something that should be copied into Cubase.

I watched several videos on this topic and read the article that was recommended. Unfortunately none of that material covers every single use case. I am still left with some uncertainties; I would probably have to install Logic myself to resolve them. But… no Mac, so no go.

I came to the conclusion that I do like some of the ways Logic handles quantisation and that Cubase could learn from them, but they have nothing to do with the actual UI design, so I won’t elaborate here.
I also came to the conclusion that Logic’s Region Quantize area in the Inspector has a major flaw and should therefore not be copied 1:1 by Cubase: if the user selects a region, the Region Quantize field may display values that do not conform to how the (note) events inside that region are actually quantised. Unlike Cubase, Logic can thus lead the user to think that what is displayed in the Inspector is what actually happens inside the region.
Cubase’s Info Line (which is the horizontal counterpart to the vertical Region area in Logic’s Inspector) shows the current, actual status of the selected part or event. If you introduced the Quantize info there (as per your mock-up above), you would break this design rule of Cubase.

Now, I do like the idea of having an editable quantisation data set attached to audio/MIDI parts or audio events. And yes, that is what actually happens in Logic. It is not the Photoshop-like layer system you described earlier, I’m afraid, as there is no hierarchy between region quantize and key editor quantize. I just think it would need to be made accessible in Cubase in a different way.
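To make the flaw concrete, here is a tiny self-contained sketch (a hypothetical model, not Logic’s actual implementation) of how the displayed region value can diverge from what the events are really set to after per-event edits:

```python
# Hypothetical model of the "display lies" problem described above.
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int
    quantize: str = "off"

# Two events still conform to the region setting, one was edited in the key editor.
events = [NoteEvent(60, "1/16"), NoteEvent(64, "1/16"), NoteEvent(67, "1/32 swing")]
region_quantize_display = "1/16"   # what the inspector would still show

def display_matches_events(displayed: str, evs) -> bool:
    """True only if every event really conforms to the displayed region value."""
    return all(e.quantize == displayed for e in evs)

print(display_matches_events(region_quantize_display, events))  # False: the display is misleading
```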

I hope somebody from the Steinberg dev team takes a look at this part of our thread in order to think about advancing quantisation in Cubase to the next level.

1 Like

Thanks everybody!

I’m gonna take a second dive into Cubase. For some strange reason…and I know all the null tests and everything…it DOES sound “better” than Logic. I’m not even kidding!

The workflow, for songwriting, is a pretty steep learning curve. Very powerful… but steep. I feel like Cubase can do “MORE” if you really master it.

Amadeus e.d.p

I’m totally in agreement.

Amadeus e.d.p.

Cubase definitely had one of the best, most transparent, aliasing-free render algorithms years ago. Pro Tools and Premiere sucked compared to it.
Nowadays there shouldn’t be any big issues left, though. So I guess it’s mostly the workflow :thinking:

Tj99,

So I kinda created a new workstation using Halion that is doing some super CRAZY stuff. Like CPU-melting DSP beds of goodness. It’s called SkyFall. I’m using a new way to design sounds that uses up to 20 layers of synthesis held together by surround panning, lots of DSP sound shaping and the like…

What I’m getting at is that playing a normal wave file on a single track doesn’t take up all the “bits” on that channel, in theory. You are panned to a single point and all those other points in the sonic spectrum are essentially empty. You can “kinda” fill it out with chorus and reverb and the like… but if you use something REAL-TIME that is hammering a DAW channel in ways that I think NOBODY has really done… you start to hear differences.

So essentially this: I think it’s more about when you start pushing those channels PAST what would normally be done… you start to hear differences. Like either the DAW or Halion starts making decisions about which frequencies to keep and which not to play. I think THIS is where Cubase sounds different. When pushed… it seems to do a slightly better job with those choices.

That was a hot pile of mumbo-jumbo and apologies to all. But I’m not kidding. In extreme situations…the engines act differently.

Amadeus e.d.p.

I am afraid this is not how it works in digital workstations…

There’s virtually no limit with 32- or 64-bit float. The processing is linear, meaning it will sound the same whether your audio sits at -25 dB or at +60 dB above 0.
The only moment you would notice this is when using analog-modeled plugins. These indeed react to the incoming level and, as a consequence, will sound a bit different. Also, most analog-modeled plugins do clip at 0 dB, so they will saturate or distort harder when the signal exceeds this ceiling.
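For anyone who wants to convince themselves, here is a quick numerical sketch of that linearity claim (plain NumPy, no particular DAW implied): applying a gain and then its inverse leaves only rounding noise far below audibility.

```python
# Null-test sketch: gain down by 25 dB, undo it, and measure what is left over.
import numpy as np

rng = np.random.default_rng(0)
audio = rng.uniform(-1.0, 1.0, 48000)        # 1 s of full-scale noise at 48 kHz, float64

gain_down = 10 ** (-25 / 20)                 # -25 dB as a linear factor
processed = (audio * gain_down) / gain_down  # gain change, then the inverse gain

residual = processed - audio
peak_db = 20 * np.log10(np.max(np.abs(residual)) + 1e-300)
print(f"null-test residual peak: {peak_db:.1f} dBFS")
# Around -320 dBFS in 64-bit float; roughly -140 dBFS in 32-bit float.
# Either way it is far below anything audible, i.e. the level change is linear.
```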

Hey Louis…

Let me explain it like this. Let’s say you have 150 voices of polyphony coming from just 4 notes. That sound is made up of 10 individual stereo layers of synthesis. Let’s say you have +5 dB of 120 Hz bass on EACH of those layers. Having that much low end causes the Halion engine (at least to my ears) to suppress OTHER frequencies.

Now let’s take some of those tracks to -5 dB at the same 120 Hz… Now other sonic information that the engine treated as inaudible, or just didn’t render at all, becomes audible.

So it’s in making THOSE types of decisions that I feel Cubase behaves differently: pushing things to extremes and how the engine chooses to drop or suppress various frequencies. Cubase just retains high-end information in a different way, to my ears.

Thanks!
Amadeus e.d.p.

It’s just summing the layers, surely? I wouldn’t expect there to be any choice about what to drop or not, as it shouldn’t be necessary. If a ceiling gets hit then it’s just basic math, and it would clip at the max value, I presume?
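As a rough sketch of that “basic math” point (illustrative NumPy only, not any DAW’s actual engine): a floating-point sum can exceed 0 dBFS without discarding anything, and hard clipping only appears once you force the mix through a fixed ceiling.

```python
# Sum three layers in float, then show what a fixed 0 dBFS ceiling would do.
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
layers = [0.8 * np.sin(2 * np.pi * f * t) for f in (110, 220, 330)]

mix = np.sum(layers, axis=0)                 # float sum: peaks above 1.0 are preserved
print(f"float mix peak: {np.max(np.abs(mix)):.2f}")          # well above 1.0, nothing dropped

clipped = np.clip(mix, -1.0, 1.0)            # a hard 0 dBFS ceiling (e.g. integer export)
print(f"peak after hard ceiling: {np.max(np.abs(clipped)):.2f}")
print(f"samples that would clip: {np.count_nonzero(np.abs(mix) > 1.0)}")
```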

Are you coming to this conclusion through listening? If so, discrepancies will be heard in your headphones or speakers due to their physical ability to reproduce high frequencies versus low ones.

Also, what your ears get used to can massively affect the perceived sound. This video is quite an eye-opener on how easily that happens:

1 Like

This is not how it works in the digital realm. It sounds more like your monitoring setup or the plugin itself is doing this, rather than the DAW.

Be assured, there is virtually no difference in sound quality between DAWs when it comes to digital summing, unless some non-linearity is added on purpose (e.g. when using mixbus32c).

I guess we’re gonna have to disagree here, T! That’s cool. But I’m telling you right now that when I load up SkyFall (my Halion-based synth)… SOMETHING is happening when I’m pushing the channel fader to +10 dB or higher, due to the Warp Architecture. Something somewhere is different. Not night and day. Just “different”.

Thanks for your feedback though. I love this sound design stuff!

Amadeus e.d.p.

Here are a few reasons why it may sound different when changing the gain (just my personal thoughts):

  1. If you have too much bass it will indeed mask higher frequencies.
    But this does not happen inside the DAW; it’s a physical phenomenon that occurs at the speakers. When a speaker has to make large and slow excursions (i.e. reproduce low frequencies), it is harder for it to reproduce higher frequencies at the same time. This phenomenon is accentuated as you increase the volume, because the coil has to push the diaphragm with more force over greater distances. The highest frequencies disappear first; that’s why studio monitors have a tweeter in addition to the woofer to mitigate this effect, but it affects the woofer too.

  2. If the instruments involve synthesis somewhere, the signal will not always start at the same position in the waveform; it can start anywhere each time a note is played, or even drift like an analog oscillator. This also applies to effects such as reverb, delay, chorus, distortion etc.: they can contain modulation, which makes them sound slightly different on each layer / track.
    Because of this, when you change the gain structure of the layers or tracks, phase cancellation will most probably occur in different places, and, as you said, it’s not night and day, it just feels different (see the small sketch after this list).

  3. As you increase the track volume inside Cubase, it could be that the signal is clipping. Not necessarily at the channel stage, but it could be clipping at the Control Room stage without you noticing. In that case, though, I guess you would hear a notable difference.
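Here is a rough sketch of point 2 with made-up numbers (illustrative only, not a claim about Halion or Cubase): two layers at the same frequency but with different start phases partially cancel, and the amount of cancellation shifts as soon as you change their relative gains.

```python
# Two 120 Hz layers with offset phases: changing the balance changes the cancellation.
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
layer_a = np.sin(2 * np.pi * 120 * t)          # 120 Hz, phase 0
layer_b = np.sin(2 * np.pi * 120 * t + 2.8)    # same frequency, offset phase

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

balanced = layer_a + layer_b                     # both layers at unity gain
rebalanced = layer_a + 10 ** (-5 / 20) * layer_b  # drop layer B by 5 dB

print(f"balanced mix peak:   {peak_db(balanced):.2f} dB")
print(f"rebalanced mix peak: {peak_db(rebalanced):.2f} dB")
# The peaks differ even though nothing was "dropped" by the engine;
# it is just constructive/destructive summing of out-of-phase content.
```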


What do you call “Warp Architecture”?

Also, one thing you can try is to render the same sequence twice, once with +5 dB at 120 Hz and once with -5 dB, then use the Spectrum Analyzer on each resulting file (it’s in the Audio menu). Now compare the high-frequency content: if you don’t see notable differences, it means the effect comes from your monitoring device and/or your ears (ears also cut the highs when they receive too much bass).
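If you prefer to compare offline, here is a possible sketch of that check (the file names are placeholders for your two bounces; assumes NumPy and SciPy are available):

```python
# Compare the energy above 5 kHz in two rendered files.
import numpy as np
from scipy.io import wavfile

def high_freq_energy_db(path, cutoff_hz=5000):
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)                 # fold stereo to mono
    data = data.astype(np.float64)
    spectrum = np.abs(np.fft.rfft(data)) ** 2    # power spectrum
    freqs = np.fft.rfftfreq(len(data), d=1 / rate)
    energy = np.sum(spectrum[freqs >= cutoff_hz])
    return 10 * np.log10(energy + 1e-30)

# Placeholder names for the +5 dB and -5 dB @ 120 Hz bounces described above.
boosted = high_freq_energy_db("render_plus5dB_120Hz.wav")
cut = high_freq_energy_db("render_minus5dB_120Hz.wav")
print(f"difference above 5 kHz: {boosted - cut:.2f} dB")
```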