Change hitpoint threshold of multiple events

Hello there,
I would like to know if there is any way to change the hitpoint threshold of multiple events at once.
When recording drums, I usually edit on the go, so there are already a lot of audio slices right after the recording. These are “fast edits” (not the cleanest) so they don’t slow down the recording process. I need to edit them again afterwards anyway, manually or with quantization, so there is no point in bouncing them.

Now when I select all my snare or kick events, double-click them, and edit the hitpoint threshold, it only affects the first selected event. So I would have to select each audio slice one by one to change the hitpoint threshold. That can’t be it. Is there another solution to that?

That would save so much time :cry:

Okay, it seems to be broken. It only works for the first 3-4 selected events; the rest are not affected at all.

EDIT: It only works if the events are all from the same take. E.g. if all events are from take Snare_1, you can select them all, hit Enter, and change the hitpoint threshold for every event.
If you have multiple takes, Snare_1, Snare_2, Snare_3,… and select them all to change the hitpoint threshold, it will only work for the events from Snare_1.

That’s really bad. When recording technical stuff, you have to divide it into multiple takes. This is crucial for editing!

Does anybody know a workaround?

It is better to first bounce/consolidate a drum part (either a song part or the entire song itself), before moving on to hitpoint detection and editing. Having too much stuff sliced up will be messy and even your playback won’t be 100% accurate.

I’d recommend bouncing your rough edit first, making sure your crossfades are tidy and sound right, then moving on to a deeper editing stage using unsliced events. And always keep an unedited backup you can return to if necessary.

Well, because the software obviously can’t handle multiple takes. 1000+ slices of the same take aren’t a problem here.

Okay, it will be messy, because Cubase (but also Studio One and Logic) begins to lag with a lot of audio slices, but the playback will still be 100% accurate. Until now I have always had projects with sliced drums, and it never sounded off. If they managed to fix the performance issue, I’d see no problem here.

Yes, that’s really it, I guess… It’s just that when recording I make very fast edits, so as not to lose time and not to keep the drummer waiting. So directly after recording, some crossfades need to be redone. I would have to go through all the crossfades, then bounce, and then edit everything again a second time. That’s not what I would call an intuitive, fast workflow.

And now imagine recording technical stuff, where you already edit a lot during recording. :confused:

Thanks anyway!

I record technically advanced stuff every now and then, and that’s how I get things done - I organize tasks, commit to creative decisions as early in the process as I can, keep my project tidy, and don’t feel lost or frustrated in the process.

And it doesn’t even take that long to review crossfades and bounce your rough edits. It takes me just a few minutes per track, and I even have a macro that automatically removes overlaps, applies crossfades, bounces, and groups waves together. The next editing step gets easier, and those few minutes you spent before easily pay off.

And no, if you’re demanding that your processor calculate thousands of crossfades plus plugin processing and your hard drives read thousands of mini waves, playback will obviously not be 100% accurate. You may not hear them, but there will be artifacts. This is not my opinion, it’s a verifiable fact.

Best wishes for your productions!

No offense, the CPU may be working harder, but I am pretty sure the outcome of a sliced/crossfaded part is the same as the bounced one. Maybe this was an issue a long time ago, but nowadays it’s most probably not an issue anymore, or negligible.
Is there some dev talk about that somewhere or can you prove that?

I’m not offended. You can test it yourself: just bounce both and check if they null each other. If they do, awesome, you’ll only have to deal with a laggy interface. But if you have an especially heavy project, the audio may not null, and in that scenario, besides a laggy interface, you’ll also have unreliable audio. The point here is, do you really want to risk it? There are alternative working routines that can spare you all of that. Cheers!
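If you want to run that null check outside the DAW, here is a minimal sketch in Python (the file names are hypothetical, and it assumes both versions were exported as WAV files at the same sample rate, with the same channel layout, and from the same start position) that loads the two exports with soundfile and reports the peak level of their difference:

```python
# Null-test sketch: subtract the two exports and look at the residue.
import numpy as np
import soundfile as sf

# Hypothetical file names for the two renders of the same drum part.
sliced, sr1 = sf.read("drums_sliced_export.wav")
bounced, sr2 = sf.read("drums_bounced_export.wav")
assert sr1 == sr2, "exports must use the same sample rate"

# Trim to the shorter file in case the export lengths differ by a few samples.
n = min(len(sliced), len(bounced))
diff = sliced[:n] - bounced[:n]

peak = np.max(np.abs(diff))
peak_db = 20 * np.log10(peak) if peak > 0 else float("-inf")
print(f"Peak of difference signal: {peak_db:.1f} dBFS")
```

A peak far below the noise floor of the export format (for 16-bit WAV, roughly -96 dBFS) means the two versions effectively null; a peak well above that means the sliced and the bounced renders really do differ.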

Thanks anyway! I guess there is no other way around that.
Cheers!