Bounce multiple events with Tail automatically

hey all!
Long-time Logic user transitioning to Cubase, and so far I haven’t found a way to do this specific task:

I work in game audio and handle hundreds of SFX regions, and 99% of the time the region length is way shorter than the final audio file, since any FX such as reverb or delay will generate a tail.

In Logic this is super easy: you simply check the “Add tail” box in the bounce window and it stops the bounce as soon as the amplitude reaches -inf dB.

So far in Cubase I’ve only found a way to manually specify the tail, but that’s not an option: with many different files the tails will always have slightly different lengths, so I might as well set the range manually for each one. And, as mentioned, with hundreds of events to bounce that’s a no-go.

I also tried all the other options (Channel Settings, Complete Signal Path, etc.) but the resulting file had the same length as the event while the reverb tail was much longer (hence the tail got cut).

Thank You!


Bad news: Cubase does not have such functionality.

Good news: I think this is an excellent feature request. It would be helpful to anybody who renders audio in place, not just sound designers.


Confirmed, I just found out it doesn’t :smiling_face_with_tear:
I’m amazed that, after all these years, only Logic has this simple but crucial feature (and I’d so love to ditch Logic since it has become a bug festival).

My programmer’s mindset tells me that coding the entire “set your own tail” functionality is a waste of time since, if I don’t know the tail, I have to sit and listen for where the tail ends anyway, so I might as well specify the range manually. Same result.

Also, I don’t see any use in the Complete Signal Path function since, if you have a reverb or delay on, the generated file will always have its tail cut abruptly, which makes the file unusable.

I’ll def post it as a feature request.

Though my biggest question is: how do sound designers tackle rendering hundreds of SFX in one go without this little function :sweat_smile:

Thanks for replying Johnny!

I’m wondering if Nuendo does it with all its game audio functionalities?

I’m thinking out loud here since I’ve never had to do something like this. But could a possible workaround be that you set the tail length to accommodate the longest tail and then batch process the rendered files to remove the silence at the ends? Perhaps Detect Silence can be used?
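
Something like this minimal sketch is what I have in mind for the batch step (just an illustration, assuming plain 16-bit PCM WAVs and made-up folder names; the -80 dB floor is a guess you’d tune per project):

```python
import struct
import wave
from pathlib import Path

THRESHOLD_DB = -80.0                    # assumed "silence" floor, tune per project
LIMIT = int(10 ** (THRESHOLD_DB / 20) * 32767)

def trim_trailing_silence(src: Path, dst: Path) -> None:
    with wave.open(str(src), "rb") as wf:
        params = wf.getparams()         # assumes 16-bit PCM files
        frames = wf.readframes(params.nframes)
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # index of the last sample that still pokes above the floor
    last = max((i for i, s in enumerate(samples) if abs(s) > LIMIT),
               default=len(samples) - 1)
    keep = (last // params.nchannels + 1) * params.nchannels * params.sampwidth
    with wave.open(str(dst), "wb") as out:
        out.setparams(params)           # frame count is fixed up on close
        out.writeframes(frames[:keep])

out_dir = Path("trimmed")               # hypothetical output folder
out_dir.mkdir(exist_ok=True)
for f in Path("rendered").glob("*.wav"):
    trim_trailing_silence(f, out_dir / f.name)
```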


Thanks mlindeb!

Maybe! Could be.

I’ll have to learn what you can do in the Batch window, because one thing is 100% sure: thousands of sound designers use Cubase, and they’re definitely managing and rendering thousands of files regularly.
I’m happy to adapt my workflow. I really like a lot of Cubase’s features and would love to master it long term.


I tend to use FFmpeg for any mass operations in audio (as well as video).

It’s a bit of a beast, but it’s totally worth the initial learning curve for anyone who wants to mass manipulate audio files.

So, as @mlib suggested, I would also create max-length tails in Cubase. And then I’d use FFmpeg on the resulting folder to wipe out the silent tails across all of those files with one command.


Side note: FFmpeg is used under the hood by many audio and video editing apps, including SpectraLayers.

Many thanks, Nico :slight_smile:
I’ll def push this as a feature request.
Coding-wise it should be trivial since, if a new “Add tail” checkbox is exposed, its behavior would simply be:
stop any rendering process for any given audio (event or track, audio or MIDI) as soon as -inf dB is reached.
Bingo.
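
Something along these lines is what I picture, purely as a sketch (pull_block() is a hypothetical stand-in for the channel’s plugin chain, and the float floor approximates -inf dB; this is not how Cubase actually renders, just the idea):

```python
import numpy as np

FLOOR = 1e-7   # float stand-in for "-inf dB"; exact zeros are rare in practice

def render_with_tail(pull_block, event_blocks):
    """pull_block() is a hypothetical callable returning the next processed
    block from the channel's plugin chain as a float32 numpy array."""
    out = [pull_block() for _ in range(event_blocks)]   # render the event itself
    while True:                                         # keep rendering for the tail
        block = pull_block()
        if np.max(np.abs(block)) < FLOOR:               # tail has decayed: stop
            break
        out.append(block)
    return np.concatenate(out)
```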


A wise human once said: never tell a developer that a task is trivial.
Here is an example where your trivial method would not work: put a long delay on short sounds.


Upper track is original, lower track is rendered with the delay applied.


:rofl:
I see your point, but I’ve designed way more intricate game audio systems and worked hand in hand with audio programmers for over a decade, so I still sense it should be trivial: 3/4 of the architecture is already in place with what we already have, and it would be a matter of

  • adding another variable (dB) next to the ms/bars etc. in the Tail section
  • specifying that the event range should be ignored
  • specifying that the render range now ends only when -inf dB has been reached
  • exposing the checkbox in the GUI

That’s obviously just the backbone and it all depends on how the Export/Render architecture is designed, but these are the basic steps. After all, it’s a simple feature.

BTW!
what I see in the picture is exactly what I’m after!! O_o
How did you do it?

-inf dB isn’t enough as a criterion to stop the render, as there are many plugins that never reach that value (those with “analog noise”, say tape simulations).
How does Logic deal with that problem? Imho you need some kind of time limit, even if it is very large, like the defined length of the project.

From a user/usability standpoint, it would be better to not worry about specifying the “correct” length, of course.

Curiously, Cubasis (the Android/iOS app) can do this by default.


Good catch! You’re absolutely right about the “analog noise” plugins, and that’s indeed what happens with Logic’s “Add tail” function; it’s its only pitfall.
But having used this function since 2012, that happened maybe 1% of the time across the literally thousands and thousands of audio clips I’ve rendered, which proves how extremely helpful it is for sound designers. And I stress “for sound designers”, not a generic “user”.
I’ve also never used such a function for standard musical duties.
From a sound designer’s POV, specifying the longest length means you have to edit out all that unnecessary silence from potentially hundreds of clips, and each clip has different content that generates a slightly different tail length.
You could use Detect Silence via the Batch functionality, or even create a macro, but again, in Logic it’s one single click and you’re done :blush:
A common case is: you have 140 SFX and you need a special variation for each depending on the game state:

  • dry
  • dry + reverb for when you perform the same action but with a bigger reward
  • dry + delay with one tap when you miss that action
  • dry + delay with 4 taps + reverb when you go perfect

You can imagine the workload and how long it takes to carefully listen to each unique tail and set the updated range, vs. selecting all events, one click, and you’re done.


I agree that if you know what you are doing, it shouldn’t be much of a problem, but of course if you design a program you cannot expect everyone to know what they’re doing. I think the best solution would be a two-condition configuration, like “stop at -inf or after x seconds, whichever comes first”.
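
As a rough sketch, that’s just one extra check on top of the idea above (the cap value is an arbitrary example):

```python
import numpy as np

MAX_TAIL_SECONDS = 30.0   # arbitrary safety cap for plugins that never go fully silent

def tail_finished(block, tail_seconds, floor=1e-7):
    # stop at "-inf dB" or after the time cap, whichever comes first
    return np.max(np.abs(block)) < floor or tail_seconds >= MAX_TAIL_SECONDS
```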

Can you do me a favor and try my example above with a short sound and the 1-bar delay in Logic, and let us know where Logic ends the rendered file? Maybe a screenshot like mine? I’d be interested.

From a design POV, what you’re suggesting is the most comprehensive option.

  • stop at -inf dB: makes both Bars and Seconds redundant, as it covers whatever length.
  • stop after x seconds: could be useful in those rare cases with plugins that add self-noise.
    But again, in my real-life experience it’s a very rare case and a bit of an overkill.
    I ended up rendering more than 7000 musical clips for a game with almost 30 GB of procedural audio, and as you can imagine with the variety of content that many clips require, I used a plethora of different processing, yet I can safely say it happened on roughly 50-70 clips.
    All in all, I’ve found that using self-noise plugins on short clips doesn’t sound great anyway because, in our case, if the engine triggered several of them the noise would stack up awfully quick :rofl:

Your input made me think it could be useful to leverage the part of Cubase’s architecture that is aware of the channel strip content (I think it’s called Dry-Channel Strip or something), use it to expose all the plugins in a given channel strip (this is also already available somewhere else in Cubase, I can’t remember where), and be able to bypass whichever one is noisy before rendering.
But my experience tells me this could easily be overkill too. I sense that manually re-adjusting the length of those few problematic cases would take way less time than coding, and most of all debugging and polishing, such a feature.


This is what happens in Logic.
But you mentioned “a bar-long delay” and I suspect that’s the reason yours looked just like this.
If it’s not bar-based like this one, it wouldn’t work, since the dry sound I played is not on the grid, nor is the delay set to a measurable length.
(I’ve never used a bar-synced delay in my 25 years of music production, now that I think of it lol)

The longer this thread goes on, the happier I am with my FFmpeg suggestion :crazy_face:


:rofl:

I’m curious to know more about that workflow. Can you share a real-life list of each step to complete the task?

This is the scenario:
you have 140 SFX and you need a special variation for each depending on the game state:

  • dry
  • dry + reverb
  • dry + delay with one tap
  • dry + delay with 4 taps + reverb

Thanks!!

I would put the audio files in a common folder, and then use FFmpeg at the command line to process all of the files in that folder with a single command.

And since I’m not doing it regularly, I always google for the right FFmpeg parameters to use.

For example, I might start with a query like

https://www.google.com/search?hl=en&q=ffmpeg%20trim%20audio%20silence

and then work my way through the more precise options I’m after.

In your case, it strikes me that you want to remove silence only at the very end of each file. That would seem reasonably straightforward with a single FFmpeg command iterated over all the files.
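
Something like this rough sketch is what I mean, assuming the renders all sit in one folder (the folder names and the -80 dB threshold are placeholders to tune; I’d wrap the per-file FFmpeg call in a small script): reverse each file, strip the now-leading silence with the silenceremove filter, then reverse back so only the tail end is touched.

```python
import subprocess
from pathlib import Path

# Reverse, strip leading silence, reverse back: only trailing silence is removed.
FILTER = "areverse,silenceremove=start_periods=1:start_threshold=-80dB,areverse"

src = Path("rendered_with_max_tail")    # hypothetical folder of Cubase renders
dst = Path("trimmed")
dst.mkdir(exist_ok=True)

for wav in sorted(src.glob("*.wav")):
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(wav), "-af", FILTER, str(dst / wav.name)],
        check=True,
    )
```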

FFmpeg takes some learning to get into, but the lifelong payoff is really big if you’re working with audio at scale.

There are also GUI frontends for FFmpeg, but I’ve never bothered researching that.
