I’m probably misunderstanding something here… (you can tell I don’t use SLP all that much!)
As an experiment, I started a new Cubase project, set its length to approx 2.5 hours (I often work with live concert multitrack audio) and imported a 4min solo vocal take from a song that had been ‘rendered-in-place’, placing it about 80mins in from the start of the timeline (see pic). It’s the only audio event in this project.
I apply SLP as an ARA extension on that selected event. Two questions:
- why has SLP processed all the empty space from the beginning of the project, up to and including the actual audio event itself (see red outline in pic)? My expectation was that it would only process the event’s length and show just that in its display.
- why would it do that, yet not also process the empty space after the end of the audio event (up to the end of the project length)?
I’m asking because, of course, I first tried this in my main concert project, wanting to work directly on the vocal of a song well into the second set. I set the Locators around my chosen song, split the file, did the render-in-place, added SLP as an ARA extension to the (selected) event, and noticed SLP always seemed to process the whole timeline up to and including the actual audio I wanted. That’s what I’d expect if I hadn’t first split around the song and done a ‘render-in-place’ (making a unique audio file of 4mins or so).
And so here I am, with my experiment in a brand-new, clean project, importing only the rendered 4min file. Yet the results are still the same…
So, is this to be expected? Am I incurring any unnecessary overhead/processing?
Visual to help - note the actual audio event starts just after (to the right of) where I’ve drawn the red rectangle. This corresponds to where the event sits on the timeline in the upper view.
I can continue working ok, but am wondering if I’m missing a trick… or worrying too much/overthinking things! Thanks for any advice or tips…