What’s everyone think is coming to SpectraLayers 10?
Ask not if you are ready for SpectraLayers
Ask if SpectraLayers is ready for you
One thing I would like to see is the selection animation (the moving outline around selections) become more fluid and lively. Right now it seems to run at a jittery 10 frames per second, which makes it look outdated. I would like to see different options for selection animations (along the lines of the different color options in the menu), and the animation should run at 60 frames per second at least (120 would be nice).
I also noticed that the playhead is a bit jittery/sluggish/laggy (not fluid at all). Improvements to the playhead's fluidity are something I'm looking forward to.
I would like to see SpectraLayers become more synth-like, more of a sound design tool. Also more oriented toward composition, where a sample can be transformed into a new song.
I would like to see more tuning correction tools
Sharks with laser beams!
Good idea, but to kill two birds with one stone and solve many problems at once, it's better to implement real-time transformation and then add a set of tools within that (like a free-drawing pencil, vibrato, etc.). Adding a piano roll to the right-click menu would then simplify this process even further.
To be honest, the only way I could see something like this being implemented is in real time. It would be fantastic to freely move (free-roam) any tonal element or partial around in real time, but I would imagine that being extremely CPU/GPU intensive.
There is another tool that can sort of do this, but it's hit-and-miss, because when you move elements around (depending on the material) some serious phasing issues occur and throw the phase off balance.
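For what it's worth, the phasing problem can be illustrated in miniature: if a partial is moved to a new frequency without carrying its accumulated phase across the edit boundary, the waveform jumps. Here is a minimal sketch of that idea (the sample rate and frequencies are arbitrary assumptions, and real spectral editing is far more involved than a single sinusoid):

```python
import numpy as np

sr = 8000
frame = np.arange(sr // 10) / sr  # 100 ms of time values

# Naive move: the shifted partial restarts at phase 0 at the edit
# boundary, producing a discontinuity (an audible click).
naive = np.concatenate([np.sin(2 * np.pi * 217 * frame),
                        np.sin(2 * np.pi * 250 * frame)])

# Phase-aware move: carry the accumulated phase of the old partial
# across the boundary so the waveform stays continuous.
carry = 2 * np.pi * 217 * len(frame) / sr
aware = np.concatenate([np.sin(2 * np.pi * 217 * frame),
                        np.sin(2 * np.pi * 250 * frame + carry)])

edge = len(frame)
jump_naive = abs(naive[edge] - naive[edge - 1])
jump_aware = abs(aware[edge] - aware[edge - 1])
print(f"naive jump: {jump_naive:.3f}, phase-aware jump: {jump_aware:.3f}")
```

The naive version jumps by most of the full amplitude at the splice, while the phase-aware version changes by roughly one normal sample step.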
I was concerned with the editing of live recordings, where you have little influence on the pitch of the instruments or singers. Since SL can split the whole spectrum into its components, it would be handy to be able to make corrections afterwards.
Or, for example, a choir singing without instrumental accompaniment often loses the correct pitch and drifts lower and lower.
Brass bands also often have intonation problems that only become apparent on the recording.
Here a pitch line over the whole recording would be interesting: it should represent the average pitch as a curve over the whole piece. I know similar things are possible with Cubase or Melodyne, but in SL it would make sense because the instruments are already available separately anyway.
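As a rough illustration of the idea (not anything SL does internally), such an average-pitch line can be built by estimating the pitch of each short frame and smoothing the result into a curve. This is a minimal sketch assuming simple autocorrelation pitch detection; the frame size, hop, and smoothing window are arbitrary choices:

```python
import numpy as np

def frame_pitch(frame, sr):
    """Estimate the fundamental of one frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search for the first autocorrelation peak after zero lag,
    # limited to a plausible vocal range (~80-1000 Hz).
    lo, hi = int(sr / 1000), int(sr / 80)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def pitch_curve(signal, sr, frame_len=2048, hop=512, smooth=9):
    """Per-frame pitch estimates, smoothed with a moving average."""
    pitches = [frame_pitch(signal[i:i + frame_len], sr)
               for i in range(0, len(signal) - frame_len, hop)]
    kernel = np.ones(smooth) / smooth
    return np.convolve(pitches, kernel, mode="same")

# Simulate a choir note that slowly drifts flat: 440 Hz down to ~415 Hz.
sr = 22050
t = np.arange(sr * 2) / sr                # 2 seconds
freq = np.linspace(440, 415, len(t))      # gradual pitch drop
signal = np.sin(2 * np.pi * np.cumsum(freq) / sr)

curve = pitch_curve(signal, sr)
print(f"start ~{curve[5]:.0f} Hz, end ~{curve[-5]:.0f} Hz")
```

The resulting curve falls from around 440 Hz to around 415 Hz, which is exactly the kind of drift a choir-pitch overlay would make visible.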
Right! I agree.
However (like I said), it's better for the developers to use what's already there (the transformation tool) and build upon it rather than implementing a whole new tool specifically for pitch correction. The best way to implement this idea is to build the transformation tool out into real-time transformation. That saves the developers time, since resources aren't wasted on R&D for pitch correction, and for the end user it's more intuitive to just select any element/partial and move it around.
Something like real-time transformation would (I presume) take some serious engineering to implement. I remember watching a presentation by Steve (the developer of Serum) on how he built it; he had to hire a mathematician for the real-time morphing (especially at the DSP level) because it's extremely CPU intensive to do. So I would imagine something similar would have to be done for real-time transformation to be implemented.
Hmm, in general terms, it seems I disagree with Unmixing here.
For me, the real unique power of SpectraLayers resides in its ability to carefully apply thought-out, offline (non-real-time) modifications to audio.
In fact, I did welcome the fast and "real-time" changes in V8 and V9, because they were intended to improve the user workflow, for instance by easing the instant comparison of processing alternatives. But they were far from placing SpectraLayers in jamming, direct recording processing, or similar fast-track environments, precisely because SL allows for better evaluation of what we are doing.
In scenarios where I have to choose between a slow workflow and a fast one to obtain an extraction detail in better quality, or when I have to apply an effect, there is no doubt I choose slow, linear, and higher resolution, every time!
In perspective, SpectraLayers has made surgical subtraction and precision in identifying audio for extraction possible, which was almost unthinkable a few years ago. Mind that all this is in a world where, for the entire history of recording, almost 99% of achievements have been ONLY additive (FX, tracks, MIDI, VSTs, almost everything!).
If it matters at all, I vote to continue on this path, instead of diverting SpectraLayers into performance, batch speed, simplicity, or one-knob do-it-all generic tools.
I also think my priority is accuracy: fewer artifacts when decomposing a recording, so that corrections apply only to the parts of the recording you have extracted. Of course, this wish still says nothing about processing speed.
Better separation algorithms, and the white screen that appears during boot-up (which reared its head in version 8) could do with being removed. Apart from that, whatever Robin throws in, I'm sure it will be great.
Which is fine; it's fine to disagree. However, you have to keep in mind that you're not the only one using SpectraLayers; many other people use it too (if not hundreds, then maybe thousands). That being said, I'm sure many people would agree that it's better to kill two birds with one stone than to chase a pointless endeavor.
You have to think not just from your own perspective but from everyone else's, and put yourself in their shoes (the developers', the other end users').
Ask yourself this question: would it make sense for the developers to invest in implementing a whole new pitch correction tool (keep in mind there's scaling involved, plus vibrato, tremolo, and harmonics, all of which have to be implemented correctly), or would it make sense to use what's already available and improve upon it? If there are already "cursor crosshairs" and "cursor coordinates" options in the right-click menu (when you right-click on the spectrogram), wouldn't it make more sense to add a "piano roll" or "keyboard tracking" scale to that menu than to implement a whole new "pitch correction" feature?
Like I said, something like real-time transformation may be CPU intensive, but I believe good, high-quality results can be achieved with the right implementation.
Just the other day I saw a trailer and was surprised to learn that all of it was rendered in real time within Unreal Engine 5. The fact that something like that can be rendered photorealistically in real time demonstrates that real-time transformation could be done as well. Steinberg could invest resources into making real-time transformation a reality while keeping the quality high. It's all about resources and what Steinberg chooses to invest in.
I hope for an option to enable or disable the automatic amplitude crossfade between overlapped layers (linear, log, exp).
I don't understand what you mean by this. Can you please go into more detail and explain further?
Maybe with mockup pictures, so I can understand.