What’s everyone think is coming to SpectraLayers 10?
Ask not if you are ready for SpectraLayers
Ask if SpectraLayers is ready for you
One thing I would like to see is the selection animation (the moving outline around selections) become more fluid and lively. Right now it seems to run at a jittery 10 frames per second, which makes it look outdated. I would like to see different options for selection animations (sort of like the idea of the different color options in the menu), and the animation running at at least 60 frames per second (120 would be nice).
I also noticed that the playhead is a bit jittery/sluggish/laggy (not fluid at all). Improvements to the playhead's fluidity are something I'm looking forward to.
I would like to see SpectraLayers become more synth-like, more of a sound design tool. Also more oriented toward composition, where a sample can be transformed into a new song.
I would like to see more tuning correction tools
Sharks with laser beams!
Good idea, but to kill two birds with one stone and solve many problems at once, it's better to implement real-time transformation and then add a set of tools within that (like a free-drawing pencil, vibrato, etc.). Adding a piano roll to the right-click menu would then simplify this process so much.
To be honest, the only way I could see something like this being implemented is in real-time. It would be fantastic to freely move (free-roam) any tonal element or partial around in real-time, but I would imagine that being extremely CPU/GPU intensive.
There is another tool that can sort of do this, but it's hit-and-miss: when you move elements around (depending on the material), serious phasing issues occur and throw the phase off balance.
I was concerned with the editing of live recordings, where you have little influence on the pitch of the instruments or singers. Since SL can split the whole spectrum into its components, it would be handy to be able to make corrections there.
Or, for example, a choir singing without instrument accompaniment often loses the correct pitch and gets lower and lower.
Brass bands also often have intonation problems that only become apparent on the recording.
Here a pitch line over the whole recording would be interesting; it should represent the average pitch as a curve over the whole piece. I know similar things are possible with Cubase or Melodyne, but in SL it would make sense because the instruments are already present separately anyway.
Right! I agree.
However (like I said), it's better for the developers to use what's already there and build upon it (the transformation tool) rather than implementing a whole new tool specifically for pitch correction. The best way to implement this idea is to build upon the transformation tool and extend it into real-time transformation. That saves the developers time, so resources aren't wasted on R&D for pitch correction research, and it's also more intuitive for the end user to just select any element/partial and move it around.
Something like real-time transformation would (I presume) take some serious engineering to implement. I remember watching a presentation by "Steve" (the developer of "Serum") on how he developed "Serum"; he had to hire a mathematician to do the real-time morphing (especially at the DSP level) because it's extremely CPU intensive. So I would imagine something similar would have to be done for real-time transformation to be implemented.
Uhmm, in global terms it seems I disagree with Unmixing here.
For me, the real unique power of SpectraLayers resides in its ability to carefully apply thought-out, offline modifications to audio.
In fact, I did welcome the fast, "real-time" changes in V8 and V9, because they were intended to improve the user workflow, for instance by easing instant comparison of processing alternatives. But they were far from placing SpectraLayers into jamming, direct recording processing, or similar fast-track environments, precisely because SL allows for better evaluation of what we are doing.
In scenarios where I have to choose between a slow workflow and a fast one to obtain better-quality extraction detail, or when I have to apply an effect, there is no doubt: I choose slow, linear, and higher resolution, every time!
In perspective, SpectraLayers has made surgical subtraction and precise identification of audio for extraction possible, which was almost unthinkable a few years ago. Mind that this is in a world where, for the entire history of recording, almost 99% of achievements have been additive only (FX, tracks, MIDI, VSTs, most everything!).
If it matters for something, I vote to continue on this path instead of diverting SpectraLayers into performance, batch speed, simplicity, or one-knob do-it-all generic tools.
I also think my priority is accuracy: fewer artifacts when decomposing a recording, so that corrections apply only to the parts of the recording you have extracted. Of course, this desire still says nothing about the speed of performance.
Better separation algorithms, and the white screen that appears during boot-up (which reared its head in version 8) could do with being removed. Apart from that, whatever Robin throws in, I'm sure, will be great.
Which is fine, that's fine to disagree. However, you have to keep in mind that you're not the only one using SpectraLayers; many other people use it too (if not hundreds, then maybe thousands). With that being said, I'm sure many people would agree it's better to kill two birds with one stone than to chase a pointless endeavor.
You have to think not just from your own perspective but from everyone else's, and put yourself in their shoes (the developers, the other end users).
Ask yourself this question: would it make sense for the developers to invest in implementing a whole new pitch correction tool (keep in mind there's scaling involved, plus vibrato, tremolo, and harmonics, which all have to be implemented correctly), or would it make sense to use what's already available and improve upon it? If there are already "cursor crosshairs" and "cursor coordinates" options in the right-click menu (when you right-click on the spectrogram), wouldn't it make more sense to add a "piano roll" or "keyboard tracking" scale to that menu rather than implementing a whole new "pitch correction" feature?
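For what it's worth, the piano-roll overlay suggested here mostly comes down to mapping spectrogram frequencies onto equal-tempered piano keys, which is a one-line formula. A minimal sketch (the function names are my own illustration, not any SpectraLayers API):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_midi(freq_hz: float) -> float:
    """Map a frequency to a fractional MIDI note number (A4 = 440 Hz = note 69)."""
    return 69.0 + 12.0 * math.log2(freq_hz / 440.0)

def midi_to_name(note: float) -> str:
    """Round a fractional note number to the nearest key and name it (e.g. 'A4')."""
    n = round(note)
    return f"{NOTE_NAMES[n % 12]}{n // 12 - 1}"
```

So a cursor-coordinates readout that already knows the frequency under the pointer could show `midi_to_name(freq_to_midi(f))` next to the Hz value.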
Like I said, something like real-time transformation may be CPU intensive, but I believe high-quality results can be achieved with the right implementation.
Just the other day I saw a trailer and was surprised to learn that all of it was rendered in real-time within Unreal Engine 5. The fact that something like that can be rendered photorealistically in real-time demonstrates that real-time transformation could be done too. Steinberg could invest resources into making real-time transformation a reality while keeping the quality high. It's all about resources and what Steinberg chooses to invest in.
I hope for an option to enable or disable the automatic amplitude crossfade between overlapped layers (linear, log, exp).
I don't understand what you mean by this. Can you please go into more detail and explain further?
Maybe with mockup pictures, so I can understand.
Unmix menu, including Unmix Noisy Speech, Unmix Multiple Voices, Unmix Multichannel Contents, and Unmix Transcription.
Plus, in ten days, there's more: a livestream by Phil Pendlebury scheduled for Jun 28, 2023, "Spectralayers 10: Exploring the New Features" - YouTube
"In this livestream we explore, with practical workflow examples, some of the exciting new features of Spectralayers 10. Discover the game-changing Unmix menu, including
Unmix Song, Unmix Drums, Unmix Noisy Speech, Unmix Multiple Voices, Unmix Components, Unmix Levels, Unmix Multichannel Contents, and Unmix Transcription. Experience enhanced processes like Amplitude Normalize, Reverb Match, Reverb Reduction, Clip Repair, and Voice Denoiser.
We’ll also highlight workflow improvements, such as editing enhancements, better layer colors, faster AI and spectral engines, multiple processes support, streamlined file import, contextual help, improved value sliders, and new keyboard shortcuts. Don’t miss this chance to unlock Spectralayers 10’s audio editing potential! Thanks to my chat moderator, Terry. You are invited to my Discord: [Phil Pendlebury]
More thanks: Greg Ondo, Fredo Gevaert, Sagi Gal, Nicholas, Brock, Tee, Helge Tjelta, Pål Svennevig, Brendan Woithe, Erik Guldager, Adam Vanryne, Takashi Watanabe, Simon Milward, Steven Ghouti, Jonathan Campbell, Maziar Golgolab, Alberto_Pedraza, Oleg, Sean Thompson, Tee Bale, Terry Scarth, Kevin Stanley"
Well obviously I gotta learn to check in here once every six months.
I welcome any improvements to Unmix. But there is one thing I've wanted in WaveLab, which will likely never happen, but which perhaps could happen in SL. And that is…
I’d like a way to easily target peaks BY INSTRUMENT (or at least FREQUENCY).
I'll frequently find half a dozen ginormous peaks in a song. They don't 'sound' all that much 'louder', but clearly they're creating just enough havoc to prevent nice, easy mastering. It would be great if there were an easy way to target them (perhaps by colour) and ask, "what's causing this? Is it that one ride cymbal?", "what's causing that? Is it a bass guitar note?"
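As a rough illustration of the "what frequency is this peak" idea: window the samples around the offending peak, take an FFT, and read off the strongest bin. A sketch with NumPy (the function name is hypothetical, not anything WaveLab or SL exposes):

```python
import numpy as np

def dominant_freq(x: np.ndarray, peak_idx: int, sr: int, win: int = 4096) -> float:
    """Return the strongest frequency (Hz) in a window centred on a sample peak."""
    start = max(0, peak_idx - win // 2)
    seg = x[start:start + win]
    seg = seg * np.hanning(len(seg))      # taper to reduce spectral leakage
    spec = np.abs(np.fft.rfft(seg))
    spec[0] = 0.0                         # ignore the DC bin
    return int(np.argmax(spec)) * sr / len(seg)
```

Identifying which *instrument* owns that frequency is the harder, AI-shaped half of the request; this only answers the frequency part, to within one FFT bin (sr/win Hz).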
To my mind, this is something ‘AI’ should be able to identify more easily.
I’d like to see:
More editable management of the load/save selection area, i.e. merge selections down into one rather than load, move, save, and reload to build up selections.
I'd like to see Demucs v4 included as the AI separation; SpectraLayers' is pretty dated and far from the best, even though Deezer's Spleeter demixing is in RX also.
Better Windows tablet and pen support. I don't really feel the tablet/desktop two-mode setup does anything in Windows. Make it all touchscreen+pen capable; that's 99% of the reason to use a touchscreen on software like this. Even the menus don't respond correctly to the pen on the screen.
Basic shapes and Bezier selection. Otherwise, things like selecting a police siren out of the background are quite a manual process.
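To sketch why a Bezier selection would help here: a siren's frequency sweep is a smooth curve in the spectrogram, so a handful of curve segments could trace it instead of hundreds of freehand points. A minimal quadratic-Bezier segment, purely illustrative:

```python
def quad_bezier(p0, p1, p2, steps=16):
    """Sample one quadratic Bezier segment of a smooth selection outline.

    p0 and p2 are (time, frequency) endpoints; p1 is the control point
    that bends the segment toward the sweep's shape."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        u = 1.0 - t
        pts.append((u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
                    u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1]))
    return pts
```

Dragging p1 reshapes the whole segment at once, which is the workflow win over redrawing a freehand selection.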
Moving layers seems a bit buggy. You move one where you want it and it snaps above or below much of the time.
More colour choices for layers, so things you're not focused on can be made darker and pushed out of focus. At the moment it's all too bright, and sometimes ends up white when dealing with multiple layers.
Get rid of the crazy ASIO behaviour where it completely disconnects when stopping the transport and then connects again when playing. Just maintain the endpoints to the audio interface all the time, like most software does.
Integrate native JACK2 audio support