Feature Request: More Post-Production modules

Hey there,

I know version 11 has just been released with some great additions for Post-Production (Modules Panel / Modules Chain / Batch Processing), but a few essential tools are still missing, which I hope to see in the future:

  • De-Wind: Remove unwanted wind noise from field recordings
  • De-Rustle: Remove cloth noise from lavalier microphones
  • De-Plosive: Remove plosive pops from dialogue

With RX’s disastrous and embarrassing implementation of ARA2 support (only 3 of its more than 25 modules will be available?!), there is now room for SpectraLayers to take over the throne of audio restoration. I therefore hope to see more modules tailored towards Post-Production in the future.


More Post-Production features would be awesome. I’m also looking for an RX alternative in post-production.
Unfortunately, I’m getting the impression that SL is now focused on modules related to unmixing music tracks.


Don’t worry, I try to push SL equally in music and post-prod with each new version. For instance, SL11 introduced Voice DeClip, Unmix Crowd Noise, an improved Voice DeNoise, and an improved Unmix Multiple Voices.
Requests duly noted :slight_smile:


It’s really refreshing to see a developer so active and engaged on the forum. Great job, @Robin_Lobel! Good to hear more Post-Production features will come in the future. I’m looking forward to ditching RX altogether and working solely with SpectraLayers.


Great. I agree with the others.

May I add three that I think are important:

  1. De-Warble, or whatever you would call it: a module to address poorly recorded / encoded lower-resolution files that tend to warble, like bad MP3s or phone recordings. This happens far too often these days.

  2. A module that generates both the fundamental and overtones from a bandwidth-limited source. This is related to the above: once the artifacts of the former are removed, any bandwidth-limited signal needs new high-end content generated, and likewise a regenerated fundamental to get back the “body” of the voice.

The second one is pretty important because it lets us process problematic audio much harder and then just regenerate what we’ve cut out. Rustle is a pretty common problem, and once it’s removed the high end tends to disappear as well. So we can “nuke” the dialogue and then regenerate to fix the problem we created while fixing the other one.

  3. Voice cloning. I really think you need to look into this, pronto. I bet the future is that we take voice prints of an existing person and then apply that voice to someone else. This would let us use good sections of dialogue for training and then simply re-record a better, clean take ourselves that we can clone.

Think docs, lifestyle and reality TV. They cut together an episode using a text editor, and the NLE then follows to cut together the actual footage. Pitch, intonation and intensity are all over the place, at times within a single sentence. We could just re-record real quick, apply the clone, and we’d be done.

Btw, a function like this would make other functions in some sense obsolete. If there are problems with all of the above (rustle, poor recording, poor editing), then a simple new recording plus cloning would fix it all in one go.

Consider that this is surely the future and will sooner or later make its way into NLEs, and surely we want to stay ahead of that before we become obsolete…
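For what it’s worth, the second request above (regenerating high-end content for a bandwidth-limited source) has a crude, decades-old non-AI ancestor: the “exciter” trick of running the signal through a nonlinearity to create harmonics, then mixing only the newly created high band back in. A minimal sketch, assuming numpy/scipy; the 4 kHz cutoff, tanh drive and mix amount are made-up illustrative values, not anything SpectraLayers actually does:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def regenerate_highs(x, sr, cutoff_hz=4000.0, amount=0.2):
    """Add synthetic harmonics above cutoff_hz to band-limited audio x."""
    # A waveshaping nonlinearity (tanh soft clipping) generates harmonics
    # of whatever the band-limited input contains; some of those harmonics
    # land above the original cutoff.
    harmonics = np.tanh(3.0 * x)
    # Keep only the newly generated content above the cutoff ...
    sos_hp = butter(4, cutoff_hz, btype="high", fs=sr, output="sos")
    new_highs = sosfilt(sos_hp, harmonics)
    # ... and mix a little of it back in with the untouched original.
    return x + amount * new_highs

# Example: a band-limited 1 kHz tone gains odd harmonics (5 kHz, 7 kHz, ...)
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
out = regenerate_highs(tone, sr)
```

A real module would do this per band with level tracking (and an AI version would predict the missing band outright), but the “put energy back above the cutoff” idea is the same.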



I agree with everything, especially the third. But to use SL11 for Post, some PRE needs urgent fixing.
For example, VST3 plugin settings are not saved when you save the module chain or VST3 chain. Say you set Waves F6 to high-pass along with Waves Sibilance and save those settings as a preset named ‘Wave’; when you want to reapply them, they seem to reset to default.
Or say you want to use RX De-click, Mouth De-click and De-plosive followed by Waves Sibilance and Waves F6, and save that as a ‘Pre-Process Chain’ preset in the module chain: you lose the settings, making the preset useless, as you end up redoing every setting each time you want to use it.
Maybe it is normal behaviour, or a fix is already on its way in the next patch.

Well done to @Robin_Lobel for all his efforts in getting SLP11 released…

The (great-sounding) feature requests posted here for future post-pro consideration seem like they deserve Robin having access to examples of what you guys are actually dealing with in your very real-world scenarios, however long or short.

I can imagine this would help focus ‘solution’ efforts from his end, i.e. not (just) leaving him with his ‘interpretation’ of what you specifically have in front of you… only if it’s safe/non-sensitive material, of course… :wink:

@Puma0382 Good point indeed; real audio examples from real use cases go a long way to help shape a new module or refine existing ones!

We can provide complete sets of interviews (in Hindi) of various lengths if that can help.
We have done some tests and are really loving the way things are evolving.
We have also done some comparative hands-on videos anticipating some of Mattias’s NLE fears, in “How good are the AI dialogue cleaning tools of DaVinci Resolve 19, Final Cut Pro 10, and Nuendo 13 with Waves Clarity, with iZotope Dialogue Isolate and SL10 thrown in the mix”.

In reality, the problematic clip was run through Apple FCP’s AI-assisted module by exporting several passes and opening them up in SL9. Then, using the excellent spectral selection tools, we manually refined and fine-tuned the voice print, and in the end used the same tools to extract and merge the ambience back. We ended up using roughly 20 layers in SL9.
It took us two days to fix just a few seconds of dialogue; the same job in SL11 took less than 10 minutes.

Disaster strikes where you least expect it. And here something like voice cloning would be a perfect candidate for rescuing and restoring the soundbite in question.

We are producing a five-part SL11 quick introduction to post-production workflows using actual real-world projects. Part 1 (how to unmix and re-balance a badly recorded folk song) and Part 2 (how to rescue a badly recorded interview) are done, but it appears Part 3 might get delayed, as mentioned in my post above, while we wait for a fix.

I use the healing processes a lot.
This feature is very helpful, but it is a pain to access from the menu.
Could you please make it a module?


Before, I would have agreed to something like this, but as I’ve matured I realize this is not a good idea.

I would not consider this because a feature like this could easily cause Steinberg to go bankrupt. It’s a LIABILITY issue.

Here’s a clear demonstration of why this is not a good idea!

It’s just not a good idea, at all

It’s not going to be a liability issue. The companies that are getting sued are the ones that are training their AI software on other people’s intellectual property.

The use case I’m putting out there is different. In my case a production company has a deal with a distributor (normally) and I’m hired by the production company to work on content that they either own or have the rights to manage. The voice I would be cloning is the voice I’m provided by that production company. They have a responsibility as far as IP goes, not me. I would train the voice(s) they provide to reproduce the same character / actor in the same situation.

In other words, if I get a TV episode and someone has been edited to say something, they will say that thing. The only question is how good it will sound when their voice says those words. I can either do the best I can with really bad edits (not good), or I can re-record and clone, so the same voice is saying the exact same thing it would have said anyway, but sounding a lot better.

The only liability issue I can foresee is if the talent thinks they’re losing out on work because someone else is re-recording, but the types of jobs I’m talking about are jobs the talent doesn’t want to do anyway. And as I said, the only other issue is with the production company, not me.

I really don’t think Steinberg has liability here. So far, people aren’t sued for their software’s capability to clone, but for what they do with the software. Steinberg wouldn’t do anything with the functionality, just offer it.


Hi Robin. Agreed: your efforts addressing issues here are nothing short of amazing. If only the world had more involved people like you in it, but I digress.

I’d like to request a feature to correct R2R and cassette tape speed issues currently addressed by Celemony Capstan. For instance, some battery-powered recorders changed recording speed progressively as the battery wore down during a long session.

As you likely know, Capstan resolves the problem by correcting the drift of a subtle pilot frequency (400 Hz?), thereby ensuring consistent speed. However, I’m not certain whether cheap, voltage-driven recorders of the era even used pilot signals. Then again, quality recorders might have had mechanical issues during one-off recordings, e.g.:

  1. record a session
  2. place tape in box without quality check

Might it be possible for SL to include a module to correct tape speed drift? E.g. A4 is 440 Hz at session start and ramps down to G4 (392 Hz) an hour later as the battery runs down.
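To sketch what such a drift-correction module would roughly do under the hood: once the reference pitch has been tracked over time, the correction is a time-varying resample. This is a hypothetical Python/numpy illustration; here the pitch track is synthetic and known exactly, whereas a real tool like Capstan has to estimate it from the material, and would use a much better interpolator than linear:

```python
import numpy as np

def correct_drift(x, measured_hz, target_hz=440.0):
    """Time-varying resample so a tracked reference pitch is pulled back
    to target_hz. measured_hz holds the tracked pitch per input sample."""
    # Where the tape ran slow (measured < target) the material comes out
    # stretched, so each input sample must occupy less output time.
    pos = np.cumsum(measured_hz / target_hz)  # output position per input sample
    pos -= pos[0]
    n_out = int(pos[-1]) + 1
    # Linear interpolation of the input onto a uniform output grid.
    return np.interp(np.arange(n_out), pos, x)

# Example: a tone that sags linearly from A4 (440 Hz) to G4 (392 Hz).
sr = 8000
dur = 2.0
t = np.arange(int(sr * dur)) / sr
f_inst = 440.0 + (392.0 - 440.0) * (t / dur)    # drifting pitch track
x = np.sin(2 * np.pi * np.cumsum(f_inst) / sr)  # the "sagging" recording
y = correct_drift(x, f_inst)                    # corrected audio
```

Because the local read rate is target/measured at every instant, the corrected tone comes out at a constant 440 Hz (and slightly shorter overall) regardless of how the battery sagged.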


Until you’ve been through numerous litigation processes and gone back and forth to court, you wouldn’t understand. Going to court is hell; for example, I myself am going through multiple cases which unfortunately have dragged on for years. I am currently in about 7 lawsuits, and each one is (simultaneously) exhausting.

Believe it or not, did you know that not only Steinberg but also the main developer could be held LIABLE? I always believed that LLCs and corporations were different, and that corporations were designed to protect certain individuals and entities from bearing the full responsibility of being held responsible (unlike a sole proprietorship). It wasn’t until I consulted with a lawyer/attorney (in regard to something else) that I learned that any ENTITY (it doesn’t matter if it’s an LLC, a sole proprietorship, anyone with a tax ID number, anyone with a personal checking account) can be held LIABLE. In Steinberg’s case, a feature like this could make them extremely VULNERABLE to lawsuits, which they would (9 times out of 10) lose. The main developer can be held just as LIABLE because it’s public knowledge who the main developer of SpectraLayers is.

Here’s a prime example of how someone who had nothing to do with anything is still held liable for something they didn’t do.

That is not true.

He was found guilty of a direct crime, defamation; nothing indirect about that at all. The actual damages were indeed caused by other people, but they caused said damage because of his defamatory speech.

If what you’re talking about were true, then you would have to explain why the creators of the technology weren’t sued in the other lawsuits you mentioned, just the users of the technology.

The core argument, according to the reports I’ve seen, is that the specific companies being sued are using others’ IP to train their AI, so they’re creating derivative works rather than engaging in fair use. What Steinberg would do is nothing of the sort. Again, Steinberg wouldn’t do anything, just offer a technology. And not only that: the only potential complaint to be made from what I’m suggesting is about the use of someone’s “likeness”, which again has nothing to do with Steinberg and everything to do with use cases.

I’m not going to argue this more here, you can use the AI-thread for that if you want to continue. We’ve derailed this one enough. I just think you’re fundamentally wrong in your analysis.


It’s obvious you haven’t had much experience with the court system and probably haven’t been through enough litigation to know better. It doesn’t matter what he was found guilty of; the point I made is that he was held LIABLE (keyword: LIABLE). He could’ve literally been charged with lying to the feds (which is a petty crime); the point is that he was held LIABLE.

Former President 45 is being charged with petty crimes (crimes that wouldn’t normally make their way to the DA’s office to be taken to trial, because they are not considered a danger to the public or to people’s well-being), because he is being held ACCOUNTABLE/LIABLE for his actions.

A feature like “Voice Cloning” not only puts Steinberg in an extremely VULNERABLE position to be held LIABLE, it also puts the main developer in a position to be held LIABLE, because it’s public knowledge that the main developer implemented that feature.

Let’s say (for example) someone abused that feature to play a prank on someone and someone ends up dead because of it; people will go to great lengths to hold whoever aided in that prank RESPONSIBLE/LIABLE/ACCOUNTABLE. Let’s say someone uses the feature to impersonate a police chief and, as a prank, orders a SWAT team to raid a house. During the raid, someone from the SWAT team ends up shooting and killing the homeowner (because the homeowner, not aware it is a raid, believes someone is breaking into their house and instinctively grabs a gun in self-defense). When the smoke clears and the investigation is done, the police will probably conclude that this was a hoax/prank gone wrong, but now someone is dead because of it. The family, enraged and heartbroken, will go to great lengths to hold whoever is RESPONSIBLE/LIABLE/ACCOUNTABLE for their loved one being killed (because now there is a serious loss, and someone lost their life over a prank). All the lawyers/attorneys would have to do in open court is say: “See! These A.I. programs are a detriment to society. They have not only caused harm, but now someone is dead because of it. We have to hold someone accountable, otherwise this will continue to happen.”

I’m getting the feeling you’ve been in the court system but not as a lawyer. If you were a lawyer you would be able to parse that distinction between adjective, verb, noun and possessive noun (my re-emphasis above).

I’m going to put you on ignore now so we can focus on the actual thread topic.

Hey, I didn’t read the thread yet…but…

The toughest thing for me to reduce/repair is RF interference with a wireless Tx/Rx… even with SL it is very difficult to repair/reduce…

Yes, I told the director we couldn’t use that degraded audio (which I recorded), but there was no convincing him.

I haven’t tried SL11 on it yet…we’ll see…BUT, I would put that at the top of the list


Maybe post a 30-second sample for us SL11 folks to test?
