De-Reverb: SL has some way to go... (Adobe Podcast beta)

Have you tried Adobe Podcast Beta? I’m afraid it knocks SL de-reverb into a cocked hat…

Simply stunning

(requires the Chrome browser and signing in to a free Adobe account)

I listened, and her voice sounds more like a flanger/chorus effect than reverb. Yes, it's technically reverb, but I would classify it as a flanger/chorus effect rather than reverb (which usually means audio material with long tails). You can achieve a similar result in SpectraLayers using different techniques: try “ambience matching”, “EQ matching”, and “noise reduction”.
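For context, SpectraLayers’ actual “noise reduction” internals aren’t public, but the classical idea behind this family of tools is spectral subtraction: estimate the noise’s magnitude spectrum from a noise-only stretch, subtract it from the signal’s magnitude spectrum, and resynthesize with the original phase. A minimal Python sketch (the function name and parameters are mine, not any product’s API):

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(signal, noise_clip, fs=44100, nperseg=1024):
    """Toy spectral subtraction: learn an average noise magnitude
    profile from a noise-only clip, subtract it from the signal's
    magnitude spectrum, and rebuild audio with the noisy phase."""
    # Average noise magnitude per frequency bin (the "noise profile").
    _, _, noise_spec = stft(noise_clip, fs=fs, nperseg=nperseg)
    noise_mag = np.abs(noise_spec).mean(axis=1, keepdims=True)

    _, _, sig_spec = stft(signal, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(sig_spec), np.angle(sig_spec)

    # Subtract the profile, flooring at zero to avoid negative magnitudes.
    clean_mag = np.maximum(mag - noise_mag, 0.0)
    _, clean = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return clean[:len(signal)]
```

Real products layer far more on top (psychoacoustic smoothing, multi-band gating, learned models), but this is the textbook core that the “noise reduction” label usually refers to.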

Check this short zoom call. First half has James O’Malley with typical roomy reverb. Second half is the same audio processed by Adobe Podcast. It’s really a remarkable transformation.

Ahhhh okay.

I can definitely tell that is “interpolation” (meaning it is reconstructing material that wasn’t originally there). I believe whatever algorithm powers that supposed “de-reverb” is most likely adding post-processing effects, and maybe EQ and compression.

Something like this is easily achievable in SpectraLayers by de-reverbing and then adding some spice: brightening up the bass and mids, then adding a little light compression to bring the vocal front and center.
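That “brighten, then lightly compress” chain is simple enough to sketch generically. This is a toy Python illustration of the two stages, not SpectraLayers’ processing, and the parameter values are arbitrary:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def brighten(x, fs, cutoff=2000.0, boost_db=4.0):
    """Crude high-shelf: add back a high-passed copy of the signal,
    lifting content above `cutoff` by roughly `boost_db`."""
    sos = butter(2, cutoff, btype='highpass', fs=fs, output='sos')
    gain = 10 ** (boost_db / 20.0) - 1.0
    return x + gain * sosfilt(sos, x)

def compress(x, fs=44100, threshold_db=-18.0, ratio=3.0,
             attack=0.01, release=0.1):
    """Simple feed-forward compressor with a one-pole envelope follower."""
    a_att = np.exp(-1.0 / (attack * fs))    # fast coefficient (level rising)
    a_rel = np.exp(-1.0 / (release * fs))   # slow coefficient (level falling)
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = a_att if s > level else a_rel
        level = coeff * level + (1 - coeff) * s
        env[i] = level
    env_db = 20 * np.log10(np.maximum(env, 1e-9))
    over = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over * (1 - 1 / ratio)       # reduce gain above threshold
    return x * 10 ** (gain_db / 20.0)
```

A real mastering chain would use proper shelf/peaking biquads and a look-ahead compressor, but the signal flow is the same: tonal shaping first, then level control to push the vocal forward.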

To be honest, a feature like this wouldn’t be classified as “de-reverbing” within SpectraLayers. If a feature like this were added, it would probably be labeled something like “Voice Cleaning”, “Studio Vocals”, “Podcaster Voice”, or “Professional Studio Vocals”.


Personally, I would not recognise the “AI-processed” voice as being the same as the voice in the original. If the objective is only to fix up crappy recordings, then fine, Adobe have a market. In this case, a cheap lapel mic would have been a far better solution.

You do realise, of course, that the next question to you is going to be, “OK, go ahead and do it then!” :smiling_imp:

I think this is more in the area of speech-to-text/voice synthesis; and yes, truth is dead, and our world is doomed. Happy New Year!

I would, but I’m working on a couple of projects right now, plus side projects to demonstrate how powerful SpectraLayers is.

Also, it’s no good to just “go ahead and do it” and post it here, because the OP already has the wrong idea about de-reverbing and has confused post-processing with de-reverbing (this is largely the fault of Adobe’s misleading marketing). If I just do it and post it here, the OP will open SpectraLayers thinking it’s a one-click solution, won’t actually learn anything, and will probably then post questions asking exactly how I did it, step by step.

The best way to help someone is to tell them the truth: “Hey man, that Adobe feature is misleading. It’s not de-reverbing; it’s false advertising. That Adobe process is not light-years ahead of SpectraLayers’ de-reverb. Adobe is selling you a one-click solution composed of many processes and mislabeling it as something else.”

With the greatest respect, I would be enormously impressed if you can achieve a similar result with SL.

I love what SL can do, but I don’t think it can do this - yet, anyway.

Please feel free to prove me wrong…

I largely agree with @Unmixing here.

From that first link posted above, the Adobe Podcast (AP) marketing mentions ‘Enhance Speech’ processes/controls, one of which is an optimise/reduce ‘background noise’ level slider. I believe this is NOT a de-reverb process (as SL understands it) alone. That is something different.

So I’d say the comparison drawn between the two in the title and original post is a slightly misleading/unfair one.

On its own, I doubt SL can match those audio demos of the Adobe Podcast product in a user-friendly, one-button process. (OK, in AP the Mic Placement, Gain, and Background Noise Level controls need to be set beforehand, but you get the overall simplicity I’m indicating.) However, it’s arguable you can come close using all of SL Pro’s tools in a multi-stage process. The recently added ‘Preview’ facility makes for a very streamlined workflow there, eliminating the previous one-shot ‘trial and error’ methods.

Those AP results, though, are impressive; whether wholly desirable or not is another matter! As with all these new-fangled ‘AI-powered’ tools, care has to be taken using them. If I’m in the middle of an airport lounge recording some VO piece, it’s normal that listeners should feel that same sense of space in the recording; this behind-the-scenes reliance on ‘AI processing’ (software doing the work for you) can all too quickly make things unnatural, unsettlingly ‘dead’, or ‘false’ for the listener.

SL, it should be said, is uniquely special in ways different from Adobe Podcast.

Interesting thread.

The issue is that even if SpectraLayers Pro could do it, the algorithms in iZotope RX are better, and using RX plug-ins in an audio editor like WaveLab Elements is likely to be cheaper and to offer superior results compared to SpectraLayers Pro 9.


Not true… Recently RX has been shifting its feature priorities towards heavily automated processes (meaning you push one button and it sounds like it was recorded by the Rolling Stones at Abbey Road Studios with ten of the best mixing and recording engineers in the world). For example, they have an assistant tool built on top of their other cleanup and restoration algorithms. That is why they may appear far ahead, when really it’s just a bunch of processes chained together… As a matter of fact (now that I’m thinking about it), it may not even be post-processing effects but a large data set of clean studio mixes (a large data set of spectral images), where they try to match your input against that data set (similar to the idea behind Spleeter, which trains on large data sets of vocals/bass/other/drums stems) and output a result resembling their cleaned-up audio.
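For what it’s worth, that “large data set of spectral images” guess is close to how mask-based separators like Spleeter actually work: a network trained on clean stems predicts a time-frequency mask that gets applied to the mixture’s spectrogram. Stripped of the learned model, the core operation looks like this (a toy sketch; the “oracle” source magnitudes passed in here stand in for what the trained network would predict):

```python
import numpy as np
from scipy.signal import stft, istft

def soft_mask_separate(mix, source_mags, fs=44100, nperseg=1024):
    """Mask-based separation: given per-source magnitude estimates
    (from a trained model in real systems), build ratio (Wiener-style)
    soft masks and apply them to the mixture's STFT."""
    _, _, mix_spec = stft(mix, fs=fs, nperseg=nperseg)
    total = sum(source_mags) + 1e-9          # avoid division by zero
    outputs = []
    for mag in source_mags:
        mask = mag / total                   # soft mask in [0, 1] per bin
        _, est = istft(mask * mix_spec, fs=fs, nperseg=nperseg)
        outputs.append(est[:len(mix)])
    return outputs
```

The hard part, of course, is the model that predicts those magnitudes; the masking itself is trivial, which is why output quality between tools comes down almost entirely to the training data and network design.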

It’s true. Try Music Rebalance in RX9 and compare it to stem separation in SpectraLayers (e.g. for removing vocals). Also DeNoise, DeReverb, etc.

I’ve had RX9 remove operatic vocals from music drenched in reverb that SL9 couldn’t even come close to handling well. It barely took any effort.

RX is a lot better. I have both. I’ve tried using SL and 99% of the time end up back in RX because the results are better.

iZotope may (according to you) be slipping… Whatever; but they have, at this point, years of head start in this area.

RX gives you every process as a discrete plug-in as well, so your bolded bit is nonsensical at best. They simply are tackling other areas, because it makes sense to do so.

Right now SL needs to get the core functions to compete better. They have no room to work on gimmicks.

I can see people disagreeing with iZotope’s business model, but the DSP programming cannot be denied. RX maintains its position in the industry for a reason, esp. given the price of the Advanced SKU (more than SL9 Pro’s MSRP, even during sales).

I strongly disagree.

I strongly disagree.

Not true. Although I’ve noticed the algorithm in SpectraLayers tends to lean more heavily towards the melody (its strength is in the melody), whereas RX’s strength is in percussion and vocals. Both fall short of a complete, clean extraction.

The only thing I can say RX has over SpectraLayers is that the GUI is a lot more fluid and better optimized. Other than that, RX is not better than SpectraLayers, and RX’s algorithms are not better than SpectraLayers’. Like I said, RX has been focusing its features on “one-click solutions”, and it seems like its assistant tool is a compilation of multiple post-processing effects. Also, as I said before, SpectraLayers could easily replicate features like RX’s, call them “Studio Vocals” or “Podcast Vocals”, create a new entry under the Process menu, and compete in that arena.

Also consider that because RX is better optimized, it might give you the illusion that it is better and light-years ahead of SpectraLayers; the effect may be subconscious, so you might not perceive it on a surface level.

I don’t believe Steinberg should invest in better “one-click” post-processing effects, and I don’t believe they should copy RX. Right now SpectraLayers has poor optimization, and I hope it becomes a lot more optimized in the future, with a much more fluid GUI, especially at higher FFT sizes and resolutions with lots of layers and hundreds (if not thousands) of selections.

One of FL Studio’s strengths is its optimization. If you open Edison (in spectrum view) and start scrubbing and scrolling the mouse wheel, the GUI is silky smooth and fluid, and I hope SpectraLayers can be optimized to that level. I use DaVinci Resolve and other video editing apps to edit 4K video with lots of effects, and video editing seems far better optimized than working in something like SpectraLayers; if video editing can be optimized, then I’m sure SpectraLayers can improve in optimization too. Video games like Cyberpunk 2077, GTA 6, and Control seem far more sophisticated than programs like SpectraLayers (factor in ray tracing and it goes to a whole new level), yet they are optimized to run on low-end hardware so that cheap computers can play them. Steinberg is more than capable of making SpectraLayers extremely fluid and snappy to work with, even on low-end hardware.

I think you are imagining intentions and writing a narrative for Steinberg.

A DeNoise algorithm isn’t leaning towards anything except removing as much noise as possible while leaving the residual data as unaffected as possible. It’s not trying to figure out melodies and whatnot. Lol. What in the actual.

None of these algorithms are going to be perfect. People just choose whichever is better for the tasks they need them to perform.

Lol. I cannot.

Absolutely… They should clone (through work, not theft) the quality of iZotope’s restoration plug-ins.

People seem afraid of accepting when competitors are better. Can’t get improvement if we’re just going to gaslight ourselves into pretending fiction is fact.

I barely use SpectraLayers standalone anymore. Like I said, I’ve been sitting mostly in RX for that stuff. Often, I’d try SpectraLayers first, but the result out of RX is always better - without fail, and usually with less effort.

Additionally, since RX delivers its modules as plug-ins, it’s often easier to simply use an RX plug-in than deal with SpectraLayers, as well.

There is no illusion. I am not comparing UI or application performance; I am comparing audio output when run through each package’s restoration processes. Lol. My observations are not “visual” at all, nor related to any perceived optimization. RX’s plug-ins are not the lightest out there, and iZotope, in general, is not known for great CPU performance optimization. They are known for delivering good output when audio is run through their plug-ins.

So, wasting time trying to cook up something in SL “comparable to RX” is roughly equivalent to just throwing RX 9’s De-reverb on an audio clip, adjusting a couple of parameters, and calling it a day?

Makes sense. /s

RX’s “Music Rebalance” can’t be used as a plugin though.

I use both RX and SL (and others) but the results in my experience depend largely on (a) the nature of the source material and (b) the amount of time you’re prepared to put into learning and mastering the processes.

In RX, not too much is tweakable, but if faced with 2 hours of dialog, yes, I’m going to use RX. On the other hand, SL is much more work, but if I have a 3-minute vocal that needs to be extracted from an out-of-tune acoustic guitar, then SL for me is capable of producing better results as long as I put in the time.

Now, referring back to the original topic, I can see the market for a tool that cleans up badly-recorded dialog without requiring any technical knowledge, and I wish Adobe well. It’s just not something that would help me with the work I do, because IMHO it is not faithful to the original and is synthesising elements that were never there – presumably the rationale is that it would sound better had xyz been there.