Noise Reduction and AI Learning

Surely this is an area that can be advanced to the point where noise can simply be removed without the user needing to tweak parameters, beyond perhaps selecting what type of noise it is (street, signal, RF, etc.) — and the user could even check multiple types.

I was thinking users should be able to build their own library of noise learning samples since, at least for studio users, there will be some level of recurring noise: same room, same mic, same cable length, same mic pre, and the same noise problems, if the studio has any.

And potentially there could even be a shared user database of noise samples. Train it on rumble, wind, street, nature, birds, RF, signal, tape, plus microphone models and mic-pre models.

Volume and signal-to-noise ratio are the only remaining variables, and the AI could learn those as well.

Where I currently run into problems with noise reduction is in decay tails: artifacts, HF signal decay that should be there getting removed or partially removed, and higher harmonics of LF signals being removed.

I would like to see “Natural Noise Decay” or “Smooth Noise Decay”.

Maybe there could be a way to select what the primary sound source is in the noise reduction model, for example "Male Dialog" vs. "Female Dialog"; male dialog can have a lot of LF decay and resonance.

Sorry for the scatter of FRs.


What's wrong with "Noise Reduction"? It works just fine for me.

Fine in what context? What work are you doing? Sure, it is "fine", but it could also be better. How experienced are you in this field — have you ever used RX Advanced noise reduction? It offers a lot more.

The biggest problem is accounting for the decay times of different frequencies, and for where the noise floor meets the low-volume decay of the sound source. As one example, this is an area that can be improved.
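To make the decay-tail problem concrete, here is a toy amplitude-envelope sketch (hypothetical, not any tool's actual algorithm): a hard gate set at the noise floor truncates an exponential decay the instant it crosses the threshold, while a soft gain curve lets the tail fade through more naturally.

```python
import math

def decaying_tone_envelope(n, start=1.0, tau=20.0):
    """Amplitude envelope of an exponentially decaying note."""
    return [start * math.exp(-i / tau) for i in range(n)]

def hard_gate(env, floor):
    """Zero everything below the noise floor (downward expander at infinite ratio)."""
    return [a if a >= floor else 0.0 for a in env]

def soft_gate(env, floor):
    """Attenuate smoothly below the floor instead of cutting outright:
    gain rolls off as (a/floor)^2, so the decay stays continuous."""
    return [a if a >= floor else a * (a / floor) ** 2 for a in env]

env = decaying_tone_envelope(100)
noise_floor = 0.1

hard = hard_gate(env, noise_floor)
soft = soft_gate(env, noise_floor)

# The hard gate kills the tail outright once it crosses the floor...
tail_hard = sum(1 for a in hard[50:] if a > 0)
# ...while the soft gate keeps a quieter, continuous decay.
tail_soft = sum(1 for a in soft[50:] if a > 0)
```

This is exactly the "Natural Noise Decay" behavior asked for above: the choice of curve below the floor decides whether the tail sounds chopped or faded.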

+1 for hi-quality AI noise reduction

+2 for hi-quality AI noise reduction…


It would indeed be nice to filter traffic noise in very high quality.

I think for the most part that's going to be a waste of time. I denoise in post all the time, and even room tone (same room, same day, same "everything") sounds different two hours later. For sure things like nature sounds won't work with a sample library, if this is all done using some sort of inverted sample that's applied (for lack of better nomenclature).

If anything, an AI process trained on sounds like that would be valuable, since it could figure out what is likely birds and what isn't.

I’m not saying denoising in SL couldn’t improve, but generic / general samples of noise won’t help, I think.

I agree. I mean, we have a plethora of NR algorithms that sample the actual noise that should be removed (SL layers too), and results vary from sound to sound and tool to tool.
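Most of those profile-based tools boil down to some flavor of spectral subtraction: average a per-band noise magnitude from the captured noise-only section, then subtract it from every frame, with a floor to limit musical noise. A toy magnitude-domain sketch (an illustrative simplification, not any specific product's implementation) shows why a stale or generic profile falls short the moment the actual noise drifts:

```python
def noise_profile(noise_frames):
    """Average per-band magnitude over the sampled noise-only frames."""
    bands = len(noise_frames[0])
    return [sum(f[b] for f in noise_frames) / len(noise_frames) for b in range(bands)]

def spectral_subtract(frame, profile, over=1.0, floor=0.05):
    """Subtract the profile from one magnitude frame, flooring the result
    at a fraction of the original so bins never hit hard zero (musical noise)."""
    return [max(m - over * p, floor * m) for m, p in zip(frame, profile)]

# Noise-only capture: flat-ish hiss at roughly 0.1 per band.
profile = noise_profile([[0.10, 0.11, 0.09],
                         [0.09, 0.10, 0.11]])

# A frame with signal in band 0 and only noise in bands 1-2 cleans up well.
clean = spectral_subtract([0.8, 0.1, 0.1], profile)

# But if the room noise later rises to 0.2, the stale profile under-subtracts
# and leaves an audible residual in the noise-only bands.
residual = spectral_subtract([0.8, 0.2, 0.2], profile)
```

This is the point being made above: the subtraction is only as good as the profile's match to the noise at that moment, which is why fixed sample libraries struggle and an adaptive, trained model is the more promising direction.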

Perhaps results might improve in time, but I for one think the results we are getting currently are miraculous … coming from the perspective of doing this work for 40 years, when a bass roll-off was about the only tool available to clean up a track! I understand that if someone has joined the trade in the past decade or so, then what we have now is the baseline! Of course it seems like it should get better. But remember there is a point at which even human AI (i.e., Actual Intelligence?) has trouble separating 'signal' from 'noise'… have you ever had to ask someone "What did you say?" I expect a pattern-matching machine would reach a limit before humans would.

As I said, it’s very good now, but as with a lot of processes there’s the ‘last mile’ problem… it’s hard!
