Noise Reduction and AI Learning

Surely this is an area that can be advanced to the point where noise can simply be removed without the user needing to tweak parameters, beyond maybe selecting what type of noise is present (street, signal, RF, etc.), and the user could even check multiple types.

I was thinking users should be able to build their own library of noise learning samples, since, at least for studio users, there will be some level of recurring noise: same room, same mic, same cable length, same mic pre, and the same noise problems if the studio has any.

And potentially there could even be a shared user database of noise samples. Models could be trained on rumble, wind, street noise, nature, birds, RF, signal noise, tape hiss, specific microphone models, and mic pre models.
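A minimal sketch of what one entry in such a library could be: an averaged magnitude spectrum ("noise profile") learned from a noise-only clip, stored per room/mic chain and reused on later recordings from the same setup. The function name, frame sizes, and the STFT-by-hand framing are illustrative assumptions, not any particular product's API.

```python
import numpy as np

def noise_profile(noise_clip, frame_len=1024, hop=512):
    """Average magnitude spectrum of a noise-only recording.

    Stored per room/mic/preamp chain, this acts as a reusable
    "noise learning sample" for recordings made in the same setup.
    (Hypothetical helper; parameters are illustrative.)
    """
    n_frames = 1 + (len(noise_clip) - frame_len) // hop
    window = np.hanning(frame_len)
    spectra = [
        np.abs(np.fft.rfft(noise_clip[i * hop : i * hop + frame_len] * window))
        for i in range(n_frames)
    ]
    # Averaging across frames smooths out frame-to-frame variance,
    # leaving the stationary part of the noise.
    return np.mean(spectra, axis=0)

# Example: one second of fake low-level "room tone" at 48 kHz
rng = np.random.default_rng(0)
profile = noise_profile(0.01 * rng.standard_normal(48000))
print(profile.shape)  # one magnitude per rFFT bin: (513,)
```

A shared database would then just be a keyed collection of such profiles ("rumble", "street", a given mic model, etc.), plus metadata about the capture chain.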

Volume and signal-to-noise ratio would then be the only remaining variables, which the AI could also learn.
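For what that variable means concretely, here is the standard SNR measurement from a signal segment and a noise-only segment of the same recording; the helper name is mine, the formula is the usual power ratio in dB.

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB, from a signal segment and a
    noise-only segment (e.g. room tone) of the same recording."""
    p_sig = np.mean(np.square(signal))
    p_noise = np.mean(np.square(noise))
    return 10.0 * np.log10(p_sig / p_noise)

# A tone 10x the amplitude of the noise floor -> 20 dB SNR
sig = np.array([1.0, -1.0] * 100)
noise = np.array([0.1, -0.1] * 100)
print(round(snr_db(sig, noise), 1))  # 20.0
```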

Where I currently run into problems with noise reduction is in decay tails: artifacts, HF decay that should be there getting removed or partially removed, and higher harmonics of LF signal being removed.

I would like to see a “Natural Noise Decay” or “Smooth Noise Decay” option.

Maybe we could select the primary sound source in the noise reduction model, for example “Male Dialog” vs. “Female Dialog”; male dialog can have a lot of LF decay and resonance.

Sorry for the scatter of FRs.

What's wrong with “Noise Reduction”? It works just fine for me.

Fine in what context? What work are you doing? Sure, it is “fine,” but it could also be better. How experienced are you in this field? Have you ever used RX Advanced Noise Reduction? It offers a lot more.

The biggest problem is accounting for the decay times of different frequencies and for where the noise floor meets the low-volume decay of the sound source. As one example, this is an area that can be improved.
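One way a “Natural Noise Decay” behavior could work, sketched under my own assumptions (not how any shipping denoiser is implemented): a spectral gate produces an instantaneous per-bin gain, and that gain is then smoothed over time with a per-frequency release constant, longer at low frequencies, so quiet decay tails fade into the noise floor instead of being chopped.

```python
import numpy as np

def smooth_gains(raw_gain, release_frames):
    """One-pole smoothing of spectral-gate gains over time.

    raw_gain: (n_frames, n_bins) instantaneous gains
      (1.0 = pass, 0.0 = fully attenuated).
    release_frames: per-bin release time constant in frames;
      larger values close the gate slowly, preserving decay tails.
    (Hypothetical helper; values below are illustrative.)
    """
    alpha = np.exp(-1.0 / np.asarray(release_frames, dtype=float))
    out = np.empty_like(raw_gain, dtype=float)
    state = np.ones(raw_gain.shape[1])
    for t in range(raw_gain.shape[0]):
        g = raw_gain[t]
        # Open the gate instantly, close it with the release constant.
        state = np.where(g > state, g, alpha * state + (1 - alpha) * g)
        out[t] = state
    return out

# Longer release at low bins keeps LF harmonics and resonance decaying
# naturally; HF bins close faster.
n_bins = 8
release = np.linspace(20, 4, n_bins)          # frames; assumed values
gate = np.zeros((10, n_bins)); gate[0] = 1.0  # one loud frame, then silence
g = smooth_gains(gate, release)
print(g[5])  # gate still partly open, more so at low frequencies
```

The same idea extends to making the release level-dependent, so the gate tracks where the source's decay actually crosses the measured noise floor rather than using a fixed threshold.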