Selection of frequencies based on stereo position. A specific instrument can be placed anywhere in the stereo image, but lead instruments are often centered or, in the early days, panned hard left or right. It would be nice if an algorithm could pick all frequencies that share the same spatial position simultaneously.
Yes, and extend the concept: add a stage to the algorithm that "picks all similar frequencies that share the same spatial position simultaneously", so it can locate every sound that actually sits at a certain spot in the stereo field, as an additional tool.
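For anyone wondering what such a stage could look like in principle, here is a minimal toy sketch (my own illustration, with made-up tolerance values, not anything SL actually does): estimate a per-bin pan position from the left/right channel magnitudes and keep only the bins whose position is close enough to the one you clicked on:

```python
def pan_of_bin(mag_left, mag_right, eps=1e-12):
    # crude level-difference pan estimate for one time-frequency bin:
    # -1.0 = hard left, 0.0 = center, +1.0 = hard right
    return (mag_right - mag_left) / (mag_left + mag_right + eps)

def select_bins(left_mags, right_mags, target_pan=0.0, tolerance=0.1):
    # keep the indices of all bins whose pan position lies within
    # `tolerance` of the target stereo position
    return [i for i, (l, r) in enumerate(zip(left_mags, right_mags))
            if abs(pan_of_bin(l, r) - target_pan) <= tolerance]

# toy "spectrum" of three bins: panned hard left, center, hard right
left  = [1.0, 0.5, 0.0]
right = [0.0, 0.5, 1.0]
print(select_bins(left, right))  # -> [1]  (only the centered bin survives)
```

In a real tool this would of course run on STFT magnitudes per frame, and phase/timing differences would matter too; the sketch only shows the level-difference idea.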
I'll second this feature request.
Separating a choir from a piano is a very sophisticated task, and the programmers do a really good job. If they can manage such an ambitious task, I wonder why they couldn't also separate an acoustic signal out of the stereo field.
The correct terminology is "source separation": the task of identifying two (or more) different sources in the frequency spectrum over time. I believe the right way to approach this problem is not "stereo separation" but improving either the harmonics tool so it can identify two different sources (based on the sum of harmonics and the timbre; that is, a smarter AI tool that finds the fundamental frequency and is smart enough to identify the other frequencies linked to it while rejecting the rest), or a pitch/melody tool that can identify two different sources and lets the user decide which harmonics/overtones belong where.
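To make the "fundamental plus linked frequencies" idea concrete, here is a rough sketch of the comb selection behind such a harmonics tool (purely my own illustration with assumed parameter values, not how any actual product works): given a detected fundamental, collect the FFT bins near its integer multiples and reject everything else:

```python
def harmonic_bin_indices(f0, sample_rate=44100, fft_size=4096,
                         max_harmonic=8, tolerance_hz=15.0):
    # Indices of FFT bins lying near integer multiples of the
    # fundamental f0 -- a crude "comb" that a smarter tool would
    # refine using timbre, vibrato tracking, etc.
    bin_hz = sample_rate / fft_size   # ~10.77 Hz per bin here
    picked = []
    for h in range(1, max_harmonic + 1):
        target = h * f0
        if target >= sample_rate / 2:  # stop at the Nyquist frequency
            break
        lo = int((target - tolerance_hz) / bin_hz)
        hi = int((target + tolerance_hz) / bin_hz)
        picked.extend(range(max(lo, 0), hi + 1))
    return sorted(set(picked))

# bins belonging to a 220 Hz fundamental (A3) and its harmonics
bins = harmonic_bin_indices(220.0)
print(bins[:3])  # -> [19, 20, 21]  (the window around 220 Hz itself)
```

The hard part a real tool has to solve is exactly what this sketch ignores: when two sources share harmonics (as in the unison-choir example below), a fixed comb can't tell them apart.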
I was able to do it here (tap here if on a phone or click here if on a computer) by process of elimination. However, I have no idea whether the harmonics I selected came from one source or another; I just did what sounded best to me, and I did it all without interpolation (meaning moving the bits/pieces back to other layers where they sounded best). Also, as you can hear, there are 5 vocalists singing the same unison harmony within the same frequency range, and all of them are (technically) lead vocals, so my example above clearly demonstrates why "stereo separation" would be useless.
@Unmixing: great job!!
I will try to follow your suggestions and examples. But it's not that easy, is it?
"…so my example above clearly demonstrates why 'stereo separation' would be useless."
I do not agree. It would make things much easier to handle.
Well, I am not a developer, at most an amateur sound engineer with limited knowledge. This should not be about what the right term is, although "source separation" sounds good.
The starting point in my case is a stereo file that contains several instruments (in most cases no vocals); in general a guitar band with bass, drums, rhythm guitar and lead guitar. Often there are other instruments, like synthesizer, string orchestra, etc. These files may have been mixed in the 60s, 70s, 80s, etc., and I noticed that this sometimes makes a huge difference in the separation process.
SL is a great tool but does not always do the job the way I would like.
So, to get an improved result, I combine it with an old Roland tool called R-Mix. This tool can isolate an instrument in a square/rectangular or circular/oval window of the stereo image, with a time base (x), frequency scale (y) and intensity level/amplitude shown as color (z).
The disadvantage of this method is that the window catches all frequencies in that location. The advantage is that instruments with narrow stereo spacing can easily be isolated.
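For readers who don't know R-Mix, the selection logic amounts to something like this toy test (my own approximation of what such a window check might look like, not R-Mix's actual algorithm): a time-frequency bin is kept only if it falls inside a time range, a frequency range, and a pan band at once:

```python
def window_keep(frame, bin_idx, pan,
                t_lo, t_hi, f_lo, f_hi, pan_lo, pan_hi):
    # True if the bin falls inside an R-Mix-style selection window:
    # a time range (frames), a frequency range (bin indices),
    # and a pan band (stereo position, -1 left .. +1 right)
    return (t_lo <= frame <= t_hi
            and f_lo <= bin_idx <= f_hi
            and pan_lo <= pan <= pan_hi)

# a bin at frame 100, frequency bin 40, panned slightly left (-0.2)
print(window_keep(100, 40, -0.2, 90, 120, 30, 60, -0.3, 0.0))  # -> True
```

This also shows the disadvantage mentioned above: any other instrument whose energy happens to fall inside the same time/frequency/pan box is caught along with the one you want.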
What makes it difficult is that the lead guitar's reverb often spreads over a large part of the stereo image, and sometimes sits in a different part of it. This cannot be corrected well enough by reverb reduction, although the result is acceptable in a number of cases.
My method is:
- Unstem/separate the file into all five layers, including vocals, even though it contains only instruments (setting vocals = balanced).
- Judge the layers for lead guitar residue; manual correction may be needed.
- In a lot of cases the lead guitar leaves a lot of residue in the vocals layer; manual correction may be needed.
- The "other" layer generally contains the main part of the lead guitar and all remaining instruments, except for 80-90% of the bass and drums and a smaller percentage of the piano (depending on the mix and the type of piano).
I use the "other" layer to separate the lead guitar from the rest with R-Mix and accept that it also removes some parts of other instruments. Then I import it as a new layer and treat it manually if needed.
If SL could do 100% source separation (all sources), there would be no need to use the stereo position for an instrument. However, that is the ideal situation, and I don't believe we are there yet.
Additional source separation based on stereo width/position might give us an extra tool.
My audience is people looking for backing tracks for instrumentals (Shadow Music Backing Tracks | Facebook); 50% or more of the over 450 backing tracks were done with SL. That audience is not very critical, as they accept lead guitar residue in a lot of cases; personally I am happy with it but not satisfied.
Thanks for the hint about R-Mix. That is what I was looking for. The result is not absolutely convincing, but for the intended task it's OK. I will extract a single voice from an audio file of a choir, e.g. make the alto voice 30 dB louder than the rest. That works so-so.
I'll second your suggestion:
I use something which I think is similar to R-Mix: zplane's Peel.
Yes, I know zplane's Peel; I have tested it. There are others, like the freeware Mash Tactic, which lets you use multiple windows simultaneously. For me, R-Mix works best.
Peel works well for me, thanks for this advice.
Mash Tactic works too, thanks.
Both work in WaveLab 11 Pro.
Peel does not work in Cubase 11 Pro.