Separate multiple voices using multiple tracks

Hey guys, I’m editing a podcast between two men, each on his own mic. So I have two audio tracks, but unfortunately the speakers were close to each other and the room is very echo-y, so I can hear both speakers in each track. I tried Unmix Multiple Voices in SpectraLayers 10, but it didn’t work very well because they have pretty similar voices. I was thinking SpectraLayers could scan both tracks, recognize that they’re of the same recording but that each track contains a worse version of the other voice (because of distance, echo, etc.), and then use that information to separate the voices. Do you know what I mean?
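To illustrate the kind of cross-track processing I’m imagining (just a conceptual sketch, not how SpectraLayers’ De-bleed or Unmix actually works, and with made-up file names), the classic DIY version would be an adaptive filter that uses one track as a reference and subtracts its estimated leak from the other:

```python
import numpy as np
import soundfile as sf  # "mic_a.wav" / "mic_b.wav" are placeholder file names

def nlms_remove_bleed(target, reference, taps=1024, mu=0.1, eps=1e-8):
    """Adaptively estimate how `reference` leaks into `target` and subtract it.

    Plain NLMS crosstalk cancellation, assuming mono, equal-length arrays.
    Caveat: because the reference track also carries some of the wanted voice,
    a filter like this will remove a bit of that too. It's also a pure-Python
    sample loop, so it is slow on an hour of audio; this is only a sketch.
    """
    w = np.zeros(taps)                        # adaptive FIR modelling the leak path
    ref = np.concatenate([np.zeros(taps - 1), reference])
    out = np.empty_like(target)
    for n in range(len(target)):
        x = ref[n:n + taps][::-1]             # most recent reference samples first
        e = target[n] - w @ x                 # target minus current bleed estimate
        w += (mu / (eps + x @ x)) * e * x     # NLMS weight update
        out[n] = e
    return out

a, sr = sf.read("mic_a.wav")
b, _ = sf.read("mic_b.wav")
sf.write("mic_a_less_bleed.wav", nlms_remove_bleed(a, b), sr)
```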

Try the process "De-bleed".

@Alan_Parker

Try Voice Denoise and use the other voice.

Hey @Unmixing, thanks for the tutorial, it’s interesting. Unfortunately I don’t think I can use that process, because I have over an hour of footage to process and your tutorial’s approach seems very precise and localized.
@Marshall De-bleed does seem like pretty much what I’m after. However, I’m failing to get it to work well: it removes almost as much of the wanted voice as it removes of the unwanted one. Maybe because the voices are pretty similar, so their frequencies are close. If you know a good tutorial about de-bleeding I’m keen; the SpectraLayers ones I found on YouTube only got me so far.

I don’t know of any tutorial I’m afraid.

Are your audio files perhaps stereo files, with one voice louder on the right and the other on the left?

As long as the two guys aren’t speaking at the same time across the two tracks of the hour-long project, you at least have numerous manual ways to separate them and gate the in-betweens.

@Marshall No, that’s the crux of the problem. I feel like the mics might apply some kind of compression, because everything sits at the same level all the time, no matter how weak or far away the source is. Both tracks pretty much peak whenever anyone is talking. The only difference is the quality of the voice, which is better when the speaker is closer because there’s less reverb.

@DosWasBest If I did it manually, yes, but the point was to find a way to do it automatically on the whole track. Like, I click a button and, boop, based on a comparison of the two tracks SpectraLayers knows when one guy is talking and when the other is, and it can generate one track for each. But because both guys are in close frequency ranges it doesn’t seem to work too well. The reason I want it to be automatic isn’t only laziness, by the way: it’s a video podcast, so I’m going to have to review the whole footage anyway, and I’d rather not do it twice. I was just thinking of letting AutoPod do a first pass I could then refine. But for that I need audio tracks that are exclusive to each speaker, and right now they’re too infected with one another.

If you have any suggestions on how to go about it, I’m still very interested 🙂
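In the meantime, here’s roughly the kind of first pass I mean, as a crude Python sketch (made-up file names, assumed mono; and given the leveling I described above, a plain level comparison like this may well fall over, it’s just to show the idea):

```python
import numpy as np
import soundfile as sf  # "mic_a.wav" / "mic_b.wav" are placeholder mono files

def frame_rms(x, frame):
    """Short-time RMS over non-overlapping frames."""
    n = len(x) // frame
    return np.sqrt(np.mean(x[:n * frame].reshape(n, frame) ** 2, axis=1) + 1e-12)

a, sr = sf.read("mic_a.wav")
b, _ = sf.read("mic_b.wav")
n = min(len(a), len(b))
a, b = a[:n], b[:n]

frame = int(0.05 * sr)                     # 50 ms decision windows
rms_a, rms_b = frame_rms(a, frame), frame_rms(b, frame)

# Whichever track is clearly louder in a window is assumed to hold the active
# speaker; that window is muted in the other track. Windows where neither side
# wins by the margin (crosstalk, silence) end up muted in both, which is crude,
# but this is only meant as a first pass to refine by hand.
margin = 10 ** (3 / 20)                    # 3 dB dominance margin
keep_a = np.repeat(rms_a > rms_b * margin, frame)
keep_b = np.repeat(rms_b > rms_a * margin, frame)

sf.write("mic_a_gated.wav", a[:len(keep_a)] * keep_a, sr)
sf.write("mic_b_gated.wav", b[:len(keep_b)] * keep_b, sr)
```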

Yeah! Like I said, I highly recommend watching THIS! I would also recommend denoising. If the podcast is way too long for manual work, I would highly recommend selecting all the parts (using the time selection tool) where only one speaker is speaking (or at least where he is most dominant), then duplicating the layer and denoising it (learn the noise profile, then denoise). Once you have the main denoised layer, use it for a phase merge into the original layer: duplicate both the denoised layer and the original layer, flip the phase of the denoised layer, and merge the two layers together.
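In case it helps to see the arithmetic behind that last step: flipping the phase of the denoised layer and merging it with the original is just a subtraction, so what remains is everything the denoise pass removed (the bleed/room estimate). A minimal Python equivalent, with made-up file names and equal-length files assumed:

```python
import soundfile as sf  # file names below are placeholders

original, sr = sf.read("speaker1_original.wav")
denoised, _ = sf.read("speaker1_denoised.wav")

# Phase-inverted merge is a subtraction: the residue is everything the
# denoise pass removed from the original (i.e. the bleed/room content).
residue = original - denoised
sf.write("speaker1_residue.wav", residue, sr)
```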

If you have a short audio sample of both audio tracks at their worst that I can try something with, I might be able to help you. I have a specific MDX model related to this crossover-mic issue, but it’s not publicly shareable yet.

If you drop me an email at anonymousalanparker@protonmail.com, we can continue in private. Thanks!