What songs would be difficult to unmix?

Lately I have been searching endlessly for complex music that would be difficult (or at least a challenge) to unmix, and I have been struggling to find good examples. Can you recommend (or at least point me toward) a good list of songs/music that would be difficult to unmix?

I asked this question on another forum and someone suggested “eurorack patches/modular synth patches”, but so far those have been pretty easy for me to unmix. I would prefer something with a lot of complexity to it (maybe something with a lot of tonal aspects, like atmospheric pads mixed with arpeggiated polyphonic textures full of tonal partials).

John Cage’s 4’ 33"

1 Like

:smile: That’s way too difficult to do. I wouldn’t even know where to begin.

That dubstep remix cover version sounded a lot more reasonable. :laughing:

How about the Beatles’ Tomorrow Never Knows? Use the old stereo mix, though, because Revolver was the last record where they’d record multiple instruments (e.g. bass, drums, piano) live onto a single mono track, which then might get bounced down further on the way to the mix. Then you can compare your results to what Giles Martin did using Peter Jackson’s AI audio tools on the new mix.

Some songs may be more difficult to unmix than others, depending on the complexity of the recording and the techniques used in the mixing process. Here are a few examples of songs that might be challenging to unmix:

Songs with a lot of layers: Songs that feature multiple instruments, vocals, and sound effects may be more difficult to unmix because the different elements are often intertwined and may be hard to separate.

Songs with heavy effects: Songs that use a lot of effects, such as reverb, delay, and distortion, may be more difficult to unmix because the effects are often applied to multiple elements of the mix and can be hard to separate.

Songs with complex arrangements: Songs that have complex arrangements, with multiple sections and changes in tempo and key, may be more difficult to unmix because it can be challenging to separate the different parts of the mix.

Songs that were mixed using advanced techniques: Songs that were mixed using advanced techniques, such as parallel processing, bus compression, and stereo wideners, may be more difficult to unmix because these techniques can make it hard to separate the different elements of the mix.

A cappella tracks with no instrumental backing: A cappella tracks recorded with no instrumental backing might be difficult to unmix, as there is no instrumental track to separate from the vocals.

It’s important to note that unmixing a song is a complex process that requires a high level of skill and knowledge of audio engineering. It’s also important to consider the legal and ethical implications of unmixing a song, as it may be considered copyright infringement.

@raino I did end up unmixing “The Beatles - Tomorrow Never Knows” but I cannot upload or post it here on the forum. The forum doesn’t allow .slp files to be uploaded… and when I try to upload the raw audio files it tells me the file size is too large.

Very frustrating when you want to demonstrate how powerful SpectraLayers is.

I’m hoping that one day spectral image files will become the norm for music, because this is a problem I have been dealing with for decades. It is very hard to share raw WAV files online.

I think for unsupported file types if you zip them you can post that. And then there are also copyright restrictions.

Exactly. That’s what I attempted initially, but the file is a little over 700 MB and there is a 25 MB limit. I even attempted to do another video, but the ASIO drivers (from Steinberg) literally crash every few seconds when I use DaVinci Resolve.

I even attempted to convert the stems to MP3, but ran into another technical issue (related to the audio device drivers or Windows).

Very frustrating. :rage:

If you’re looking for a song that would be difficult to unmix in SpectraLayers, try ‘Up, Up and Away’ by The Fifth Dimension (1967). It’s an absolutely beautiful song, but what makes it difficult to unmix is that the vocals - particularly the choruses - are almost orchestral, and I think SL therefore mistakes the choruses for instrumental parts.

@condex

I listened to it and it sounds fairly simple and easy to me. I’m not going to unmix it because it doesn’t feel like a challenge to me.

What I mean by difficult is audio with a lot of processing, effects, and variation. For example, a lot of music today uses much more modulation and post-processing, such as filters. A lot of EDM has vocals that are so heavily filtered that the best source-separation and vocal-isolation algorithms mistake them for an instrument. Another example: in EDM/trap the vocals literally morph into an instrument; there is a plugin called Serum that musicians use to morph different sound sources into one another (really, it’s just a bunch of fancy cross-fading, but it sounds cool nonetheless).
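For anyone curious what I mean by “fancy cross-fading”, here is a bare-bones illustrative sketch (the file names are made up, and it assumes both clips share the same sample rate and channel count). Real wavetable morphing à la Serum interpolates spectra frame by frame, but the audible idea starts from the same principle:

```python
# Bare-bones equal-power crossfade from one source into another over the
# length of the clip. File names are illustrative only.
import numpy as np
import soundfile as sf

voice, sr = sf.read("vocal_phrase.wav", always_2d=True)   # hypothetical files
synth, _ = sf.read("synth_texture.wav", always_2d=True)   # same rate/channels assumed

n = min(len(voice), len(synth))
t = np.linspace(0.0, 1.0, n)[:, None]   # column vector so it broadcasts per channel

# Equal-power fade curves keep the perceived loudness roughly constant.
fade_out = np.cos(t * np.pi / 2)
fade_in = np.sin(t * np.pi / 2)

morph = voice[:n] * fade_out + synth[:n] * fade_in
sf.write("morph.wav", morph, sr)
```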

Another example would be eurorack/modular synth patches. Some of the best music I have heard in the past two years is eurorack and modular synth (DAWless) music. I have heard some of the most beautiful music from eurorack musicians who play live shows, and it impresses me. If you listen to a lot of those patches you can hear that there are so many layers and textures; it’s hardly one-dimensional (there is so much happening dynamically, whether a timbre morphs into another synth or a patch is heavily filtered).

What I mean by difficult is something like music with heavy atmospheric pads and vocals stacked on top of them. I’ve noticed that music where heavy atmospheric pads are intertwined with vocals almost always defeats the best vocal-extraction algorithms. The reason is that the grains/pixels within a sound like an atmospheric pad are so small (and the texture of a pad across the spectrum is so neutral) that the best algorithms, nine times out of ten, mistake it for a vocal (or at best a vocal noise profile).

My apologies for making that suggestion.

1 Like

I tried that. The results on the drums were incredible.

@Eddie_Stealth_Studio Incredible in a good way or bad way? What do you mean by Incredible?

For folks stumbling on this - It’s a joke because 4’33" has no drums in it - or any instrument really.

John Cage spent some time inside an anechoic chamber (like the one on the cover of Bowie’s Station To Station), which is a total audio isolation chamber. He had been expecting to hear silence, but instead experienced a roaring sound after being in it a while. When he got out, they told him what he had heard was the sound of blood rushing through his eardrums. This was a pivotal moment for Cage artistically, as he realized that true silence didn’t really exist. So he wrote his most famous piece, 4’33", which if I recall correctly is for piano but no notes are played - so it’s a ‘silent’ piece. The point being that the music was the sound that occurred in the performance space: audience rustling, a siren in the distance, coughing, whatever. At the first performance some in the audience threatened to run Cage and the pianist out of town. Interestingly, a performance takes longer than 4 minutes and 33 seconds because the piece is in several movements and there is some time spent between them.

1 Like

10cc’s “I’m Not In Love”, I’m guessing, would be hard to unmix…

And Beatles “Revolution #9”, for different reasons

Far from an expert SL user here, but I’ve found that even simpler songs can still pose challenges if multiple instruments are present that significantly overlap each other’s frequency ranges. Take Derek and the Dominos’ version of “Key to the Highway” as an example. The piano, guitar, and cymbals on that track overlap each other considerably. SL10 does a MUCH better unmix of the guitar layer than SL9 did, but it fails completely to unmix the electric bass layer from the guitars, which is a little surprising since (after running “unmix song”) I could manually do a simple frequency-range cut & paste to get 90% of the bass line intact. SL10 also does better with the cymbals, but the piano unmix on this track left quite a bit stuck in the “not unmixed” layer. I have no way of knowing what recording techniques were used to capture the (analog) original, but I do know that I was working from a digital remaster. Regardless, I’d expect that recording techniques and media play a large role in a song’s un-mixability; perhaps some more experienced hands reading this could comment further?
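For what it’s worth, a rough version of that frequency-range cut can also be done outside SpectraLayers with an ordinary band-pass filter. A minimal sketch using scipy; the 40-250 Hz band and the file name are just illustrative guesses, not anything measured from the actual track:

```python
# Rough "frequency-range cut" for a bass line: band-pass the mix and keep
# only the low band. Cutoffs need tuning by ear / by eye on the spectrogram.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("key_to_the_highway.wav")   # hypothetical file name
if audio.ndim == 2:
    audio = audio.mean(axis=1)                  # fold to mono for the estimate

sos = butter(4, [40, 250], btype="bandpass", fs=sr, output="sos")
bass_estimate = sosfiltfilt(sos, audio)

sf.write("bass_band.wav", bass_estimate, sr)
```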

Because I know how these datasets work: the reason the algorithm won’t register that mid-range bass as a bass is that most of these datasets treat sub-bass as “bass” and not all basses (such as a muted bass guitar). To explain further, these datasets are compiled largely from today’s music, and today’s music is very different from music made in the 1970s; the bass of today is much lower and heavier, whereas bass in the 1970s was far less sub-heavy and sat mostly in the mid-range.

A quick tip (I might have to do a tutorial on this): the lazy way to get the algorithm to register that mid-range bass as a bass is to transform the audio down in frequency or pitch (frequency shifting is preferred if you want the result to be as non-destructive as possible). First duplicate the layer, solo it, then transform it (either by frequency shifting or by pitch shifting in semitones). Remember the amount you transformed it by, because you will need the opposite amount to work non-destructively: if you shifted the audio down by 300, you will need to shift it back up by 300 later. After transforming the audio (for example, frequency-shifted down by 300 Hz), run the Unmix Bass process and see what happens (you may have to play around with it to see what works and what doesn’t). When you get a clean unmixed version of the bass, transform that bass layer back up by the same amount, then unsolo everything. Duplicate the original audio layer again and use the shifted-back bass layer to carve out and cast out the original bass. When the bass is cast/carved out, invert the phase and merge those two layers, and you should have a clean, non-destructively extracted bass.
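For anyone who wants to try the same trick outside SpectraLayers, here is a rough sketch of the shift-down / unmix / shift-back idea in Python. librosa’s pitch shifter stands in for the Transform step, `separate_bass()` is only a placeholder stub for whatever stem-separation model you actually run (it is not a real library call), and the shift amount is just a starting point to experiment with:

```python
# Rough sketch of the "shift down, unmix, shift back" idea outside of
# SpectraLayers. librosa's pitch shifter stands in for the Transform step.
import librosa
import soundfile as sf

SHIFT = -7  # semitones down; experiment to find what the model responds to


def separate_bass(y, sr):
    """Placeholder for whatever stem-separation model you run.
    This stub just returns the input unchanged."""
    return y


y, sr = librosa.load("mix.wav", sr=None, mono=True)   # hypothetical file name

# 1. Shift the whole mix down so the mid-range bass lands where separation
#    models expect "bass" energy to live.
y_down = librosa.effects.pitch_shift(y, sr=sr, n_steps=SHIFT)

# 2. Run your bass separation of choice on the shifted audio.
bass_down = separate_bass(y_down, sr)

# 3. Shift the extracted bass back up by the opposite amount so it lines up
#    with the original mix again.
bass = librosa.effects.pitch_shift(bass_down, sr=sr, n_steps=-SHIFT)

sf.write("bass_estimate.wav", bass, sr)
```

Note that this sketch skips the carve-out and phase-inversion step described above, which is what makes the SpectraLayers version non-destructive.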

2 Likes

This is a really useful idea, thanks for the tip :blush:

P.S. Would you select preserve formant at any point when doing this, or is that really just for vocals, i.e. the human voice? A while ago I had a 15 ips tape that I wanted to listen to but could only play back at 7½ ips on my gear. I digitised it and doubled the speed in SpectraLayers; it didn’t sound quite right, but when I checked preserve formant it did. (I knew the EQ would be off from playing it back at 7½ ips, but I think this was a separate issue.)
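As an aside, the half-speed playback itself can also be corrected by plain resampling, which is the digital equivalent of just playing the tape at the right speed, so pitch and duration come back together and the formant question doesn’t really come up; preserve formant becomes relevant when pitch and time are changed independently. A minimal sketch, with made-up file names:

```python
# Correct a tape digitised at half speed (15 ips source played at 7.5 ips)
# by resampling 2:1: half as many samples at the same playback rate means
# the programme runs twice as fast and an octave higher, i.e. back to normal.
import soundfile as sf
from scipy.signal import resample_poly

audio, sr = sf.read("tape_at_half_speed.wav")   # hypothetical file name

corrected = resample_poly(audio, up=1, down=2, axis=0)

sf.write("tape_corrected.wav", corrected, sr)
```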

Like I said

So “your mileage may vary”. It’s up to you to see what works and what doesn’t. Personally, I would try both options and compare the results to see which one works best for you. Typically, because bass sounds don’t have as many overtones as other sources (like vocals or strings), it wouldn’t hurt to leave it off (but then again, it wouldn’t hurt to try and see what works and what sounds best to you). Some bass sounds/sources have rich overtones (like the muted bass guitar mentioned earlier) and others have powerful chunks of partials and tonal elements.

1 Like