It’d be great to see a huge increase in the number of stem categories available - imagine being able to break an orchestral piece apart into the individual instruments
You might as well just start from scratch using orchestral samples and a DAW if you want that degree of control.
Well, what almost seems like science fiction now may not be that far from what I'm asking for, although admittedly the question is quasi-rhetorical. The real question is how far the tech can be pushed. And an even more interesting question is this: can the software be taught what counts as 'a sound', so that it can find it in a mix and separate it out into its own layer(s)?
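For what it's worth, the core idea already exists in a crude form: if you can describe a sound in some feature space, you can mask it out of a mix. Here's a toy sketch using a hand-written frequency mask on the short-time Fourier transform (STFT); real stem separators (Demucs, Spleeter, etc.) learn masks or waveforms with neural networks rather than using a fixed rule like this, so treat it purely as an illustration of the principle.

```python
import numpy as np
from scipy.signal import stft, istft

# Build a toy "mix" of two known components.
fs = 8000
t = np.arange(fs) / fs                    # 1 second of audio
low = np.sin(2 * np.pi * 220 * t)         # stand-in for a "bass" part
high = np.sin(2 * np.pi * 2000 * t)       # stand-in for a "whistle" part
mix = low + high

# "Describe" the target sound as everything below 1 kHz, and mask
# the mix's STFT accordingly. A learned separator would replace this
# hand-written rule with a mask predicted by a neural network.
f, _, Z = stft(mix, fs=fs, nperseg=256)
mask = f[:, None] < 1000                  # keep only bins below 1 kHz
_, recovered = istft(Z * mask, fs=fs, nperseg=256)

# Sanity check: the reconstruction should track the low component
# closely and the high component hardly at all.
n = min(len(recovered), len(low))
corr_low = np.corrcoef(recovered[:n], low[:n])[0, 1]
corr_high = np.corrcoef(recovered[:n], high[:n])[0, 1]
print(corr_low, abs(corr_high))
```

The hard part, and the substance of the question above, is replacing the `f < 1000` rule with a learned notion of "this instrument" that holds up when sounds overlap in both time and frequency.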