I’m using MVSEP on my latest project, a one-hour live recording.
First, a technical introduction for those who bother to read…
This project is a stereo live recording from 1987. It’s not a good recording by any stretch. It was made with two Milab LC-25LL line-level microphones into a Milab 48-volt phantom power supply, then into a consumer converter, a Sansui PC-X1, and stored on a portable Sony Beta video recorder.
AD specs: 44.056 kHz, 14-bit – which in practice means somewhere between 11 and 13 usable bits on a good day. The sound quality was very poor – but that’s what I had…
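(For the curious, a quick back-of-envelope on what those missing bits cost: the usual rule of thumb is roughly 6 dB of dynamic range per bit. A tiny Python sketch – my own illustration, not anything from the converter’s spec sheet:)

```python
# Rule of thumb for an ideal converter: DR ≈ 6.02 * bits + 1.76 dB
for bits in (14, 13, 12, 11):
    print(f"{bits:2d} bits ≈ {6.02 * bits + 1.76:.1f} dB dynamic range")
# 14 bits ≈ 86.0 dB nominal, but 11 bits ≈ 68.0 dB on a bad day
```

So the gap between the nominal spec and a bad day is nearly 20 dB – a big part of why the raw material sounds so rough.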
In the summer of 1998 I copied all my recordings via the analog outputs to my (then) new Ensoniq Paris system, then exported everything to 44.1 kHz, 16-bit audio CD for storage
(wishing, all the while, for the kind of technology we have now).
After importing the AIFF files, I converted everything to 44.1 kHz, 32-bit float WAV files in Cubase. This is my working format for the entire project, which will later be edited and converted to a 48 kHz, 24-bit master…
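(Side note: the same AIFF-to-float batch conversion can be scripted outside Cubase. Here is a rough Python sketch using the soundfile library – the folder names are placeholders, not my actual project layout:)

```python
import pathlib
import soundfile as sf

src = pathlib.Path("aiff_masters")    # placeholder folder names
dst = pathlib.Path("wav_float32")
dst.mkdir(exist_ok=True)

for f in sorted(src.glob("*.aiff")):
    audio, rate = sf.read(f, dtype="float32")   # decode 16-bit AIFF to float32
    # keep the 44.1 kHz rate, just widen the word length to 32-bit float
    sf.write(dst / (f.stem + ".wav"), audio, rate, subtype="FLOAT")
```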
– Why 44.1 kHz? MVSEP does not deliver anything higher than this sampling frequency. As far as I understand, it has become a kind of de facto standard for most (or all) separation models… I first tried converting to 48 kHz along the way, but eventually decided there was no point and started everything over again in 44.1 kHz…
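(And if you do want to bring the finished 44.1 kHz material up to 48 kHz for the final master, a good resampler does it in a few lines – this sketch uses the python-soxr package, with placeholder filenames again:)

```python
import soundfile as sf
import soxr  # python-soxr: high-quality sample rate conversion

audio, rate = sf.read("mix_44k1.wav", dtype="float32")   # placeholder name
audio_48k = soxr.resample(audio, rate, 48000, quality="VHQ")
sf.write("master_48k_24bit.wav", audio_48k, 48000, subtype="PCM_24")
```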
This production was initiated by a local small-town choir, with a hired baritone soloist and musicians – grand piano, bouzouki/guitar, flute, violin, cello, double bass and tambourine – plus dancers.
The show was 18 songs by Theodorakis, woven into a dramatized story with ballet dancers.
The stage setup: musicians in a small space at the far left of the stage, while the choir, the soloist and the dancers moved around as the show required…
This time I have separated all the instruments and vocals with MVSEP – except for the bouzouki and guitar, which I separated in Logic Pro – and I think everything turned out magically well (under the circumstances).
– Today I discovered that MVSEP has a model that separates a solo vocal from a choir. I had no illusions that it would work on such rotten sound (or whether it could work at all).
But it produced an incredibly good result. I have only separated one song with this model so far, and it was extremely impressive.
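(Another side note: MVSEP also offers an HTTP API, handy if you want to batch many songs instead of using the website. The sketch below is only my reading of the docs at mvsep.com/api – the endpoints, parameter names and response fields should all be verified there, and the sep_type model code is a placeholder:)

```python
import time
import requests

API = "https://mvsep.com/api/separation"   # verify endpoints at mvsep.com/api

# Create a job. 'sep_type' selects the model; look up the current code
# for the vocal/choir model in the docs (the 0 here is a placeholder).
with open("song01.wav", "rb") as f:
    created = requests.post(f"{API}/create",
                            data={"api_token": "YOUR_TOKEN", "sep_type": 0},
                            files={"audiofile": f}).json()
job_hash = created["data"]["hash"]          # response field names assumed

# Poll until the separation is done, then list the stem download links.
while True:
    result = requests.get(f"{API}/get", params={"hash": job_hash}).json()
    if result.get("status") == "done":
        for stem in result["data"]["files"]:
            print(stem["url"])
        break
    time.sleep(10)
```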
I do all the editing in SpectraLayers, where I put each track in its own layer in SLP and finish it before exporting to a transit folder, from which I import into Cubase 14.
I really love editing in SpectraLayers!
The purpose of all this: to be able to offer this show as a restored and revived experience within a concert staging.
I have received great help from ChatGPT with the scenography: the placement of instruments, soloist and choir in both width and depth.
– What a lovely hobby…
I hope this can be of interest to like-minded SpectraLayers users…