@superblonde We must be birds of a feather…
Not just tags - the rest of the problem for me was the need to generate stems, alts, and cutdowns (cuts) according to the specific file naming conventions and audio standards a given client has for each of those artifacts.
I ended up writing a (mostly) automated production pipeline in Python. The first step is to have Dorico (or a DAW) export its master mix and each track as a high-quality .WAV in the normal way, with no special parameters to remember.
Actually, I suppose that is not the true first step. I have a spreadsheet that contains a row for each client's preferences. I also find I can default into that spreadsheet the values of certain tags like the composer (me), default split, etc. I also store my PRO, IPI#, and contact info, which may later be assembled into the comment tags or file names based on whatever the client wants. This is a one-time setup per client though - so for any given piece, I export audio from Dorico and just specify the client when I kick off the script. I probably should make that one combined step from Dorico at some point.
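For anyone curious, the per-client lookup is nothing fancy. A minimal sketch - the sheet layout and column names here are made up for illustration (mine lives in a real spreadsheet, not an inline CSV):

```python
import csv
import io

# Hypothetical client-preferences sheet; the real one is a spreadsheet
# with one row per client holding formats, tags, naming rules, etc.
SHEET = """client,sample_rate,bit_depth,composer,split
AcmeMusic,48000,24,Me,50/50
OtherLib,44100,16,Me,100
"""

def client_prefs(client: str, sheet: str = SHEET) -> dict:
    """Return the preference row for one client as a dict."""
    for row in csv.DictReader(io.StringIO(sheet)):
        if row["client"] == client:
            return row
    raise KeyError(f"no preferences row for client {client!r}")
```

From there the rest of the run just reads values out of that dict instead of hard-coding anything per client.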
The script reads the correct row from the client spreadsheet and the previously exported audio file names/directory structure, and uses that to first create an entry in a tracking spreadsheet before stopping (the first time). I call this first run seeding.
I need to keep track of the pieces anyway, but from this point on the information in that tracking spreadsheet will be used to control the process - so I need to make sure it is right and fill in any gaps.
For Dorico’s standard directory structure of “flows from xxx”, it will have already inserted a new row with xxx as the album name and the flow name as the track name (after cleaning them up a bit), along with the aforementioned client preferences and defaults.
I too thought hard about exporting MusicXML and reading the BPM, but decided it wasn’t worth it right now. I enter BPM, mood, genre, etc. as needed in the tracking spreadsheet and kick off the script again.
For stems, I have an automatic (configurable) way of grouping tracks into stems based on their names. One reason I used Python was the ability to mix those audio tracks together and create new stem audio tracks. Grouping into stems for strings, brass, percussion, etc. is sort of obvious and doesn’t change a lot - if you have an oboe, it always goes in the woodwinds stem.
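The grouping part is basically keyword matching on track names. A sketch of that idea - the keyword map here is invented for the example; the real rules are configurable:

```python
# Map each stem to keywords matched against exported track names.
# Checked in order, so more specific groups can be listed first.
STEM_RULES = {
    "woodwinds": ["flute", "oboe", "clarinet", "bassoon"],
    "brass": ["horn", "trumpet", "trombone", "tuba"],
    "strings": ["violin", "viola", "cello", "bass"],
    "percussion": ["timpani", "snare", "cymbal", "perc"],
}

def stem_for(track_name: str, rules=STEM_RULES) -> str:
    """Return the stem a single track belongs to, or 'other'."""
    name = track_name.lower()
    for stem, keywords in rules.items():
        if any(k in name for k in keywords):
            return stem
    return "other"

def group_tracks(track_names, rules=STEM_RULES):
    """Group a list of track names into stem -> [tracks]."""
    groups = {}
    for t in track_names:
        groups.setdefault(stem_for(t, rules), []).append(t)
    return groups
```

Once the tracks are grouped, mixing each group down to a stem file is just summing the audio of its members.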
Alt mixes are done using similar configurable groups based on track name, except I use a track/instrument naming convention of my own to single out the solo or prominent melody tracks, so the script can create underscore alt mixes and not just the more obvious ‘percussion and bass’ alt mix or whatever. It’s also how I deal with electronic/synth tracks that may fill different roles.
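That convention can be as simple as a name prefix on the lead tracks. A hypothetical sketch (the "LD " prefix is just an example, not my actual convention):

```python
def underscore_mix_tracks(track_names, lead_prefix="LD "):
    """Drop lead/melody tracks (flagged by a hypothetical name prefix)
    so the remaining tracks can be mixed down as the underscore alt."""
    return [t for t in track_names if not t.startswith(lead_prefix)]
```

The same filter run the other way round gives you a leads-only group, which is handy for checking the convention was applied consistently.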
I can explicitly edit the stems and client requested alt mixes for a piece in the tracking spreadsheet. Mostly though I use the preferences that come from the client spreadsheet untouched. I can reseed the tracking spreadsheet if the client changes the standard, or if I wrote a piece not knowing which client might pick it up.
Cuts are another reason I had the script stop the first time. Say you have to deliver the master plus a stinger, a 15-second cut, and a 90-second cut. I enter each of those using time codes in the tracking spreadsheet. Again, Python can take a snippet of the audio, say at the beginning, and concatenate and crossfade it with some other section, say the last bar, to create one of the cuts. I built it so I can specify the location(s) and, if needed, the crossfade - but generally the default crossfade behavior is seamless.
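The crossfade itself is simple once you are down at sample level. A minimal sketch on plain float sample lists (real code would work on the decoded WAV data, per channel):

```python
def crossfade_concat(a, b, fade_len):
    """Concatenate two sample lists with a linear crossfade of
    fade_len samples: the tail of `a` fades out while the head
    of `b` fades in."""
    if fade_len == 0:
        return a + b
    fade = []
    for i in range(fade_len):
        w = i / fade_len  # 0.0 -> approaching 1.0 across the overlap
        fade.append(a[len(a) - fade_len + i] * (1 - w) + b[i] * w)
    return a[:-fade_len] + fade + b[fade_len:]
```

Chaining that over each (start, stop) snippet in order is all a cut really is.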
What I do is create hit points in Dorico and transfer those time code values into the column for a given cut in the spreadsheet as start-stop;start-stop. If we could READ the hit points in Dorico, that would be awesome, and if we could get all the metadata fields… but we may be talking about a niche need here.
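Parsing that start-stop;start-stop column is straightforward - something along these lines:

```python
def parse_tc(tc: str) -> float:
    """Convert 'm:ss' (or plain seconds) into seconds as a float."""
    secs = 0.0
    for part in tc.split(":"):
        secs = secs * 60 + float(part)
    return secs

def parse_cut_spec(spec: str):
    """Parse 'start-stop;start-stop' into a list of (start, stop)
    pairs in seconds, in the order they should be concatenated."""
    ranges = []
    for chunk in spec.split(";"):
        start, stop = chunk.split("-")
        ranges.append((parse_tc(start.strip()), parse_tc(stop.strip())))
    return ranges
```

Each pair then becomes one snippet fed into the crossfade/concatenate step.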
Run the script a second time and it creates however many master, stem, alt, and cut outputs it needs, in whatever audio formats (including AIFF), sample rates, etc., with whatever tags and naming requirements apply per artifact type.
I’d love more access to the metadata in Dorico, but I don’t really want the Dorico team to put a lot of effort into some kind of mega-export dialogue, because I wouldn’t want to enter all the different parameters all the time. Not unless it too remembers all the settings per piece, per artifact type, and per client.
And I kinda like the view you get from the tracking spreadsheet. I run it early on and I have columns in there where I keep track of deadlines, status, etc. Run it again after you’re done, or after any changes. Run it with a flag just for an mp3 submission, or for the whole kit.
Does that spawn any thoughts or improvements you might have in your process?