Metadata and scripting stop-gap

I read a couple of threads, and think I understand the development team’s position on the “when” of exposing a scripting API. It makes sense.

But exploring from the console, Lua still seems to be Lua, which seems right and normal. So… would it be possible to document the objects/variables Dorico uses for the project’s metadata? We don’t know what you know, but it seems reasonable to expect that fields like title, composer, artist, comments, etc. won’t change very much?

The scenario I have in mind is for production music: execute a Lua script to export the audio, then call out to an external program like kid3-cli to auto-update more complete ID3v2.4 metadata in the file, export the metadata to a CSV, or embed metadata in the file names so that a music supervisor can import it into Soundminer or whatever. The “real scenario” is me not having enough minutes left to meet a deadline :slight_smile:
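For what it’s worth, the call-out step could be sketched like this: a small helper that builds a kid3-cli invocation (kid3-cli’s `-c 'set …'` syntax is real; the field names and file name here are just illustrative):

```python
import subprocess

def kid3_command(path, tags):
    """Build a kid3-cli invocation that sets each tag field on the file."""
    cmd = ["kid3-cli"]
    for field, value in tags.items():
        # each -c runs one kid3-cli command, e.g. set title "Cue 01"
        cmd += ["-c", f'set {field} "{value}"']
    cmd.append(path)
    return cmd

cmd = kid3_command("cue01.mp3", {"title": "Cue 01", "artist": "Greg"})
# subprocess.run(cmd, check=True)  # uncomment once kid3-cli is on PATH
```

The same dict of tags could also be written out as a CSV row for the music-supervisor hand-off.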

Greg

You can trigger any command that you can execute in the user interface via the existing Lua API, but it’s not documented, nor do we plan to document it until such time as we are willing to say that the API will not change. If you use the scripting capabilities at the moment, you do so at your own risk and in the knowledge that we do not guarantee to maintain compatibility with existing scripts in a future version. I believe your plan will be scuppered in any case by the fact that you can’t confirm a dialog via the Lua API, only trigger the command to show the dialog.

Thanks, Daniel. The dialog limitation is small compared to having to update metadata in multiple places.

I wonder if you have done this? I ran into the same situation with Dorico Export Audio recently.

It seems there hasn’t been much discussion about metadata or ID3v2 tags for Dorico: tags that would normally be written by an Export Audio feature.

I have only recently had to learn about metadata fields in MP3/WAV in order to submit tracks formally, and I quickly saw that the situation is a minefield of incompatibility: ad hoc information fields which are not very standard at all. Meanwhile, much of the “you do it like this” advice is actually only useful and/or data-compatible with Pro Tools; for example, guidance like “add a well-written Description to the track to summarize the music”, when the Description ID3v1/ID3v2 tag is not standardized in that way at all.

Export Audio (MP3 or WAV) with tag information taken directly from the Dorico project would be the ideal solution, but that is probably an unreasonable or impossible request: again, ID3v2 tags are so ad hoc that such a feature couldn’t cover many situations. The workaround would be some type of external hook script as an “add-on” to the Export process.

The workaround I have used is a set of command-line utilities (which are, remarkably, all terrible with respect to POSIX conventions). I manually run a shell script on the MP3 exported from Dorico. This requires modifying the script to set the correct tag info for each project and/or each export, which is an error-prone process; and tag data must be absolutely correct: no typos allowed, no incorrect titles, etc. Metadata errors can result in a lot of confusion and even, ultimately, missing royalties $$$.

To make this tag workaround automatic rather than manual, I think I would set up a Folder Action in the Finder so that Export Audio from Dorico, into the output folder, triggers my custom tag shell script. However, the only way to get the correct information (Title, Author, Copyright, Composer, etc.) would be directly from the .dorico file… (requiring XML parsing, a much heavier utility than a few simple ID3v2 utility commands; it is possible, but also unsupported).

Perhaps Dorico could include a standalone command-line utility to output information from .dorico files (i.e. “DoricoProjectUtility --print-copyright --print-composer --print-bpm myScore.dorico”). Then an external script could use this utility to read the Project Info from the .dorico file, and ultimately use that information to set tags.

It really is quite important to make this process as automatic as possible to eliminate tag errors and metadata typos, for reasons mentioned above. (Note: the BPM tag is especially problematic.)

@superblonde We must be birds of a feather… :slight_smile:

Not just tags - the rest of the problem for me was the need to generate stems, alts, and cutdowns (cuts) according to the specific file naming conventions and audio standards a given client has for each of those artifacts.

I ended up writing a (mostly) automated production pipeline in Python. The first step is to have Dorico (or a DAW) export its master mix and each track as a high-quality .WAV in the normal way, with no special parameters to remember.

Actually, I suppose that is not the true first step. I have a spreadsheet that contains a row of preferences for each client. I also find I can set defaults in that spreadsheet for certain tags, like the composer (me), the default split, etc. I also store my PRO, IPI #, and contact info, which may later be assembled into the comment tags or file names based on whatever the client wants. This is a one-time setup per client, though; so for any given piece, I export audio from Dorico and just specify the client when I kick off the script. I should probably make that one combined step from Dorico at some point.

The script reads the correct row from the client spreadsheet and the previously exported audio file names/directory structure, and uses that to first create an entry in a tracking spreadsheet before stopping (the first time). I call this first run seeding.

I need to keep track of the pieces anyway, but from this point on the information the tracking spreadsheet contains will be used to control the process, so I need to make sure it is right and fill in any gaps.

For Dorico’s standard directory structure of “flows from xxx”, it will have already inserted a new row with xxx as the album name and the flow name as the track name (after cleaning them up a bit), along with the aforementioned client preferences and defaults.
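That seeding step might look roughly like this; the “Flows from &lt;project&gt;” folder convention and the clean-up rule are assumptions based on the description above:

```python
from pathlib import Path

def seed_entry(wav_path):
    """Derive album (project) and track (flow) names from an export path,
    assuming Dorico's 'Flows from <project>' folder convention."""
    p = Path(wav_path)
    album = p.parent.name.removeprefix("Flows from ").strip()
    track = p.stem.replace("_", " ").strip()  # light clean-up of the flow name
    return {"album": album, "track": track}
```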

I too thought hard about exporting MusicXML and reading the BPM, but decided it wasn’t worth it right now. I enter BPM, mood, genre, etc. as needed in the tracking spreadsheet and kick off the script again.

For stems, I have an automatic (configurable) way of grouping tracks into stems based on their names. One reason I used Python was the ability to mix those audio tracks together and create new stem audio tracks. Grouping into stems for strings, brass, percussion, etc. is sort of obvious and doesn’t change a lot: if you have an oboe, it always goes in the woodwinds stem.
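A minimal sketch of that name-based grouping (the keyword lists are illustrative, not an actual configuration):

```python
STEM_GROUPS = {
    "woodwinds": ["flute", "oboe", "clarinet", "bassoon"],
    "brass": ["horn", "trumpet", "trombone", "tuba"],
    "strings": ["violin", "viola", "cello", "bass"],
    "percussion": ["timpani", "snare", "cymbal", "drum"],
}

def stem_for(track_name):
    """Map an exported track to its stem group by keyword in the name."""
    name = track_name.lower()
    for stem, keywords in STEM_GROUPS.items():
        if any(k in name for k in keywords):
            return stem
    return "other"  # anything unmatched falls through to a catch-all stem
```

In a real configuration the dict would live alongside the client preferences so each client’s stem layout can differ.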

Alt mixes are done using similar configurable groups based on track name, except I use a track/instrument naming convention of my own to single out which are solo or prominent melody tracks, so I can have the script create underscore alt mixes and not just the more obvious ‘percussion and bass’ alt mix or whatever. It’s also how I deal with electronic/synth tracks that may fill different roles.

I can explicitly edit the stems and client requested alt mixes for a piece in the tracking spreadsheet. Mostly though I use the preferences that come from the client spreadsheet untouched. I can reseed the tracking spreadsheet if the client changes the standard, or if I wrote a piece not knowing which client might pick it up.

Cuts are another reason I had the script stop the first time. Say you have to deliver the master plus stinger, 15-second, and 90-second cuts. I enter each of those using time codes in the tracking spreadsheet. Again, Python can take a snippet of the audio, say at the beginning, and concatenate and crossfade it with some other section, like say the last bar, to create one of the cuts. I built it so I can specify the location(s) and, if needed, the crossfade, but generally the default crossfade behavior is seamless.
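The concatenate-and-crossfade idea, sketched on raw sample lists (a real script would operate on decoded audio via an audio library; this just shows the linear crossfade math):

```python
def crossfade(a, b, fade_len):
    """Join sample lists a and b, linearly crossfading over fade_len samples."""
    head, tail = a[:-fade_len], a[-fade_len:]
    # ramp a's tail down while ramping b's head up, sample by sample
    faded = [
        t * (1 - i / fade_len) + s * (i / fade_len)
        for i, (t, s) in enumerate(zip(tail, b[:fade_len]))
    ]
    return head + faded + b[fade_len:]
```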

What I do is create hit points in Dorico and transfer those time code values into the column for a given cut in the spreadsheet as start-stop;start-stop. If we could READ the hit points in Dorico, that would be awesome, and if we could get all the metadata fields… but we may be talking about a niche need here.
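Parsing that start-stop;start-stop column is straightforward; here is a sketch assuming mm:ss time codes (the exact time code format is an assumption):

```python
def parse_cuts(spec):
    """Parse a 'start-stop;start-stop' cut column with mm:ss time codes.
    Returns a list of (start_seconds, stop_seconds) pairs."""
    def secs(tc):
        m, s = tc.split(":")
        return int(m) * 60 + float(s)
    return [tuple(secs(t) for t in span.split("-")) for span in spec.split(";")]
```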

Run the script a second time and it creates however many master, stem, alt, and cut outputs it needs, in whatever audio formats (including AIFF), sample rates, etc., with whatever tags and naming requirements apply per artifact type.

I’d love more access to the metadata in Dorico, but I don’t really want the Dorico team to put a lot of effort into some kind of mega-export dialog, because I wouldn’t want to enter all the different parameters all the time. Not unless it, too, remembered all the settings per piece, per artifact type, and per client.

And I kinda like the view you get from the tracking spreadsheet. I run it early on and I have columns in there where I keep track of deadlines, status, etc. Run it again after you’re done, or after any changes. Run it with a flag just for an mp3 submission, or for the whole kit.

Does that spawn any thoughts or improvements you might have in your process?

That is a very thorough music-making process.

I haven’t had to tackle combinations of alternate stems/mix yet. My only post-Export mixing step is to run a Normalization utility on the master mix .WAV so that I don’t have to artificially boost the compressor FX or worry about volume vs. dynamic range (I scripted ffmpeg to do this, in a two-pass run). When I start having to deliver stems, I will probably have to also Normalize the Exported stems by the same dB, to match the master mix.
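For reference, a two-pass ffmpeg loudnorm run can be driven like this; the loudness targets are just example values, and in a real script pass 1’s JSON output (printed on stderr) has to be parsed to supply the measured values for pass 2:

```python
def loudnorm_passes(infile, outfile, i=-16, tp=-1.5, lra=11, measured=None):
    """Build the two ffmpeg invocations for a two-pass loudnorm run.

    With measured=None this builds pass 1 (analyze only, JSON stats);
    otherwise it builds pass 2, applying the measured loudness values.
    """
    base = f"loudnorm=I={i}:TP={tp}:LRA={lra}"
    if measured is None:
        # Pass 1: discard output, print measured stats as JSON
        return ["ffmpeg", "-i", infile, "-af", base + ":print_format=json",
                "-f", "null", "-"]
    opts = ":".join(f"measured_{k}={v}" for k, v in measured.items())
    # Pass 2: apply linear normalization using the pass-1 measurements
    return ["ffmpeg", "-i", infile, "-af", f"{base}:{opts}:linear=true",
            outfile]
```

For matching stems to the master, the same measured offset could be applied to every stem rather than normalizing each one independently.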

For the cut-downs, if Rehearsal Marks were placed in the score at the cut-down points (or Section Names, when those exist in the future), then a Steinberg utility like my hypothetical DoricoProjectUtility could output the timestamps from the .dorico project (i.e. “DoricoProjectUtility --print-rehearsal A myCue.dorico” would output the location of [A] in bars, beats, and mm:ss format; repeat the same for [B] and then create the start;stop from those timestamps). Repeat that process for various locations in the score to create 15-second+stinger, 90-second+stinger, etc. These locations could probably be found now by an advanced custom tool looking into the .dorico XML directly, but that could be trouble since it is not supported.
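Turning a bars-and-beats location into seconds is simple arithmetic if the tempo and meter are constant (a real score with tempo changes would need the utility itself to do the mapping):

```python
def bar_to_seconds(bar, beat, bpm, beats_per_bar=4):
    """Convert a 1-based (bar, beat) location to seconds.
    Assumes a constant bpm and a constant meter throughout."""
    total_beats = (bar - 1) * beats_per_bar + (beat - 1)
    return total_beats * 60.0 / bpm
```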

Since you already have your custom utility, I suppose the way to make it automatically “triggered” is to have it always running, watching a given directory for new Dorico Export Audio files; then it could process the audio file immediately. A similar final result to having a Lua hook.

It’s surprising that you are creating the cut-downs with an automatic crossfade/concatenation. I wouldn’t have guessed to do that.

It is probably important to directly mention the incredible time savings these automations provide, in addition to eliminating lots of typos/errors ($$$). A typical composer/engraver might work on a single score project for weeks or months. For these cue projects, the work is writing one, two, or even three complete new scores per day (“new” meaning at least variations on themes) during work crunches, and delivering finalized music files within days as well. So it is important to streamline the administrative-type work, because there are so many new .dorico files and new exported WAVs, with specific filename needs for delivery, and all sorts of little details, etc.