Possible AI uses for Dorico?

I’m not sure how this would be implemented, but it would be crazy if Dorico had AI that could analyze your melody, for example, and auto-generate dynamics (soft to loud, crescendos, decrescendos, whatever) for it. Damn, something like that would save so much time and money too.

Then from there, tweak, customize, playback to test, done.

Print.

Give to hooman.

Get drunk and move on with life. :tumbler_glass:

There’s any number of different ways that you could assign dynamics to a given melody.

Dorico already has pitch contour emphasis and polyphonic voice balancing.
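Very roughly, that kind of playback option boils down to something like the sketch below. This is only an illustration of the general idea of pitch-contour emphasis, not Dorico’s actual algorithm; the function name, base velocity and scaling factor are all invented assumptions.

```python
# Illustration only: the general idea behind "pitch contour emphasis"-style
# playback options, i.e. nudging loudness up as the line rises and down as it
# falls. The scaling factor is an assumption, not Dorico's actual algorithm.

def contour_emphasis(pitches: list[int], base_velocity: int = 72,
                     strength: float = 0.5) -> list[int]:
    """Return one velocity per note, offset by how far the pitch sits
    above or below the phrase's average pitch."""
    mean_pitch = sum(pitches) / len(pitches)
    velocities = []
    for pitch in pitches:
        offset = (pitch - mean_pitch) * strength
        velocities.append(max(1, min(127, round(base_velocity + offset))))
    return velocities

# e.g. contour_emphasis([60, 64, 67, 72, 67])
# -> velocities rise toward the top note, then ease off again
```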


I know. But I still want my life to be a lot easier.
:grin: :victory_hand:

I have a hard time imagining an AI accurately guessing an artistic choice like dynamics. I could see it cleaning up a piano part played in via MIDI (splitting hands, assigning voices, quantizing starts and ends). A rough sketch of that mechanical side follows below.
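That cleanup is fairly mechanical even without AI. Here’s a very rough sketch of the split-and-quantize step; the fixed middle-C split point, the 16th-note grid and the function names are just assumptions for illustration, not how any particular program actually does it.

```python
# Rough sketch of mechanical MIDI cleanup: split a played piano part into
# hands at a fixed split point and snap note starts to a grid.
# The split point (MIDI 60 = middle C) and 16th-note grid are assumptions.

SPLIT_POINT = 60          # middle C
GRID = 0.25               # quarter of a beat = a 16th note in x/4 time

def quantize(beat: float, grid: float = GRID) -> float:
    """Snap a note-start position (in beats) to the nearest grid line."""
    return round(beat / grid) * grid

def split_hands(notes: list[tuple[float, int]]) -> tuple[list, list]:
    """notes: (start_beat, midi_pitch) pairs. Returns (right_hand, left_hand)."""
    right = [(quantize(start), pitch) for start, pitch in notes if pitch >= SPLIT_POINT]
    left = [(quantize(start), pitch) for start, pitch in notes if pitch < SPLIT_POINT]
    return right, left

# e.g. split_hands([(0.02, 48), (0.01, 72)]) -> both starts snap to beat 0,
# pitch 72 goes to the right hand, pitch 48 to the left.
```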


All that kind of stuff should already have been taken care of in Cubase. In my case, notation programs are not first in line for my work; the DAW is. For those who take a different approach, then yes, what you mentioned is also very useful.

AI analysis can be very good, but not perfect. That’s why I originally said it would be cool to have dynamic lines auto-generated, with p, mf, ff, whatever, all auto-placed between them, which you can then tweak and customize to your real intent. As opposed to starting from scratch, which is a real time-waster for me, because I have other, more important priorities to take care of.

Unless, of course, MIDI export from Cubase allows expression automation data to be converted into dynamics when Dorico opens the file. As of now, I don’t think that’s possible.

Muse Group is hard at work promising AI tools for MuseScore 5 to be released last year. Uh… this year, maybe? I took a meeting with them at NAMM in January on this where all sorts of vague promises were made.

Since it will be free as they push it to be the de facto notation app for the world’s largest sheet music publisher, we will all get a chance to try it and see if AI actually enhances the user experience.

Not holding my breath… nor will I be among the crowd clamoring for it in Dorico.


Ah, interesting. Haha, well, yeah. Whether it’s AI or not, the same principle applies… feedback and research consolidation. So yeah, I can understand why you’re not holding your breath, in more ways than one. Haha.

I’m seeing two if not three pretty different ideas being discussed here.

(1) To auto-generate dynamics from a melody alone (entered with the mouse/keyboard?), without any additional context, intrinsically requires that Dorico decide for the user what a “loud” or “soft” melody is. (This is how I interpreted your original post.) I don’t think I’m alone here in saying not only that this isn’t workable, but that it’s an aspect of AI I do not want to see in Dorico - but then, I’m not keen on generative AI anywhere.
(2) Unless you’re entering the melody using a MIDI keyboard, in which case I can see dynamics being added based on how loudly you’re playing on the keyboard - but then we have additional context beyond the notes alone. (This is what @drewnichols was referring to.)
(3) If you’re importing from a MIDI file or a DAW, then the question of reproducing the dynamics of the sound file does arise. (This is how I understood your reply to @drewnichols.)

Possibilities (2) or (3) I could see worked out, but the key is that we’re not just analyzing the melody on its own.
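Just to make (2) concrete, the non-AI core of it would be something like the sketch below: bucket the recorded key velocities into dynamic marks, with a little smoothing so one accent doesn’t skew a quiet phrase. The thresholds, window size and function names are invented for illustration; this is not anything Dorico actually does.

```python
# Hypothetical sketch: bucket MIDI note velocities (0-127) into dynamic marks.
# Thresholds are arbitrary assumptions, not Dorico's actual mapping.

DYNAMIC_BUCKETS = [
    (32, "pp"), (48, "p"), (64, "mp"), (80, "mf"),
    (96, "f"), (112, "ff"), (127, "fff"),
]

def velocity_to_dynamic(velocity: int) -> str:
    """Return the first dynamic whose upper velocity bound covers the value."""
    for upper_bound, mark in DYNAMIC_BUCKETS:
        if velocity <= upper_bound:
            return mark
    return "fff"

def suggest_dynamics(velocities: list[int], window: int = 4) -> list[str]:
    """Average velocities over a small trailing window so one accented note
    doesn't produce a spurious 'f' in an otherwise quiet phrase."""
    marks = []
    for i in range(len(velocities)):
        chunk = velocities[max(0, i - window + 1): i + 1]
        marks.append(velocity_to_dynamic(round(sum(chunk) / len(chunk))))
    return marks

# e.g. suggest_dynamics([40, 44, 50, 90, 52]) -> mostly "p"/"mp", no stray "f"
```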


No, you’re absolutely right. How AI is going to analyze a melodic line and guess what the dynamics are, I have no freakin’ idea. Haha. For those who are solely using notation programs, I’m sure they’ve got their quick ways of getting this done.

But the problem is that right now, for DAW users like me, for whom notation programs are merely a way to hand a human player a sheet of music (while I have to worry about other, more important stuff), MIDI exports from Cubase, for example, still can’t factor in the volume, modulation and expression automation on my melodic lines in Cubase. That’s where the real, accurate intent is. So in terms of “accurate intent” for dynamics, it would be great if the MIDI export itself carried this data, and Dorico could then automatically convert the MIDI automation data into dynamics, visually complete with hairpin lines and text (p, mf, f, fortissimo, etc.). Then allow us to further tweak if needed.
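To spell out what I mean, the conversion itself seems conceptually simple, something like the sketch below: read an expression lane and emit level marks plus hairpins where the level trends up or down. The choice of CC11, the thresholds and the output form are all my assumptions for illustration; this is not an existing Cubase or Dorico feature.

```python
# Hypothetical sketch of the automation-to-dynamics idea: turn a CC11
# (expression) lane into dynamic marks, with hairpins between level changes.
# CC choice, thresholds and output format are illustrative assumptions only.

from dataclasses import dataclass

LEVELS = [(31, "pp"), (47, "p"), (63, "mp"), (79, "mf"),
          (95, "f"), (111, "ff"), (127, "fff")]

def cc_to_level(value: int) -> str:
    for upper, mark in LEVELS:
        if value <= upper:
            return mark
    return "fff"

@dataclass
class DynamicEvent:
    beat: float
    text: str   # e.g. "mf" or "hairpin <"

def automation_to_dynamics(points: list[tuple[float, int]]) -> list[DynamicEvent]:
    """points: (beat, CC value) pairs, sorted by beat."""
    events: list[DynamicEvent] = []
    last_mark = None
    for i, (beat, value) in enumerate(points):
        mark = cc_to_level(value)
        if mark != last_mark:
            if last_mark is not None:
                # Rising level -> crescendo hairpin into the new mark; falling -> diminuendo.
                direction = "<" if value > points[i - 1][1] else ">"
                events.append(DynamicEvent(points[i - 1][0], f"hairpin {direction}"))
            events.append(DynamicEvent(beat, mark))
            last_mark = mark
    return events

# e.g. automation_to_dynamics([(1.0, 40), (2.0, 70), (3.0, 100)])
# -> p at beat 1, hairpin <, mf at beat 2, hairpin <, ff at beat 3
```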

Not sure if any of that is even possible now…

Talking about adding AI to Dorico presupposes that this is where the strength of the Development Team lies, or that they can easily integrate such programming capabilities into the current Team.

Even if one supposes that this is possible and desirable (which at some point it may well be), it raises the question of where a (legally accessible) existing large language model and training data exist for the AI to plug into and learn the desired techniques.


Give any two humans the same sheet of music and they will play it differently, even with detailed dynamic marks.


Precisely, so it’s not like requests or wishes like mine are unrealistic.

Yes, but they still need basic guidance on what’s needed overall before their individual interpretations come in, right?

Not necessarily unrealistic, but perhaps impractical at this stage.

Yes. That is precisely what the written dynamics give them.
Also remember, the same players will play a piece louder or softer depending on the performance venue.

Welp, like I said, it doesn’t have to have a sixth sense or be perfect. Just a sensible gauge to auto-generate from, which we can then tweak.

In Cubase, the MIDI Export window has an option to also export the automation. It would make perfect sense for Dorico to then be able to convert that automation data into dynamic points and generate hairpins with text. Now, I know it’s probably easier said than done. But still, wouldn’t this be a great help to many DAW-based folks?

In my line of work, it’s usually a small studio for individual instruments (if there’s a venue involved at all). For large orchestras, I already know I need that kind of sound, so all of this is pretty much decided before the sheets are even printed.

I deal in final recorded products, not live sound, so volume concerns are irrelevant for my case.

Frankly, if the music is to be played by human performers, this ‘mere task’ should be somewhere at the very top of your priorities list.

Eh, not really. I’m a film composer. My main priorities are the many things to do with picture and story, and then guiding the musicians (if any) on how they need to perform. I’d also rather not conduct, and would rather be in the booth with the director in case the director wants last-minute changes or tweaks. Sheet preparation is a time-consuming task I’d rather leave to another hire, or, if I can’t afford to hire, it should be much more of a breeze for me to handle alone. As of now, it still isn’t.

Delegating music preparation is a perfectly valid way of giving it its due priority, I think.

What I took offense at in the quoted bit was what I perceived as somewhat of a disparagement of the whole process of preparation, which, as a professional, I consider vital to the success of the music. My apologies if I misread you in that regard.