Artificial intelligence for fingerings and bowings

And Google Translate only works up to a point, after which it may make wild guesses. :laughing:


@pianoleo @Derrek @DanMcL Yes, I agree that these are limitations I didn’t take into account.

Yes, that’s the problem with DNNs: they’re smart but not intelligent, and we haven’t cracked that nut yet. When they fail, it’s often a spectacular failure. When a human is wrong (there are many exceptions), we tend to fail more gracefully, i.e. less badly, or at least within the ballpark.

@Janus, could you please tone down your personal remarks a bit? Discussion and even disagreement over musical topics is welcome and well-understood, but please don’t start calling each other names on the forum.


Some elaboration, please. :slight_smile:

I’m not sure there’s much to say, really! I seem to have had a spate of piano reduction jobs recently. Some textures lend themselves to judicious use of the Reduce function (or just copy and paste) and others don’t translate to piano at all.


Sorry, I did not fully read pianoleo’s reply, but as I have already written my post I may as well leave it.

This is not possible in any real sense. If you look at editions of piano music by expert editors, they can have different fingerings: for instance, a double trill in a Chopin piece is marked 14/25 in one famous edition but is probably played 13/24. I was the hire librarian for an international publisher, and different orchestras had their own sets of parts for the same work. I once watched two world-class classical saxophone players have a discussion about whether a trill should be a semitone or a whole tone.
Once I had a piano lesson with a concert pianist before recording one of my pieces; the first thing he did was change the fingering in a semiquaver phrase.
I really can’t understand why people want to replace years of experience and expertise with AI. And I am going to spare you a rant about software that corrects MIDI notes while you improvise.
(Post edited to correct spelling mistake.)


Aside from the AI issues, an online collection of scores with performance indications by musicians, famous and otherwise, would be a very valuable resource. Many musicians mark in fingering and other indications meticulously. I remember seeing a page from Wanda Landowska’s own copy of the WTC with every fingering (some showing signs of multiple corrections) carefully written in.

Whether or not AI will ever be up to the task of competently fingering keyboard music, for example, is an interesting question. The rules involved are tremendously complex. But certainly an app that could analyze one’s own fingering preferences from a large sample of one’s own fingering examples and then apply them to other music would be extremely helpful.
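To make the idea concrete, here is a minimal sketch of what "learning from a sample of one's own fingerings" could look like: count which finger transition a player uses for each melodic interval, then suggest fingerings for new passages from those counts. All function names and the toy data are hypothetical illustrations, not any real app or algorithm.

```python
# Hypothetical sketch: model a pianist's preferences as counts of
# "finger f1 moving by interval i goes to finger f2".
from collections import Counter, defaultdict

def learn_preferences(annotated_passages):
    """annotated_passages: list of [(midi_pitch, finger), ...] sequences."""
    prefs = defaultdict(Counter)
    for passage in annotated_passages:
        for (p1, f1), (p2, f2) in zip(passage, passage[1:]):
            interval = p2 - p1              # signed interval in semitones
            prefs[(f1, interval)][f2] += 1  # tally this player's choice
    return prefs

def suggest_fingering(pitches, start_finger, prefs):
    """Greedy suggestion: pick the player's most common next finger."""
    fingers = [start_finger]
    for p1, p2 in zip(pitches, pitches[1:]):
        choices = prefs.get((fingers[-1], p2 - p1))
        fingers.append(choices.most_common(1)[0][0] if choices else fingers[-1])
    return fingers

# Toy sample: an ascending C major fragment fingered 1-2-3-1-2.
sample = [(60, 1), (62, 2), (64, 3), (65, 1), (67, 2)]
prefs = learn_preferences([sample])
print(suggest_fingering([60, 62, 64, 65, 67], 1, prefs))  # [1, 2, 3, 1, 2]
```

A real tool would of course need far more context (hand position, black vs. white keys, phrasing), which is exactly where the complexity mentioned above comes in.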



That’s a good point. There is a project called OpenScore, founded by IMSLP and MuseScore, which raised over €51,000 to digitize and liberate public-domain sheet music. In the picture above you can see some of the interactive orchestral pieces, which are available for free because it’s a crowdsourcing platform. Even more are in the making, still waiting to be completed and reviewed. Everyone is welcome to contribute their own copies to the platform; they don’t need to be complete.

Machine learning isn’t exactly statistics. So far, what’s been talked about is giving it a training set without labels (unsupervised learning), i.e. no criteria or guidance telling the machine what you consider good or bad. It has no reason to prefer one result over another; it hasn’t been told that phrases are a thing, or about the limitations and characteristics of a bow, etc. It might take an inexplicable liking to emphasizing A# all the time, who knows? Instead of sounding “average or typical”, without guidance and with so many variables I think it is more likely to sound like an experimental piece composed with dice.

We’d have to, at a minimum, identify our goals. That’s sounding kind of hard to come to agreement on here. :slight_smile: But I suppose we could have different goal sets to choose from.

Where might it obtain data on the physical aspects of using the bow at any moment (length of the bow, speed of the bow, etc.)?

IF it worked in an acceptable way… I think you’d want to run it in the cloud, though, using something like TensorFlow: you’d want substantial computing resources, and it would make much more sense to me to run it outside of Dorico, like one of the online mastering services, or the way Siri and Alexa work with a cloud connection.