Dorico Pro 5 is an excellent program for notating music, but playback uses sampled sound. That sounds dead and mechanical and does not match the nature of (classical) music. I wonder whether AI could be used here: AI analyses the music and, drawing on thousands of examples, generates the right tone while taking the previous and following notes into account. AI could imitate a human musician and thus play back a much more realistic orchestra. What possibilities exist in this area?
Perhaps try Noteperformer? Nothing dead and mechanical about that.
Where do I find Noteperformer?
I think the OP hints at ‘Künstliche Intelligenz’, i.e. AI to compose, not a VST like Noteperformer.
Dorico actually does use KI (or AI in English) for appropriate dynamic balance in several respects, through various settings in Playback Options/Dynamics, which you might like to check out. The other thing that might arguably count as KI in Dorico is the automation in the Expression Maps, which can be set to switch automatically between patches depending on the length of the note and can be very effective.
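To make the note-length idea concrete, here is a minimal sketch of that kind of automation. The thresholds, patch names, and the `choose_patch` function are all assumptions for illustration; they are not Dorico's actual Expression Map values or API.

```python
# Hypothetical sketch: pick an articulation patch by note length.
# Durations are in beats; the thresholds below are invented, not
# Dorico's real defaults.

def choose_patch(duration_beats,
                 thresholds=((0.25, "staccato"), (1.0, "portato"))):
    """Return a patch name for a note of the given length.

    Notes at or below each threshold get the matching short
    articulation; anything longer falls through to "sustain".
    """
    for limit, patch in thresholds:
        if duration_beats <= limit:
            return patch
    return "sustain"

# A short phrase: sixteenth, eighth, half, whole (in beats).
notes = [0.25, 0.5, 2.0, 4.0]
print([choose_patch(d) for d in notes])
# → ['staccato', 'portato', 'sustain', 'sustain']
```

The point is only that the switching is rule-based rather than "intelligent" in any deep sense, which is why it can still be very effective once tuned.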
The main determining factor in whether a library sounds “dead”, though, is how well it has been programmed and sampled in the first place, and as a very general rule, the more money you pay, the better the libraries are. You can’t expect great results from the libraries supplied free with Dorico (although Iconica is by no means bad compared to most of the alternatives from other notation software vendors, I would say), and you will need to invest some money in alternatives if you really want a somewhat realistic result. NotePerformer is quite good at interpreting the musical line because it has a one-second read-ahead in which it tries to analyse what’s going on in the score, but it has some limitations of tone (especially in the strings) because of its limited sample depth.
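For readers unfamiliar with how a read-ahead helps, here is an illustrative sketch in the spirit of that idea. The window size, the base velocity, and the crescendo heuristic are all assumptions made up for the example; this is not NotePerformer's actual algorithm.

```python
# Illustrative look-ahead sketch: shape note velocities by peeking at
# the upcoming notes, rather than playing each note in isolation.

def shape_dynamics(pitches, window=4):
    """Nudge each note's velocity up when the line is rising.

    Looks up to `window` notes ahead (including the current one); if
    the local peak is higher than the current pitch, lean into the
    line - a toy crescendo heuristic.
    """
    velocities = []
    for i, pitch in enumerate(pitches):
        upcoming = pitches[i:i + window]
        peak = max(upcoming)
        # Base velocity 80, plus a capped nudge toward the coming peak.
        velocities.append(80 + min(peak - pitch, 20))
    return velocities

line = [60, 62, 64, 67, 65, 60]  # MIDI pitches of a short phrase
print(shape_dynamics(line))
# → [87, 85, 83, 80, 80, 80]
```

A per-note player sees only one pitch at a time; the look-ahead version knows a phrase is climbing toward its peak and can shape the crescendo accordingly, which is roughly the benefit of NotePerformer's delayed playback.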
An increasing number of people, myself included, like combining the NotePerformer analysis with a decent third-party library. If you think the work I recently posted sounds dead and mechanical, for instance (I mean the reproduction – you can think what you like of the music), then you’ll probably need to do your mock-ups in a DAW and spend a lot of time and money!
OP specifically asks for playback based on AI, not composition.
There is no way that AI will, any time soon, generate a more convincing performance of individual instruments than current orchestral samples do, simply for resource reasons. Training data for this is extremely expensive (imagine recording every instrument, and everything it can play, several times in several rooms under extremely controlled conditions), and even rarer and more expensive are people with a good enough ear to judge the preliminary results a model generates. This is not something you can outsource to large companies that hire thousands of badly paid workers to click on images of cars all day; you need people who can judge whether something sounds more like a violin than a viola, with correct phrasing and believable transitions.
I’ve recently spoken to the CEO of a large sample developer who had been approached by a start-up asking for all their samples to train a model; he said the result was light-years away from being anywhere near usable, despite being fed an already insanely large amount of recorded samples.
I’m afraid you’ll need to rephrase that as I don’t understand what you mean.
Of course it’s undoubtedly true that for the foreseeable future, samples will continue to produce the best results – at least in terms of bearing some resemblance to actual instruments. That’s why NotePerformer developed plug-ins for leading libraries as the combination is stronger than NP’s sample-light approach on its own.
That was targeted at @PjotrB, who wasn’t sure what the Original Poster (OP) meant regarding Artificial Intelligence; I just wanted to clarify that he asked about playback, not about AI-assisted composition.
My bad, apologies for the irrelevant answer. I simply read the OP too hastily. I can read German, after all… but in this case I missed the whole point🥴 Entschuldigung!
I know, I do understand German. But, they also state that Dorico sounds dead and mechanical, which is something Noteperformer can alleviate to some extent.
Before any misunderstandings arise: sampled sound is fine for many instruments such as piano or drums, but not for wind instruments and string instruments. A human musician puts feeling into this and plays the music as the nature of the piece requires. This is especially important with classical music. Sampled sound lacks this feeling, so a playback in Dorico never sounds like a live orchestra. My idea was to use AI to generate sound for these instruments, based on many examples that the AI was fed with. My question was whether this is possible in any way. So it’s not so much about dynamics, but about imitating the soulful tone of an instrument.
Ok, thank you, Robin.
If AI were able to replace sampled instruments, it would have been done by now. As @RobinHoffmann has said, we’re still a long way off, for string instruments in particular. That basically answers your question.
Although some argue that Arne Wallander could have developed his largely modelling-based NotePerformer to a higher level of realism, he himself felt it was not feasible; hence the development of the playback engines for sampled libraries, which introduce a certain level of what one might call AI into the process. The results with the better sample libraries may not yet sound quite like a live orchestra, but they’re far closer than you might think. Samples themselves don’t actually need to be sterile - it all depends on how they’re recorded and edited.
Ok, I understand. Thank you, dko22.
Thank you for your answer. Where can I download NotePerformer?
Surprisingly, under https://www.noteperformer.com/
Thank you.