UI responsiveness is really bad

Sadly, almost every action that alters the score is very slow on my machine, even though the CPU mostly sits at around 35%.
My rig: Intel Core i5-4460 (4x 3.2 GHz), 16 GB RAM, Samsung SSD, Windows 10 Pro (August Update), Steinberg UR22, Nvidia GeForce GTX 960, 2x FHD screens

Edit: I’ve now tried creating a new project and it runs fine; not really smooth, but you can work.
It seems that imported MusicXML files are slowing down the application.
The moment I finished a test sheet, Dorico crashed and now everything is gone… Is there a backup folder?

I just want to add my voice saying the interface is painfully slow.

I have a 32-core machine with 64 GB of RAM, well beyond the requirements.
Things are very slow, especially in larger scores.

I had the Modern Orchestra template open, entered around 20 notes into Flute 1, then pressed undo 20 times; each undo took around 0.5 s, which is quite slow. I’m using an i7-5820K, 64 GB RAM, Windows 7.

[Attachment: Octave shift!.gif]
Here’s an example of selecting a bunch of notes and pressing Shift + Up once to move them up one octave, and then Shift + Down. The gif in the attachment shows EXACTLY what happens on my machine; obviously I didn’t slow it down or anything… :confused:
The notes WERE imported from Sibelius…

That’s kinda a bummer… :neutral_face:

Cheers,
Benji

Update: I just tried the same thing with a tiny score input from scratch. Same result.

The fact that repitching or transposing a series of selected notes is slow is certainly not ideal, but it’s an issue well known to us (I’ve explained in detail elsewhere on the forum why it’s slow at the moment), and it’s a high priority for us to optimise, along with a number of other improvements to the responsiveness of the interface.

“This is quite quick”: the same action takes two to three times as long on my system…

We have already made quite a few improvements to responsiveness during note input and navigation. There’s plenty more scope for further improvements as we progress.

Same issue: significant lag when moving a dozen notes up or down with the Alt key in a very, very small project. I have an XPS 15 9550 with an i7 and 16 GB. Disappointing.

It’s very early days - this is the kind of common use case that we will get round to fixing.

My opinion: when responsiveness has to be “improved” after the fact, it will never be optimal. Responsiveness should be the result of good design from the ground up; when responsiveness is slow, IMO it’s the result of poor design from the ground up.

State-of-the-art computers and intelligent multi-threading today allow for instantaneous responsiveness, regardless of what the program does in the background.

Try this simple test with the Windows Task Manager:

Place a program with a good threading design side by side with Task Manager on your monitor(s), then perform an action in that program which requires many parallel threads. You can see in Task Manager that the program’s thread count quickly increases, because it uses a well-managed “thread pool” in which many computing steps are carried out in parallel threads.

Then make the same test with Dorico and move a couple of notes up and down: you can see that the number of threads in Dorico does not increase, which suggests that Dorico does not use much parallelization.
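For what it’s worth, here is a minimal C++ sketch of the kind of fan-out that test shows for a program that does use a thread pool; the chunking scheme and names are purely illustrative and say nothing about Dorico’s internals:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <future>
#include <numeric>
#include <thread>
#include <vector>

double process_chunk(const std::vector<double>& data, std::size_t begin, std::size_t end) {
    // Stand-in for any independent, parallelisable piece of work.
    return std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
}

double process_in_parallel(const std::vector<double>& data) {
    const std::size_t workers = std::max<std::size_t>(1, std::thread::hardware_concurrency());
    const std::size_t chunk = (data.size() + workers - 1) / workers;

    std::vector<std::future<double>> futures;
    for (std::size_t begin = 0; begin < data.size(); begin += chunk) {
        const std::size_t end = std::min(begin + chunk, data.size());
        // std::launch::async runs each chunk on its own thread, which is
        // what shows up in Task Manager as a jump in the thread count.
        futures.push_back(std::async(std::launch::async, process_chunk,
                                     std::cref(data), begin, end));
    }

    double total = 0.0;
    for (auto& f : futures)
        total += f.get();   // join point: results are combined serially
    return total;
}
```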

Also getting a lot of lag here. Adding players takes a few seconds (it seems to be waiting for HALion to load samples each time). Taking players in and out of flows takes a few seconds (and feels somewhat buggy). Switching between Write/Engrave/Play modes takes 1-2 seconds. Using the arrow keys to move left or right takes about half a second. Saving takes 6-7 seconds. Clicking and dragging to make a selection is extremely slow. Overall everything feels very slow.

OSX 10.10.5
2 X 2.93 GHz Quad-Core Intel Xeon
32 GB RAM

Hmmm, to be fair, I don’t think the performance problems are due to a lack of parallelization. I’ve done that test on my OS X machine, and using htop I can clearly see how the load is shared between the 4 cores of my i7.

I remember at my company how the performance of a specific app was horrible, and the solution was as simple as swapping a list for a hash table. So sometimes horrible performance can be improved.
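Purely as an illustration of that kind of swap (not the actual app’s code; the Item type and the id-based lookup are invented for the example), here is what the list-versus-hash-table change looks like:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct Item { int id; std::string payload; };

// Before: a linear scan for every lookup -> O(n) per query, O(n * m) for m queries.
const Item* find_linear(const std::vector<Item>& items, int id) {
    for (const auto& item : items)
        if (item.id == id)
            return &item;
    return nullptr;
}

// After: build an index once, then each lookup is O(1) amortised.
std::unordered_map<int, const Item*> build_index(const std::vector<Item>& items) {
    std::unordered_map<int, const Item*> index;
    index.reserve(items.size());
    for (const auto& item : items)
        index.emplace(item.id, &item);
    return index;
}
```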

But sometimes, as you said, it can’t be optimal without a complete re-architecture; that’s true.

I agree that it is a bit scary that it is SO slow with a super-simple score, and that it is absolutely not possible to use it with large scores. But let’s wait until the first patch and see what happens. I think that the team is going to surprise us!

In fact, Dorico uses parallelism quite extensively. What is very difficult, though, is that much of music layout is an inherently serial problem. Some aspects can be done in parallel (e.g. computations on rhythms on independent staves), but there are many ‘synchronisation points’ which force things to be serial again. Simple example: adding a single note in bar 1 may cause a bar to be pushed onto the next system, and that can have a cumulative effect that causes the entire score to be reformatted. You therefore have an effectively serial problem (you can’t work out the layout of bars in parallel, because each bar is affected by those that come before it). This is a very difficult computational problem.
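To make the serial dependency concrete, here is a toy sketch of casting bars off into systems (the Bar type and the width model are invented for the example, not our actual data structures): bar N cannot be placed until every bar before it has been placed, so the loop cannot be parallelised.

```cpp
#include <vector>

struct Bar { double idealWidth; };

// Returns, for each bar, the index of the system it lands on.
std::vector<int> assignBarsToSystems(const std::vector<Bar>& bars,
                                     double systemWidth) {
    std::vector<int> systemOfBar(bars.size());
    int currentSystem = 0;
    double used = 0.0;
    for (std::size_t i = 0; i < bars.size(); ++i) {
        if (used + bars[i].idealWidth > systemWidth && used > 0.0) {
            ++currentSystem;   // this bar is pushed onto the next system...
            used = 0.0;        // ...which shifts every bar after it, too
        }
        systemOfBar[i] = currentSystem;
        used += bars[i].idealWidth;
    }
    return systemOfBar;  // each entry depends on all earlier bars
}
```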

Dorico was written from the ground up to benefit from parallelism where it can usefully be applied. Every edit you make to the score may be processed in a large number of threads if there is a benefit to doing so. We try to keep all work on other threads to keep the UI responsive.
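As a generic sketch of that pattern (not our actual implementation; all the types and names below are invented for the example), the idea is that the expensive recomputation runs on a worker thread via std::async, and the UI thread only collects the result once it is ready:

```cpp
#include <chrono>
#include <future>

struct ScoreEdit    { /* description of the user's edit */ };
struct LayoutResult { /* recomputed layout, ready for drawing */ };

// Stand-in for the expensive recomputation; here it just returns an empty result.
LayoutResult recomputeLayout(const ScoreEdit&) { return {}; }

class EditProcessor {
public:
    void onUserEdit(const ScoreEdit& edit) {
        // Hand the heavy work to a worker thread; the UI thread returns
        // immediately and keeps handling input and repaints.
        pending_ = std::async(std::launch::async, recomputeLayout, edit);
    }

    bool resultReady() const {
        return pending_.valid() &&
               pending_.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
    }

    LayoutResult takeResult() { return pending_.get(); }

private:
    std::future<LayoutResult> pending_;
};
```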

Also, do remember that Dorico is doing a huge amount more work than any other notation package in order to apply the large number of rules, resolve all the collisions, route the slurs, etc., and there is necessarily a computational cost to this.

Performance is something that we will improve constantly. Many of the current problems are things like cache invalidations, which are currently quite pessimistic to ensure that the appearance is correct, but which can be optimized quite easily once we’ve verified that it is safe to do so.
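As a rough illustration of what “pessimistic” means here (invented types again, not the real cache): the safe-but-slow approach throws the whole cache away on any edit, while the optimised version discards only the entries the edit could actually affect.

```cpp
#include <unordered_map>
#include <utility>

struct GlyphLayout { /* cached drawing data for one bar */ };

class LayoutCache {
public:
    // Pessimistic: guaranteed correct, but every edit recomputes everything.
    void invalidateAll() { cache_.clear(); }

    // Targeted: only safe once we know the edit cannot affect other bars.
    void invalidateBar(int barIndex) { cache_.erase(barIndex); }

    const GlyphLayout* find(int barIndex) const {
        auto it = cache_.find(barIndex);
        return it == cache_.end() ? nullptr : &it->second;
    }

    void store(int barIndex, GlyphLayout layout) {
        cache_[barIndex] = std::move(layout);
    }

private:
    std::unordered_map<int, GlyphLayout> cache_;
};
```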

I am confident that Dorico will become a standard for musical notation.

BTW, does Dorico use the computational power of modern graphics cards, which can deliver incredible speed? Not only for graphical rendering but also for computation. Several of these “consumer/prosumer” cards can get you over 5 teraflops!!

We currently aren’t able to harness the power of the GPU (more’s the pity) because Dorico’s computations are not purely numerical - they are based on a complex object model which isn’t in a form that can be processed on a GPU.

Paul, thank you for the information.

If it were possible to break the implementation of the object model down into computational elements, then these elements could be computed with OpenCL.

Indeed - it’s just very unfortunate that the problem doesn’t decompose in such a way. Oh well, roll on the general-purpose GPUs…

I am just a simple learner and you have made me curious: why can’t the implementation of an object model be broken down into computational elements?

It’s because GPUs can do pretty much only one thing: maths. When doing 3D graphics, most of the computation is matrix transforms of thousands of points in parallel, which is ideally suited to the massively parallel computation a GPU offers. In our model, you have a very complex hierarchy of structures that represent pitches, ties, notes and so on (several thousand classes, in fact). This kind of model can’t be implemented on a GPU; it needs a more ‘general’ CPU.
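A small, invented contrast of those two data shapes (nothing to do with Dorico’s real classes): the first function is the flat, numeric, data-parallel kind of work a GPU is built for; the second is the pointer-rich, heterogeneous object hierarchy that needs a general-purpose CPU.

```cpp
#include <memory>
#include <vector>

// GPU-friendly: a flat array of numbers where every element gets the same
// arithmetic applied independently -- this maps directly onto a GPU kernel.
void scalePoints(std::vector<float>& xs, float factor) {
    for (float& x : xs)
        x *= factor;
}

// Not GPU-friendly: a pointer-rich hierarchy of heterogeneous objects whose
// processing branches on type and follows references between objects.
struct ScoreObject {
    virtual ~ScoreObject() = default;
    virtual void layOut() = 0;   // behaviour differs per class
};

struct Note : ScoreObject {
    int pitch = 60;
    std::shared_ptr<ScoreObject> tieTo;   // cross-references into the model
    void layOut() override { /* depends on neighbours, ties, the staff... */ }
};

struct Slur : ScoreObject {
    std::vector<std::shared_ptr<ScoreObject>> attachedNotes;
    void layOut() override { /* routed around whatever the notes decided */ }
};
```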