How slow/fast does Dorico 3 perform on your Mac?

Hi all

To clarify, I use an SSD as my system drive.
I have not tried using virtual instruments outside Dorico. I use the standard Halion setup that comes with Dorico.
My sessions are usually quite large (big band to orchestral).
I figured that smaller scores perform better than larger ones, which makes sense.

To be honest, I didn't want to start a complaint thread about Dorico’s performance, but rather a poll about other people’s experience, so I can judge whether my setup is way under-performing or quite middle-of-the-road.
It appears it is the latter. Which is great on one hand but also a bit frustrating.
I wish I could just buy some RAM, or a new GPU, and double my Dorico performance. :slight_smile:

It feels, though, that if there is a price to pay for all the automatic, behind-the-scenes calculations to create collision-free layouts, previews, etc., then a stronger computer with more computing power must bring some kind of gain in speed.
If it is really just calculations per second, then a better calculator must bring better performance.

Curious: does anybody own an iMac Pro or even the new Mac Pro? Those computers are so much more powerful than mine. I’m just wondering how Dorico performs on those machines…

Thanks a lot for the input.
Marc

No worries, I don’t think anyone got the impression you did.

Despite room for improvement in the speed of certain things, I find that it’s far outweighed by time saved in ease of notating, compared to other apps that routinely made me want to slit my wrists :wink:

Well, the new Mac Pro isn’t even officially available to the public yet, so if anyone here is testing it, I doubt they’re allowed to say so publicly … :slight_smile:

My 2018 Mini (3.0 GHz 6-core i5) has a CPU that’s about twice as powerful as your 2010 Mac Pro (Geekbench scores of 1038 / 4848 vs. 573 / 2138), but I doubt I’m getting twice the speed. After I moved up from my previous 2012 Mini (2.6 GHz 4-core i7; Geekbench 736 / 2832), Dorico felt ‘slightly’ more responsive, particularly when also running Logic and Xcode and Finale and Affinity Publisher and Creative Suite, etc. But certainly not 1.5-2x as much.

I suspect that 2-core CPUs are slightly less responsive because those 2 cores are queuing instructions for Dorico and everything else that needs doing, but otherwise: CPU power is not the limiting factor on Dorico’s performance. Also, upgrading my RAM from 8 GB to 32 GB brought an improvement to Dorico’s performance, possibly because of improved overall caching into RAM and the lack of swapping.

More generally: I presume this isn’t just a Mac issue, and that Windows users see similar?

We run automated performance tests on Windows and macOS every single day as part of our continuous integration process, which allows us to see as soon as a change introduced the previous day has any kind of tangible impact on the software’s performance. The performance on the two platforms is comparable, though not identical, and there are too many factors involved in arriving at a set of performance characteristics to say definitively whether one platform is faster than the other.

Dan, my point wasn’t that people should use this method; just that if it were possible for the team to encode the program in such a way that this setting could prevent unnecessary heavy lifting all the time, then perhaps one could activate it to gain performance during note entry (assuming you can’t use galley view, which I sometimes can’t due to lyrics) and then switch it off once you were ready to do final engraving. As I said above, it’s fairly simple to figure out what page you’re on when the metrics are fixed. As a result, Dorico wouldn’t have to recalculate all pages, just the one being edited. (And, if anyone is curious, I do use fixed casting off fairly regularly; however, I do not work on symphonic scores.)

Dorico already does that wherever it possibly can, whether or not you’re using fixed casting-off.

I really doubt the GPU will make any difference in Dorico note-input/engraving.

I didn’t expect any kind of personal answer, nice or not. But I would like to be assured that the Dorico team recognizes this as a problem with a priority, and preferably a high one. You know, the kind of “We have found that many of our heavier users are having trouble with the pace of our app when many flows are in use. We are working very hard on this.”

The lack of any kind of serious information about this issue, and frankly the slightly offended tone of voice these discussions trigger, makes me wonder whether the basic concept in Dorico - the flow system, the way the work process is split into different “spaces” - could be causing the problem.

I ask myself: is Dorico suited to the kind of work I’m doing now? My answer is ambiguous, because during the basic input of the score, with all the clever “behind the scenes” processes, I am happy that I switched to Dorico. But right now I’m finishing a score where I find myself sitting and waiting all the time.
So if your conclusion from my video is correct - that Dorico slows down because I use a different layout for many of the pages in my project - wouldn’t it be fair to expect some kind of resource-limit indicator (like in Cubase) that would raise an alarm: “overload, this is not possible”?
The reason I find it hard to comprehend that Dorico is working really hard is that my CPU is sitting idle much of the time, hardly reaching 10%.
Wouldn’t you agree that heavy calculation would be reflected in the CPU meter?

If you have lots of CPUs and Dorico is doing something that has to be done sequentially, it can’t harness the power of all of your CPUs, because, as I said days ago, it can’t know how to lay out page 2 until it’s finished page 1. It may only be using one processor. Are you looking at a CPU usage figure that averages the use of all of the processors, or are you looking at a usage figure that tells you how each processor is being used?
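To put rough numbers on that point, here is a minimal sketch (purely illustrative; the 8-core count and the serial fractions are assumptions for the example, not measurements of Dorico) of Amdahl’s law, which explains why a mostly serial job reads as a low *average* CPU figure on a many-core machine:

```python
def amdahl_speedup(serial_fraction, cores):
    """Best-case speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

def average_cpu_reading(serial_fraction, cores):
    """What an all-cores-averaged CPU meter shows while the job runs:
    total work done divided by (wall time x number of cores)."""
    wall_time = serial_fraction + (1.0 - serial_fraction) / cores
    return 1.0 / (wall_time * cores)

# A fully serial phase on an 8-core machine saturates one core,
# yet the averaged meter reads only 12.5%.
print(average_cpu_reading(1.0, 8))  # 0.125
# A fully parallel phase pushes the same meter to 100%.
print(average_cpu_reading(0.0, 8))  # 1.0
```

So a meter that “hardly reaches 10%” on an 8-core machine is entirely consistent with one core being pegged the whole time.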

Dorico has many separate processing phases. There are complex dependencies between them. Some can only be done when the results of previous ones have been calculated. Some can be done independently. Some can be done in parallel for all separate instruments. Some can only be done when the results of every instrument are known. As a result you will see that if you load a complex score (or do some operation that results in a big reformat such as changing some layout options), the CPU usage will vary greatly, and will depend on the number of cores. Some phases will result in 100% CPU across all cores for a period of time. Some phases are necessarily serial and so will only occupy one core (so if you have 10 cores this would show as ‘only’ using 10% CPU). Some of these problems are irreducible - you can’t calculate things in parallel when one thing depends on the previous one.
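The phase structure described above can be caricatured in a few lines. This is a toy sketch with invented phase names, not Dorico’s actual engine; Python threads are used only to show the shape of the dependency between a parallel per-instrument phase and a serial page-by-page phase:

```python
from concurrent.futures import ThreadPoolExecutor

def process_instrument(instrument):
    # Hypothetical per-instrument work: independent, so it can
    # run on all cores at once (CPU meter peaks here).
    return f"{instrument}:spaced"

def lay_out_page(previous_page, staves):
    # Hypothetical page layout: each page depends on where the
    # previous one ended, so pages must be done in order.
    return previous_page + 1

instruments = ["flute", "oboe", "horn", "violin"]

# Phase 1: embarrassingly parallel across instruments.
with ThreadPoolExecutor() as pool:
    staves = list(pool.map(process_instrument, instruments))

# Phase 2: inherently serial; only one core is busy here,
# however many the machine has.
page = 0
for _ in range(3):
    page = lay_out_page(page, staves)
```

The alternation between phases like these is why CPU usage swings between 100% across all cores and ‘only’ one busy core during a big reformat.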

We have stated many times that we are constantly looking for ways to increase performance, and as Daniel has noted above, we monitor the results every single day. We know that it’s a big issue with large scores, and we feel the pain of users who are having problems. Unfortunately it’s a very complex problem, and we have to balance the time we devote to it with the time required to implement the features and notations that users ask for. If there were simple things that we could do, then we would have done them by now. Many of the things that we would need to do are major tasks that would take weeks.

I know the difference between sequential and parallel processing, and I have a Mac Pro - one CPU with 8 cores. None of the cores shows extreme load, in contrast to a Logic Pro session, where a single plug-in will run on one single core and the extreme load shows up on the graph.
Take a look: (Dropbox link - file since deleted)

You are comparing apples and oranges. The performance characteristics of calculating score layout are completely different to plugin processing. During playback, the Dorico audio engine will also distribute the processing across the plugins you have loaded, as Logic does. This is a typical case where multiple cores can be used effectively. However, calculating the score layout is a totally different type of operation, in which an enormous layout model has to update a huge number of state variables. Many parts of the operation cannot effectively use multiple cores because they are dependent phases that cannot run in parallel, and hence it will appear as if Dorico isn’t using much CPU, whereas it will be occupying at least one core, and more if the operation it is doing can make use of the extra cores.

Could the layout for separate flows be calculated in parallel if the user has specified that each flow starts on a new page?
Then only calculating page numbers would have to be sequential in that case.
(Perhaps you already do that or have decided not to for good reasons.)
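For what it’s worth, the idea in the question can be sketched like this. It is a toy model with invented names and deliberately naive page math (32 bars per page), not a claim about how Dorico works; the subsequent replies explain why real layout is far more constrained than this:

```python
from concurrent.futures import ThreadPoolExecutor

def lay_out_flow(flow):
    """Pretend layout: return how many pages a flow needs (toy rule: 32 bars/page)."""
    name, bars = flow
    return (name, (bars + 31) // 32)

flows = [("Overture", 120), ("Aria", 64), ("Finale", 200)]

# If every flow starts on a fresh page, per-flow layout is independent
# and could in principle run in parallel...
with ThreadPoolExecutor() as pool:
    paginated = list(pool.map(lay_out_flow, flows))

# ...but page numbering depends on the running total, so it stays serial.
first_page = {}
next_page = 1
for name, pages in paginated:
    first_page[name] = next_page
    next_page += pages
```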

To reiterate what I said above: if there were simple things that we could do, then we would have done them by now. We look for performance improvement opportunities where we can, but this area is incredibly complex - vastly more so than ‘can’t you just do …?’

I definitely was not suggesting such an approach would be simple. I have done only very elementary programming in my time, but I know that nothing about it is “simple.” My question was more theoretical than a suggestion.

I honestly couldn’t say, as the processing system is outside my own expertise, but this would entail that the construction of the task graph (which is itself a complicated area) would have to be dynamic based on the page layout, which would then introduce yet another layer of complexity (and inevitably bugs).

Just to play devil’s advocate: right pages and left pages use different master pages. Most of the time the only difference is the page number, but it’s perfectly possible for the music frames to be completely different shapes and sizes on facing pages. Derrek, it’s not as simple as calculating the page number :wink:

Now, I didn’t compare Logic to Dorico. I said that I knew of situations where Logic, processing one heavy channel with a lot of plug-ins, had to use one core because the signal HAD to be processed one step after the other. Sequentially.

I UNDERSTAND that a layout MUST be done sequentially. What I said was that when I run a huge Logic or Cubase session, I can read the strain on my CPU and its individual cores on this read-out:
[screenshot: Skærmbillede 2019-10-07 17.09.35.png]
and the load of the cores shows that my CPU is really working very hard. But when I monitor the load of the cores during a layout update of the project I work on in Dorico - like in my video posted earlier today - they hardly move.
I’m not trying to compare Dorico to Logic, but of course I make comparisons between S… and Dorico.
I will have to indemnify my customer for the extra time I spend waiting while this

enormous layout model, (that) has to update a huge number of state variables

finishes its job, though. :wink:
And being your customer I think it’s okay to ask you what can/will be done about this problem.

You are right: I had not considered the right-hand/left-hand layout problem. Thanks for mentioning it.

I think it would generally be a VERY GOOD idea in Engrave mode that, when you’re moving text objects around or doing ANYTHING that does NOT directly affect the spacing of the music on the page, Dorico shouldn’t need to f**** recompute the entire page for 2-3 seconds. That alone (and similar internal adjustments) would be a TREMENDOUS time saver…