HORRIFIC performance of Dorico 3

Are you talking about the average percentage use across a few cores, Stig, or the use of individual cores? Dorico processes in parallel when it can, but there are certain actions that can only utilise one or two cores even if you have 16 cores.

Well, it looks like the workload tapers off across the 8 cores: core one is working a little, and core 8 is barely awake.
For example, in a full score layout I change the default quarter-note space from 3 7/8 to 3 3/4. I assume this forces Dorico to recalculate the layout, but it doesn’t affect the CPU use.
Checking the CPU use in today’s session shows that the CPU has been averaging around 15%!
I’m fully aware of the single/multi-core problems from my work with DAWs, but I still find it weird that all this waiting is - sometimes - not reflected in the CPU use.
I wish something could be done.

That is a classic example of a “sequential” process. You can’t calculate the spacing on the second system until you know how many bars are on the first system. Repeat till you reach the end of the score.
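The dependency described above can be sketched in a few lines of Python. This is purely illustrative - it is not Dorico’s actual algorithm - but it shows the shape of the problem: a greedy system-breaking loop in which each system’s contents depend on where the previous system ended, so the iterations cannot run in parallel.

```python
def break_into_systems(bar_widths, system_width):
    """Greedy sketch of system breaking: each system's contents depend
    on where the previous system ended, so the loop is inherently
    sequential and cannot be spread across cores."""
    systems, current, used = [], [], 0.0
    for w in bar_widths:
        if current and used + w > system_width:
            systems.append(current)    # close the current system
            current, used = [], 0.0    # the next system starts here -> sequential dependency
        current.append(w)
        used += w
    if current:
        systems.append(current)
    return systems

# Four bars of width 3 into systems of width 7 -> two systems of two bars.
print(break_into_systems([3, 3, 3, 3], 7))  # prints [[3, 3], [3, 3]]
```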

Yes, I know; I’m just puzzled that it’s NOT reflected in my CPU meter!
In Logic you can have a channel strip with a lot of plug-ins, and LPX will process all the plug-ins on one core, sometimes resulting in an overload, which is visible in the CPU meter.
But maybe it’s different with Dorico.
I’m considering the new Mac Pro for my studio, but I wonder whether Dorico would benefit from a monster machine, when my 8-core Mac Pro hardly shows any reaction to the layout recalculation process?

In those parts of the calculation that are necessarily sequential, as Rob points out, it doesn’t matter how many cores you have: only one will be used. You have 8 cores, so Dorico will only be using 10-20% of your total CPU capacity during that phase. The performance characteristics of Dorico are completely different from those of a DAW, since a DAW can effectively parallelise the processing of each independent channel - you just cannot compare the two.
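The arithmetic behind that 10-20% figure is worth making explicit. A minimal sketch (an illustration of the ceiling, not a measurement of Dorico itself):

```python
def max_total_utilisation(cores_used: int, cores_total: int) -> float:
    """Upper bound on the whole-machine CPU meter when only
    `cores_used` of `cores_total` cores can do useful work."""
    return 100.0 * cores_used / cores_total

# One busy core on an 8-core machine caps the meter near 12.5%,
# consistent with the 10-20% readings reported above.
print(max_total_utilisation(1, 8))  # prints 12.5
```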

If you have a score with a large number of players then watch the activity monitor as you open it. You should see that there are several phases to the file opening. Some will just use one core, some will use all. There are some operations such as player and layout changes that currently result in a lot of things having to be recalculated. We are acutely aware of those and we have plans to improve them, but these are very substantial changes, so we can’t rush into them.

Thanks a lot. I will look forward to these changes.

Similar problem here. Running on a PC (Windows 10), i7-4770 overclocked, 32 GB RAM. The Dorico project is 300 bars, 49 pages of A3, 20 instruments with lots of condensing. It looks great, and condensing is a great feature, but it’s so slow on this computer that it’s unworkable. When making corrections, even in scroll view, there are still several seconds between each action I can take.

It might be worth noting (or maybe not) that there is no virtue in having a single-threaded process bounce from one core to another just so the usage looks level. On most chips that is actually less efficient (by as much as 3-5%) than running everything on the same core, because the multi-level memory system has to work harder. This has been a long-standing wrestling match in the computer-architecture and operating-system worlds. Affinity is generally more efficient, up to a point – but where is that point?

Customers who paid big bucks for their shiny new 13-core computer chips want to see all the cores used, so often the affinity is relaxed more than it should be.
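For the curious, operating systems do expose affinity directly. A minimal sketch using Python’s Linux-only `os.sched_setaffinity` API (macOS and Windows handle affinity differently, if they expose it at all, so this is illustrative rather than something a Dorico user would do):

```python
import os

def pin_to_core(core: int) -> set:
    """Pin the current process to a single core so a single-threaded
    workload stays put instead of bouncing between cores.
    (os.sched_setaffinity is a Linux-only API.)"""
    os.sched_setaffinity(0, {core})   # 0 = the calling process
    return os.sched_getaffinity(0)    # the set of cores we may now run on
```

In practice the scheduler makes this trade-off for you, which is why the “relaxed affinity” point above is largely in the operating system’s hands rather than the application’s.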

In a utopian world, all algorithms could be designed to allow high parallelism, but this is hard work. It is relatively easy to have VSTs and VSTis run (mostly) in parallel. Not so easy for the situations like Paul described.

As a practical matter, it comes down to this: would you rather have the vendors spend most of their development resources designing exotic algorithms for parallel processing, or would you rather have useful new functions for your applications? If a problem can be solved by faster machines (at a reasonable cost), I think most of us would be better served by upgrading and having the vendors continue to put priority on useful functions. Of course, some functions simply won’t work on even the fastest machines without very clever, exotic algorithms. So it really is the classic balancing act.

It seems to me that the Dorico/Steinberg team has been doing a good job trying to achieve such a balance, and we all need to do our part by budgeting for computer upgrades from time to time.

There are specific challenges with ensuring that edits to notes etc. when condensing is enabled can be quick. I’m sure I’ve written about this at length before (probably even in this thread), so please do have a look at my previous posts on this topic if you are interested to know the technical details. Specific optimisation tasks have to be undertaken for condensing, and the good news is that we have some improvements in this area lined up, but you must be realistic about the prospects of working at the same speed with condensing switched on and switched off. Dorico is doing much, much more work, and unfortunately work takes time.

Thanks Daniel ! I imagine that it would speed things up to divide larger projects into separate Flows. Would that help?

Not necessarily. The only way in which splitting the project into multiple files will help is if you have flows starting on the same page as the previous flow. Provided each new flow starts in a new frame/on a new page, then edits in one flow won’t generally cause the other flows to be recalculated.

I don’t know what’s going on with my machine and Dorico. All I’ve worked on so far is a single piano score with condensing turned off. Once I add a certain number of notes to the score, everything slows down: shifting the beat position of notes/chords starts to take seconds instead of milliseconds, and so do deletions. I have a 12-core Mac Pro with 32 GB of RAM and an NVIDIA GeForce 980 Ti. Granted, it’s a 2009 5,1 model, but all my other software runs blisteringly fast.

That’s certainly not the expected behaviour, so the good news is that you should be able to fix it. Check whether you’re creating a MIDI ‘feedback’ loop with the Mac’s IAC bus (virtual MIDI channels from one application to another) - if Dorico is sending MIDI signals to other apps and then receiving those same signals back again…

Thank you, benwiggy. I hope that’s what it is, but I’m skeptical. I downloaded the MIDI Monitor that Daniel recommended and haven’t seen any messages. Per your recommendation I disabled my IAC driver nevertheless, and we’ll see if performance is better. I do have hope that this is an easy problem to fix - it seems like it should be!

I’m using Dorico 3.5 to make a hymnal that will have upwards of 200 songs, each of which requires fine-grained manual adjustments to spacing. I am only halfway through, and I was wondering why my progress seemed to be slowing down, when I started a fresh project and was startled by how snappy the response was. The gradually increasing sluggishness of the songbook didn’t really register until that comparison made the difference depressingly apparent: each micro-adjustment in note spacing now takes 1-2 seconds, and with hundreds of adjustments needed per song, it has become a real time sink.

I thought I could circumvent the problem by making the adjustments in a new project and then importing that flow into my songbook, but unfortunately it looks like Dorico discards all my manual spacing when the flow is imported, so I am back at square one.

It seems like these kinds of adjustments should be performant regardless of project size, since I have frame and system breaks that isolate each adjustment to an individual system or, at most, a page. The condensing option was mentioned earlier in this thread as a potential performance culprit, but it isn’t enabled in my project anyway.

I tried uploading the diagnostics report, but my zip file slightly exceeds the maximum size allowed (4 MB). Help!

Most of my work in Dorico is hymnals. I’ve completed nearly a dozen hymnals using Dorico for all the notation, some of them quite large. This isn’t what you want to hear, but I don’t advise doing the actual hymnal in a single Dorico file. One of the biggest reasons, apart from the performance, is that it’s too easy to lose those micro-adjustments you’re talking about (and I definitely know all about those…). It’s just too risky.

My workflow is one Dorico file per hymn, export as PDF, and use InDesign (or Affinity Publisher) to create the final product. Any changes to the Dorico file will update the PDF automatically.

I’m as big a Dorico fan as anyone, but this is about the right tools for the job. A hymnal is the sort of thing I don’t know if I’ll ever do in Dorico. Maybe!


Thanks for the feedback. I do have InDesign, so I guess I’ll try that route. For situations where two hymns share a page, I suppose setting the full page or spread in Dorico would work just as well, or would you still recommend one hymn per file?

Always one hymn per project!

InDesign makes short work of this sort of thing. My two cents.

When I save a PDF copy of a Dorico file in the same folder as my Dorico file and then go on to update the Dorico file, my changes don’t get saved to the PDF file. Does using InDesign or Affinity Publisher help to make this happen or am I misunderstanding the gist of what you are saying?

You have to re-export the PDF after making changes. I’ve assigned Export to a key command, so it’s pretty fast. Then the InDesign project recognizes the PDF has been updated, and alerts you accordingly.
