I routinely test the performance of projects in various DAWs, and have been doing so for years. Here is an example test I use:
I create an Omnisphere instrument track that plays a pair of 4-bar notes. Omnisphere is loaded with the attached multi, which is designed to use a huge amount of CPU. I then duplicate that track and see how many tracks can be played simultaneously before a system overload occurs. The more tracks that can be played, the better.
Test Setup:
Mac Pro (Late 2013), 6-core 3.5 GHz
16 GB RAM
32-bit audio engine
Cubase: ASIO-Guard on and set to High, to match the buffer conditions Logic uses.
Select an empty track when performing the test.
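The duplicate-until-overload procedure above can be sketched as a simple model. The per-track cost and overload threshold here are illustrative assumptions, not measured values:

```python
# Illustrative model of the duplicate-until-overload test.
# All numbers are assumptions for demonstration, not measurements.

def max_tracks(per_track_cpu_percent: float,
               overload_threshold_percent: float = 100.0) -> int:
    """Return how many identical tracks fit before the engine overloads."""
    tracks = 0
    load = 0.0
    # Keep duplicating the track until adding one more would overload.
    while load + per_track_cpu_percent <= overload_threshold_percent:
        tracks += 1
        load += per_track_cpu_percent
    return tracks

# If each Omnisphere track cost ~1.16% of the total CPU budget,
# roughly 86 tracks would fit -- the ballpark of the results below.
print(max_tracks(1.16))  # 86
```

The point of the model: the track count is set almost entirely by the per-track cost, which is the plugin's DSP, not the DAW's.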
Performance Test Results:
Multi with only one part and one layer per part, in order to use less CPU per track:
Cubase 10.0.20: 85 tracks
Logic 10.4.4: 86 tracks
That’s right. Both Cubase and Logic show the same results, as they have in my tests for years. A couple of points to keep in mind:
This is not only unsurprising; it is to be expected. The DAW usually plays hardly any role in the playback performance of a session: CPU load is dominated by plugins. When you measure the performance of a plugin-heavy session, you are measuring the performance of the plugins, not the DAW, and if you run the same plugins in two different DAWs, you are measuring the same thing twice. The DAW is responsible for assigning buffer sizes and spreading the load across CPU cores, but that is generally the end of its involvement in anything related to performance. The amount of DAW code executed in a test like this is tiny, so even if it were inefficient, it would not make a measurable difference in the results.
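A back-of-the-envelope calculation makes the point concrete. The per-track figures here are hypothetical, chosen only to illustrate the proportions: even if one DAW's per-track bookkeeping cost were double another's, the track count would not change, because the plugin's cost dominates.

```python
# Back-of-the-envelope: plugin cost dominates, so DAW efficiency barely matters.
# All figures are illustrative assumptions, not measurements.

PLUGIN_COST = 1.16     # % of CPU budget per track spent in plugin DSP (hypothetical)
DAW_LEAN = 0.005       # % per track of DAW bookkeeping (hypothetical)
DAW_SLOPPY = 0.010     # a DAW twice as inefficient (hypothetical)

def track_count(per_track_cost: float, budget: float = 100.0) -> int:
    """How many tracks fit in the CPU budget at a given per-track cost."""
    return int(budget // per_track_cost)

print(track_count(PLUGIN_COST + DAW_LEAN))    # 85
print(track_count(PLUGIN_COST + DAW_SLOPPY))  # 85
```

Doubling the DAW's share of the per-track cost changes the total cost by well under 1%, which vanishes in the rounding of a whole-track count.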
The DAW is responsible for the buffer size, and buffer size can affect performance. In these days of hybrid buffers and ASIO-Guard, it is not a simple matter to keep the buffer size the same across DAWs, but it is important to design your test to do so, to ensure you are comparing apples to apples.
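Why the buffer size matters is simple arithmetic: each audio callback must finish within buffer_size / sample_rate seconds, so a DAW running a larger buffer gives the plugins a more forgiving deadline. A quick sketch, using common buffer sizes and a 44.1 kHz sample rate:

```python
# The real-time deadline for each audio callback is buffer_size / sample_rate.
# If two DAWs run different buffer sizes, the plugins face different deadlines,
# so their track counts are not directly comparable.

def callback_deadline_ms(buffer_size_samples: int,
                         sample_rate_hz: int = 44100) -> float:
    """Time available to render one buffer, in milliseconds."""
    return 1000.0 * buffer_size_samples / sample_rate_hz

for size in (64, 256, 1024):
    print(f"{size:>5} samples -> {callback_deadline_ms(size):.2f} ms per callback")
```

At 64 samples the engine has about 1.45 ms per callback; at 1024 samples it has about 23.22 ms, which is why a large-buffer configuration can sustain more tracks before overloading.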
This test measures raw CPU load. It specifically excludes Omnisphere presets that stream samples from the hard drive, thus avoiding any I/O effects. I also run streaming performance tests, but like CPU load, streaming performance is the responsibility of the plugin, not the DAW; if that test shows different results in different DAWs, that is again due to some difference on the plugin side, not the DAW.
I am aware there are Macintosh tests that show performance differences between DAWs. While such a test may have uncovered a genuine difference between DAWs, there are a number of other possible explanations. The test may be comparing AU vs. VST, and perhaps the plugin under test behaves differently in those two formats. Or some condition may differ between the two runs, such as the buffer size, or a system resource such as the amount of available RAM. Another factor is that some tests use VEPro. VEPro is a unique animal among plugins and works in an unusual manner; if you use VEPro, a test that uses it will be meaningful for you, and if not, it is probably irrelevant for you. Finally, performance may simply be measured incorrectly: the system Activity Monitor, for example, is not a good measure of performance in tests like this, so a test that relies on it can give misleading results.
My conclusions:
A) This test shows no measurable performance difference between Logic and Cubase for virtual-instrument-heavy projects.
B) If someone claims to have a test that shows a difference, I will believe them. But I would be skeptical that any such difference could be attributed to a shortcoming in a DAW. It is certainly possible, but since that defies the logic laid out above, it is more likely that there is some other explanation for the difference.