Does “Global Analysis” make massive use of parallel processing of the data?
It seems that it does not. Could this be implemented in the next version?
For example, finding peaks could be parallelized easily, imho.
“Global Analysis” does use multiple cores to find true peaks: one core per channel. True-peak finding is indeed the most CPU-intensive task, because it requires expensive filters.
The more complete Visual Analysis tools, on the other hand, make full use of the processor cores (one core per type of analysis and per audio channel).
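The one-core-per-channel scheme described above can be sketched roughly as follows. This is only an illustrative sketch, not the application's actual code: `true_peak_estimate` is a hypothetical stand-in that oversamples by plain linear interpolation, whereas real true-peak meters use polyphase FIR filters (per ITU-R BS.1770).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def true_peak_estimate(channel: np.ndarray, oversample: int = 4) -> float:
    # Crude stand-in for a true-peak filter: linearly interpolate the
    # signal at 4x the sample rate and take the maximum magnitude.
    # (Real implementations use proper oversampling filters instead.)
    n = len(channel)
    fine = np.interp(np.arange(0, n - 1, 1 / oversample),
                     np.arange(n), channel)
    return float(np.max(np.abs(fine)))

# One worker per channel, mirroring "one core per channel".
channels = [np.sin(np.linspace(0, 100, 48_000)) * 0.5 for _ in range(2)]
with ThreadPoolExecutor(max_workers=len(channels)) as pool:
    peaks = list(pool.map(true_peak_estimate, channels))
```

Since the expensive filtering dominates, splitting the work by channel is the natural parallelization unit: each worker touches only its own buffer, so no synchronization is needed.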
Finding “Digital Peaks” could be easily parallelized, right?
There is no benefit at all in doing this, because that process is much, much faster than simply reading the file; reading the file is the bottleneck. True peak is another story.
I disagree. Imagine a recording of 10 hours or more, tens of gigabytes of data… This could be done very well in parallel.
At least on my part, there is an interest in this.
And yes, I wonder why this (i.e. parallel computing) is not standard in 2025 in the market leader of audio editing. Why not work with the best solution possible?
Hmm, ok, if it calculates the digital peaks while loading the file for the first time… okay, never mind then.
Sorry, but apparently you are not a DAW programmer.
If it takes 30 seconds to read a 10-hour file, and 1 second to find the peak in these 10 hours of samples (in memory) using a single core, then there is no point in trying to reduce this 1 second by using multiple cores.
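This is essentially Amdahl's law: parallelizing only the small scan step barely changes the total. A quick arithmetic sketch, using the hypothetical 30 s / 1 s figures from the post above:

```python
# Hypothetical numbers from the discussion: 30 s to read the file,
# 1 s to scan the samples for the digital peak on one core.
read_s, scan_s = 30.0, 1.0
total_single = read_s + scan_s        # 31.0 s end to end

# Even a perfect 8-way parallel scan only shaves off ~0.875 s,
# because the serial read still dominates.
total_parallel = read_s + scan_s / 8  # 30.125 s end to end
speedup = total_single / total_parallel
```

With these assumed numbers the overall speedup is under 3%, which is why parallelizing the peak scan alone is not worth the complexity.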
No, I am not a DAW programmer, just a low-profile user of very sophisticated tools.
Na, ok, not that big a deal, true. It’s almost fast enough already, so not on the priority list.