I noticed today that performance gets significantly more sluggish the higher I set the FFT size. I searched but didn’t find any other posts about this.
My machine is 6 years old (i7 8700K / 64GB / GeForce 1050 Ti), so it’s getting a bit long in the tooth.
I did update to 11.0.20 today, worked all day, and then experienced sluggish performance:
- long zoom in/out times
- long wait times for mutes and solos
- etc.
So, I rolled back to 11.0.10 and found pretty much the same performance.
As long as I keep the FFT size below 4000, I get normal, quick performance for mouse scroll wheel zooming and everything else. Set it above 3072 and it’s finger tappin’ time.
My current job has 27 layers… I made a few more today and maxed out my machine, so I pared it back. I can work comfortably with the FFT size in the 2000s.
FFT calculation is time-dependent: more points take more time to calculate, not just because there is more to compute, but because each analysis window covers a longer time span around the measurement points.
More accuracy in the low-frequency range leads to higher calculation times.
Parallel processing can reduce these times.
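To make that concrete, here is a rough numpy sketch (not SpectraLayers’ actual code; the 44.1 kHz sample rate and the 1000-frame loop are assumptions for illustration). It shows both effects: a larger FFT size lengthens the time span each analysis window covers and raises the per-frame compute cost.

```python
# Rough numpy sketch (not SpectraLayers code): how FFT size relates to
# window duration in time and to per-frame compute cost.
# The 44.1 kHz sample rate and 1000-frame loop are assumptions for illustration.
import time
import numpy as np

SAMPLE_RATE = 44100  # Hz, assumed

for fft_size in (1024, 2048, 4096, 8192):
    frame = np.random.randn(fft_size)

    # Each analysis window spans fft_size / SAMPLE_RATE seconds of audio,
    # which is why finer low-frequency resolution implies a longer time frame.
    window_ms = 1000.0 * fft_size / SAMPLE_RATE

    start = time.perf_counter()
    for _ in range(1000):
        np.fft.rfft(frame)  # one forward FFT per analysis frame
    elapsed_ms = 1000.0 * (time.perf_counter() - start)

    print(f"FFT {fft_size:5d}: window {window_ms:6.1f} ms, "
          f"1000 FFTs took {elapsed_ms:7.1f} ms")
```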
I’ve made numerous posts about this (at least 10). It’s odd that you didn’t find any (unless the moderators/administrators secretly shadowbanned me).
To add to what @st10ss mentioned, I believe the main issue here is that there’s also overlapping going on, which can be CPU/GPU intensive. Overlapping can be compared to AMD’s FidelityFX Super Resolution (FSR), where algorithms that are optimized (keyword: OPTIMIZED) take a lower-quality image and upscale it to a higher-quality one (like 4K), giving the illusion that you’re getting the same quality at a lower resolution. AMD claims (and Nvidia claims this too) that GPU acceleration is only possible with their hardware and their hardware alone, but I believe that is a lie meant to fool consumers into buying their GPUs. I believe AMD and Nvidia are using deep learning and machine learning in their GPUs (they both hint at it but never explicitly say it in their marketing) and giving the illusion that upsampling/upscaling is only possible through hardware acceleration. Nvidia does the same thing when they claim their CUDA technology is only possible through their hardware because it is entirely hardware-dependent; I don’t believe that either. I believe it’s just a marketing ploy to get people to buy their GPUs.
However, the main problem here is OPTIMIZATION. SpectraLayers needs an optimization overhaul to make it more efficient: the transformation process optimized to run in real time, selections optimized (so you can have over 5000 selections in the same project without SpectraLayers becoming sluggish at all), higher FFT sizes, resolutions, and refinements, and unmixing levels optimized to preview in real time.
The only thing I’m worried about, though, is that the development cycle of SpectraLayers (these once-a-year releases) is so slow that even if SpectraLayers does get optimized, it will become obsolete because everybody has already moved on. I’m noticing that a lot of people are buying these new ARM devices (I’m actually seeing people use them in my personal life because they’re more efficient and less expensive than their Intel and AMD counterparts), and I’m afraid that an incredible amount of resources will go into optimization for Intel/AMD chips (or maybe even for CUDA) only for it to be futile because no one uses those chips anymore… So this is a hint for the developer to start straying away from CUDA, because I’ve encountered people (like college students) who ask, “Why would I buy an Intel laptop for $1500 when I could buy an ARM laptop that is just as capable for $700?”
A higher FFT size does indeed mean more calculations per analysis window, which leads to increased CPU time. Overlap increases the number of calculations as well, depending on how much overlap there is. With that in mind it’s not surprising that a lower FFT size gives better performance, and it’s a good find to point out.
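For anyone curious about the rough arithmetic, here is a small Python sketch (not SpectraLayers internals; the sample rate, clip length, and overlap percentages are assumptions for illustration) of how the two settings combine: raising the overlap from 50% to 75% halves the hop and doubles the frame count, while a larger FFT size raises the cost of each frame.

```python
# Rough sketch (not SpectraLayers internals): how FFT size and overlap change
# the total FFT work for a fixed length of audio. The 44.1 kHz rate, 60 s
# duration, and overlap percentages are assumptions for illustration.
import math

SAMPLE_RATE = 44100   # Hz, assumed
DURATION_S = 60       # one minute of audio, assumed

def stft_work(fft_size, overlap):
    """Return (frame count, rough operation count) for one STFT pass."""
    hop = int(fft_size * (1.0 - overlap))        # samples between frame starts
    frames = (DURATION_S * SAMPLE_RATE) // hop
    # Each frame costs on the order of fft_size * log2(fft_size) operations.
    return frames, frames * fft_size * math.log2(fft_size)

_, base = stft_work(1024, 0.5)
for fft_size in (1024, 2048, 4096, 8192):
    for overlap in (0.5, 0.75):
        frames, work = stft_work(fft_size, overlap)
        print(f"FFT {fft_size:5d}, overlap {overlap:.0%}: "
              f"{frames:5d} frames, ~{work / base:.1f}x baseline work")
```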