Benefits of 64-bit vs. 32-bit

When 64-bit processors compatible with the x86 architecture were introduced, they were referred to as x86-64, while x86-32 (and x86-16) were used for the 32-bit (and 16-bit) versions. This was eventually shortened to x64 for 64-bit, and x86 alone came to mean a 32-bit processor. A 32-bit processor can only address a maximum of 4 GB of physical memory, while a 64-bit processor can address vastly more, which is why modern systems routinely run 8, 16, 32 GB or beyond.
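Where does the 4 GB figure come from? It's just the arithmetic of address width; here's a quick Python check (nothing audio-specific here):

```python
# How much memory a 32-bit vs. 64-bit address can reach
GIB = 2 ** 30                      # bytes in one GiB

addr_32 = 2 ** 32                  # bytes reachable with 32-bit addresses
addr_64 = 2 ** 64                  # bytes reachable with 64-bit addresses

print(addr_32 // GIB)              # 4            -> the classic 4 GB ceiling
print(addr_64 // GIB)              # 17179869184  -> 16 EiB, far beyond any real machine
```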

That little checkbox in the studio setup for your audio does NOT refer to the word length of the app!!!
It has NOTHING to do with the architecture, the processor, or which VSTs you can and cannot use!

It DOES refer to the size/precision the system uses when calculating individual sample points.
It is about processing speed versus precision. Roughly speaking, it is analogous to how finely the waveform is quantized, or how many digits you make available after the decimal point.

If we have integer precision to the tens place, we can represent 11 values between 0 and 100: [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
If we have integer precision to the ones place, we can represent 101 values between 0 and 100: [0, 1, 2 … 98, 99, 100]
If we have floating-point precision to the tenths place, we can represent 1001 values between 0 and 100: [0.0, 0.1, 0.2, 0.3 … 99.8, 99.9, 100.0]

You get the idea. The number of bits determines the number of unique values you can represent: the more bits, the more precision, which equates to a smoother curve for the waveform.
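To make the counting concrete, here is a tiny Python sketch. The step sizes are just the tens/ones/tenths examples above, not anything audio-specific:

```python
# Count how many distinct levels a given step size gives you in [0, 100]
def levels(step):
    count = 0
    value = 0.0
    while value <= 100.0 + 1e-9:   # small epsilon to absorb float round-off
        count += 1
        value += step
    return count

print(levels(10))    # 11   -> tens place
print(levels(1))     # 101  -> ones place
print(levels(0.1))   # 1001 -> tenths place
```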

That’s good. But here is the thing: the more bits you use, the longer it takes to process. Just like it takes longer to do math to one decimal place than it does rounding to the nearest ten. The processor has to do that work, and time is critical.
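If you want a rough feel for that cost, here is a sketch you can run yourself. It assumes NumPy is installed and uses a made-up one-second buffer and a made-up stand-in for "plugin work"; the exact numbers vary by machine, but the 64-bit buffer is twice the data and usually takes longer to chew through:

```python
import numpy as np
import timeit

frames = 48_000                          # one second of audio at 48 kHz (hypothetical buffer)
x32 = np.random.rand(frames).astype(np.float32)
x64 = x32.astype(np.float64)

# A stand-in for "plugin work": a few multiplies and adds per sample
def process(buf):
    return buf * 0.5 + buf * buf * 0.1

t32 = timeit.timeit(lambda: process(x32), number=2_000)
t64 = timeit.timeit(lambda: process(x64), number=2_000)

print(f"32-bit float: {t32:.3f} s   ({x32.nbytes} bytes per buffer)")
print(f"64-bit float: {t64:.3f} s   ({x64.nbytes} bytes per buffer)")
```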

If you have the following amplitudes in your waveform, it's going to sound pretty smooth:

0 10 20 30 20 10 0 -10 -20 -30 -20 -10 0

However, the following would sound closer to what your ear hears when, say, a singer sings in the same room:

0 9 18 31 17 9 -1 -8 -19 -34 -22 -14 0

The waveform is richer and captures more of the harmonics and so on.

BUT if the processor can’t keep up, you get this instead:

0 9 18 18 17 17 -1 -1 -19 -19 -22 -22 0
or
0 9 18 0 17 0 -1 0 -19 0 -22 0 0

That is nothing like what you would hear naturally. This is what is happening when your CPU meter goes into the red and the mix starts to sound terrible. The system repeats the last known value (or outputs 0) because it didn’t get the computation for the next value done in time…
(In this example I am oversimplifying and ignoring bit depth, of course, concerning myself only with the results of the computation.)
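Here is a little sketch of that failure mode using the sample values from above (plain Python; the set of "missed" samples is just a made-up pattern chosen to reproduce the two sequences):

```python
# Samples the plugin *should* produce (the richer waveform from above)
ideal = [0, 9, 18, 31, 17, 9, -1, -8, -19, -34, -22, -14, 0]

# Pretend these computations finish too late (hypothetical pattern)
missed = {3, 5, 7, 9, 11}

def render(samples, missed, hold_last=True):
    out, last = [], 0
    for i, s in enumerate(samples):
        if i in missed:
            out.append(last if hold_last else 0)   # repeat last value, or emit silence
        else:
            out.append(s)
            last = s
    return out

print(render(ideal, missed, hold_last=True))   # [0, 9, 18, 18, 17, 17, -1, -1, -19, -19, -22, -22, 0]
print(render(ideal, missed, hold_last=False))  # [0, 9, 18, 0, 17, 0, -1, 0, -19, 0, -22, 0, 0]
```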

Can your ear hear the difference between 64 bit Floating Point Computation and 32 bit Floating Point Computation?

I can’t!

What I can hear is the garbage that comes from overtaxing the processor. So my system is set to 32-bit floating-point processing.
It is still a 64-bit architecture running 64-bit VSTs, but this way I get more “creative room” for more VSTs to do more computing in the same amount of TIME.

24-32-48-64-256 sounds pretty good to me so far.

24-bit - bit depth
32-bit - floating-point processing
48 kHz - sample rate
64-bit - processor/software architecture
256 samples - buffer size