The Advantages of 32-bit Floating Point

Sound Devices interfaces are 32-bit float:
Low Level Signals: 32-bit Float versus 24-bit - Sound Devices

They have a good explanation there.

I would say it’s fairly relevant if you are working in film/post, where you may have to deal with a blank gunshot and quiet dialogue all in a single take.

There are lots of other scenarios too; I’m always pulling little unintended nuggets of sound and speech that the mics pick up during sessions.

All of Steinberg’s UR interfaces are 32-bit.

The new Prism Sound Dream ADA-128 is 32-bit:
ADA-128 Modular AD/DA Convertor - Prism Sound

The Crane Song Solaris is 32-bit:
SOLARIS (cranesong.com)

I have been looking at the UR816C and the UR44. Both of these offer 32-bit operation. Surely, as a front end/soundcard, this offers some significant audio advantages, given the huge dynamic range 32-bit provides.

The actual electronic components in these devices do not output in floating point format. Even back in the Soundblaster days, when Cubase would read a 16-bit sample, it was processed internally as a 32-bit floating point value (between -1.0 and +1.0) because that’s how VST takes advantage of the maths coprocessor (FPU).
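
The conversion described above can be sketched in a few lines. This is a generic illustration of the common convention (dividing by 2**15 to land in the -1.0 to +1.0 range), not Steinberg’s actual implementation; real hosts may differ in detail:

```python
import numpy as np

# Hypothetical sketch: mapping 16-bit integer PCM into the [-1.0, +1.0]
# float range that VST-style processing expects. Dividing by 32768 (2**15)
# is one common convention; actual hosts may handle the edges differently.
def int16_to_float32(samples: np.ndarray) -> np.ndarray:
    return samples.astype(np.float32) / 32768.0

pcm = np.array([-32768, 0, 16384, 32767], dtype=np.int16)
print(int16_to_float32(pcm))  # ≈ [-1.0, 0.0, 0.5, 0.99997]
```

Every integer format fits losslessly inside 32-bit float this way, which is exactly why the DAW can mix and process in float regardless of what the converter delivered.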

Perhaps I’m just too sceptical (and I’m happy to be proved wrong) but I simply don’t believe there is any advantage in having an interface whose driver provides data from a 24-bit converter in 32-bit floating point format.

Hmmm… well, something different would need to be going on if they can do this at the A/D recording stage without clipping.

A bit depth of 24 already gives you a theoretical dynamic range of 144dB. For comparison, the difference between a quiet room and a jumbo jet taking off over your head is about 90-100dB. Will you need a dynamic range of more than 144dB?

An analog-to-digital converter with a resolution (bit depth) of 32 integer bits has a theoretical dynamic range of 192dB, but that’s around the threshold where sound, in air, ceases to be sound and becomes a shock wave. No musical instrument can create such dynamics, and no microphone can record it.

Why on earth would one need the 1528dB dynamic range of a 32-bit floating point A/D converter, if one actually existed?
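
The arithmetic behind these figures is simple to check: each extra bit doubles the number of representable levels, adding about 6.02dB of dynamic range. A quick sketch:

```python
import math

# Theoretical dynamic range of an n-bit integer format: 20*log10(2**n)
def dyn_range_db(bits: int) -> float:
    return 20 * bits * math.log10(2)

print(round(dyn_range_db(24)))  # 144 (dB)
print(round(dyn_range_db(32)))  # 193 (dB), the ~192dB quoted above

# 32-bit float: the smallest normal value is 2**-126 and the largest is just
# under 2**128, a ratio of about 2**254, hence the ~1528dB figure.
print(round(254 * 20 * math.log10(2)))  # 1529
```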

At this point I am not certain anymore; a developer might be able to answer this. It used to be that when the computer retrieves a 24-bit sample, the system actually has to transfer 32 bits, of which 8 bits are then discarded. So in terms of data throughput and strain on the computer, 32-bit might even be better.
Regarding the extra disk space needed: storage is rather cheap, but you know your own situation much better than I do.
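
The alignment point above can be sketched in Python. This is a hypothetical illustration of the general idea, not how any particular driver actually works: a 24-bit sample carried in a 32-bit word decodes with one aligned read, while tightly packed 24-bit data needs byte-level reassembly.

```python
import struct

sample = -1234567  # a value that fits in signed 24-bit range

# Packed 24-bit: three raw bytes, which must be reassembled (with sign
# extension) before the CPU can use the value.
packed = sample.to_bytes(3, "little", signed=True)
decoded = int.from_bytes(packed, "little", signed=True)

# Padded "24-in-32": the sample sits in the top 24 bits of an int32, so a
# single aligned 32-bit read (plus a shift) recovers it.
padded = struct.pack("<i", sample << 8)
(decoded32,) = struct.unpack("<i", padded)
print(decoded, decoded32 >> 8)  # both recover -1234567
```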

This is about resolution and the noise floor, not the loudness ceiling… just like recording at 88.2/96kHz isn’t about capturing sounds at 80kHz. 88.2/96kHz is a sampling rate, not a recordable frequency ceiling.
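
The noise-floor point is easy to demonstrate numerically. A rough sketch, not a rigorous measurement: quantizing the same -6dBFS sine to fewer bits raises the noise floor, while the signal’s level is unchanged.

```python
import numpy as np

# One second of a -6dBFS, 997 Hz sine at 48 kHz
t = np.linspace(0, 1, 48_000, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 997 * t)

def quantize(x, bits):
    # Round to the nearest representable level of an n-bit integer format
    scale = 2.0 ** (bits - 1)
    return np.round(x * scale) / scale

for bits in (16, 24):
    noise = quantize(signal, bits) - signal
    rms_db = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
    print(f"{bits}-bit quantization noise: {rms_db:.0f} dBFS")
```

The 16-bit version lands around the classic ~-100dBFS quantization noise floor, and the 24-bit version roughly 48dB lower, with the sine itself identical in both cases.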

I believe the Crane Song Solaris is actually 32-bit, and that the latest AKM parts it uses are also 32-bit, but I could be wrong.

AKM Launches New 4-Channel 32-bit A/D Converter and Two New Multicore DSP Audio Processors | audioXpress

AK5397EQ | Audio A/D Converters | PRODUCTS | Asahi Kasei Microdevices (AKM)

AK4490EQ | Audio D/A Converters | PRODUCTS | Asahi Kasei Microdevices (AKM)

AK4458VN | Audio D/A Converters | PRODUCTS | Asahi Kasei Microdevices (AKM)

From what I understand, when recording from a 24-bit audio interface, Cubase converts the audio to 32-bit float during recording (if that’s the option chosen), so without knowing the details, I’m assuming that to be a potential bottleneck. However, once the files are in 32-bit float, is playback more efficient than playback of 24-bit files?

As most of my hours with Cubase involve playback, I’d like to know which format is the most CPU-efficient. I also keep reading that Cubase uses 32-bit float internally, but I run the 64-bit version of Cubase on Windows, so what’s ACTUALLY going on, please?

I really would like to hear from the team at Steinberg and learn from them.

I’ve been working in digital signal processing for quite a few years, so I thought I would give you my 2 cents (please forgive me if I’m restating things that were already said; TBH I have not read all the answers). To me it is pretty simple, and there are only 2 things you need to know:
1 - assuming you have (very) good D-to-A and A-to-D converters, your ear won’t hear a difference with higher resolution OF THE FINAL RECORDING RESULT. This also assumes that you don’t get real-time errors while reading the file (as was the case with many CD players in the past).
2 - higher resolution, on the other hand, can be important when you apply several/many layers of processing to your original recording, as the errors compound with the number of math operations.
So the conclusion is that it is better to start with higher resolutions (sample rates and bit depths), and at the end you can convert the resulting recording down to 16-bit/44.1kHz.
Now, if your question is specifically about floating-point vs integer encoding, it all depends on the number of significant bits in the encoding (the mantissa). If the mantissa has the same number of bits as the integer format it is compared with, the two will yield similar results. But the extra bits used for the exponent in a floating-point encoding let you represent very small and very large numbers with the same relative precision, at the cost of a few bits spent on the exponent, of course.
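
The mantissa point can be verified directly with NumPy. A small sketch: IEEE 754 single precision stores 24 significant bits (23 explicit plus 1 implicit), matching 24-bit integer precision near full scale, while the 8 exponent bits buy range rather than extra digits.

```python
import numpy as np

# float32 significand: 23 stored bits + 1 implicit leading bit = 24
print(np.finfo(np.float32).nmant + 1)  # 24 significant bits

# The same relative precision holds for tiny values, where an integer
# format would have long since run out of bits:
x = np.float32(1e-20)
step = np.spacing(x)   # gap to the next representable float32
print(step / x)        # relative step stays near 2**-23 at any magnitude
```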
