32-bit recordings only have 24 bits of audio

Check this thread from users and PG…

regards S-EH

A vinyl record has a dynamic range lower than that of a regular CD, and far, far less than 24-bit fixed audio. There is absolutely no reason to convert vinyl to 32-bit fixed or float.

1 Like

Forgive me if what I am going to say sounds offensive to any of you.

If you watch a VCD on a 19” TV, the video quality appears good enough. But when you switch to a 33” TV, VCD appears muddy; Laserdisc would be good. If you further switch to a 55” TV, Laserdisc appears muddy and you’ll need DVD. How about a 75” TV? Similarly for audio quality, it all comes down to what grade of equipment you are using to listen to the audio signals. Traditional landline telecommunication equipment uses just 8 bits to transmit human voice, so there is no point in feeding 16, 24, or 32 bits to a landline.

A lot of people argue that 24 bits and 32 bits correspond to SNRs of 144 dB and 192 dB respectively. They think that 144 dB is more than sufficient. Their argument is true only to a certain extent. For the largest signal that can be represented digitally, 144 dB and 192 dB are correct for 24 bits and 32 bits respectively. But have you considered small signals, such as tiny percussion hits in a pop track? Only a few bits of resolution account for these tiny sounds, but such tiny details do make the audio track more enjoyable. Therefore, the more bits the better, up to the point at which our ears cannot tell the difference.
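For reference, a minimal sketch of where those 144 dB and 192 dB figures come from:

```python
import math

# Each bit of a fixed-point sample contributes ~6.02 dB of dynamic range,
# so N bits give roughly 20*log10(2**N) dB.
for bits in (16, 24, 32):
    dr_db = 20 * math.log10(2 ** bits)   # = bits * 20*log10(2)
    print(f"{bits}-bit: {dr_db:.1f} dB")
```

This prints roughly 96.3, 144.5, and 192.7 dB, matching the figures quoted in the thread.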

It is generally agreed that frequencies above 20 kHz are not audible to the human ear. Based on the Nyquist theorem, a sampling frequency of 40 kHz is sufficient. But the theorem assumes that the sample values are continuous (i.e. analog), not quantized. Accordingly, the 44.1 kHz sampling frequency adopted by CD would be sufficient if the number of bits were large enough. Given a limited number of bits of resolution, a higher sampling frequency would definitely help in reconstructing the original waveform.

By the way, RME ADI-2/4 does support 32 bits (integer) analog-to-digital conversion. This is the main reason why I switched from Benchmark’s ADC to RME.

No it doesn’t in your case. A better comparison would be to say that the “traditional landline” = vinyl and you’re now trying to somehow improve its resolution retroactively. It’s as if you are trying to make it better by re-recording it into 32-bits. That changes nothing though. It is zero improvement. You could say exactly the same about vinyl: Vinyl has limited dynamic range, and that dynamic range does not improve when you re-record the signal. The signal “fits inside” 16-bit digital just fine. You gain nothing.

The number of bits basically corresponds to signal-to-noise ratio. So whatever that is for vinyl, that’s what you have. It does not improve by storing it in a medium with a higher SNR.
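A sketch of this point, assuming an arbitrary sample rate and test frequency: quantize a full-scale sine to 16 bits, then round-trip it through a "bigger" container and show the SNR does not improve.

```python
import numpy as np

fs, f = 48000, 997.0
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f * t)          # the "analog" reference

q16 = np.round(x * 32767) / 32767      # 16-bit quantization

def snr_db(ref, sig):
    noise = ref - sig
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

s1 = snr_db(x, q16)                                        # ~98 dB (6.02*16 + 1.76 for a sine)
s2 = snr_db(x, q16.astype(np.float32).astype(np.float64))  # same data, fancier container
print(round(s1, 1), round(s2, 1))      # essentially identical
```

The quantization noise is baked in at the 16-bit step; casting the samples into a wider format afterwards changes nothing, which is the vinyl-to-32-bit argument in miniature.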

That’s not how it works. For individual sounds there are no separate bits or separate anything within samples. One sample = representation of one amplitude at one point in time. Not multiple amplitudes.

The “tiny details” you hear are the result of the reconstruction of the analog waveform at the end of the process, which includes playing back all of the samples at a steady predetermined rate. A ‘soundscape’ emerges out of the reconstruction. But the samples do not have any discrete separation between sounds. None. All it is is a single value at a single point in time. Nothing more.

And as for “the point at which our ears cannot tell the difference”, for the purpose of bits that’s going to be when your signal dips into the noise floor and you can no longer hear it because the noise is too loud. So you can take even a 24-bit fixed signal and its dynamic range, stack that on top of the noise floor of whatever equipment you’re using for playback, and then you’ll see what your SPL will be. Say you have a really good room at 26 dB SPL noise, and let’s just say you can’t hear any signal below that point. Then let’s chop off 10 dB for whatever reason to be ‘generous’, and finally add the remaining 134 dB to that noise. You would now have a signal potentially peaking at 26+134 dB SPL, unless I’m missing something. So 160 dB SPL.
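The stacking described above, written out (the 26 dB room and the 10 dB haircut are the post's assumptions, not measurements):

```python
# Signal dynamic range sits on top of the room's noise floor.
room_noise_spl = 26            # dB SPL, a very quiet room
dyn_range_24bit = 24 * 6.02    # ~144 dB for 24-bit fixed point
generous_margin = 10           # arbitrary "generous" haircut

peak_spl = room_noise_spl + (dyn_range_24bit - generous_margin)
print(round(peak_spl))         # 160 dB SPL
```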

Or just do it the other way around: Let’s say you can listen to a peak that is 120dB SPL. Subtract 144dB from that and compare where you end up to the noise floor in that environment.

Your ears are going to give up way before you can make good use of those 32 integer bits.

How? The Nyquist-Shannon theorem states that you can accurately represent any complex waveform as long as the highest represented frequency is less than half the sample rate.
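One concrete way to see the fs/2 boundary, assuming a 44.1 kHz rate and an arbitrarily chosen 18 kHz tone: a frequency above Nyquist produces exactly the same sample values as one below it (inverted), which is why content above half the sample rate cannot be represented at all.

```python
import numpy as np

fs = 44100
n = np.arange(64)
f1 = 18000          # below Nyquist (22050 Hz)
f2 = fs - f1        # 26100 Hz, above Nyquist

a = np.sin(2 * np.pi * f1 * n / fs)
b = np.sin(2 * np.pi * f2 * n / fs)

# The 26.1 kHz samples are an inverted copy of the 18 kHz samples,
# so the sum is zero to floating-point precision.
print(np.max(np.abs(a + b)))
```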

7 Likes

Many thanks for the info.

Regards WP

1 Like

I wonder if WaveLab made a design decision that gives someone, like @williampong, the wrong impression about bit depth. If I load a 16-bit WAV file, the waveform window will show the levels in dB. Fine.

If I pick linear, the actual values, which is something WaveLab will do (THANK YOU WAVELAB), it will show the values, with a max value of around 32,767. All good!

BUT, if I pick 24-bit, it scales the numbers into a 24-bit space. In other words, it seems to work backwards from the 32-bit float values scaled in dB. I get why it is done: if WaveLab scaled 16-bit (64K) to 24-bit (16M), the waveform would disappear and you’d have to zoom and zoom to see anything.

BUT it would allow @williampong to see what @MattiasNYC has explained: 24-bit is MONSTROUSLY large. Each bit doubles the number of values. It begs the question not whether 24 bits is enough, but whether 17 bits is enough! That’s the reality one realizes when working with most audio data.

I see the same problem with that Bits Used meter. Sorry, when I first saw it I was like WTF :wink:

May I suggest:

import numpy as np
from scipy.io import wavfile

def analyze_bit_depth(audio_file):
    sr, y = wavfile.read(audio_file)
    if np.issubdtype(y.dtype, np.integer):
        # capture the integer full-scale value BEFORE converting to float
        full_scale = np.iinfo(y.dtype).max
        y = y.astype(np.float64) / full_scale
    peak = np.max(np.abs(y))
    y_norm = y / peak
    # rough heuristic: count the distinct quantization levels actually used
    levels = np.unique(y_norm)
    effective_bits = np.log2(len(levels))
    return effective_bits, len(levels)

Anyway, just my thoughts. No one is going to look at linear except data geeks. Therefore, I vote it doesn’t rescale. That’s what dB is for :slight_smile:

With the coming of ultra-low-noise (multi-path) processing, 32 bits makes sense. One example is the imersiv D-1 DAC (in beta). It has a BB/UW noise floor of -146 dBu (40 nV) and headroom of +28 dBu, for a total dynamic range and linearity of 174 dB, or 29 bits. Eventually, the entire audio signal path (from mic to power amp) will be multi-path, with similar noise specs. Reaper maintains 32-bit-perfect files (64-bit float processing). I assumed that all Steinberg apps similarly maintain 32-bit-perfect audio, but maybe that’s not correct (?). Someone mentioned that the PreSonus DAW is also 32-bit-perfect.

If you read Bob Katz’s take on it, though, the actual benefit is lower THD+N coming from the multipath design, not from having 32 bits of information feeding it.

The room noise is going to cover the lower parts of the signal in the vast majority of listening scenarios for all people, by a wide margin. Just take whatever noise floor you would expect, add those 174 dB, and see what the SPL is. It’s simply not a thing. Heck, if the room were at zero, peak would be 174 dB SPL. No thanks. I’m planning on using my ears again this decade.

2 Likes

The primary benefit of multi-path is profoundly lower THD+N at lower perceptual levels — immediately audible as increased spatial and atmospheric information (which Katz noted). But the only way to achieve this is via a vanishingly low noise floor in the low-path (beginning around -40dBFS), somewhere around -146dBu. Which is why 64-float processing is essential — not to achieve +165dB SPL, but to provide bit-perfect processing on the entire 32-bit signal.

Another benefit of multi-path architecture is unlimited headroom without raising the noise floor. Almost nobody needs +28dBu any more (it was a tape thing), but +28dBu assures effectively zero ISO’s at today’s max operating levels (say +22dBu).

As for high SPLs, you are right. Most music consumers don’t need it. But recording engineers do. Snare hits and trumpet blasts peak at +155dB SPL. We stick microphones on these things every day. Couple this with the threshold of hearing (-8dB SPL) and we get a professional real-world dynamic range of 163dB. That’s my world. Maybe not yours :slight_smile:

One more thought. Increasing numbers of home-theater installs are getting the big subwoofers (+148 dB SPL peak). These are for VLF movie explosions, earthquakes, SFX, etc., which (at 15-50 Hz) won’t damage hearing. Coupled with a hearing threshold of -8 dB SPL, that gives a dynamic range of 156 dB, which requires a 32-bit signal path and 64-bit float processing for a bit-perfect result.
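A quick sketch of the bit counts implied by these dynamic-range figures, using the ~6.02 dB-per-bit rule (the SPL numbers are the poster's claims, not measurements):

```python
import math

def bits_needed(dynamic_range_db):
    # fixed-point audio gives ~6.02 dB per bit, so divide and round up
    return math.ceil(dynamic_range_db / (20 * math.log10(2)))

print(bits_needed(163))   # recording: +155 dB SPL peaks over a -8 dB SPL threshold -> 28
print(bits_needed(156))   # home theater: +148 dB SPL peaks over the same threshold -> 26
```

Both results exceed 24 bits, which is the arithmetic behind the 32-bit-path argument in these posts.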

Ok, I think I understand better now. I suppose Katz threw me off a bit since he was using 24-bit streams to test it. My impression is still that the benefit isn’t in the source being 32-bit but rather the device itself seeing that Bob tested it that way.

Correct! Multi-path timbral and spatial improvement is perceptual even with 16-bit (CD) program. This is the really interesting thing about multi-path architecture: it’s bringing out perceptual improvement in places we didn’t expect.

This might shine some light onto the subject…

1 Like

If you are really interested in this digital tech stuff I strongly recommend a read of Principles of Digital Audio by Pohlmann:

2 Likes

To be clear, Monte’s tutorial covers basic Shannon/Nyquist theory, not multi-path architecture, which is a different conversation. You won’t read about it in Pohlmann, either.

1 Like

Quick question for @maxrottersman. Are you saying that the decibel meter in WaveLab is not a loudness meter but a bit depth meter? Or are you saying that they are the same?

I am not a stubborn guy, and I accept facts and logical reasoning. If you can prove that I am wrong, I am not ashamed to admit it. That’s how I learn and grow. By the way, yes, I am a techie, but I have never looked at the linear scale in WaveLab.

1 Like

Hi William. A decibel meter shows data in “decibels”. The data was recorded as integers. A decibel is a SCALE, not a NUMBER. (sorry for caps)

Unless you think there is such a thing as half an electron. The computer only puts them into float storage to handle large numbers efficiently (less power).

Wavelab’s “linear” scale is the voltage scale (integers). An ADC only produces one set of integers (even if they are later calculated into “float” number spaces). Therefore, whether in 16- or 24-bit, it should only show those original number values.

To answer your question, look at the numbers from the ADC. They never hit 24 bits of precision (signal). Others have said this, but they are drowned out by a sea of fantasy :wink:

Hi @maxrottersman. Thanks for your fantastic answer - A decibel meter shows data in “decibels”. Obviously I can’t disagree with your “answer”. If you check Wikipedia, a “decibel” is a relative unit. It can represent relative voltage, power, and many other things (such as loudness and bit depth), but only one thing at a time. I think there is no point in continuing the discussion. Let’s leave it here. Regards.

“They never hit 24-bits of precision (signal).”

Right. Not with single-path processing. Getting close, though. I’ve seen a single-path power amplifier that claims 140dB of dynamic range, just 4dB shy of 24-bits. But that’s probably “weighted” so really more like 22.5-bits.

With the coming of multi-path conversion, audio performance will climb well into the 28-bit range, due to dramatically lower quiescent self-noise. I think 2025-2030 will usher in 28-bit performance at the ADC, DAC, and power amp. Microphones may take a bit longer, where I foresee a patent fight between Yamaha, Sony, Nokia, and Analog Devices.

1 Like

I’ll be deaf by then :wink:

2 Likes

What would be the practical point, though? I … and I am sure many, many others … have done a #1 record at 16-bit 44.1. At the consumer streaming level, is it really going to make an appreciable difference? It will be interesting to see how these things actually test (as in Audio Science Review-type testing).