Practical difference between 16 bit and 24 bit

My goal is to understand the practical difference between different resolutions when converting an mp3 file to wav (during the import process).

Well, my condenser mic is a Behringer C-3 (not expensive at all) and the noise level is the same no matter the resolution. So why would I use 24 bit? See my point? Or maybe I’m missing something…

Thanks for contributing to my topic. If the noise of the mic, cable and interface is the same no matter which recording bit resolution I use, why would 24 bit make a difference? I’m not trying to argue. I just want to know if there’s any practical reason why I would choose a higher bit resolution than 16 bit (in my apparently “bottlenecked” case).

Interesting.

So is clean water. Doesn’t mean we should waste it on/with things we don’t really need. :innocent:
Thanks for taking the time to write so many lines!

For that I don’t think there is any need to go beyond 16 bit.

Some basic research on the topic of audio encoding will answer your question and let you see how a significant part of said answer is embedded within your question.

Well, for a coder, 24 bit allows you to describe an analog signal with higher precision. The numerical range for 16 bit is 0 to 65,535. The range for 24 bit is 0 to 16,777,215.
Now to put this into a practical demonstration, if you stand at the 0 yard line of a football field that is 100 yards long and you throw a football as far as you can, the resulting point of impact can be described much more accurately with the 24 bit ruler than the 16 bit ruler.
Imagine the ruler being stretched from the zero yard line to the 100 yard line. On one side the ruler has 65,536 marks. The other side of the ruler has 16,777,216 marks. You can see how 24 bits gets you much more precision. In football a couple of millimeters is no concern. In analog to digital conversion it makes a huge difference.
Hope that helps.
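If you want the ruler in actual numbers, here is a quick Python sketch (purely illustrative, assuming a full-scale range of -1.0 to +1.0):

```python
# Rough sketch of the "ruler": how many marks each bit depth gives you
# over a full-scale range of -1.0 .. +1.0, and how small the steps are.
for bits in (16, 24):
    levels = 2 ** bits          # number of marks on the ruler
    step = 2.0 / levels         # smallest step between two marks
    print(f"{bits} bit: {levels:,} levels, step size about {step:.3e}")
```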

This has nothing to do with bit resolution. It’s about bit depth.
24 bit allows for more dynamic range than 16 bit.

1 Like

That metaphor is sort of true, but it doesn’t answer what the practical consequence is. The practical consequence is a larger dynamic range due to a lower noise floor.
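To put rough numbers on that noise floor (just the usual back-of-the-envelope figure of about 6 dB per bit, not a statement about any particular converter):

```python
import math

# Theoretical noise floor of an ideal N-bit quantizer, relative to full scale:
# roughly 20*log10(2**N) dB below 0 dBFS, i.e. about 6 dB per bit.
for bits in (16, 24):
    floor_db = 20 * math.log10(2 ** bits)
    print(f"{bits} bit: noise floor about {floor_db:.0f} dB below full scale")
```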

1 Like

Hi! So what you’re saying is that “the ruler” (i.e. the bit depth—thanks, @Mike_McCormick, for correcting me) has the same length in both cases, the only difference being that the 24-bit side has smaller units? If yes, I don’t think this is true. And I’m saying this because in a 16-bit Cubase project, noise below -96 dB doesn’t get recorded. Test it yourself. I’m really curious to find out if you’ll get a different result. Read the “proof” part of this post (watch the GIF).
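Out of curiosity I mocked the same idea up outside of Cubase. This is only a toy model (plain rounding, no dither), not what Cubase actually does internally, but it shows why a signal around -100 dBFS has nowhere to go at 16 bit:

```python
import numpy as np

# Toy version of the test: a 1 kHz sine at about -100 dBFS,
# quantized to 16-bit and 24-bit integer values (no dither).
t = np.arange(48000) / 48000.0
amp = 10 ** (-100 / 20)                      # -100 dBFS
sig = amp * np.sin(2 * np.pi * 1000 * t)

q16 = np.round(sig * (2 ** 15 - 1))          # 16-bit grid
q24 = np.round(sig * (2 ** 23 - 1))          # 24-bit grid

print("non-zero samples at 16 bit:", np.count_nonzero(q16))   # 0 - it vanishes
print("non-zero samples at 24 bit:", np.count_nonzero(q24))   # plenty survive
```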

Difference… in sound? Nothing… coming in from an mp3, nothing at all; it will be similar regardless, depending on the signal level of the mp3 of course… You are just copying a bunch of data into a higher resolution copy anyway.

This knowledge has of course been blasted all over the net, YouTube etc., and Dan Worrall probably covers it more than enough… I couldn’t tell if you were just having fun with us… either way…

BUT

If you are using any sort of gain standardisation at e.g. -18 dB, then I have found (back in the day) it does matter, unless you dither properly, probably using something decent like MBit… The 64 bit mix engine in Cubase… meh… you can’t even overdrive the channel signal if you’re going back out at 32 bit anyway… so the arguments are a bit pointless tbh.

OTHERWISE

Just get used to the sound of frying bacon… you can hear it very nicely on old 8 and 12 bit digital synths… which I actually love and just use spectral editing to clean up (the venerable MT-32 is a full blown digital synth engine… it has some nice hidden gems actually and sounds so warm).

The OP’s question doesn’t have an objective answer… if it sounds good (to you), it is good. I still use old converters for different tracks… in particular the A/D on old Sony ES55 DATs is particularly cool, and the Roland S series (i.e. S550 etc.) does something magic… but it’s a sound, not a theory.

Cheers

In the same league… Fairlight CMI II : 8 bits/32 kHz


—> Peter Gabriel, Stevie Wonder, Herbie Hancock, Geoff Downes, Mike Oldfield, Alan Parsons…

:grin:

2 Likes

As a practical matter, the cases where 24 bit has a noticeable advantage over 16 bit are rare. In particular, if you recursively process an audio file, 24 bit has an advantage.

When recording, there is a chance you might benefit from this advantage, so it’s generally accepted as good practice to record at the higher bit depth, just in case you need it. The extra disk space required is tiny by today’s standards, so there’s little downside to using the larger bit depth.
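For what it’s worth, here is a small sketch of the kind of repeated processing being described. It just round-trips the same material through a fixed-point stage many times (no dither, purely illustrative) and compares the accumulated error at 16 bit vs 24 bit:

```python
import numpy as np

# Repeatedly apply a small gain change and re-quantize, as if the file
# were rendered, re-imported and rendered again many times over.
rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, 48000)                    # stand-in for some audio

def round_trips(signal, bits, passes=100, gain=0.99):
    scale = 2 ** (bits - 1) - 1
    out = signal.copy()
    for _ in range(passes):
        out = np.round(out * gain * scale) / scale   # process + re-quantize
    return out

ideal = x * (0.99 ** 100)                            # same gain, no quantization
for bits in (16, 24):
    err = np.max(np.abs(round_trips(x, bits) - ideal))
    print(f"{bits} bit: worst-case accumulated error {err:.2e}")
```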

1 Like

Thanks, but when you say “recursively process an audio file”, what do you mean exactly (practically speaking)? Why does a 24 bit audio event have an advantage over the other one? What is the advantage?

One post in this topic says that I have nothing to worry about when using digital audio in Cubase (which is something I tend to agree with).

As far as I know, losses can occur when rendering with plugins applied. The standard format for the final output is usually 16-bit. However, if you record and process in 24-bit, you have more headroom, allowing you to preserve sound quality better. When rendering, you can then apply dithering to optimize the quality when converting down to 16-bit. If you were to process and render everything directly in 16-bit, the final result might not reach the same quality.

It’s also advisable to use a higher sample rate than 44.1 kHz during processing and only downconvert to 16-bit and 44.1 kHz for the final render. In a pure 16-bit workflow, the final result, especially with intensive processing, could potentially be of lower quality. However, don’t take this as the ultimate truth, do some research on your own. There are books and many resources that cover this topic in detail. Depending on how deep you want to dive into this, you might come across different opinions. Ultimately, you have to listen and compare for yourself.
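Here is a rough sketch of the dithering step mentioned above, using simple TPDF noise before rounding to 16 bit; the level and details are only illustrative, and real dither plugins (UV22HR, MBit etc.) are more sophisticated:

```python
import numpy as np

# Minimal TPDF dither sketch: add triangular noise of +/- 1 LSB before
# rounding to 16 bit, so very quiet detail becomes noise instead of
# simply being truncated away.
rng = np.random.default_rng(0)

def to_16bit(x, dither):
    scaled = x * (2 ** 15 - 1)
    if dither:
        scaled = (scaled
                  + rng.uniform(-0.5, 0.5, x.shape)
                  + rng.uniform(-0.5, 0.5, x.shape))         # TPDF noise
    return np.round(scaled).astype(np.int16)

t = np.arange(48000) / 48000.0
quiet = 10 ** (-100 / 20) * np.sin(2 * np.pi * 440 * t)      # ~-100 dBFS tone

print("non-zero samples, truncated:", np.count_nonzero(to_16bit(quiet, False)))
print("non-zero samples, dithered :", np.count_nonzero(to_16bit(quiet, True)))
```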

2 Likes

So this is why Steinberg developers added this slider? So that people could better see the waveforms of 24-bit recordings (which I assume are at a lower level)? Audio editing is somewhat new to me. I mainly used Cubase with MIDI and VST instruments… These could all be rhetorical questions… The answer is obvious…
Thanks.

Interesting article.

1 Like

I like this article. It explains the bit resolution stuff quite well.
I would like to add one thing: The article seems to be taking the perspective of someone who mainly records audio and delivers a final master to clients (streaming platforms et al.). I think there are many people out there who do not record at all (they use either MIDI or samples from internet libraries). I also think that creating the final file is probably done least often compared to bouncing or rendering-in-place.
So, the article has a certain emphasis, while your own emphasis, dear reader, might be totally different.

Furthermore, there was one paragraph which was less well written than the others:

A word of warning

Something about 32 bit floating point that’s worth considering is this… Although you have the ability with 32 bit floating point processing to let your audio go past 0dBFS in your DAW without clipping, I still recommend that you treat 0dBFS as a ceiling. This is for two reasons.

Firstly, you may have older third party plugins that you use which don’t operate at 32 bit float. As such, these plugins will clip if the signal goes over 0dBFS.

Secondly, some plugins have a ‘sweet spot’ to consider. This generally occurs in plugins which are designed to model analogue equipment. As such, you will achieve different tonal characteristics based on the level at which you send signals into the plugin. Pushing really high levels into these kinds of plugins is not likely to result in the best sound, even if they operate at 32 bit float.

  • I don’t think there are any plugins left running on anything but the floating point format.
  • The sweet spot stuff is true for some plugins that are trying to replicate analog effects - but I think it is pure stupidity from the plugin developers to program stuff like that. Instead they could have a “sweet spot sound on/off” button and allow the sweet spot sound to be used at every input level.
  • Here are two reasons why everybody should use 0.0dBFS as a ceiling:
  1. Cubase’s level meters only show values up to 0.0dB.
  2. Effects like limiters, compressors and such often use a threshold parameter. Such a parameter deals with absolute dB values (unlike an EQ that deals with relative dB values). So if you’re driving your channel at +20dB, forget using your compressor’s threshold, as that only goes to 0dB.

People that use outboard effects also need to respect the 0.0dB ceiling, as the signal will be converted to a fixed point format when being sent to the audio interface.

Summary: If it weren’t for the stupid programming choices of plugin developers and the artificial restrictions of meters and threshold values (and old habits like staring at a VU meter), users that stay in-the-box would not need to worry about gain-staging and could purely mix by ear, without looking at meters at all.
Only the final fader has to bring the level back to below 0.0dBFS.
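To illustrate the fixed-point point above (a simplified model of the conversion on the way out to an interface, not any specific driver):

```python
import numpy as np

# 32-bit float happily carries samples above 0 dBFS (|x| > 1.0), but a
# conversion to fixed point has to clamp them, so the "overs" end up
# hard-clipped.
x = np.array([0.5, 1.0, 1.5, 2.0], dtype=np.float32)   # last two are over 0 dBFS

int24_max = 2 ** 23 - 1
fixed = np.clip(np.round(x * int24_max), -(2 ** 23), int24_max).astype(np.int32)
back = fixed / int24_max

print("float input      :", x)
print("after 24-bit trip:", back)        # 1.5 and 2.0 both come back as 1.0
```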

2 Likes

I don’t think it would work that way. The analog equipment these types of plugins are trying to emulate reacts dynamically to the incoming signal, much like a compressor or an overdrive: increase the signal and you get a greater effect. In fact, a lot of these analog emulations are doing just that, compressing and distorting. Although many plugins already have an “analog flavor” on/off switch, you still need the gain to drive that effect.
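A toy example of that level dependence (a plain tanh waveshaper standing in for an analogue-modelling plugin, not how any particular plugin works):

```python
import numpy as np

# The same waveshaper "colours" the signal more the harder it is driven,
# which is why gain staging changes the tone of analogue-style plugins.
def saturate(x, drive):
    return np.tanh(drive * x) / np.tanh(drive)     # keep peaks near +/- 1.0

t = np.arange(48000) / 48000.0
sine = np.sin(2 * np.pi * 100 * t)

for drive in (0.5, 2.0, 8.0):
    out = saturate(sine, drive)
    deviation = np.sqrt(np.mean((out - sine) ** 2))   # distance from the clean sine
    print(f"drive {drive:>3}: deviation from clean input {deviation:.3f}")
```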

Oh man. The wonderful world of software development, where you can create entire worlds out of nothing.
Unlike the world of electricity, where you are restricted by the laws of physics.

Free your mind.

I thought we were dealing with the world of audio (which is also restricted by the laws of physics).

Could you please give an example of how such a “sweet spot sound on/off” would work in practice?

I don’t think this would work - everybody’s sweet spot is in a different place.
(Sounds like a joke, but I’m serious. Think of the Three Bears’ porridge.)

“My goal is to understand the practical difference between different resolutions when converting an mp3 file to wav (during the import process).”

There’s actually no quality point to this, alright. The data is already lost with the original mp3 conversion. But Cubase will want to convert it to PCM WAV - 16 bit or 24 bit depending on your settings - I imagine so it can be uniform with the other files in the project and also so it’s in a better format for further processing.
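A small illustration of the “data is already lost” point (using a 16 bit round trip as a crude stand-in for the lossy mp3 stage, just to show the principle):

```python
import numpy as np

# Whatever resolution the imported copy is stored at, the detail thrown
# away by the earlier lossy stage does not come back.
rng = np.random.default_rng(0)
original = rng.uniform(-1.0, 1.0, 48000)

lossy = np.round(original * (2 ** 15 - 1)) / (2 ** 15 - 1)       # stand-in for the mp3
stored_24bit = np.round(lossy * (2 ** 23 - 1)) / (2 ** 23 - 1)   # imported as "24 bit"

print("difference added by the 24 bit container:",
      np.max(np.abs(stored_24bit - lossy)))       # negligible - nothing changes
print("difference from the true original       :",
      np.max(np.abs(stored_24bit - original)))    # the old loss is still there
```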

1 Like