Can I change bit depth halfway through a project?

Is it poss :question:

thanks, Kevin :slight_smile:

Why? If you’re trying to downsample things, use the BitCrusher insert.

Hi Larry, all my previous projects have been set at 44.1 kHz and 16 bit. I’ve just noticed that the current project I am working on is set at 24 bit (dunno how it got changed); I just wanted to change it back to 16 bit. :slight_smile:

cheers, Kevin

You can change it to 16-bit at any time, which means all the new audio you record will be 16-bit, but the existing audio files will still be 24-bit. You can, however, convert all the existing 24-bit files in the audio pool to 16-bit. It might be an idea to use “New & Replace in pool” and then delete the 24-bit versions once you are sure everything is OK.

There is no problem having mixed 24-bit and 16-bit audio in the same project; Cubase always works in 32-bit float internally anyway. But if you want everything in 16-bit, then you will need to convert the files. You can select them all and do it with one command.
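Just as an aside, and not a Cubase feature: if you ever wanted to script that batch conversion outside the DAW, something like Python’s soundfile library could do it. A rough sketch, assuming the library is installed and with made-up file names (note this does no dithering, unlike a proper 16-bit export with a dither plugin):

```python
# Hypothetical 24-bit -> 16-bit conversion of one file with the soundfile library.
import soundfile as sf

data, samplerate = sf.read("vocals_24bit.wav")                    # decoded to float
sf.write("vocals_16bit.wav", data, samplerate, subtype="PCM_16")  # re-written as 16-bit PCM
```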

That’s good to know. I had wondered about this myself some time ago when I noticed some of my song projects were started at a different bit depth, accidentally, from not paying attention when setting them up. When I am actually paying attention, I’ll always choose 24 bit.

thanks for the info …much appreciated…Kevin :slight_smile:

Noooooo…use 24-bit for the love of all things holy.

I can’t tell the difference between a track recorded at 24 bit and a track recorded at 16 bit… (and anyway, stuff usually gets dithered to 16 bit in the end, I believe). I’d love to blind test anyone who says they can… :slight_smile:
and for the purposes of yer average pop song (in my case, very average) I reckon it don’t really matter… but what do I know :frowning:

The music world managed fine before 24 bit was available, and this quote from old fecker says it all better than I could.

“The general music consuming public doesn’t buy tunes to listen to the sound, they buy them to listen to the music.

It is the music which engages them at an emotional level; the sound quality itself does not (obviously, the sound quality has to be at a sufficient level so as not to interfere with the emotional punch of the music).

MP3 and decades of FM radio have shown us one important thing: that the general music consuming public does not care about sound quality!

They only want music that is good enough quality to listen to. Anything beyond that would probably be a waste of time where most listeners are concerned.

Only engineers and audiophiles care about sound quality, but we are only a tiny fraction of the music buying population, and so we don’t really count in the final sales figures.” End of quote.

Maybe all I’ve said just demonstrates that I don’t really get what 24 bit can do for me, who knows :slight_smile:
Suppose what I’m trying to say is… the song is the important thing, not the sound quality, and as old fecker says… as long as it’s good enough… etc… etc… All the tracks on my SoundCloud page were recorded at 16 bit (bar one, recorded at 24 bit by accident) and they are perfectly listenable…

Kevin :slight_smile:

You shouldn’t if:

  1. You are recording close enough to 0 dBFS (peaks),
    and
  2. You are not recording an extremely dynamic performance in an extremely quiet room.

16 bits is enough for 90% of recordings (more like 99% in a home studio environment). Even 12 bits would be more than enough for all electric/amplified instruments.
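For a rough sense of the numbers behind that claim, here is a small sketch using the usual figure of about 6.02 dB of dynamic range per bit (the bit depths are just examples):

```python
# Theoretical dynamic range of an ideal N-bit converter (~6.02 dB per bit).
import math

for bits in (12, 16, 24):
    dr_db = 20 * math.log10(2 ** bits)        # equals bits * 6.0206 dB
    print(f"{bits:2d} bits -> {dr_db:6.1f} dB dynamic range")

# 12 bits ->   72.2 dB dynamic range
# 16 bits ->   96.3 dB dynamic range
# 24 bits ->  144.5 dB dynamic range
```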

This isn’t a good excuse. There are still situations where you have an advantage in recording at 24 bits even if the final result will be reduced to 16 bits.

PS: Didn’t we already talk about the phrase “dither to”?

:unamused:

If your signal resides between 0 dBFS and -96 dBFS, the signal (recorded information) is absolutely the same in 16-bit and 24-bit files/recordings. 100% alike.

0 dBFS is the same whether 1, 2 or 24 bit. You have to count from the top downwards.
One bit equals 6 dB of dynamic range: 16 bits x 6 = 96 dB dynamic range, 24 bits x 6 = 144 dB dynamic range.

One bit has only 2 values (on/off, 1/0).
2 x 2 x 2… 16 times is 100% the same whether you stop at 16 bits or take the first 16 bits out of 24.

The last 8 bits in a 24-bit word only add extended dynamic range, NO finer resolution (the biggest misunderstanding in digital audio).
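A little sketch of that point, with purely illustrative numbers and assuming plain truncation rather than a dithered conversion: the top 16 bits of a 24-bit sample come out as exactly the same code as a straight 16-bit quantisation.

```python
import math

def quantise(x, bits):
    """Truncate a sample in the range [-1.0, 1.0) to a signed integer code."""
    return math.floor(x * (1 << (bits - 1)))

sample = 0.123456789                        # arbitrary level, well above -96 dBFS

q16 = quantise(sample, 16)                  # straight 16-bit quantisation
q24 = quantise(sample, 24)                  # 24-bit quantisation
print(q16, q24 >> 8, q16 == q24 >> 8)       # -> 4045 4045 True (top 16 bits match)
```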

Sorry for my English.

PS. Please take a look at those Monty Montgomery (xiph.org) videos.

I’ve watched the xiph.org videos a few times.

My memory is hazy, but I distinctly remember a few projects that I had recorded in 16 bit that sounded incredibly brittle on the higher end frequencies. Perhaps it was my recording technique or signal path that was responsible for this, but when I re-recorded them in 24-bit the sound quality went way up.

Note that I’m talking about recording in 16-bit, not rendering to 16-bit. And, yes, I know that Cubase stores things as 32-bit float internally, but that’s how it is stored, not how the A/D conversion takes place prior to it being written to disk.

That was my first impression too, way back then, and after that I have always recorded at 32-bit float.

Is there a pseudo-plausible scientific explanation of that?

I thought the only difference between 24-bit and 16-bit recording was how soft a sound you could record before hitting the noise floor.

Practical application: in 24-bit recording, you don’t have to record quite so close to 0 dBFS to stay above the noise floor as you would in 16-bit recording. Staying away from 0 dBFS means the preamps and A/D converters don’t have to be driven so close to their maximum outputs, where maybe they’re not as accurate as farther down.
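Roughly the arithmetic I’m picturing (just a sketch, assuming about 6.02 dB per bit and, as an example, peaks recorded at -18 dBFS):

```python
# Distance between recorded peaks and the quantisation floor when 18 dB of
# headroom is left, assuming ~6.02 dB of dynamic range per bit (illustrative).
headroom_db = 18.0

for bits in (16, 24):
    dynamic_range_db = bits * 6.02
    print(f"{bits} bit: {dynamic_range_db - headroom_db:.0f} dB above the quantisation floor")

# 16 bit: 78 dB above the quantisation floor
# 24 bit: 126 dB above the quantisation floor
```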

OK, fire away - help me understand which parts of that may be wrong!

No it isn’t. Either quantisation distortion or the A/D converter’s dither noise (if it dithers) is more prominent with 16-bit recording. This is of course the case if and only if your analog chain (acoustic environment/mic/preamp/converter’s analog part) has a noise floor under -96 dBFS (after recording levels have been set). If it doesn’t (which is usually the case in a home/project studio), you can’t hear the difference anyway, because the quantisation distortion/dither noise is masked by other noises.
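To put rough numbers on that masking argument (a sketch; the -80 dBFS analogue noise floor is just an assumed example):

```python
# Power-sum of two uncorrelated noise floors: an assumed -80 dBFS analogue
# floor plus the roughly -96 dBFS quantisation floor of a 16-bit recording.
import math

def sum_noise_db(a_db, b_db):
    return 10 * math.log10(10 ** (a_db / 10) + 10 ** (b_db / 10))

print(round(sum_noise_db(-80.0, -96.0), 2))   # -> -79.89, i.e. the 16-bit floor adds ~0.1 dB
```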

You are absolutely correct. That’s it. Nothing less, nothing more.

I have heard this same thing all over the place and can’t understand it. OK, I have only studied the very basics of electronics engineering at university, but in my understanding it’s easier to make an A/D converter more linear at higher levels than at lower levels. In my opinion the only reason to design an A/D converter with a non-linear “top range” is to drive the analog circuit into its distortion zone near 0 dBFS in order to artificially get a greater signal-to-noise ratio in the system specifications.

(EDIT: and then there’s the dbx “Type IV” converter, which is non-linear at the top of its range in order to create “tape-like” saturation instead of clipping on high-level signals)

Please, someone with better knowledge of electronics engineering: correct me if I’m wrong.

The point of NOT pushing levels so hard is not about the digital domain, it’s about gain staging through your analogue chain, to avoid too much analogue distortion. That’s why pro audio gear is calibrated to 0 VU = +4 dBu = 1.23 volts (all RMS).
Converters have no sound (except bad filtering and artefacts), but they tend to be calibrated to the same level: the above-mentioned 0 VU = +4 dBu = 1.23 volts (all RMS) = -18 dBFS. This makes it easier to keep a good gain staging method throughout your recording chain, and also throughout the mixing chain in the world of good analogue-emulating plugins, which act “dynamically”.
Push them too hard and you’ll get distortion, just as with their analogue counterparts. So proper gain staging still applies in this digital day and age, IMO.
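For anyone who wants to check those calibration numbers, a quick sketch (dBu referenced to 0.7746 V RMS; the -18 dBFS alignment is the one mentioned above, which is one common convention rather than a universal standard):

```python
def dbu_to_volts(dbu):
    # dBu is referenced to 0.7746 V RMS (1 mW into 600 ohms)
    return 0.7746 * 10 ** (dbu / 20)

print(round(dbu_to_volts(4.0), 2))   # -> 1.23 (volts RMS at 0 VU = +4 dBu)

# If +4 dBu is aligned to -18 dBFS, then digital full scale corresponds to:
print(4.0 + 18.0)                    # -> 22.0 dBu at 0 dBFS
```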

You know what I mean :wink:. Yes, of course quantization errors or dithering come into play.
If I moderate myself and say that down to -90 dBFS, even the playback is the same (errors and dither are perfectly masked, and not audible in a normal audio production).

Remember that both analogue and digital audio technology have to be in place. No analogue involvement equals no sound in, no sound out. So in practice neither 16 vs 24 bit nor 44.1 vs 96k really matters in real-time recording or real-time playback (and processing).
There will always be a noise floor above -96 dBFS in a normal audio production (preamps, mics etc.), and the end dithering will mask quantization errors in low-level signals: reverb tails and fade-outs etc…

When it comes to a lot of OFFLINE processing, you will gain from using 24-bit or even 32-bit FP files.
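A tiny sketch of why that is, with a made-up processing chain: the same gain passes applied 120 times, once staying in floating point and once requantising to 16 bits after every pass (the step size and pass count are arbitrary):

```python
def q16(x):
    """Requantise a float sample to the nearest 16-bit step (no dither)."""
    return round(x * 32767) / 32767

down = 10 ** (-0.5 / 20)               # -0.5 dB gain step
up = 10 ** (0.5 / 20)                  # +0.5 dB gain step

x_float = x_fixed = 0.3                # arbitrary starting sample value
for gain in [down] * 60 + [up] * 60:   # 120 offline passes that cancel out overall
    x_float = x_float * gain           # stays in floating point the whole time
    x_fixed = q16(x_fixed * gain)      # rewritten as 16-bit after every pass

# The float chain comes back practically exact; the repeatedly requantised
# 16-bit chain ends up with a much larger accumulated error.
print(abs(x_float - 0.3), abs(x_fixed - 0.3))
```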

All that said: I mainly do my recordings in 24/44.1 (with some clients asking for 24/48 or 24/96).

PS. Are we talking about practical audio production here, or are we discussing theory?