Does 24-bit / 96 kHz matter in the box?

Hi guys!

I am a professional, long-time producer and have some questions I couldn't find answers to online…

Normally, and for the last 30 years, I have been producing/writing/mastering at 44.1 kHz/16-bit, and now 44.1 kHz/24-bit.

Now, after using new Adam speakers for 6 months, I find my sound a bit plastic compared to songs from other producers.
The Adams have very bright tweeters and are very revealing compared to my old Neumanns.

Could it be that mixing at 44.1 kHz is the reason for this thin and “plastic” sound?
It's not really harsh, but there is definitely something off in the top end.

Will higher sample rates give me a better sound with more definition, or is this just a weird feeling I have? It's all so subjective and personal…

Also, when I convert a big, finished project to 96 kHz, I find I can push my masters a lot harder, at least 1 dB more.

I use the Elevate plugin with 26 bands + clipping on my last stage.
I use clipping without oversampling after Elevate.

Somehow, it seems I can mix louder at 96 kHz…

Why is this? I thought sample rate was more for recordings?

Or shouldn't this matter at all?

My question relates to my own loud techno and dance music: everything is in the box, with some samples, vocals, and a LOT of distortion and clipping.

I also produce, mix, and master more commercial radio stuff…

Does anyone have a good opinion or explanation on this? How would 96 kHz matter for mixing? Or is it just for recording?

A link to another thread is also welcome!

Thanks cubie buddies!!

If you go down the rabbit hole of higher sample rates you'll possibly never come out of it. My take is that you can't really hear those high frequencies to begin with, especially after three decades of subjecting your ears to music, and since you can't hear them, you won't make different mixing decisions just because they're there.

The other case is a converter or a plugin that doesn't work that well at 44.1 kHz and works better at higher rates. It basically comes down to distorting the signal with content that folds back down into the audible range. Converters these days generally sound fine. It used to be that some sounded “better” at 88.2/96 kHz, but today they should sound about the same there as at 44.1/48, because all the filtering is done at a very high sample rate internally before down-sampling and delivering to the DAW.

Plugins can oversample if they need to. If they don't, and instead create unwanted artifacts/aliasing, you could maybe gain something by running at a higher sample rate. But then again, if you can hear the problem, you could just pick a better plugin.
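To make the “folds down into the audible range” point concrete, here is a small numpy sketch (purely my own illustration, with made-up numbers like a 10 kHz tone and a drive of 4, not anything a specific plugin does): it hard-clips the tone at 44.1 kHz and at 96 kHz and measures how much of the resulting distortion lands back inside the audible band.

```python
import numpy as np

def clipped_alias_ratio(fs, f0=10_000.0, drive=4.0, dur=1.0):
    """Hard-clip a sine at sample rate fs and return how much energy in the
    audible band (<= 20 kHz) is NOT the fundamental, i.e. folded-back distortion."""
    n = int(fs * dur)
    t = np.arange(n) / fs
    x = np.clip(drive * np.sin(2 * np.pi * f0 * t), -1.0, 1.0)   # the non-linear stage
    # Symmetric clipping of a sine only produces odd harmonics (3*f0, 5*f0, ...),
    # all above 20 kHz here, so any other in-band energy must be aliasing.
    spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, 1 / fs)
    audible = freqs <= 20_000
    fundamental = np.abs(freqs - f0) < 50
    return 10 * np.log10(spec[audible & ~fundamental].sum() / spec[audible].sum())

for fs in (44_100, 96_000):
    print(f"fs = {fs:6d} Hz -> aliased energy in the audible band: "
          f"{clipped_alias_ratio(fs):.1f} dB re. total")
```

At 44.1 kHz the clipping harmonics above ~22 kHz have nowhere to go and fold straight back below 20 kHz; at 96 kHz most of them land in the ultrasonic range and the folded-back part is noticeably lower.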

And one thing you definitely don't get is “more definition” in the sense that a 4 kHz sine wave is somehow better “defined” when recorded at 96 kHz. That's not how it works technically. You get an extended frequency range that you can represent, not better “definition” of what is captured.
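A quick way to convince yourself of that (again just a numpy/scipy sketch with arbitrary values): capture a 4 kHz tone at 44.1 kHz, resample it up to 96 kHz, and compare it with the same tone generated directly at 96 kHz.

```python
import numpy as np
from scipy.signal import resample

fs_lo, fs_hi, f0 = 44_100, 96_000, 4_000.0      # one second of a 4 kHz tone

x_lo = np.sin(2 * np.pi * f0 * np.arange(fs_lo) / fs_lo)   # "recorded" at 44.1 kHz
x_hi = np.sin(2 * np.pi * f0 * np.arange(fs_hi) / fs_hi)   # "recorded" at 96 kHz

# FFT-based resampling reconstructs the band-limited waveform on the 96 kHz grid.
x_up = resample(x_lo, fs_hi)

print("max difference:", np.max(np.abs(x_up - x_hi)))   # on the order of 1e-12: numerical noise
```

Everything about a 4 kHz tone is already in the 44.1 kHz capture; the higher rate only buys you room above ~22 kHz.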

The extra level you can push could be down to intersample peaks, or “True Peak”.

1 dB is nothing to the consumer though, so I wouldn't worry about it.
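For anyone curious what intersample peaks look like, here's a worst-case toy example (my own sketch using scipy, nothing specific to Elevate): the samples of this tone never land on the crest of the waveform, so a plain sample-peak meter under-reads by about 3 dB compared with a 4x-oversampled “True Peak” measurement.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs
# Worst case: a tone at fs/4 whose samples always sit 45 degrees away from the crest.
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)

sample_peak = np.max(np.abs(x))                      # what a plain peak meter sees
true_peak = np.max(np.abs(resample_poly(x, 4, 1)))   # 4x oversampled ("True Peak") estimate

to_db = lambda v: 20 * np.log10(v)
print(f"sample peak: {to_db(sample_peak):+.2f} dBFS   true peak: {to_db(true_peak):+.2f} dBFS")
# Roughly -3 dBFS vs 0 dBFS: the reconstructed waveform peaks well above the samples.
```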

thanks!

I really had the misconception that higher sample rates (like bit depth) give you more headroom for plugins to work with; it's all so complicated.
My CPU also went through the roof after opening big, maxed-out 44.1 projects at 96 kHz, hehe.
I have a pretty high-quality RME UFX II interface (bought new last year).

What sample rate do you use?

And for what kind of projects, if I may ask? :slight_smile:

1 Like

I do post-production (mainly TV) so the standard numbers are 24-bit fixed and 48kHz.

I typically get annoyed if anyone ever gives me anything other than that. 16-bit in post-production is often too low in practice because people don’t know how to record properly, and there is zero reason for higher sample rates - all it does is either screw up AAF exports from their editing software or at best slow me down.

Maybe interesting if you didn't already know this, but a lot of post engineers are pretty brutal about how narrow the frequency range of the human voice really is. Whatever 96 kHz would provide in actual higher frequencies would have zero chance of being reproduced in most of the work I do, I bet. I think the Atmos spec only guarantees that main speakers reproduce up to 16 kHz (±3 dB). So for a lot of us, content at 30 kHz or above is completely meaningless (for delivery).

I think the exception is probably sound designers, where there might be a reason to record at really high rates to then get different results when dropping the playback rate, or maybe situations where money is no object and the playback rigs are powerful enough to do the mix at high rates and then deliver 48 kHz anyway. And maybe it'll change eventually for most people, but for now 48 kHz is all I see and all I'm asked for.

1 Like

As I understand it, the main justification for using sample rates beyond 48 kHz (such as 88.2 kHz, 96 kHz etc.) is not to make use of extremely high frequencies that are beyond our audible threshold. Instead, higher rates are used to prevent aliasing artifacts when using non-linear processes such as saturation, compression, distortion etc.
However, the downside is that these high sample rates create much larger file sizes, which in turn increases the processing load on your hardware.
For me, I compromise by working at 48 kHz/24-bit, and I deliver most jobs at that spec as well. Some jobs I deliver at 44.1 kHz/16-bit, and for those I downsample and dither as the very final process.
There are some interesting uses for 32-bit floating point recordings, especially when digital clipping could be an issue during recording. However, I have yet to have a use for that format.
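As a toy illustration of the 32-bit float point (my own sketch; it only models the clip at 0 dBFS, not the actual file formats): a take that goes 6 dB over full scale is permanently flat-topped in a fixed-point capture, but a float capture survives a later gain drop intact.

```python
import numpy as np

# A take that overshoots full scale by 6 dB, as if the preamp gain was set too hot.
t = np.arange(48_000) / 48_000
hot = 2.0 * np.sin(2 * np.pi * 100.0 * t)

fixed = np.clip(hot, -1.0, 1.0)          # fixed-point capture: overs are clipped for good
floating = hot.astype(np.float32)        # 32-bit float capture: values above 1.0 are kept

target = hot * 0.5                       # what the take should look like after -6 dB
print("fixed-point error after -6 dB:", np.max(np.abs(fixed * 0.5 - target)))       # ~0.5: flat-topped
print("32-bit float error after -6 dB:", np.max(np.abs(floating * 0.5 - target)))   # ~1e-7: intact
```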


As for your mixes having a brittle top end, my first suspicions would be that 1) your processing generates audible aliasing artifacts, or 2) the clipping you are using is creating audible square waves.

  1. To prevent audible aliasing, apply subtle low-pass filtering, or drop the top end with a high-shelving EQ, on the signal BEFORE the non-linear processing. Also enable 2x oversampling on the plug-ins; this goes a long way toward keeping audible aliasing artifacts from being generated (there's a rough sketch of the oversampling idea after this list).
  2. It's usually best to only clip percussive signals and never clip the “meat” or sustain phase of a signal. That's not to say nobody would ever want to do that, of course. YMMV.
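A rough numpy/scipy sketch of what that oversampling switch is doing (arbitrary test tone and drive values of my own, not any particular plug-in): clip at the native rate versus clip at 2x and band-limit back down, then compare each against an essentially alias-free render made at 8x.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 9_000 * t)            # a bright source hitting the clipper hard

def hard_clip(sig, drive=4.0):
    return np.clip(drive * sig, -1.0, 1.0)

# Essentially alias-free reference: clip at 8x the rate, then band-limit back down.
reference = resample_poly(hard_clip(resample_poly(x, 8, 1)), 1, 8)

naive = hard_clip(x)                                            # clip straight at 44.1 kHz
two_x = resample_poly(hard_clip(resample_poly(x, 2, 1)), 1, 2)  # the "2x oversampling" switch

def residual_db(y):
    err = y - reference               # what differs from the clean render is mostly aliasing
    return 10 * np.log10(np.sum(err ** 2) / np.sum(reference ** 2))

print(f"no oversampling : residual vs clean render = {residual_db(naive):6.1f} dB")
print(f"2x oversampling : residual vs clean render = {residual_db(two_x):6.1f} dB")
```

The low-pass / high-shelf suggestion attacks the same problem from the other side: less energy up top going into the clipper means fewer harmonics landing above Nyquist that can fold back down.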
3 Likes

Thanks, I will try this :slight_smile:

Usually I don't use oversampling on clippers because it sounds better to me on single tracks, like drums/percussive/transient stuff.

Also, it just sounds louder to me, and I win 1 or 2 dB when I don't oversample :slight_smile:

Because I use so much clipping, it could be what you just described.
On almost every channel or stage I have some kind of clipper, plus on sub-groups again, and then again at mastering.
So that was my original idea and feeling too…

On single tracks these clippers all sound good and don't “distort” the audible part of the sound; most of the time they just cut off a few dB.
Sometimes they even add punch and warmth, you know what I mean…

Together, though, it looks like it's too much for the signal as a whole and all the aliasing starts to add up or something, giving me this harsh sound in the high end, amplified by the ridiculous ribbon tweeters of the Adams :slight_smile:

- Maybe I'm going to write a clipper that has adjustable LP and HP filters built in; I use Waves StudioRack in parallel for that most of the time.
- PeakEater is a great open-source GitHub project that could maybe do that…
- I think the stock Distroyer has LP and HP, but I never used it to clip because it has no real-time graphics.
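For what it's worth, here is an offline numpy/scipy sketch of that kind of clipper with the HP/LP built in (just an illustration of the signal flow, with made-up cutoffs and a plain hard clip; a real-time plug-in is obviously a different beast):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def clip_with_filters(x, fs, ceiling_db=-0.3, hp_hz=None, lp_hz=None, order=2):
    """Sketch of a clipper with adjustable HP/LP filtering on the way into the clip stage.

    x           : mono float signal, nominally within +/- 1.0
    ceiling_db  : hard-clip ceiling in dBFS
    hp_hz/lp_hz : optional high-pass / low-pass cutoffs applied before clipping
    """
    y = np.asarray(x, dtype=np.float64)
    if hp_hz is not None:
        y = sosfilt(butter(order, hp_hz, btype="high", fs=fs, output="sos"), y)
    if lp_hz is not None:
        y = sosfilt(butter(order, lp_hz, btype="low", fs=fs, output="sos"), y)
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(y, -ceiling, ceiling)

# Example: tame sub rumble and the extreme top before clipping a toy kick-style signal.
fs = 44_100
t = np.arange(fs) / fs
kick = 1.2 * np.sin(2 * np.pi * 60 * t) * np.exp(-4 * t)
out = clip_with_filters(kick, fs, ceiling_db=-0.3, hp_hz=30, lp_hz=16_000)
print("peak in:", round(np.max(np.abs(kick)), 3), " peak out:", round(np.max(np.abs(out)), 3))
```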