I’m looking for some general rules here, like the old standard: “Audio waveforms should generally take up the middle third of the track to ensure both good sound quality and adequate headroom.” That seems like a reasonable rule to follow, but I notice that with libraries such as the EW orchestras, when I record certain instruments (violin, oboe, flute, etc.) with a virtual instrument and then use “render in place,” the resulting waveform can be very thin, in some cases taking up less than 1/4 to 1/8 of the track’s height. I’m wondering if this leaves too much headroom at the expense of overall sound quality. The file sounds fine, but should I boost the gain on the virtual instrument to make the waveform “fatter” so it fills the center third of the track? Or should I just trust that the makers of the library generally took levels into account when recording, leave the rendered audio track as is (as long as it sounds reasonably good), and use the track’s gain and faders to adjust as needed when mixing? Thanks for any thoughts on this.
This notion is a bit antiquated; modern audio software, as well as AD/DA converters, can operate at much higher bit depths than the 16 bits that were available when such rules of thumb started to spread.
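For what it’s worth, here’s a quick back-of-the-envelope sketch (Python, using the usual ~6 dB-per-bit rule of thumb, not an exact spec) of why a “thin” waveform costs far less resolution at 24-bit or 32-bit float than it did back at 16-bit:

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Each bit of a fixed-point sample buys roughly 6.02 dB of dynamic range.
    return 20 * math.log10(2 ** bits)

print(f"16-bit fixed point: ~{dynamic_range_db(16):.0f} dB")  # ~96 dB
print(f"24-bit fixed point: ~{dynamic_range_db(24):.0f} dB")  # ~144 dB
# 32-bit float keeps roughly 24 bits of precision at *any* level, so a quiet
# waveform rendered well below full scale loses essentially nothing.
```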
I would say yes, just leave it. I also recommend setting up your projects with at least 32-bit floating-point precision.
As @mlib suggested: always render files that you will keep working with to a floating-point format like 32-bit float. Then any changes to the volume level are (kind of) non-destructive.
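A tiny numpy sketch (illustrative values only, not any DAW’s actual render path) of what “(kind of) non-destructive” means: drop a quiet signal by 40 dB and bring it back up, and the float copy comes back essentially unchanged, while a 16-bit fixed-point copy has already been quantised on the way down.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = (rng.standard_normal(48000) * 0.05).astype(np.float32)  # quiet source

gain_down = 10 ** (-40 / 20)   # -40 dB
gain_up   = 10 ** ( 40 / 20)   # +40 dB

# Float path: attenuate, then boost back up.
float_roundtrip = (signal * gain_down).astype(np.float32) * gain_up

# 16-bit fixed path: attenuate, quantise to int16, then boost back up.
int16_attenuated = np.round(signal * gain_down * 32767).astype(np.int16)
int16_roundtrip  = (int16_attenuated / 32767.0) * gain_up

print("float32 max error:", np.max(np.abs(float_roundtrip - signal)))  # ~1e-7
print("int16   max error:", np.max(np.abs(int16_roundtrip - signal)))  # ~1e-3
```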
That gives you the freedom to stop staring at the waveform and just listen to the audio. If it sounds right, it is right.