Hi,
I was recording vocals the other night, and when I went to do a first pass at a mix, I encountered some distortion in the vocal track when I tried to raise the level in various ways. This led to some experimentation:
I notice that if I raise the level too high using the volume handle on an audio event, the result is distortion.
Distortion also results if I raise the pre-gain too much.
It seems that this distortion is occurring within Cubase, since it happens no matter what volume my speakers or interface are set to.
However, raising the fader to its maximum never causes distortion.
Is this observation correct? Why is this the case?
More importantly,
Can anyone suggest guidelines or best practices for avoiding clipping? I still find myself fumbling through this aspect of recording.
This can become a bit of a deep rabbit hole. Some quick starting points for any exploration you may want to undertake:
Cubase uses floating-point processing for its internal audio summing (fader movements), which avoids distortion during the mixing process.
Why that works takes a bit longer to explain than I currently have time/enthusiasm for - but if you search for the advantages of floating-point audio processing, you should quickly find myriad explanations.
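To make the floating-point idea a bit more concrete, here is a minimal Python sketch (purely illustrative, nothing Cubase-specific; the signal and names are made up for the example) showing why a 32-bit float mix can go over full scale without losing the waveform, while fixed-point audio clips:

```python
import numpy as np

# A test tone peaking at 0.5 (about -6 dBFS, with full scale = 1.0)
t = np.linspace(0, 1, 48000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)

# Sum four copies, like a mix bus would: peaks now reach 2.0 (above full scale)
mix_float = tone + tone + tone + tone
print("float peak:", mix_float.max())              # ~2.0 - over full scale, but nothing lost

# Pull the "fader" down afterwards and the waveform comes back intact
print("restored peak:", (mix_float * 0.25).max())  # ~0.5 again, no distortion

# 16-bit fixed point has a hard ceiling instead: the same sum gets sliced off flat
tone_int = (tone * 32767).astype(np.int32)
mix_int = np.clip(tone_int * 4, -32768, 32767)
print("int16 peak:", mix_int.max())                # pinned at 32767 - flat tops = distortion
```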
Try to read up on, or watch videos about, understanding and using compressors and/or limiters. Those are the most commonly used devices/plugins to control/avoid clipping.
The Cubase channel strip has both of those, and the audio FX plugins included with Cubase also have versions of them.
If I may follow up,
In the attached image, where the waveform is too fat for the borders, is that a visual indication that the volume of the audio event has been raised too high - to the level of possible distortion? I always assumed it was, but I’ve learned not to make assumptions.
The easiest way to avoid distortion on the way into Cubase is to look at the meter of your input channel. It should always stay well under 0 dB.
Adjust the pre-gain on your audio interface/mic preamp first, as well as the pre-gain of your input track if necessary.
Once the signal is distorted on the way in and recorded into Cubase, there’s no turning back. It’s always best to do a short test so you don’t run into this sort of problem.
The target level inside Cubase is a matter of personal taste: many people prefer it to be somewhere around -18 dB to meet the requirements of analog-gear emulations. Within the digital realm, the signal has the same detail at -30 dB as it does at 0 dB, so that doesn’t matter. Inside Cubase there is plenty of headroom above 0 dB, but you still need to keep it under 0 dB later on during mixdown. So keep that in mind as well.
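For reference, dB and linear amplitude relate as amplitude = 10^(dB/20), with 0 dBFS being full scale (1.0). A quick Python check of the numbers above (just the math, nothing Cubase-specific):

```python
import math

def dbfs_to_amplitude(db):
    """Convert dBFS to linear amplitude (0 dBFS = full scale = 1.0)."""
    return 10 ** (db / 20)

def amplitude_to_dbfs(amp):
    """Convert linear amplitude back to dBFS."""
    return 20 * math.log10(amp)

print(dbfs_to_amplitude(0))    # 1.0    -> digital full scale
print(dbfs_to_amplitude(-18))  # ~0.126 -> the common tracking target
print(dbfs_to_amplitude(-30))  # ~0.032 -> quieter, but in float just as detailed

# Boosting a -30 dBFS signal by 12 dB lands it at -18 dBFS exactly;
# in floating point only the scale changes, the detail stays the same.
print(amplitude_to_dbfs(dbfs_to_amplitude(-30) * dbfs_to_amplitude(12)))  # ~-18.0
```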
Compressors and limiters keep the signal at bay once it hits the threshold, but this will gradually introduce saturation and distortion. That is an effect which is welcome up to a certain point, depending on the sound you are looking for. Nonetheless, it is still a clever move to track a clean, non-distorted signal under 0 dB going into Cubase, so you keep all available options later on. Again, the meter is your friend.
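If it helps to see the threshold idea in code, here is a toy sketch of a hard limiter versus a soft saturator (just an illustration of the principle, not how the Cubase plugins actually work internally):

```python
import numpy as np

def hard_limit(x, threshold=0.7):
    """Flatten everything above the threshold: the level is kept at bay, but the tops are sliced off."""
    return np.clip(x, -threshold, threshold)

def soft_saturate(x):
    """tanh curve: gentle near zero, increasingly squashed near the top (saturation)."""
    return np.tanh(x)

t = np.linspace(0, 1, 48000, endpoint=False)
too_hot = 1.2 * np.sin(2 * np.pi * 220 * t)   # a signal that would clip at 0 dBFS

print(np.abs(hard_limit(too_hot)).max())      # 0.7   - held under the ceiling, flat tops
print(np.abs(soft_saturate(too_hot)).max())   # ~0.83 - rounded off rather than sliced
```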
It seems complicated at first, but it’s actually quite easy once you get your head wrapped around the concept.
You’ll get there.
Thank you for that comprehensive explanation.
There’s still something going on that I don’t understand. You can see from the image that the waveform is flattened, but why?
All the meters (stereo in, the audio channel in question) are well below zero, in the -18 range that you recommended.
The volume handle seems to automatically go to the highest setting. Even if I bring it all the way down before I start recording, it pops back up. I don’t have any automation. It’s confusing, because sometimes the waveform-too-big-for-the-box does seem to correlate with distortion, though not in this case.
I’ll look into the volume setting. I wasn’t aware of that parameter.
Update:
I just noticed that the vertical zoom in the Sample Editor is different from the vertical zoom in the timeline. In the timeline, the whole event rectangle gets bigger and smaller, and the waveform within it changes proportionally.
However, in the Sample Editor, only the waveform changes size, and the dB numbers on the left change proportionally. To make matters more confusing, if I select an audio event in the timeline and then roll the mouse wheel, the waveform changes size as it does in the Sample Editor when I adjust zoom; however, in this case it is also changing the volume.
And I still don’t know why, in the timeline, the waveform is exceeding the space allotted.
I think I understand what has been confusing me. There are two zooms, as shown in the image above. The one on the bottom makes the whole event bigger and smaller. The one on the top only controls the size of the waveform. However, in appearance it’s exactly the same as clicking on the audio event and rolling the mouse wheel, except that in that case the volume actually changes, whereas with the zoom control the volume doesn’t change.