Vocals too hot. How?

For the first time this Sunday, I recorded a vocalist other than myself. She has a robust musical-theater voice and generally sings much louder than I do, and I had trouble controlling a signal that ended up clipping, as you can see in the picture. (I think clipping is the term for the flattened-out part at the top of the audio wave. If not, please let me know.)
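(As an aside, here is a tiny made-up NumPy sketch of what I mean by the flattened-out tops — just an illustration of hard clipping at full scale, nothing from my actual session:)

```python
# Made-up illustration: pushing a signal past digital full scale
# (1.0 in float, i.e. 0 dBFS) flattens the tops of the wave.
import numpy as np

t = np.linspace(0, 0.01, 441)              # 10 ms at 44.1 kHz
tone = np.sin(2 * np.pi * 440 * t)         # a plain 440 Hz sine
too_hot = 2.0 * tone                       # gained roughly 6 dB too hot
clipped = np.clip(too_hot, -1.0, 1.0)      # nothing can be stored above full scale

print(np.max(too_hot))                     # ~2.0 (can't exist in the file)
print(np.max(clipped))                     # 1.0  (the flat top you see)
print(np.sum(np.isclose(np.abs(clipped), 1.0)))  # how many samples got pinned
```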

I don’t understand how it ended up happening, because at no stage did I see the signal going too high. The meters on my PreSonus audio interface were in the green. The audio input was well below zero, more like in the -18 to -12 range, and the meter on the vocal channel was also around the same. I would think that if the signal were going that high, there would be a meter to let me know.

I am using a Windows 10 desktop, Cubase Artist 10.5, and a Townsend Sphere microphone.

And that’s another thing: when I was calibrating the microphone, it kept telling me that the signal was too low!

I hope someone can tell me where I’m going wrong.
Much thanks,
David

Hi,

Is the signal really clipping? Isn’t the waveform just zoomed in? Can you see the clipping on the meters? Can you hear it?

Please zoom into the problematic areas very closely. Then you can best see whether clipping is present at that point.

And then the question is whether you can hear the (possibly existing) clipping at all. Because if not, you don’t need to worry about the chopped waveform.

Yes I can hear it.
Specifically, what I hear is an unpleasant harshness which is not present normally.
I’m not at my computer now but I will zoom in as you asked.
But I think if you look at the lower waveform around measure 31, beat three, you can see the flat top, no?

Update:
Listening to it again, I think there is a harshness to her voice whenever it gets loud, even when it doesn’t clip, so you may be right, and the problem might be something else.

But it still puzzles me why the waveform should end up so big when it wasn’t showing on the meters…

If anyone saw that extra word at the end, my profuse apologies; that was voice recognition misreading my speech.

Here is a zoomed-in section.

Do you have a compressor or limiter on the interface? Before the interface? On the Input Channel that is connected to the track? Because it looks like the signal is being limited to a certain threshold well below 0. (You would get peaks in the meters otherwise, right?)
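Just to illustrate what I mean, here is a rough sketch with a made-up -6 dB threshold: clamping the signal somewhere below 0 dB produces the same flat tops as clipping, but the peak meter never reaches 0, so no clip indicator lights up.

```python
# Rough sketch only; the -6 dB threshold is made up for illustration.
import numpy as np

def db_to_lin(db):
    return 10 ** (db / 20.0)

t = np.linspace(0, 0.01, 441)
vocal = 0.9 * np.sin(2 * np.pi * 220 * t)        # a healthy, non-clipping take
threshold = db_to_lin(-6.0)                      # ~0.5 in linear terms
limited = np.clip(vocal, -threshold, threshold)  # brick-wall "limiting" below full scale

peak_dbfs = 20 * np.log10(np.max(np.abs(limited)))
print(round(peak_dbfs, 2))                       # -6.0 dBFS: flat tops, but the meter never hits 0
```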

No. The interface doesn’t have a limiter or a compressor.
The vocal channel is sent to a group channel which has a compressor on it, and then that is sent to stereo out, which also has a compressor on it. I think I did this after the fact in an attempt to tame the vocals, but that wouldn’t have anything to do with the signal written on the audio track, would it?

No. What routing happens after the recorded track is of no interest. The only way this can happen is if there are insert effects on the Input Channel from which the track gets the signal. Can you check your input channels for any stray insert plug-ins?

There’s no compression on the input channel, although I suppose it’s possible that I had compression on during the recording and removed it, but I really doubt it.

You’re right about the signal flattening before zero. Looking in the audio editor, it looks like it’s flattening at -6.

  • Was there any PreGain on that channel in Cubase at the time of recording?
  • Can you run the Statistics (Audio menu) on that clip and post them here?

Title: Statistics - “Hallie Vox_22”
Date: Wednesday, October 19, 2022
Sample Rate: 44.100 kHz
Average RMS (AES-17) Left: -38.79 dB
Average RMS (AES-17) Right: -40.64 dB
Max. RMS Left: -19.76 dB
Max. RMS Right: -21.61 dB
Max. RMS: -19.76 dB
Min. Sample Value Left: -4.77 dB
Min. Sample Value Right: -5.05 dB
Max. Sample Value Left: -4.77 dB
Max. Sample Value Right: -4.77 dB
Peak Amplitude Left: -4.77 dB
Peak Amplitude Right: -4.77 dB
True Peak Left: -4.62 dB
True Peak Right: -4.65 dB
DC Offset Left: -∞ dB
DC Offset Right: -∞ dB
Bit Depth Left: 24 bit
Bit Depth Right: 24 bit
Estimated Pitch Left: 1287.7 Hz / E5
Estimated Pitch Right: 1284.8 Hz / E5

When you say PreGain, do you mean on the vocal channel? It’s at about 11 o’clock.

Ok. Let me ask something else. From a quick look, it seems that the microphone you are using has a plug-in that changes its characteristics. Now, I’m wondering: Is this plug-in supposed to go in the input channel, so that you’ve made your decisions on the sound beforehand and just record the wanted sound to the track?

Or is it supposed to go on the track, so that you can change the already recorded material after the fact to your liking?

(And if that’s the case, isn’t what we’re seeing on the track completely different from what we’re listening to after the plug-in? I’m just thinking aloud here, sorry but I don’t have experience with this microphone.)

You surmise correctly. The Townsend Sphere microphone is supposed to emulate other microphones. It takes the signals from a stereo mic and does something with them. You have to run the signal through a plug-in that appears as an insert. They recommend putting it on the track channel so that you can change it later, which is what I did, so the recorded audio should be the raw signal, as far as I understand.

So there may be ways of softening the vocals using the plug-in. You can adjust the pickup pattern, filter, and axis, with quite a bit of flexibility. I don’t have much experience with microphones, so I am open to suggestions.

Ah, ok, then it’s simple.

You can do a render in place of your stuff, so that the produced event (and waveform) would truly show what’s going on. Because now, we’re looking at what the microphone has recorded, but we’re not seeing the effects of the plug-in on the waveform.

True Peaks are at -4.62 dB and -4.65 dB, so there is no clipping on the audio file in Cubase. This recording has already been limited on the way in. Please thoroughly check the signal path from the microphone to the exit of the audio interface (by “exit” I mean the point where the audio interface driver hands the audio over to Cubase).
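(For reference, here is a quick conversion of those peak figures to linear sample values, just to show how far below full scale the file actually sits:)

```python
# Quick conversion of the reported peaks from dB to linear sample values.
for label, db in [("Peak Amplitude", -4.77), ("True Peak L", -4.62), ("True Peak R", -4.65)]:
    print(label, round(10 ** (db / 20.0), 3))
# Peak Amplitude 0.577   -> peaks at roughly 58% of full scale
# True Peak L 0.587
# True Peak R 0.585
```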

With PreGain I meant this setting either in your MixConsole or in the Channel Settings. But having seen the stats I don’t think PreGain is relevant here.

That’s interesting. What sort of things would you be looking for?

I would check for clipping on the renders, to see if the tweaking of polar patterns and microphone characteristics produces unwanted effects.
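(If you wanted to check a render outside of Cubase, something like this rough sketch would do it; the filename is hypothetical and it assumes the third-party soundfile library:)

```python
# Rough sketch: measure the peak of a rendered file and count samples pinned at full scale.
# "render.wav" is a hypothetical filename; soundfile is a third-party library (pip install soundfile).
import numpy as np
import soundfile as sf

audio, sr = sf.read("render.wav")             # float samples, nominally within [-1.0, 1.0]
peak_dbfs = 20 * np.log10(np.max(np.abs(audio)))
pinned = int(np.sum(np.abs(audio) >= 0.999))  # samples sitting at (or right at) full scale
print(round(peak_dbfs, 2), "dBFS peak,", pinned, "samples at full scale")
```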

Have you tried using the plug-in as a Direct Offline Process? I think DOPs affect the waveform display, so you would get a faster overview without having to render-in-place or bounce. Why don’t you give it a try? On a different project (for safety), record something, then select the event, hit F7, select your microphone’s plug-in, do some tweaking, hit apply, see what the waveform does. (If you’re using veeeery long events, you can cut a smaller event and apply the DOP on that small part so that it goes faster.)

If the microphone is designed to be used with the plug-in, what we’re seeing on the track is deceptive. I don’t know if there’s a “vanilla” mode where we’d just plug the microphone into a live mixer, for example, so that what we see is what we get, with whatever characteristics the microphone has on its own. But if using the plug-in is practically mandatory, AND we need to place it on the track, then we need to evaluate the waveform AFTER the plug-in.

First, I really appreciate your help here.

Some of the things you’re talking about are a bit beyond my current knowledge, though I’m willing to try. I’ve never done a render in place, and I’m not familiar with Direct Offline Processing.

With the Sphere microphone, you can bypass the emulations altogether, in which case it’s supposed to be a pretty good-sounding reference mic. So I’m thinking I’ll try that and see how it sounds.

I’m still really mystified as to how limiting got in there. I did check the whole signal path as best I know how:
Input channel, to track channel, to group channel, to stereo out.


Render in place takes your raw event and any plug-ins down the signal path that affect the sound and “repacks” the complete sound into a new wave file that, when played dry on a blank track, will sound identical to the original signal chain. This allows you to evaluate the waveform visually from an updated viewpoint, as ANY impact caused by plug-ins will be visible on the waveform. By contrast, when we have an event and then 16 insert plug-ins doing mad stuff to the sound, the event on the track only shows what we started with, and none of the madness.
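(A loose code analogue of the idea, with hypothetical filenames and a stand-in effect, just to show why the new file’s waveform reflects the processing:)

```python
# Loose analogue of render in place (not Cubase's actual machinery):
# the processing gets baked into a new file, so that file's waveform shows the processed signal.
import numpy as np
import soundfile as sf   # third-party library; the filenames below are hypothetical

audio, sr = sf.read("raw_take.wav")            # what the track currently displays
processed = np.tanh(3.0 * audio)               # stand-in for whatever the inserts are doing
sf.write("rendered_take.wav", processed, sr)   # the rendered file's waveform now shows the "madness"
```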

DOP is another simple idea. Instead of putting plug-ins in the track’s insert slots, we simply select the event we want to affect and apply the plug-in there and then. But when we apply DOPs, the waveform updates itself to correctly display the effect of the plug-in. (So it’s faster to do tests with the microphone instead of doing renders all the time.)

Yes, that’s a good idea. Oh, another thing sprang to mind that needs checking: when using the plug-in to make changes, make sure that your meters in the MixConsole are set to post-panner. That way you’ll actually measure the signal after the plug-in, where clipping might be occurring.
