Relating EQ settings to harmonic elements in tracks and the mix?

Greetings,

Re: EQ, Harmonics, Timbre:

When EQing tracks, how much, if at all, do you take into account an instrument’s pitches and its range of timbral emphasis? The new Frequency EQ has a lot of features and makes it very easy to assign bands by pitch, but this isn’t about using that plug-in per se.

We know that the tone color, or timbre, of any musical instrument (including voices and non-tonal percussion) can be characterized by its spectrum of harmonics, or overtones, and by how those are emphasized in the instrument. So, when you’re EQing, what sort of consideration do you give to harmonic content? Does it guide your EQing process?

I’d like to improve my EQing, and I think relating it more to musical considerations is what’s needed. For example, if a song is in A major, obviously we have a lot of information along the tonic octaves – 110Hz, 220Hz, 440Hz, etc. – so would you discuss looking at, say, odd or even harmonics when tone shaping, whether for individual tracks or for the mix? There are basic orchestral frequency-range charts we can refer to (example below). I am getting some pleasing sounds with complementary EQing – e.g., on two guitar tracks, cuts and boosts that complement each other and hence make each more distinct – but I’d like to get better (I know it will take time). I’m hoping to evolve a general approach to EQ involving musical values, whether for an individual instrument or a full mix. I’m not asking for particular favorite sounds or settings or equipment or “tricks,” but however you want to answer is cool. Thanks.

Frequency Distribution Chart
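To make the tonic-octave idea a bit more concrete, here’s a rough Python sketch I put together (nothing Cubase-specific, just the textbook math, assuming A4 = 440Hz) that lists the harmonic series of A2 and the nearest equal-tempered note for each partial:

```python
import math

A4 = 440.0  # reference tuning (an assumption)
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    """Return (note name with octave, cents offset) for the closest equal-tempered pitch."""
    midi = 69 + 12 * math.log2(freq_hz / A4)
    nearest = round(midi)
    cents = (midi - nearest) * 100
    return f"{NOTE_NAMES[nearest % 12]}{nearest // 12 - 1}", cents

fundamental = 110.0  # A2
for n in range(1, 11):  # first ten harmonics
    f = fundamental * n
    name, cents = nearest_note(f)
    kind = "odd" if n % 2 else "even"
    print(f"harmonic {n:2d} ({kind:4s}): {f:7.1f} Hz ~ {name} ({cents:+.0f} cents)")
```

Harmonics 1, 2, 4 and 8 sit on the tonic octaves, 3 and 6 land near the fifth (E), and the 7th comes out roughly 30 cents flat of G – so even within a single note, only some of the energy lines up neatly with the key.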

Take care. :slight_smile:

In my experience it’s not really simple at all to take the approach you’re talking about, but fortunately – I think – the solution is far, far easier. Let me start with why it’s difficult:

Imagine that you have an instrument with a very wide range of frequencies, a grand piano for example. As you can see from your chart it’ll cover about 30Hz to 15kHz (roughly). Now, the overtones are by definition related to the fundamental, and that makes them “dynamic” in the sense that they “move” in relation to the fundamental frequency that is played. The first overtone of a 100Hz note is 200Hz, but the first overtone of a 500Hz note is 1kHz.

An EQ however is “static”, in the sense that it doesn’t move in relation to the frequencies it is processing. If I set my EQ to cut at 1kHz, that will affect the first overtone of a 500Hz note but a completely different part of the spectrum of a 100Hz note. So in my experience it doesn’t really make much sense to think about it that way (except in rare cases). For an EQ to ‘follow’ the notes you pretty much have to automate it. Then again, that’s also what we do, only indirectly, by lowering the volume of instruments that get in the way.
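Just to put numbers on what ‘following’ would actually mean, here’s a minimal sketch (plain Python, and the note list is made up) of the per-note EQ targets you’d otherwise have to automate by hand:

```python
A4 = 440.0

def note_to_freq(midi_note):
    """Equal-tempered frequency for a MIDI note number (69 = A4 = 440 Hz)."""
    return A4 * 2 ** ((midi_note - 69) / 12)

# Hypothetical passage: MIDI note numbers of the part the EQ would have to track.
track_notes = [45, 52, 57, 60, 64]  # A2, E3, A3, C4, E4

HARMONIC_TO_TAME = 2  # say we want to dip the 2nd harmonic of every note
for note in track_notes:
    fundamental = note_to_freq(note)
    target = fundamental * HARMONIC_TO_TAME
    print(f"note {note}: fundamental {fundamental:6.1f} Hz -> EQ band would need to move to {target:6.1f} Hz")
```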

I think the easier “approach” is just to use your ears and sweep the EQ until you get what you think sounds good. You can do it by adding content to an instrument, or by cutting into another to make space. It’s obviously pretty unscientific, but once you’re used to it it’s fast enough, not to mention an approach tested and proven over decades by great engineers.

Another issue is that EQs add phase smear, so it’s not that simple to dial in an EQ with a super tight Q and get a nice boost/cut of a particular frequency across a mix, or even instruments. It may be fine for a few, but once it adds up it can get nasty. On top of that it’s one of those things that one can get used to while working only to come back the next day with fresh ears and go “uh-oh, what did I do!?”…
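If you want to see the kind of phase behaviour I mean, here’s a rough illustration (it assumes numpy/scipy and uses a textbook RBJ-style peaking biquad, not any particular Cubase EQ) that prints the phase shift a narrow +6dB bell at 1kHz introduces around its centre:

```python
import numpy as np
from scipy.signal import freqz

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ cookbook peaking-EQ coefficients (b, a), normalised so a[0] == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
b, a = peaking_biquad(f0=1000, gain_db=6, q=8, fs=fs)  # a narrow +6 dB bell

freqs = np.array([500, 800, 950, 1000, 1050, 1250, 2000], dtype=float)
_, h = freqz(b, a, worN=freqs, fs=fs)
for f, resp in zip(freqs, h):
    print(f"{f:6.0f} Hz: {20 * np.log10(abs(resp)):+5.2f} dB, phase {np.degrees(np.angle(resp)):+6.1f} deg")
```

Stack a few of those tight bells across a mix and the phase offsets around each centre frequency add up – that’s the smear. Linear-phase modes avoid it, but at the cost of latency and pre-ringing.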

On that second consideration, where you’re looking at a key, I would argue that it still doesn’t really make sense, at least not practically. In this case the overtone structure isn’t based on the key itself but on the notes and chords being played – and those move around in most music, just like the instrument notes I mentioned above. So even if you’re in A major and are looking at working on overtones based on that key, there will be different relationships for the very same frequencies depending on what chord is producing them. Just like 1kHz has a particular relationship to 500Hz if the latter is the fundamental, it will have a different relationship to a 175Hz fundamental. Add to that key changes, which can occur very briefly – it’s totally possible to modulate to another key for only a bar or two.
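As a quick sanity check on those ratios (just arithmetic, nothing EQ-specific):

```python
import math

def interval_in_cents(ratio):
    """Size of a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

EQ_FREQ = 1000.0  # the same static EQ band as before
for fundamental in (500.0, 175.0):
    ratio = EQ_FREQ / fundamental
    nearest_harmonic = round(ratio)
    offset = interval_in_cents(ratio / nearest_harmonic)
    print(f"1 kHz over a {fundamental:5.1f} Hz fundamental: ratio {ratio:.2f} "
          f"(nearest harmonic {nearest_harmonic}, off by {offset:+.0f} cents)")
```

Over 500Hz, 1kHz is exactly the 2nd harmonic; over 175Hz it sits almost a semitone away from the 6th harmonic, so the same EQ band means something entirely different depending on the chord.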

The one instance when it’s necessary to pay attention to this is if you’re facing either a bad arrangement/composition or a bad instrument. For example, I’ve had issues where a recorded drum is slightly off pitch relative to the key of the song. To many people’s ears the result might not be that something sounds out of tune, but instead that the mix is ‘dense’. I’d say this happens especially with low-frequency notes, where it’s harder to hear pitch. In those cases it makes total sense to EQ out things that aren’t in the key or chord you’re in, although often it’s also possible to just use brute force and pitch-shift the instrument causing the problem.

Anyway… those are my thoughts…

Someone (Alexis I think?) posted about an EQ plugin a good while ago that had a key-tracking algorithm.
I’m guessing that only works properly on monophonic material, but that does what you’re after.
As Mattias says, doing that manually is the only other way and that’s very labour intensive. I only ever take this approach when I have to compensate for resonant frequencies in an instrument or room. These fundamentals and their overtones are always the same, so they can fairly easily be EQ’ed.
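For that fixed-resonance case, here’s a hedged sketch of why a static narrow cut is enough (the 240Hz resonance and the Q are made-up numbers, and scipy’s iirnotch is just a stand-in for whichever EQ you’d actually reach for):

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000
RESONANCE_HZ = 240.0   # hypothetical room/instrument resonance that never moves
NOTCH_Q = 12           # narrow, because the target frequency is fixed

b, a = iirnotch(RESONANCE_HZ, NOTCH_Q, fs=fs)

# Demo signal: the resonance plus some unrelated content at 1 kHz.
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * RESONANCE_HZ * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
y = lfilter(b, a, x)

def rms_db(sig):
    return 20 * np.log10(np.sqrt(np.mean(sig ** 2)))

print(f"RMS before notch: {rms_db(x):+.1f} dB, after notch: {rms_db(y):+.1f} dB")
```

Because the resonance never moves, there’s nothing to automate – one static band does the job.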

Mattias, thanks, that’s all very helpful. Yes, the overtones are moving along with the harmonic motion of the piece, but if we’re working in stereo then all the EQs on the parts need to work together, and I do hear that in the best references I’m working with. I’m just looking at the EQ function here, and I understand that ultimately it’s the full chain that creates the final sound of any track or mix. I do have a better sense now of how to look at EQ. It’s really amazing what the basics, like a 300Hz cut, can do and so on. I don’t think I’m hearing the phase issues clearly enough yet; I’ll have to spend more time with that. Between the channel strip, the Studio EQ, the new Frequency EQ and even the graphic EQ plug-ins, the program provides a lot of choices. More work to be done.

Strophoid, that sounds like an interesting plug-in, and it probably has some good applications, as you suggest. I’m concentrating on working with the stock Cubase Pro 9 EQs, but whatever someone’s working with is fine, and I’ll build a stronger plug-in library over time.

I’m fascinated by the relationship between a Project’s (or any program’s) harmonic content and EQ. For example, I was wondering whether it’s typical for engineers to cut or boost in relation to the program’s tonal frequencies or not. It’s similar to “boost wide, cut tight” for EQ, which, while not always what’s needed, does often help bring more clarity to a part or even to the full mix. But that’s a bit oversimplified, since it sounds like a “quick tip,” and I’m not so much interested in quick tips as in a general theory of EQ.

As you know, the EQs in Cubase all feature note-based methods for setting frequencies, so I was wondering if there are some typical ways engineers use this.
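For what it’s worth, the conversion behind those note fields is presumably the standard equal-temperament formula (assuming A4 = 440Hz – Cubase may handle the details differently):

```python
NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_to_hz(name, octave, a4=440.0):
    """Equal-tempered frequency, e.g. ('A', 3) -> 220 Hz, ('E', 2) -> ~82.4 Hz."""
    midi = (octave + 1) * 12 + NOTE_OFFSETS[name]
    return a4 * 2 ** ((midi - 69) / 12)

# The tonic octaves of a tune in A, as you might type them into a note field:
for octave in range(1, 6):
    print(f"A{octave}: {note_to_hz('A', octave):8.2f} Hz")
```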

Anyway, of course, none of this ignores the interaction of – levels, EQ, compression, effects, panning, coffee and so on – all elements of the total package, but good EQ practice brings so much to the final sound.

In theory, I could see setting up an EQ bus, or I might even work with automated EQs within a track or on the mix. I need to make better use of the spectrum analyzer, too, I think. But that’s getting into a different topic.
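That said, here’s the kind of crude, text-mode analyzer check I have in mind (the program material is made up; it just compares how much energy sits on the key’s chord tones versus a couple of non-chord notes):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
# Hypothetical program material: an A-major triad (A2, C#3, E3) plus a little noise.
x = (np.sin(2 * np.pi * 110.00 * t)
     + 0.6 * np.sin(2 * np.pi * 138.59 * t)
     + 0.6 * np.sin(2 * np.pi * 164.81 * t)
     + 0.05 * rng.standard_normal(fs))

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)

def band_energy_db(center_hz, width_hz=6.0):
    """Energy in a narrow band around a frequency, in dB relative to the total."""
    band = (freqs > center_hz - width_hz) & (freqs < center_hz + width_hz)
    return 10 * np.log10(spectrum[band].sum() / spectrum.sum())

for label, f in [("A2", 110.00), ("C#3", 138.59), ("E3", 164.81),
                 ("B2", 123.47), ("D3", 146.83)]:
    print(f"{label:>4} ({f:6.2f} Hz): {band_energy_db(f):+6.1f} dB of total energy")
```

The chord tones jump out and the non-chord notes sit way down, which is roughly the reality check I want the analyzer for.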

Thanks for the thoughts. Take care for now.