I am a hobbyist. I compose music to keep myself busy. My effort mostly goes into getting a song to sound OK, and my friends and family are mostly my audience. I want a product which is listenable but nowhere near professional.
For the last two decades (or maybe three), the only mixing I do is to make sure the output is not clipping and the sound is not too muddy. So it's basically a bit of EQ, effects, and panning.
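(As a rough illustration of what that clipping check amounts to, here's a minimal Python sketch. The function names are made up for illustration — a DAW's channel meter does all of this for you.)

```python
import math

def peak_dbfs(samples):
    """Peak level of a float sample buffer in dBFS.

    Assumes normalized floats where +/-1.0 is digital full scale.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return float("-inf")
    return 20.0 * math.log10(peak)

def is_clipping(samples, ceiling_dbfs=-0.3):
    """True if the peak exceeds the ceiling (leaving a little headroom
    below 0 dBFS is a common safety margin)."""
    return peak_dbfs(samples) > ceiling_dbfs

# A half-scale sine peaks around -6 dBFS: safe.
quiet = [0.5 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(1000)]
# The same signal pushed past full scale would clip on output.
loud = [2.4 * s for s in quiet]
```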
I started to follow some mixing courses, and I don't want to spend hours with compressors and other processes like the pros do.
But I want to take it to the next level, while still not putting in a lot of time and effort.
I think my mixing process will proceed as follows (although I will still YouTube this stuff and maybe add some more advanced mixing techniques):
Maybe you can tell me if my plan is OK, or maybe you can suggest another way.
I solo one instrument at a time and work on the panning, volume, effects, and EQ.
If I feel that two instruments compete with each other, I may solo both of those tracks and tweak them together to balance the sound, etc.
Finally, I unmute everything and, using the mix console, listen to the whole song and adjust the instruments so that the whole thing sounds good.
No professional would spend hours tweaking a compressor. There’s no time for that in a real project with real deadlines.
I would not recommend that at all.
Mixing is the art of combining multiple audio sources into something pleasurable and homogeneous. The reason we use EQ and compression, tweak volume levels and pan position, is to get all the tracks to work together. Therefore you need to hear all the tracks playing when making these adjustments. It is a game of balance. If you were to solo individual channels on a commercial recording you might be surprised how different the individual instruments sound in isolation.
The only time I work on a channel in solo is when there is an obvious issue with that part. (Maybe a low rumble or an annoying resonant frequency that requires more surgical precision.)
Well, that is probably not going to work out… As with learning an instrument (or anything, really), there are no shortcuts and no quick fixes. Either you put the time in and practice, or you won't get better, or only very, very slowly. (I'm also talking to myself here.)
When going to YouTube, I'd recommend staying away from videos à la “10 things you should do/not do to your mixes”, “top 5 mixing tips”, or similar. There can be some nice tips in there, but they don't teach you mixing.
My recommendation for free YouTube tutorials is always “Mixing with Mike's Fundamentals of Mixing”. It is very thorough and teaches the concepts of mixing as well as the tools. But yeah, it takes time.
The title of the first video says it all: Why is mixing so difficult?
I strongly agree with @mlindeb that you shouldn’t do this. Mixing only works in context. Solo is best used to briefly listen to some aspect of the sound, for example sibilance, to inform you about what’s going on. But most of the time it is best to then make the adjustment or correction within the context of all the sounds in your mix.
You might want to take a look at this recent thread.
Great advice so far in this thread, and the only thing I’d add is that I’ve seen these two beginner mistakes being made quite a bit, so I thought I’d share my take on them:
(1) Overuse of effects: Heavy reverb that makes everything muddy and undefined is often the hallmark of a beginner's mix. It might sound impressive when soloing a track, but with reverb slathered on everywhere, the mix just becomes an undefined mess. Same thing with preset synth patches, which are usually designed to sound impressive when soloed; to integrate them well into a mix, they often need their built-in effects switched off (or turned down).
(2) Using EQ to boost instead of subtract: It might sound counterintuitive, but to make, say, a bass line stand out, it’s usually all about removing bass content from all other tracks (so that the actual bass isn’t masked), and EQing the actual bass to occupy a narrow, well-defined part of the frequency spectrum. This is a general principle: Think about where each track/instrument “lives” in the frequency spectrum, and then EQ out the stuff that’s not in that “slot” for that instrument/track. That way you’ll get a clear, easily manageable mix that’s well defined.
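To make the "carve out what doesn't belong" idea concrete, here's a small Python sketch (illustrative only — your DAW's channel EQ does this internally). It builds a standard second-order high-pass filter from the well-known Audio EQ Cookbook formulas and shows that content well below the cutoff is strongly attenuated while everything above passes essentially untouched:

```python
import cmath
import math

def highpass_biquad(fc, fs, q=0.7071):
    """Audio-EQ-Cookbook high-pass biquad coefficients (b0, b1, b2, a1, a2),
    normalized so that a0 == 1."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    cosw = math.cos(w0)
    a0 = 1.0 + alpha
    b0 = (1.0 + cosw) / 2.0 / a0
    b1 = -(1.0 + cosw) / a0
    b2 = (1.0 + cosw) / 2.0 / a0
    a1 = -2.0 * cosw / a0
    a2 = (1.0 - alpha) / a0
    return b0, b1, b2, a1, a2

def gain_db(coeffs, f, fs):
    """Magnitude response of the biquad at frequency f, in dB."""
    b0, b1, b2, a1, a2 = coeffs
    z1 = cmath.exp(-2j * math.pi * f / fs)  # z^-1 on the unit circle
    h = (b0 + b1 * z1 + b2 * z1 * z1) / (1.0 + a1 * z1 + a2 * z1 * z1)
    return 20.0 * math.log10(abs(h))

# Carve the lows out of a non-bass track (say, a guitar) so it stops
# masking the actual bass: a high-pass at 100 Hz attenuates 30 Hz rumble
# by roughly -20 dB, sits at -3 dB right at the cutoff, and leaves
# 10 kHz content practically untouched.
hp = highpass_biquad(fc=100.0, fs=44100.0)
```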
This helps A LOT! I was originally guilty of all your don't-do's, lol.
I am now toning down the reverb, and I will pay more attention to that. Funny that photographers like myself have the same issue: when I first learned about HDR (the photography equivalent of reverb and effects), I applied the processing to the point where my pictures were cartoon-like.
As for your second point: I just started to pay attention to the frequency response at the low end. I now remove the bass from most instruments which may interfere with the bass, and the result is more balanced and less muddy. The only thing is, I need to understand better your statement that every instrument has its space in the frequency spectrum; maybe you can recommend a video/discussion on that aspect. I'm paying attention to spatial placement (panning across to emulate the “stage” experience), and now I need to understand the frequency space (if I can call it that, and if I understand that concept).
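On the "stage" panning idea: the math behind a typical pan knob is a constant-power pan law. Here's a tiny illustrative Python sketch (not how any particular DAW implements it):

```python
import math

def constant_power_pan(pan):
    """Constant-power pan law: pan in [-1.0 (hard left), +1.0 (hard right)].

    Returns (left_gain, right_gain). The gains always satisfy
    L**2 + R**2 == 1, so perceived loudness stays steady as a
    source moves across the stereo stage.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Dead centre puts both channels at ~0.707 (-3 dB), the classic
# centre attenuation; hard left is (1.0, 0.0).
left, right = constant_power_pan(0.0)
```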
I only boost EQ to crispen up instruments which sound flat (my issue with the horns at the moment). I have never done it, but maybe I should do some of that EQing in the patch (or: do my horn EQ within HALion). I just thought it would be easier to see all the processing in the project Inspector/Channel Settings, rather than open the HALion window to tweak its settings. Maybe one way is better than the other?
Again, thanks for the detailed Coles Notes (do they still publish those? lol) course in mixing. It is so appreciated.
Ah, glad you were able to find the analogy to photography!
I don’t have any pointers to videos/discussions about the frequency space topic, but pretty much every serious mixing tutorial will eventually talk about that. Here’s an example to illustrate: Let’s say you have a synth patch that you selected to provide a synth lead sound at the upper end of the frequency spectrum. Most synth patches, esp. presets, will cover the full frequency spectrum, but you just need the high end (say, above 5kHz) to cut through - so you should apply an EQ to that synth sound that cuts anything below 5kHz. You’ll still be able to hear the sound, but it won’t fight with/mask other sounds that have their characteristic tonality below 5kHz.
That would be great. Unfortunately, I am using Elements, and this feature is in the Pro version. That's almost $900 CDN…
I'm not sure if it's in the Artist version… maybe future upgrades will include it in the Artist version.
Maybe there is a cheaper plugin for that ?
Using the Channel Comparison is nice because you can see both curves on top of each other in the same window. But you can basically do the same work by having two EQ windows open and looking back and forth between them.