Mixing for mobile devices

Since so much music is consumed on a phone these days, it seems wise to include that in my considerations. Honestly, it probably even replaces the time-honored “car test,” or at the very least gives it a run for its money.

In this particular thread I’m talking about the mobile device playback itself. Optimizing for Spotify, YouTube, etc. is a different cat to skin (if you happen to be into hairless felines).

I naturally expect to hear a subset of what my studio monitors give me when I listen on my iPhone or iPad. I test both with the built-in speakers and with earbuds. Regarding the latter, I’ve also noticed that a newer set of buds massively hypes the bass compared to a couple of other pairs I have, so buds are yet another X factor. The quality from iGizmos can be pretty good, especially compared to some of the crap stereo gear (both home and car) I owned back in the 70s. Still, it’s a different environment than listening on a home stereo.

I’m sure many of you mix with this reality in mind, and I’m interested in your approach to mobile mixes. Do you do separate, dedicated mixes for mobile devices? One single mix that’s good for all scenarios? Perhaps the same mix but a different mastering chain (for those who don’t outsource the mastering process)?

The way I see it, you’re talking about two different things in your post. How your mix translates to speakers, earbuds, etc. depends on your mix balance. Yes, plenty of earbuds have hyped bass, some phones have great stereo speakers, and other phones have a mono speaker that doesn’t reproduce anything below a certain frequency, which means no low end (like 808s). That part is about making one mix that works as well as possible across the widest range of environments.

But playback from Spotify etc. doesn’t necessarily equal playback of your own file from your phone.

If you transfer your final mix to your phone and play the mp3 file directly, it will most likely sound different than it does on Spotify or YouTube (especially volume-wise). Then we’re talking about LUFS and loudness normalization, which is a separate discussion from hyped earbuds and the like.
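To put a rough number on that volume difference, here’s a minimal sketch in Python. The measured loudness and the platform targets are illustrative assumptions only (real targets change over time), but the arithmetic is the same: a loudness-normalizing service basically applies the gap between your master’s integrated loudness and its target as a playback gain.

```python
# Rough sketch: how much a loudness-normalizing platform would turn a master
# up or down, relative to playing the raw file locally.
# The measured loudness and the targets below are illustrative assumptions;
# check each service's current documentation for real numbers.

measured_lufs = -9.0  # hypothetical integrated loudness of a loud master

platform_targets_lufs = {
    "Spotify": -14.0,
    "YouTube": -14.0,
    "Apple Music": -16.0,
}

for platform, target in platform_targets_lufs.items():
    gain_db = target - measured_lufs  # negative means the platform turns it down
    print(f"{platform}: playback gain of {gain_db:+.1f} dB vs. the local file")
```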

There is software that can help you emulate how your final mix will sound on the different streaming platforms like Spotify, YouTube, etc.

https://nugenaudio.com/mastercheck

Just something to get you started.

Hey, Glenn.

Thanks for the link; I’ll definitely check it out.

As I mentioned, my initial considerations were not for Spotify et al., as they each compress/mangle the file in their own way. I’m starting with the reference point of raw mp3 playback on a phone as a baseline. If that’s not good to begin with, it doesn’t matter what the streaming services do to the file.

It sounds like your approach is in keeping with the old-school way: go for a single mix that is portable (thus justifying the existence of the venerable NS-10s or their grandchildren).

That’s been my thinking as well, as trying to do different mixes for different playback environments seems like an endless rabbit hole. I’m just wondering if there are other approaches, since this is the modern-day equivalent of most music being played back on old transistor radios and crappy car stereos.

It has always been necessary to adjust your mix to sound good in a variety of environments. Audiophile speakers, car speakers, headphones, desktop speakers, earbuds, etc. all need to be accommodated with a single mix, since you’ll only be delivering a single mix to streaming services or your CD duplicator. Whether the audio is stored on the listener’s mobile device or somewhere else doesn’t matter. It’s how they are listening to it that matters, not where the audio is stored.

There was a time when this was absolutely true. Back in the day, recorded music was distributed in a physical format, first on vinyl records, then 8-track and cassette tapes, and eventually CDs. Because it was a physical manufacturing process, it would have been financial insanity to try to release multiple CD versions for different listening environments.

Today, however, a very large chunk of music is distributed as a file of some kind, with no physical manufacturing required. Referencing Glenn’s link, there’s a table of the popular streaming platforms and their current LUFS targets. For example, if SoundCloud normalizes to -13 LUFS and Apple to -16, a case could be made for rendering two final mixes, each optimized for its platform, in order to get the most out of the mix without it being crunched by their algorithms. Obviously you’d never go there if you had to manufacture physical media, but since it’s just another render, this becomes an option.
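For anyone curious what that extra render amounts to in practice, here’s a minimal sketch using the pyloudnorm and soundfile Python libraries. The file name and the -13 / -16 LUFS targets are just the hypothetical numbers from above, not official platform specs, so treat it as an illustration rather than a recipe:

```python
# Minimal sketch: measure the mix's integrated loudness (ITU-R BS.1770, via
# pyloudnorm) and render a separately normalized file per platform target.
# "final_mix.wav" and the target values are placeholders for illustration.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")        # load the final mix
meter = pyln.Meter(rate)                     # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS
print(f"Measured integrated loudness: {loudness:.1f} LUFS")

targets = {"soundcloud": -13.0, "apple": -16.0}  # hypothetical platform targets
for name, target in targets.items():
    # Apply a static gain so the file measures at the target loudness.
    normalized = pyln.normalize.loudness(data, loudness, target)
    sf.write(f"final_mix_{name}.wav", normalized, rate)
```

Note that this is a pure gain change; pushing a quieter source up to a louder target could clip, so a genuinely loudness-targeted master would involve limiting rather than simple normalization.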

I’m not sure if it’s worth the trouble or not, but things are different than they used to be, so I’m trying to look at things from all angles.

My point is: audio storage technology has changed in recent years, but the fact that your mix will be heard in a variety of listening environments has not. In other words, when you say “play a song on a mobile device”, that’s irrelevant for optimizing your mix. The song that is coming from your mobile device may be played through earbuds, high quality headphones, audiophile speakers, or a car audio system. It’s the listening environment that matters, not the audio data storage technology that is being used.

Your question is: How do people typically deal with this variety of listening environments? The answer for almost all recording musicians is the same as it has always been: Try to optimize your mix to sound good in a variety of listening environments, then submit that single mix to your streaming service distributor.

Is it theoretically possible to submit multiple versions of the same song? Sure, but the answer to your question of what do recording musicians typically do is still: “submit a single mix”. There’s a good reason for that. Submitting multiple mixes for different listening environments would be confusing for everybody, especially listeners. They would almost certainly end up playing the wrong version of the mix for their listening environment :slight_smile:.

P.S. Optimizing your mix to sound good in a variety of listening environments is unrelated to LUFS values. Loudness only affects how far the streaming service turns your track up or down when it normalizes playback.