Would it be of value to add LUFS to audio export?

Since some companies like Spotify (and maybe others…) want the audio file delivered at a specific LUFS level, would it be interesting to add this to the audio export options? I saw that there is already something called “broadcast ready” - maybe this is part of that same category?
After exporting my score as audio from Dorico, I always open my audio editor (Twisted Wave), which offers this option:
[screenshot: Twisted Wave’s loudness normalization option]

I think this is one of the reasons Dorico includes the SuperVision plugin (if you’ve found it already) - it shows you LUFS and a lot more.

IMO, just saying “normalize” might be “okay”, but it likely wouldn’t give you the best result - and if it’s going out to Spotify, I’m guessing you care very much.

I’m not in the camp of “only mastering engineers should do this!” But I have ruined tracks by having tired ears, stretching to meet a deadline, and letting tool X do it without adequate effort on my part.

I don’t think this plugin only normalizes. According to the developer, it analyzes the whole waveform first.

Sure, tools X do what they do. The differences between them might be an additional reason to prefer using them as Dorico currently does - as master plugins, giving you the power of choice and access to all of their features - rather than having one particular tool built into the export.

Whether you use tool X or not, I think the workflow would be a lot slower if this happened on export, as I’d have to export / hear the result / change, then export / hear the result / change, over and over.

Normalizing by loudness is different from the usual normalizing (which uses peak level).

Loudness normalization is used to match different audio files in perceived loudness, which depends on the frequency content and temporal features of the audio file as well as its level.

Therefore, the whole audio file must be analyzed before any loudness normalization can be applied. Loudness is also usually set with a clear target level in mind.
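
If you want to see the difference in practice, here’s a minimal Python sketch (assuming the third-party pyloudnorm and soundfile packages; the file name and the -14 LUFS target are just placeholders):

```python
import soundfile as sf     # reads the whole file into memory
import pyloudnorm as pyln  # third-party ITU-R BS.1770 loudness meter

data, rate = sf.read("mix.wav")  # "mix.wav" is a placeholder name

# Peak normalization only needs the single largest sample value:
peak_normalized = pyln.normalize.peak(data, -1.0)  # scale so the peak hits -1 dBFS

# Loudness normalization has to analyze the entire file first
# (integrated loudness), then apply gain toward an explicit target:
meter = pyln.Meter(rate)                    # create a BS.1770 meter
loudness = meter.integrated_loudness(data)  # pass 1: measure the whole file
loudness_normalized = pyln.normalize.loudness(data, loudness, -14.0)  # pass 2
```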

That being said, as an audio engineer who has worked, and still works, on the AES committee concerning loudness in audio streaming, I would highly recommend leaving that step to mastering engineers.

There are several things to consider when dealing with loudness, one of them being whether a limiter should be engaged to achieve the desired loudness level - something I really don’t see in the scope of pure production software like Dorico. This really is a job for mastering, and there are good reasons for that, too.


If your audio is for public consumption (Spotify), I wouldn’t use Dorico for the final output. At the least you’ll want to normalize and do other mastering tasks - final format conversion, resampling, and dithering (see the sketch below for a taste of that last one) - in a tool such as WaveLab.

On that note, I don’t really understand the inclusion of SuperVision. Why? Does anybody really want to peer at the waveform while composing? It seems like an odd choice, but maybe it’s marketing, as that plugin has been making the rounds.
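
For a taste of what the dithering step involves, here’s a rough sketch - plain TPDF dither before 16-bit quantization, the textbook idea rather than WaveLab’s actual algorithm:

```python
import numpy as np

def to_int16_with_tpdf_dither(x, rng=None):
    """Quantize float audio in [-1.0, 1.0] to 16-bit with ~1 LSB TPDF dither."""
    rng = rng or np.random.default_rng()
    lsb = 1.0 / 32768.0
    # The difference of two uniform variables has a triangular PDF (+/- 1 LSB):
    dither = (rng.random(x.shape) - rng.random(x.shape)) * lsb
    y = np.clip(x + dither, -1.0, 1.0 - lsb)
    return np.round(y * 32767.0).astype(np.int16)
```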

Speaking again just for me: the reason I want to see it in Dorico is an extension of the idea that it’s generally better to fix things at the source than through magic later.

Take the histogram functionality, for example: I’d rather use it to make tweaks than squash waveforms or introduce phase shift with an EQ - if I can.

What I think is unique about sound coming straight from notation versus a DAW is that life is too short to live with crappy libraries. So, assuming any library sound I’d use is basically pretty good, most issues are with me - in the arrangement, or in the dynamics and automation, etc. - and I’m better off fixing them there first.

Full disclosure: today I take it out of Dorico at that point as sections, with the intent that what comes out of Dorico is pretty much broadcast ready. The DAW work is in ensuring that the separate stems, when summed together at unity, equal the demo product the supervisor chose in the first place - you’re in trouble if they don’t.

That’s just me - and I’m extremely happy to learn.

I’ve tried it a few times, but exporting each track individually doesn’t work well for me - before long I’ve got an orchestra’s worth of tracks again, plus however many mic positions. Productivity-wise, it’s too much like starting over.

Completely agree with this philosophy, which is precisely why I’m so happy that now I can easily do all the MIDI work in Dorico, and leave all the mixing to the DAW.

I would love it if Dorico were a full DAW, but it would have to become Nuendo to have enough capability (multichannel, Atmos, Ambisonics, bussing, etc.).

Speaking as a Sound/Mastering engineer: loudness is the exception.
You want the full available dynamic range: no overloads, no limiting, no clipping distortion. Normalizing to high loudness values can easily introduce these things (see the quick numeric sketch at the end of this post).
Then, to deliver to different platforms with different loudness standards, you might need to produce several master files.
The best solution is to leave loudness normalization to distributors such as Spotify or YouTube, but this only works reliably when everyone really is using the same methods and the same loudness target values.
Actually, the real best solution agreed on by the AES committee (you can read up on it in the new TD.1008) is to apply normalization even later: in the end-user device, like your phone.

TL;DR: leading audio/mastering engineers worldwide think it’s best to put loudness normalization as far back in the chain as possible.
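
The quick numeric sketch I promised above (all numbers are hypothetical, chosen only to illustrate the problem):

```python
measured_lufs = -23.0   # integrated loudness of a dynamic mix
true_peak_db  = -3.0    # its true peak, in dBTP
target_lufs   = -14.0   # a common streaming loudness target

gain_db = target_lufs - measured_lufs   # +9 dB of gain needed
new_peak_db = true_peak_db + gain_db    # peaks would land at +6 dBTP

# Anything above 0 dBTP (or a platform ceiling such as -1 dBTP) clips,
# so a limiter would have to shave 7+ dB off the transients to hit the target.
if new_peak_db > -1.0:
    print(f"Peaks at {new_peak_db:+.1f} dBTP - limiting or clipping required")
```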


Interesting. I think your earlier statement would still hold with end devices, though: that “this only works reliably when everyone really is using the same methods and the same loudness targets”.

And if I may spin it even further: I would assert that the content has to be in the best/right position it can be to facilitate that step, which means I’m looking at certain targets too and need a tool for that. Would you go for that?

I can see their perspective. I don’t think I can agree entirely, simply because I’ve heard the difference when I do my job better - when I’m aware and don’t create certain problems in the first place. But I’m not saying there can’t be a step afterwards.

Let me expose a bias I’m probably looking through: a factor in this conversation for me is when supervisors are very picky about what they consider a “genre” to sound like - down to the dullness of certain frequencies, or sometimes the lack of dynamic range. I put “genre” in quotes because I mean what’s expected of, say, a “sad piano cue”. As a human engineer, you and I could just have a conversation about what they want. But leaving it to a distributor or a device would, I think, lose me a bunch of gigs - and a distributor is problematic anyway for anything meant for exclusive use.

In a sense I never master, since that happens later. In another sense, the demo has to sound mastered, so I guess it is kinda weird.

Shape the sound that you want. Use any tools necessary to get the sound you want. If you want heavy-metal-style compression, go for it. If you want sad, compressed piano music, add a compressor. None of this is technically mastering, although this sound was often achieved in the mastering process. These are artistic choices.
But never aim for a loudness level just because it’s a number. It’s a purely technical value, and peaks and transients lost in production just to hit that number cannot be restored afterwards if the track ends up played back at a lower level. Dynamic music especially (such as orchestral music) will sound dull if compressed too much, compared with an uncompressed version played back at the same loudness level (the sketch below puts numbers on this).
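
Here’s that sketch - a made-up signal, assuming the third-party pyloudnorm package: hard-limit the transients, match both versions to the same loudness, and the limited one ends up with a much lower crest factor. The lost peaks never come back:

```python
import numpy as np
import pyloudnorm as pyln  # third-party ITU-R BS.1770 loudness meter

rate = 48000
t = np.arange(5 * rate) / rate

# A quiet sustained tone with a few loud, short bursts standing in
# for musical transients (a made-up signal, purely for illustration).
x = 0.05 * np.sin(2 * np.pi * 220.0 * t)
burst = 0.8 * np.sin(2 * np.pi * 880.0 * t[: rate // 20])  # 50 ms burst
for start in (1.0, 2.5, 4.0):
    i = int(start * rate)
    x[i:i + rate // 20] += burst

squashed = np.clip(x, -0.1, 0.1)  # crude "limiter": the transients are gone

meter = pyln.Meter(rate)
for name, y in (("original", x), ("squashed", squashed)):
    # Match both versions to the same playback loudness...
    y = pyln.normalize.loudness(y, meter.integrated_loudness(y), -23.0)
    # ...then see how much of the transient peaks survived.
    crest_db = 20 * np.log10(np.max(np.abs(y)) / np.sqrt(np.mean(y ** 2)))
    print(f"{name}: crest factor {crest_db:.1f} dB")
```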
