Need quick guide to normalize different backing tracks to play live

Piing

Axe-Master
I've just googled, and there is so much conflicting information that I don't know which path to take. I've just watched a video on how to use the Waves WLM Plus loudness meter, and I'm overloaded.

I have many Waves mastering plugins, including WLM Plus, as well as Ozone 10.

Should I load them all into the same DAW project and apply Waves WLM? Or maybe it would be easier with one of the many pre-cooked Ozone 10 mastering presets?
 
Having gone through multiple cycles of backing track refreshes over the years (playing rock covers), some things we've found:

It's really hard to just use stock limiters and mastering tools to make them all sound consistent for live play, particularly if you're dealing with a variety of genres. This makes sense if you consider you're creating something so that the complete live versions (i.e. the BT plus your live instruments & vox) is what you're really trying to 'master' for consistency. (In contrast Ozone will just master the BT instrumentation subset in your DAW.) Thus it ultimately comes down to ears... mixing and mastering against some BT you like as a benchmark, making them 'sound about the same'. Then follow up with recordings of the live mix from gigs, review while listening for any unwanted shifts in dynamics between songs, and tweak the levels of the BTs that might be too strong or weak.

Less overall compression and limiting seems better for BTs, since it sounds better live if you avoid limiting the dynamic range of the BT instruments (so it's more like live instrumentation, not listening to a CD or stream). We use just a light-touch limiter in the DAW. (I also use Ozone, but only for complete originals, not for BTs.)

In our case we use BTs for drums and bass... so we start there in creating the BTs, using the same kits and patches for those parts so it won't sound like we got a new drummer for every song. Also we really focus on kick, snare, and bass, making sure they're clean, prominent, and consistent. (e.g. a little sidechain compression on the bass, triggered by kick hits, really helps the PA reproduce those kick thumps.)
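The kick-triggered sidechain idea can be sketched as a toy Python gain model (illustrative only; a real compressor works in dB with proper attack/release envelopes, and every number here is made up for the demo):

```python
def duck_bass(bass, kick, threshold=0.5, depth=0.5, release=0.999):
    """Dip the bass whenever the kick exceeds the threshold, then
    let the gain recover exponentially toward unity."""
    out, gain = [], 1.0
    for b, k in zip(bass, kick):
        if abs(k) > threshold:
            gain = 1.0 - depth                    # duck on the kick hit
        else:
            gain = 1.0 - (1.0 - gain) * release   # slow recovery to 1.0
        out.append(b * gain)
    return out
```

Each kick hit briefly pulls the bass down, leaving a pocket for the kick thump to come through the PA.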
 
Any DAW has a Normalize function built in, I think.
They do, but using them properly seems to be an art in itself



https://emastered.com/blog/audio-normalization

For backing tracks, would it be better to use Peak Normalization or Loudness Normalization?

I don't even know which standard normalization level to choose. e.g.:

Spotify: -14 LUFS
Apple Music: -16 LUFS
Amazon Music: -9 to -13 LUFS
YouTube: -13 to -15 LUFS
Deezer: -14 to -16 LUFS
CD: -9 LUFS
SoundCloud: -8 to -13 LUFS
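Whichever target you choose, the math behind loudness normalization is just a dB offset applied as linear gain. Here's a minimal Python sketch (it assumes you already have an integrated-LUFS reading for the track, e.g. from WLM Plus):

```python
def lufs_gain(measured_lufs: float, target_lufs: float) -> float:
    """Linear gain factor that moves a track from its measured
    integrated loudness to the target loudness. Loudness is
    logarithmic, so the correction is a plain dB offset."""
    gain_db = target_lufs - measured_lufs
    return 10 ** (gain_db / 20)

# A track measured at -9 LUFS, aimed at Spotify's -14 LUFS target,
# needs a -5 dB trim, i.e. a linear gain of about 0.562.
print(round(lufs_gain(-9.0, -14.0), 3))  # 0.562
```

Multiplying every sample by that factor changes only the level; unlike a limiter or compressor, it leaves the dynamics untouched.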
 

Good tips about not compressing and limiting, to make them more lively.

One of my issues is that some tracks have drums and bass but others don't, because there will be guest e-drum and bass players (those backing tracks only have keyboards, acoustic guitar, doubled guitars, and some backing vocals). That makes the normalization a bit challenging. There is the risk of raising their level too much, or of making them too low compared with the tracks that have drums/bass, and therefore difficult to follow.
 
Yes, that's a tough challenge! You may have to make best effort for the first gig or two, but then focus on reviewing post-gig recordings to make iterative tweaks for better balance.
 
In Logic, just select the Normalize checkbox when bouncing and it's done (the loudest peaks will be at 0 dBFS). That's normalizing, not compressing or limiting.
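For reference, peak normalization like that checkbox is nothing more than a gain scale so the loudest sample lands on the chosen ceiling. A minimal Python sketch (assumes mono float samples in the -1.0 to 1.0 range):

```python
def peak_normalize(samples, peak_dbfs=0.0):
    """Scale samples so the loudest peak sits at peak_dbfs
    (0.0 = digital full scale). This is a pure gain change:
    no compression or limiting is applied."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)          # silence stays silent
    target = 10 ** (peak_dbfs / 20)   # dBFS ceiling -> linear
    gain = target / peak
    return [s * gain for s in samples]

# The -0.25 peak becomes -1.0 (0 dBFS); everything else scales with it.
print(peak_normalize([0.1, -0.25, 0.2]))
```

Two tracks processed this way end up with identical peaks but can still have wildly different perceived loudness, which is exactly the problem raised below.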
 
Yes, that's a tough challenge! You may have to make best effort for the first gig or two, but then focus on reviewing post-gig recordings to make iterative tweaks for better balance.

For me, I have assigned an expression pedal to a Vol block at the end of the chain in my FM3 presets (between 8 and 10). By default I have it in the middle position, so I can balance my volume up and down on the fly if necessary. But I am concerned about the bass, e-drummer, and singers.

The bass player doesn't even have an amp, he plugs direct to PA. We have already rehearsed, and it is really deep and raw 😄 I am thinking about lending him my SY-1000, to process it through COMP, AMP, EQ and CAB.
 
In Logic, just select the Normalize checkbox when bouncing and it's done (the loudest peaks will be at 0 dBFS). That's normalizing, not compressing or limiting.
I've tried that (with Cakewalk), but the styles and contents of the backing tracks are so different that at the rehearsal we had to stop and re-adjust.
 
For backing tracks, would it be better to use Peak Normalization or Loudness Normalization?

If the tracks vary in terms of arrangements and mixes, then you'll probably need to normalize by LUFS, not peak dB. Ozone can help with that. Otherwise, it can be a tedious task if you've got a lot of tracks. However, that will only make the loudness consistent across your backing tracks. From what you're saying the inconsistency issue may be due to the style, not the loudness.
 
An analogy: Imagine giving a mastering engineer a set of tracks with the caveat "BTW they don't have all the parts yet, but please master them so they'll sound compatible when we add those other parts later". That's why I don't know any better way than to use ears against a benchmark reference to get started, then iterate based on critical listening to live recordings (or like you're doing real-time during rehearsals). Normalizing may handle any gross variations, but then it's hands on to really bring them in line.
 
In Logic, just select the Normalize checkbox when bouncing and it's done (the loudest peaks will be at 0 dBFS). That's normalizing, not compressing or limiting.
While this is true for peaks, it does not take into account RMS values, i.e. the "apparent loudness", which can vary dramatically, especially between differing genres etc.

There's no magic bullet as all tracks are unique, so you need to use your ears and the relevant tools such as compression and limiting (EQ and dynamic EQ as well...). I'd suggest a proper mastering-oriented plugin such as iZotope's Ozone 11 or an equivalent to get the job done as easily as possible.
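A quick stdlib-Python illustration of that peak-vs-RMS point: a full-scale square wave and a full-scale sine both peak at 0 dBFS, yet their RMS levels (and so their apparent loudness) differ by about 3 dB. The signals are synthetic, made up purely for the demo:

```python
import math

def rms_db(samples):
    """RMS level in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

n = 1000
square = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]      # peaks at 1.0
sine = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]  # also peaks at 1.0

print(round(rms_db(square), 1))  # 0.0
print(round(rms_db(sine), 1))    # -3.0
```

Peak-normalizing both would change neither number: they would still sound about 3 dB apart.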
 
You really don't want to do that.

If everything plays back at the same level, the set is boring. Period.

The right way to do this is to load everything in a DAW on one long timeline and then adjust each one to how you want them to sound and feel.

Taking an extreme example....there's absolutely no reason Imagine by John Lennon and Spit it Out by Slipknot should play at the same volume. None. No reason. And, there's no excuse for doing it.

Okay...your set almost certainly doesn't have that much variation in styles in it. But, hopefully you get the point. Whatever the climax of your set is, it should be louder than the chill point.

Making everything play at the same level is amateur hour.

Yes, that applies to streaming too. And DJs. And the radio. And stage performers. And albums. And everybody else.

Normalization was a mistake. Period. There's no way to do this but to do it manually using your own taste and judgement.

ETA: Yes, I know the people who started the streaming normalization thing were trying to get away from the loudness wars and stop people from having to use their volume controls as much. Their hearts were in the right place. But...it didn't work out that way in practice. Album-level normalization (even when a track is played alone) is closer to realistic (which I think is what Tidal does). But, it doesn't actually solve the problem. I end up using my volume control more since normalization became a thing than I did before. Before that, it was really just classical vs. everything else...now, it's every damn song.
 

But the problem is that the bass player and the e-drum player play all the songs at the same level. They are amateurs without much awareness. Hence my need to even out the levels.
 
e.g.: we move from Kiss Me (Sixpence None the Richer) to It's My Life (Bon Jovi) to Smooth (Santana) to Bad Romance (Lady Gaga) or Stop This Train (John Mayer). Backing tracks are the multi-tracks from karaoke-version.com

It is nothing serious, just the Staff Day party at my workplace, with different employees singing different songs that they have chosen. I just want to learn how to do it better.
 
For backing tracks, would it be better to use Peak Normalization or Loudness Normalization?

This!

Use a loudness plugin and get them to the same LUFS.
 
Normalization doesn't quite work for me. Normalizing backing tracks doesn't necessarily fix apparent loudness variation, even within the same genre.

What keeps my overall loudness pretty consistent is choosing a main reference track. I use "Never Gonna Give You Up" by Rick Astley, but you can use whatever reference you prefer. Be careful with older tunes, as they are often mastered a lot quieter than today's standards.

I use headphones and switch between my reference track and the target tracks. I compare at a medium listening volume and then a bit louder. Even a bit lower may help to hear the differences.

In addition, I display both files at the same zoom level, height, and width. I then look at the middle section of the waveform, where the samples bunch up and give an average height. Don't use the taller waveform peaks. I compare my reference track's average height at the center of the waveform and lower or raise my target track's level to match.
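That eyeballing can be turned into numbers: match the average rectified level of the target's middle section to the reference's. A rough Python sketch (mono float samples; the 40-60% window is an arbitrary stand-in for "the middle of the waveform"):

```python
def match_average_level(target, reference, start_frac=0.4, end_frac=0.6):
    """Scale `target` so the average rectified level of its middle
    section matches the reference's -- a numeric version of comparing
    waveform heights at the same zoom, ignoring the tall peaks."""
    def mid_avg(samples):
        lo = int(len(samples) * start_frac)
        hi = int(len(samples) * end_frac)
        section = samples[lo:hi]
        return sum(abs(s) for s in section) / len(section)
    gain = mid_avg(reference) / mid_avg(target)
    return [s * gain for s in target]
```

As in the post, you'd still follow up with a limiter if the scaled peaks poke above 0 dBFS.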

I do use a limiter plug-in if the peaks go a bit above 0 dB. It's not a good idea to limit and squash peaks if the level is significantly over 0 dB.

I've tried to get these to match as closely as I could within the 0 dB threshold, but I end up with tracks significantly lower than standard mixes. Maybe it's a good thing, as it's simply a matter of setting the gain levels higher on the mixer.

So far and for several years it's been working great in outdoor and indoor venues for all genres.
 
I would try the Ozone 10 Maximizer: set LUFS to -11 and the limiter to -1 dB.

Ozone 11, just released, purportedly allows you to go in and tweak things like vocals, drums, and bass specifically in a mastered stereo mix.

(I really like the IRC IV transient limiter, but I’ve not done what you’re trying)
 