AudioUtils

Audio Normalization: Peak vs Loudness — When to Use Each

Understand peak normalization vs loudness normalization (LUFS). Learn which to use for streaming, broadcasting, podcasting, and music production.

Normalization is one of those audio terms that means two completely different things depending on who is using it. The word covers two unrelated processes — peak normalization and loudness normalization — that measure different properties of the signal, target different goals, and produce different results. Mix them up and you ship audio that is either too quiet, too loud, or inconsistent across platforms. This guide explains what each one actually does, what numbers to target, and how the whole system fits together with broadcast standards, streaming algorithms, and your DAW.

Normalization Is Not Compression

Before going further: normalization and compression are not the same thing, and the words are not interchangeable. Normalization applies a single gain change to the whole file — every sample multiplied by the same constant. Dynamics stay intact. Compression varies gain over time based on input level, reducing the difference between loud and quiet parts. Normalization changes how loud the file is. Compression changes how dynamic the file is. Most modern platforms expect both to happen, in that order: compress (or limit) to control peaks and shape dynamics, then normalize to hit a delivery target.
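The distinction fits in a few lines of Python. This is a toy illustration, not any product's DSP: `normalize` applies one constant multiplier, while the simplified `compress` varies gain per sample above a threshold.

```python
def normalize(samples, gain):
    # Normalization: one constant multiplier for every sample.
    # Dynamics (the ratios between samples) are untouched.
    return [s * gain for s in samples]

def compress(samples, threshold=0.5, ratio=4.0):
    # Toy compressor: above the threshold, gain varies per sample,
    # shrinking the gap between loud and quiet parts.
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

quiet_loud = [0.1, 0.9]
print(normalize(quiet_loud, 2.0))  # [0.2, 1.8] -- 9:1 ratio preserved
print(compress(quiet_loud))        # [0.1, 0.6] -- ratio reduced to 6:1
```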

Peak Normalization: Match the Loudest Sample

Peak normalization scans the file for the highest-amplitude single sample, then scales the whole file so that peak hits a target level — typically 0 dBFS (the maximum a digital signal can carry without clipping) or -1 dBFS for a small safety margin. If the loudest sample sat at -6 dBFS, peak-normalizing to 0 dBFS multiplies every sample by 2.0, lifting the entire waveform 6 dB.
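The whole algorithm is a scan and a multiply. A minimal sketch, assuming float samples in the [-1.0, 1.0] range:

```python
def peak_normalize(samples, target_dbfs=-1.0):
    """Scale the whole file so its loudest sample hits target_dbfs.
    Sketch for float samples in [-1.0, 1.0]; dBFS = 20*log10(peak)."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return samples[:]           # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20.0)
    gain = target_linear / peak     # one constant applied everywhere
    return [s * gain for s in samples]

# A file whose loudest sample sits near -6 dBFS (about 0.501 linear):
quiet = [0.1, -0.501, 0.3]
loud = peak_normalize(quiet, target_dbfs=0.0)
# New peak is 1.0 (0 dBFS); every sample was lifted by the same ~6 dB.
```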

This is fast, predictable, and lossless. It guarantees you are using the full digital headroom. It does not, however, tell you anything about how loud the audio sounds. Two files peak-normalized to the same level can differ by 10–15 dB in perceived loudness depending on dynamic range. A heavily compressed pop master peaked to 0 dBFS sounds enormous; a sparse acoustic recording peaked to 0 dBFS sounds quiet, because the average level is far below the peak.

Use peak normalization when you are preparing source material — bringing a quiet recording up before editing, lining up clip levels in a session, ensuring nothing clips after gain staging. Do not use it as your final delivery target if listeners will hear the file alongside other content.

Loudness Normalization: Match Perceived Loudness in LUFS

Loudness normalization measures the average perceived loudness of the entire file using LUFS — Loudness Units relative to Full Scale — and adjusts gain so the integrated LUFS hits a defined target. The measurement comes from the ITU-R BS.1770 algorithm, which applies K-weighting (a filter approximating how the human ear weights frequencies, with reduced sensitivity below 100 Hz and a slight high-shelf boost above 2 kHz) and then gates the measurement in two stages: an absolute gate discards blocks below -70 LUFS, and a relative gate discards blocks more than 10 LU below the gated average, so silence and long quiet passages do not drag the number down.
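The gating structure can be sketched in plain Python. This simplification omits K-weighting and the 75% block overlap the standard specifies, so its values differ from a compliant meter; the two-stage gate is the point here.

```python
import math

def integrated_loudness(samples, rate):
    """Simplified BS.1770-style integrated loudness for a mono signal.
    Omits K-weighting and block overlap for brevity, so the values
    differ from a compliant meter."""
    block = int(0.4 * rate)                       # 400 ms blocks
    def block_lufs(chunk):
        ms = sum(s * s for s in chunk) / len(chunk)
        return -0.691 + 10.0 * math.log10(ms) if ms > 0 else float("-inf")

    blocks = [samples[i:i + block]
              for i in range(0, len(samples) - block + 1, block)]
    louds = [block_lufs(b) for b in blocks]

    # Absolute gate: drop blocks below -70 LUFS (silence).
    kept = [l for l in louds if l > -70.0]
    # Relative gate: drop blocks more than 10 LU below the gated average.
    mean_ms = sum(10 ** ((l + 0.691) / 10.0) for l in kept) / len(kept)
    rel_threshold = -0.691 + 10.0 * math.log10(mean_ms) - 10.0
    kept = [l for l in kept if l > rel_threshold]
    final_ms = sum(10 ** ((l + 0.691) / 10.0) for l in kept) / len(kept)
    return -0.691 + 10.0 * math.log10(final_ms)

# One second of a full-scale 1 kHz sine measures about -3.7 under
# this simplification (mean square 0.5 -> -0.691 + 10*log10(0.5)).
rate = 48000
sine = [math.sin(2 * math.pi * 1000 * n / rate) for n in range(rate)]
```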

EBU R128 is the European broadcast standard built on BS.1770. It defines integrated loudness, short-term loudness (3-second sliding window), momentary loudness (400-millisecond window), loudness range (LRA), and a maximum true peak limit measured in dBTP — true peak being the analog peak that would result from D/A conversion, which can exceed the highest digital sample by 1–3 dB.

The LUFS Targets That Actually Matter

Different delivery contexts target different integrated loudness:

  • EBU R128 broadcast (Europe): -23 LUFS integrated, -1 dBTP true peak ceiling
  • ATSC A/85 broadcast (US TV): -24 LUFS integrated, -2 dBTP true peak
  • Apple Podcasts: -16 LUFS stereo / -19 LUFS mono, -1 dBTP
  • Spotify: -14 LUFS integrated playback target, -1 dBTP for loud masters, -2 dBTP if the master sits below -14 LUFS
  • Apple Music (Sound Check): -16 LUFS
  • YouTube: approximately -14 LUFS playback normalization
  • Amazon Music: -14 LUFS
  • Tidal: -14 LUFS
  • Instagram, TikTok, Reels: roughly -10 to -14 LUFS depending on platform and year

The key insight: streaming services lower loud masters to their target. Mastering a track at -8 LUFS does not make it louder than a -14 LUFS master on Spotify; the platform turns it down by 6 dB on the way out. The loudness war is over from a delivery standpoint. What matters now is whether your master sounds good after platform normalization, which means avoiding squashed dynamics and harsh limiting.
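The turn-down arithmetic is simple. An illustrative helper, not any platform's actual algorithm; real services differ in whether and how they raise quiet masters:

```python
def playback_offset_db(master_lufs, target_lufs=-14.0, lower_only=False):
    """Gain a platform applies at playback to reach its target.
    Illustrative only: real platforms differ in how (or whether)
    they raise quiet masters."""
    offset = target_lufs - master_lufs
    if lower_only and offset > 0:
        return 0.0   # e.g. YouTube never raises quiet uploads
    return offset

# A -8 LUFS "loudness war" master is turned down 6 dB on a -14 LUFS
# platform -- no louder on playback than a -14 LUFS master:
print(playback_offset_db(-8.0))                    # -6.0
print(playback_offset_db(-20.0, lower_only=True))  # 0.0
```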

ReplayGain: The Original

Before LUFS became standard, ReplayGain (introduced by David Robinson in 2001) measured perceived loudness using a different psychoacoustic model and stored a gain offset in the file's metadata. Music players read the tag and adjusted playback. ReplayGain 2.0 (2011) was rewritten to use the same EBU R128 measurement as modern LUFS-based systems, with a target of -18 LUFS for tracks and an optional album mode that uses one offset for an entire album so loudness relationships between tracks are preserved. Foobar2000, MusicBee, VLC, and many car stereos still support ReplayGain tags.
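A ReplayGain 2.0 gain value is just the offset from the -18 LUFS reference. A sketch of the arithmetic; the tag names follow the common convention, but treat the exact metadata format as player-specific:

```python
def replaygain_2_tags(track_lufs, album_lufs):
    """ReplayGain 2.0-style gain values relative to the -18 LUFS
    reference. Tag names follow common convention; the exact
    metadata format varies by player."""
    return {
        "REPLAYGAIN_TRACK_GAIN": f"{-18.0 - track_lufs:+.2f} dB",
        "REPLAYGAIN_ALBUM_GAIN": f"{-18.0 - album_lufs:+.2f} dB",
    }

# A track at -11 LUFS on an album averaging -13 LUFS:
print(replaygain_2_tags(-11.0, -13.0))
# Track mode turns this track down 7 dB; album mode applies -5 dB to
# every track, preserving loudness relationships across the album.
```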

A Practical Example with Numbers

You finish mixing a podcast episode. You measure integrated loudness in your DAW or with a tool like ffmpeg's loudnorm filter and see -22.4 LUFS. Apple Podcasts targets -16 LUFS for stereo content. The gap is 6.4 dB. You apply +6.4 dB of gain — but that gain pushes peaks above 0 dBFS, so you precede it with a true-peak limiter set to -1 dBTP. Result: integrated loudness lands near -16 LUFS, peaks stay below the ceiling, and listeners hear consistent volume between your show and others on Apple Podcasts.
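The same decision can be expressed as a small helper. The -3.0 dBFS peak below is an assumed figure for illustration; the episode's loudness and target come from the example above:

```python
def delivery_gain(measured_lufs, target_lufs, peak_dbfs, ceiling_dbtp=-1.0):
    """Gain needed to hit a loudness target, plus whether that gain
    would push peaks over the ceiling (meaning a limiter must run
    first)."""
    gain = target_lufs - measured_lufs
    needs_limiter = peak_dbfs + gain > ceiling_dbtp
    return gain, needs_limiter

# -22.4 LUFS measured, -16 LUFS target, assumed peak at -3.0 dBFS:
gain, needs_limiter = delivery_gain(-22.4, -16.0, peak_dbfs=-3.0)
# gain = +6.4 dB; -3.0 + 6.4 = +3.4 dBFS would clip, so limit first.
```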

For music going to Spotify, master to integrated -14 LUFS with -1 dBTP. For broadcast TV in Europe, master to -23 LUFS with -1 dBTP. For YouTube music videos, anywhere from -14 to -16 LUFS plays consistently — YouTube only lowers loud uploads, never raises quiet ones.

When Normalization Helps vs Hurts

Loudness normalization helps when listeners switch between sources. Spotify users do not want to grab the volume knob between every track, and podcast listeners do not want one episode to whisper and the next to shout.

It hurts when the audio is intentionally dynamic — a film score with quiet introspection and loud climaxes, a classical piano piece, an audio drama with whispers and gunshots. Aggressive normalization to a -14 LUFS target on dynamic content forces the engineer to compress and limit harder than the music wants, which flattens the experience. Streaming platforms increasingly support album-mode normalization to mitigate this, but per-track normalization is still the default.

Batch Tools and DAW Integration

Audacity has Loudness Normalization (Effect menu) targeting -23 LUFS by default with an adjustable target. Adobe Audition's Match Loudness feature (Window > Match Loudness) handles batches and uses ITU BS.1770. iZotope RX has a Loudness module. Pro Tools has had built-in dialogue loudness metering since version 12. ffmpeg's loudnorm filter (-af loudnorm=I=-16:TP=-1:LRA=11) applies time-varying gain in a single pass; run it twice (first to measure, then again feeding the measured values back with linear=true) and it applies a single linear gain instead, which is the higher-quality path. For a batch of files needing simple loudness matching before further processing, the WAV to MP3 converter or FLAC to MP3 tools work fine for delivery, but you should normalize before encoding to lossy formats so the encoder sees a properly leveled signal.
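The two-pass loudnorm workflow looks like this when sketched as command strings. Filenames are placeholders, and the measured_* values in pass two come from the JSON that pass one prints (the numbers shown are illustrative):

```python
# Sketch of ffmpeg's two-pass loudnorm workflow as command strings.
# "in.wav"/"out.wav" are placeholders; the measured_* values in pass
# two come from the JSON that pass one prints.
target = "I=-16:TP=-1:LRA=11"

# Pass one: measure only, discard audio output.
pass1 = (f"ffmpeg -i in.wav -af loudnorm={target}:print_format=json "
         "-f null -")

# Pass two: suppose pass one reported input_i=-22.4, input_tp=-3.1,
# input_lra=6.2, input_thresh=-32.9 (illustrative numbers).
pass2 = (f"ffmpeg -i in.wav -af loudnorm={target}"
         ":measured_I=-22.4:measured_TP=-3.1:measured_LRA=6.2"
         ":measured_thresh=-32.9:linear=true out.wav")
```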

For deeper context on the related concepts, see audio bitrate explained, the bitrate guide by use case, and the audio quality settings reference.