The main working platform has shifted from the analogue world into the digital one, and it is more and more convenient to produce music at home. A music maker can have little to no knowledge of production, yet thanks to the “Normalize” button in some DAWs and online auto-mastering sites, mastering has never been easier. So why master the music at all?
A Little History
The term “mastering” is often shrouded in mystery or completely misunderstood. Without going back through the full history of mastering, in short it is the last step of the creative process before release. What top mastering engineers do is “turning a collection of songs into an album by making them sound like they belong together in tone, volume and timing” (Owsinski, 2014). Nowadays, to my understanding, mastering for non-pro-audio people means maximizing songs to make them more appealing and effective. Keeping that in mind, loudness is one of the most important components of a successful song or album.
From Vinyl to CD to Digital Streaming. Mastering engineers first entered the industry as transfer engineers, just as mixing engineers were once called balance engineers: after Ampex commercially introduced the first magnetic tape recorder in 1948, the mastering engineer’s job was to transfer the tape recording to the vinyl master.
Fast forward to 1982, when Sony brought the audio world into its first digital stage. The most obvious format to send for CD (Compact Disc) mastering is 16-bit/44.1kHz, but there are various other options, such as “burning a CD-ROM containing stereo 24-bit WAV or AIFF files. One big advantage of this approach is that you can also send in your files at sample rates other than 44.1kHz, most popularly at 96kHz, so that the mastering engineer gets the best possible chance of retaining audio quality until the very last stage of the proceedings.” (M. Walker, 2003) As the CD became the primary release format for consumers during the 1990s, the maximum peak level was no longer limited by analogue equipment but was instead encoded digitally with a fixed maximum peak amplitude. This resulted in louder and hotter masters: the main advantage was the increased dynamic range, “with peak levels often hovering around the 0 dB limit and record companies pushing up levels to remain competitive” (M. Henshall, 2012).
Mastering For Online Distribution
Online streaming services have taken over all other forms of platform. Because of the restricted storage and bandwidth of their servers, files can only be kept at reasonably small sizes. On the main platform I am working with right now, my artists never release any physical copies of their work; they usually release on NetEase Music, a platform similar to Spotify that offers premium-quality MP3s up to 320 kbps (kilobits per second).
“MP3 files are known for their encoded lossy data compression” (B. Owsinski, 2014). What the encoder really does is reduce the bit count to make the file smaller. When people talk about MP3s losing quality, it is because during data compression the encoder removes audio information it judges unnecessary, such as masked signals or quiet signals covered by louder ones. Nowadays encoders offer several different modes that affect the final sound quality.
VBR Mode: Variable bit rate. It aims to maintain a constant quality by varying the bit rate, which generally yields the best results.
ABR Mode: Average bit rate. It is similar to VBR, but it varies the bit rate around a targeted average.
CBR Mode: Constant bit rate. It maintains a steady bit rate regardless of the program material, giving the lowest quality but a very predictable file size.
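The trade-off between the modes can be shown with a little arithmetic: only CBR lets you predict the file size exactly, because the bit rate never moves. A minimal sketch (the bit rates and song length are made-up example values, not from any particular encoder):

```python
# Audio payload size of a CBR MP3: simply bitrate x duration.
# VBR/ABR sizes can only be estimated, since the bit rate moves with the music.

def cbr_size_bytes(bitrate_kbps: int, duration_s: float) -> int:
    """Exact audio payload size of a CBR MP3 (ignoring tags and headers)."""
    return int(bitrate_kbps * 1000 * duration_s / 8)

# A hypothetical 3-minute (180 s) song:
print(cbr_size_bytes(320, 180))  # 320 kbps CBR -> 7,200,000 bytes (~7.2 MB)
print(cbr_size_bytes(128, 180))  # 128 kbps CBR -> 2,880,000 bytes (~2.9 MB)
```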
This is very important for a mastering engineer to know, because lossy MP3 coding makes the quality of the mastered music an issue. It is essential to start with the best-quality audio, since it will then be degraded less during the encoding process.
Create a great MP3
Adding an MP3 bounce might seem easy, but there is more thought behind it.
- Obviously, starting with the highest-quality audio is key: the highest sample rate and the most bits available. Although most DAWs now offer 48kHz/24-bit as standard, this remains the most important step.
- Lowering the input level may help a little, because MP3 masters for streaming services aren’t as hot as CD or FLAC masters. Coming in 1 to 3 dB lower can make a big difference.
- Filter out some top end. There is no particular frequency to aim for; it depends on the song itself, so trust a mastering engineer’s ears. MP3 encoders have difficulty dealing with very high frequencies, so rolling them off can be traded for a better-sounding MP3.
- A busy, compacted-sounding mix tends to lose relatively more “punchiness” in the encoding process.
- Avoid hyper-compression, so the encoding algorithm has some dynamic range to work with.
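The level and top-end advice above can be sketched in code. This is a hedged illustration, not any encoder’s actual pre-processing: a simple gain trim of -2 dB, plus a crude one-pole low-pass standing in for a mastering engineer’s ear-driven high-frequency roll-off (the 16 kHz cutoff is an arbitrary example, not a recommendation).

```python
import math

def db_to_gain(db: float) -> float:
    """Convert a dB change into a linear gain factor."""
    return 10 ** (db / 20)

def prepare_for_mp3(samples, trim_db=-2.0, sr=48000, cutoff_hz=16000):
    """Trim the level and roll off extreme top end before MP3 encoding.
    A one-pole low-pass is a crude stand-in for a mastering-grade filter."""
    g = db_to_gain(trim_db)
    a = math.exp(-2 * math.pi * cutoff_hz / sr)  # one-pole coefficient
    out, y = [], 0.0
    for x in samples:
        y = (1 - a) * (x * g) + a * y
        out.append(y)
    return out

print(round(db_to_gain(-2.0), 3))  # -2 dB is roughly a 0.794x linear gain
```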
FLAC stands for Free Lossless Audio Codec. It works much like an MP3, but it is lossless and designed specifically for high-quality audio. What makes a FLAC file bigger than an MP3 is that it supports bit depths from 4 to 32 bits and sample rates from 1Hz to 655,350Hz, which it detects automatically from the source file. FLAC is the ideal way to deliver high-fidelity audio while maintaining a reasonable size.
Ogg Vorbis is another audio compression format and the main competitor to MP3. It differs in several ways: it is free, open and not patented. It has become one of the main file formats Spotify uses, the other being AAC. Although it is superior to MP3 in sound quality, supports more than two channels (up to 255 in total) and produces smaller files, it is not as popular as MP3, and its benefits really show only at 192 kbps or higher.
(LUFS standards for different platforms)
The most common things to see in mastering are the different measurement types: RMS, Sample Peak, LUFS, LKFS, True Peak, etc.
- LUFS (Loudness Units relative to Full Scale): The whole system is built on K-weighting. Rather than measuring loudness by counting sample by sample, it measures perceived loudness; it is used by the European Broadcasting Union (EBU) and has become the new standard. Integrated LUFS is determined by the average perceived loudness of a song over time. LUFS is displayed on a meter the same way as other full-scale measurements such as dBFS: the less negative the number, the higher the level. For example, -14 LUFS is louder than -23 LUFS.
- Several measurements derive from LUFS. Momentary LUFS is the loudness over a 400ms window, so expect it to change rapidly. Short-term LUFS is the loudness over 3 seconds. Loudness Range is the span from the highest level to the lowest.
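To make Integrated LUFS concrete, here is a minimal sketch of the ITU-R BS.1770 measurement for a mono 48kHz signal: the K-weighting filter (two biquad stages, with coefficients as published in the standard), then the mean square over the whole programme, converted to LUFS. Gating, channel weighting and the momentary/short-term windows are omitted to keep the sketch short, so real meters will read slightly differently.

```python
import math

# K-weighting biquad coefficients at 48 kHz, from ITU-R BS.1770.
SHELF = ([1.53512485958697, -2.69169618940638, 1.19839281085285],
         [-1.69065929318241, 0.73248077421585])
HIPASS = ([1.0, -2.0, 1.0],
          [-1.99004745483398, 0.99007225036621])

def biquad(x, coeffs):
    """Direct form I biquad: (b0, b1, b2) feed-forward, (a1, a2) feedback."""
    (b0, b1, b2), (a1, a2) = coeffs
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        v = b0*s + b1*x1 + b2*x2 - a1*y1 - a2*y2
        x1, x2, y1, y2 = s, x1, v, y1
        y.append(v)
    return y

def integrated_lufs(samples):
    """Ungated integrated loudness of a mono 48 kHz signal."""
    z = biquad(biquad(samples, SHELF), HIPASS)
    mean_sq = sum(s*s for s in z) / len(z)
    return -0.691 + 10 * math.log10(mean_sq)

# A full-scale 997 Hz sine should read about -3 LUFS (BS.1770's own test case).
sr, f = 48000, 997.0
tone = [math.sin(2*math.pi*f*n/sr) for n in range(sr)]
print(round(integrated_lufs(tone), 1))
```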
(Waves WLM (Waves Loudness Meter))
- LKFS (Loudness K-weighted relative to Full Scale): One LKFS equals one dB. It is the same measurement used under different systems than LUFS; LKFS can be found in the “ITU BS.1770 standard and the ATSC A/85 standard.” (TC Electronic, 2013)
- RMS (Root Mean Square): a more “old school” reading of average loudness, associated with the VU meters found on analogue consoles and outboard gear. It is designed to give an average output level over a short window of time. The biggest difference between RMS and LUFS is that LUFS applies a specified frequency weighting on a full-scale reading.
- Sample Peak: the highest digital sample value. Related to it is the Inter-Sample Peak (ISP), where the reconstruction filter that smooths the stepped digital signal swings above the highest sample. Sample peak is the standard meter in a digital DAW for avoiding clipping.
- True Peak: the highest point the reconstructed analogue signal reaches. People now commonly work in digital systems, but every digital signal has to be converted to analogue to be heard; true peak is therefore a much more sensible metric for the peak level of a waveform, which is why it remains relevant to this day.
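The gap between sample peak and true peak is easy to demonstrate. Below, a sine at a quarter of the sample rate is sampled at a 45° phase offset: every sample lands at about 0.707 (-3 dBFS), yet the reconstructed waveform between the samples reaches 1.0. The sketch uses truncated Whittaker-Shannon (sinc) interpolation as a stand-in for a real oversampling true-peak meter, so the recovered peak is approximate.

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi*x) / (math.pi*x)

def interpolate(samples, t, taps=64):
    """Band-limited value at fractional sample position t (truncated sinc sum)."""
    n0 = int(t)
    return sum(samples[n] * sinc(t - n)
               for n in range(max(0, n0 - taps), min(len(samples), n0 + taps)))

# Sine at fs/4, phase-shifted so no sample lands on the crest.
x = [math.sin(2*math.pi*0.25*n + math.pi/4) for n in range(2048)]

sample_peak = max(abs(s) for s in x)              # every sample is ~0.707
true_peak = max(abs(interpolate(x, n + 0.5))      # check between the samples
                for n in range(900, 1100))
print(round(sample_peak, 3), round(true_peak, 2))
```

A sample-peak meter would call this signal -3 dBFS, while a true-peak meter correctly reports it hitting full scale between samples.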
Sending masters to Spotify
(Spotify Mastering Policies, 2018)
How Spotify calculates loudness is through ReplayGain, which they have used since day one. ReplayGain is software that applies a technique to achieve the same perceived playback loudness across audio files. Its algorithm measures the perceived loudness of audio data, which allows each song within a collection to play back at a consistent level. However, the algorithm doesn’t specify an exact measurement unit for loudness, so Spotify is planning to change to ITU 1770 in the future, a standard developed by the “International Telecommunication Union” (Spotify, 2018).
To adjust loudness, Spotify transcodes all the delivered formats, commonly WAV and FLAC, into Ogg Vorbis and AAC, measuring loudness along the way. It then applies either negative or positive gain to keep all songs at a similar level. This process is usually called “normalization”.
- Negative Gain: applied to louder masters until they reach -14 LUFS; it lowers the volume without introducing any distortion.
- Positive Gain: a limiter is applied with fixed values of 5ms attack and 100ms release, and the level is raised until it reaches -14 LUFS. Spotify tries its best to prevent the normalization process from distorting or clipping while maintaining the track’s dynamics.
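The two-sided adjustment can be sketched as a simple gain calculation. This is an illustration of the rule described above, not Spotify’s actual code; the limiter itself is out of scope here, so the sketch only reports when one would be engaged.

```python
TARGET_LUFS = -14.0  # Spotify's stated normalization target

def normalization_gain_db(integrated_lufs: float) -> float:
    """Gain (in dB) needed to bring a track to the -14 LUFS target."""
    return TARGET_LUFS - integrated_lufs

for lufs in (-9.0, -14.0, -20.0):
    gain = normalization_gain_db(lufs)
    if gain < 0:
        kind = "negative gain"
    elif gain > 0:
        kind = "positive gain (limiter engaged)"
    else:
        kind = "no change"
    print(f"{lufs} LUFS -> {gain:+.1f} dB ({kind})")
```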
To sum up, in my case I will be targeting -2 dB true peak and -14 LUFS in my mastered tracks, so they won’t get normalized by the Spotify platform. More information on the mastering session in Pro Tools is coming in Part 2.
What is a Loudness Standard and Why is it Important?
Today, the most fundamental audio problem is the control of loudness. Every day, millions of people adjust the volume controls on their devices, over and over, across different platforms. Music recordings from the past often differ significantly in level from genre to genre. In a television context, meanwhile, commercials are generally much louder than formats like film, drama or newscasts.
“The FCC (The Federal Communications Commission) hasn’t made any regulatory distinctions between the sound levels of commercials and the sound levels of TV programs.” (L. Miller, 2016) The peak levels of commercials don’t usually exceed the peak levels of TV programming, but the experience is like a flashbulb going on and off compared with a spotlight shining constantly, with the TV ads as the flashing bulbs. In other words, it is a use of “contrast”.
Since the early days of digital audio, the most common way of determining the level of a given piece of audio has been to measure the sample-peak level. However, this method is easily deceived, and in the effort to appear louder than the competition, many producers and mastering engineers have resorted to excessive amounts of compression, limiting and maximization. This not only makes audio highly inconsistent in loudness, it also compromises the quality of the programme material significantly, resulting in an unsatisfying product.
What Happens If There Is No Loudness Standard?
It is already happening in China. “NetEase Music”, the equivalent of Spotify, has no loudness standard for its platform; in fact, none of the Chinese online music streaming sites have one, and it causes massive issues. Bedroom producers send their music to the platform, where it gets normalized with a limiter. The system is built similarly to Spotify’s, but it differs in having only a “brickwall limiter”, whose function is a threshold that clamps any clipping signal. As a result, the music released on the platform is either too quiet or has been limited twice: once when bounced out of the DAW, and again by the limiter on the platform. Only artists with good production end up with proper masters on the platform. The result is a very unprofessional and laughable music scene in China, and nobody even talks about it.