Tue November 6, 2012
TheEC: Multi-band Audio Processing
By Aaron Read
Providence, RI – Audio purists refer to it as the scourge of music, but AUDIO PROCESSING is a critical part of any radio station's transmitter. An audio processor typically accomplishes several goals:
For a lot of stations, it takes the discrete left/right audio channels and merges them into a "stereo composite" signal. This puts the left-plus-right (L+R) audio into the spectrum between 30Hz and 15kHz, a stereo pilot tone at 19kHz (to tell radios there's stereo information in the signal), and then the left-minus-right (L-R) audio into the spectrum between 23kHz and 53kHz. The receiver can then mathematically recover the discrete left/right channels: adding L-R to L+R cancels out the right-channel audio and leaves the left channel, while subtracting L-R from L+R (phase-reversing it and mixing) cancels out the left-channel audio and leaves the right channel. Backwards compatibility for mono radios is preserved because mono radios simply ignore the 19kHz stereo pilot and only play the L+R audio.
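The sum-and-difference math the receiver performs can be sketched in a few lines of plain Python (the function name is invented for illustration; a real receiver does this with demodulated signals, not neat sample lists):

```python
def stereo_decode(l_plus_r, l_minus_r):
    """Recover the discrete channels from the sum (L+R) and difference (L-R)
    signals a stereo receiver demodulates out of the composite."""
    left = [(s + d) / 2 for s, d in zip(l_plus_r, l_minus_r)]
    right = [(s - d) / 2 for s, d in zip(l_plus_r, l_minus_r)]
    return left, right

# The transmitter's matrix builds the sum and difference from L and R...
left, right = [0.5, -0.2], [0.1, 0.3]
l_plus_r = [l + r for l, r in zip(left, right)]
l_minus_r = [l - r for l, r in zip(left, right)]

# ...and the decoder gets the original channels back.
dec_left, dec_right = stereo_decode(l_plus_r, l_minus_r)
```

A mono radio, in effect, just plays `l_plus_r` and never looks at the difference signal.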
Audio processors also apply what's known as "pre-emphasis." This is a system of artificially boosting high-end frequencies for transmission, then un-boosting ("de-emphasis") them in the receiver, in order to overcome signal-to-noise problems inherent to stereo broadcasting. Without it, stereo audio sounds very distorted and "muddy" as the high end gets lost in the noise.
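A first-order digital approximation of this boost-then-restore pair can be sketched as follows. This is a simplified illustration: the 75-microsecond time constant is the standard for FM in the Americas, but the sample rate and the backward-difference filter design are assumptions of this sketch, not how any particular processor does it.

```python
TAU = 75e-6    # 75 microsecond pre-emphasis time constant (FM in the Americas)
FS = 48_000    # audio sample rate, assumed for this sketch
C = TAU * FS   # resulting filter coefficient

def pre_emphasize(x):
    """Transmitter side: boost the highs with a first-order difference,
    a discrete approximation of H(s) = 1 + s*tau."""
    out, prev = [], 0.0
    for s in x:
        out.append(s + C * (s - prev))
        prev = s
    return out

def de_emphasize(y):
    """Receiver side: the exact inverse of the filter above, a one-pole
    lowpass approximating H(s) = 1 / (1 + s*tau)."""
    out, prev = [], 0.0
    for s in y:
        prev = (s + C * prev) / (1 + C)
        out.append(prev)
    return out
```

Running `pre_emphasize` and then `de_emphasize` returns the original samples, while the pre-emphasized signal itself has exaggerated high-frequency swings — that boosted high end is what rides above the noise during transmission.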
But the big thing an audio processor does is loudness control. It takes a diverse array of programming material (music from CDs, music from computers, satellite feeds, people talking, and so on) and manipulates the loudness levels so that listeners perceive a relatively consistent audio level. Essentially, it keeps listeners from constantly adjusting the volume knob as the programming changes.
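A toy, single-band version of that loudness control might look like the sketch below. The threshold, ratio, and attack/release values are invented for illustration; real broadcast processors are vastly more sophisticated.

```python
def compress(samples, threshold=0.5, ratio=4.0, attack=0.5, release=0.01):
    """Feed-forward compressor sketch: follow the signal's envelope, then
    pull the gain down above the threshold so loud and quiet material land
    closer together in perceived level."""
    env, out = 0.0, []
    for s in samples:
        mag = abs(s)
        # Track the envelope: react quickly when the level rises (attack),
        # slowly when it falls (release).
        coef = attack if mag > env else release
        env += coef * (mag - env)
        if env > threshold:
            # Above the threshold, output level grows at 1/ratio the input rate.
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(s * gain)
    return out
```

With these numbers, a steady full-scale input (level 1.0) settles at about 0.625, while anything under the 0.5 threshold passes through untouched — the listener hears a narrower spread of levels without reaching for the volume knob.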
Now this is a delicate operation. Stations want to be loud so that listeners can hear the station clearly on any radio, no matter how good or how lousy. But being loud inherently means sacrificing dynamic range, which reduces the "clarity" of the programming. This can be especially noticeable on certain formats like classical music or news/talk, where there's a lot of natural variation in the volume. A poorly-done processing scheme can result in "breathing," where you can literally hear the station get louder and softer, quickly and repeatedly, as the programming changes.
Plus, reduce the dynamic range too much and you cause "listener fatigue" as our brains have to work harder (without us even realizing it) to pay attention to a narrower dynamic range. After an hour or so of listening to an "overprocessed" station, you can literally feel exhausted and not know why!
One way to help is to use "multi-band processing." This is where the processor applies different compression/limiting schemes to different ranges of audio frequencies. Logically, this makes sense: bass is different from midrange is different from treble, after all. In fact, many modern processors can have five, six or even more "bands" of processing because it's all done in the digital domain (i.e. inside a computer) allowing for high precision. With a scheduler and time sync connection, the station can automatically change the processing scheme depending on the time of day! For example: a talk-appropriate scheme for the "morning zoo crew," but a music-appropriate scheme otherwise.
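A bare-bones two-band illustration of the idea: split the audio with a complementary low/high filter pair, apply a (crude, static) gain reduction to each band on its own terms, and sum the result. Everything here — the one-pole crossover, the per-band thresholds, the peak-based gain — is a simplification invented for this sketch, not how a real processor is built.

```python
def lowpass(x, coef=0.1):
    """One-pole lowpass: keeps the bass, lets the treble leak past."""
    out, prev = [], 0.0
    for s in x:
        prev += coef * (s - prev)
        out.append(prev)
    return out

def band_gain(band, threshold, ratio):
    """Crude static 'compression': one gain for the whole band, derived
    from its peak level."""
    peak = max(abs(s) for s in band)
    if peak <= threshold:
        return 1.0
    return (threshold + (peak - threshold) / ratio) / peak

def two_band_process(x, lo_thresh=0.5, hi_thresh=0.3, ratio=4.0):
    low = lowpass(x)
    high = [s - l for s, l in zip(x, low)]   # complementary: low + high == x
    g_lo = band_gain(low, lo_thresh, ratio)  # bass gets its own gain...
    g_hi = band_gain(high, hi_thresh, ratio) # ...treble gets another
    return [l * g_lo + h * g_hi for l, h in zip(low, high)]
```

A modern processor does this with five or more bands, proper crossover filters, and dynamic attack/release behavior per band, but the structure is the same: split, process each band independently, and sum.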
So when you turn on your radio, set that volume where you like it and relax. Because the audio processor will handle it from there!