Fader creep

Fader creep is a colloquial term used in audio recording to describe a tendency for sound engineers to raise the gain of individual channels on a mixing console, rather than lowering others, to achieve a desired change or fix perceived problems in the mix.[1] For example, an engineer might compensate for a particularly loud drum track by raising the volumes of the voice, the guitar, and the piano to the point where all of the individual signals are competing for headroom. Fader creep may also occur in audio mixing for live concerts.

Fader creep can be a particular problem in audio mixing sessions for multi-track recordings, where individual sounds, held on separate audio tracks or delivered by outboard MIDI or computer audio equipment, are combined into the final stereo presentation of the recording. The faders (potentiometers that operate by sliding up or down) or volume controls (rotary potentiometers) on the mixing board or audio processor gradually "creep" toward the maximum volume setting, which reduces the ability to manipulate the relative volumes between channels. This can also result in clipping or distortion of the master mix, which occurs when the overall volume of sound is too great for the equipment or recording medium intended to hold it.[2]
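
The arithmetic behind this loss of headroom can be illustrated with a minimal sketch; the channel levels and the worst-case assumption that peaks coincide are illustrative, not taken from the cited sources. Once several faders are pushed up to compete with one loud source, the summed peak exceeds the full-scale ceiling and the master bus clips.

```python
# Minimal sketch: how raising channel gains instead of lowering others erodes
# headroom in a summed mix. Assumes full scale = 1.0 (0 dBFS) and, as a
# worst case, that per-channel peaks add linearly.
import math

def dbfs(level: float) -> float:
    """Convert a linear peak level to decibels relative to full scale."""
    return 20 * math.log10(level)

# Illustrative per-channel peak levels (not from the cited sources).
channels = {"drums": 0.4, "vocal": 0.2, "guitar": 0.15, "piano": 0.1}
print(f"initial mix peak: {dbfs(sum(channels.values())):+.1f} dBFS")  # about -1.4 dBFS

# "Fader creep": every other channel is pushed up to compete with the loud drums.
for name in ("vocal", "guitar", "piano"):
    channels[name] *= 2  # roughly +6 dB on each fader

mix_peak = sum(channels.values())
print(f"after creep:      {dbfs(mix_peak):+.1f} dBFS")  # about +2.3 dBFS
if mix_peak > 1.0:
    print("master bus exceeds full scale -> clipping/distortion")
```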

Common causes of fader creep include ear fatigue, the diminished ability of the human ear to hear clearly after prolonged exposure to loud sounds, which can reduce the sound engineer's ability to hear the individual components of the mix accurately. It may also occur if the master fader or monitor levels are too low and an engineer attempts to address this by setting individual faders higher.[3]

Related Research Articles

<span class="mw-page-title-main">Binaural recording</span> Method of recording sound

Binaural recording is a method of recording sound that uses two microphones, arranged with the intent to create a 3D stereo sound sensation for the listener of actually being in the room with the performers or instruments. This effect is often created using a technique known as dummy head recording, wherein a mannequin head is fitted with a microphone in each ear. Binaural recording is intended for replay using headphones and will not translate properly over stereo speakers. This idea of a three-dimensional or "internal" form of sound has also translated into useful advancement of technology in many things such as stethoscopes creating "in-head" acoustics and IMAX movies being able to create a three-dimensional acoustic experience.

<span class="mw-page-title-main">Headphones</span> Device placed near the ears that plays sound

Headphones are a pair of small loudspeaker drivers worn on or around the head over a user's ears. They are electroacoustic transducers, which convert an electrical signal to a corresponding sound. Headphones let a single user listen to an audio source privately, in contrast to a loudspeaker, which emits sound into the open air for anyone nearby to hear. Headphones are also known as earphones or, colloquially, cans. Circumaural and supra-aural headphones use a band over the top of the head to hold the speakers in place. Another type, known as earbuds or earpieces, consists of individual units that plug into the user's ear canal. A third type are bone conduction headphones, which typically wrap around the back of the head and rest in front of the ear canal, leaving the ear canal open. In the context of telecommunication, a headset is a combination of a headphone and microphone.

<span class="mw-page-title-main">Mixing console</span> Device used for audio mixing

A mixing console or mixing desk is an electronic device for mixing audio signals, used in sound recording and reproduction and sound reinforcement systems. Inputs to the console include microphones, signals from electric or electronic instruments, or recorded sounds. Mixers may control analog or digital signals. The modified signals are summed to produce the combined output signals, which can then be broadcast, amplified through a sound reinforcement system or recorded.

<span class="mw-page-title-main">Recording studio</span> Facility for sound recording

A recording studio is a specialized facility for recording and mixing of instrumental or vocal musical performances, spoken words, and other sounds. They range in size from a small in-home project studio large enough to record a single singer-guitarist, to a large building with space for a full orchestra of 100 or more musicians. Ideally, both the recording and monitoring spaces are specially designed by an acoustician or audio engineer to achieve optimum acoustic properties.

<span class="mw-page-title-main">Dynamic range compression</span> Audio signal processing operation

Dynamic range compression (DRC) or simply compression is an audio signal processing operation that reduces the volume of loud sounds or amplifies quiet sounds, thus reducing or compressing an audio signal's dynamic range. Compression is commonly used in sound recording and reproduction, broadcasting, live sound reinforcement and some instrument amplifiers.
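
As an illustration of the basic idea, a minimal sketch follows; the threshold and ratio values are assumptions chosen for the example, not drawn from the excerpt above. The static gain law of a downward compressor leaves signals below the threshold untouched and reduces only the portion of the level that exceeds it.

```python
# Minimal sketch of static downward compression using the common
# threshold/ratio formulation; levels are in dBFS. Values are illustrative.
def compress(level_db: float, threshold_db: float = -18.0, ratio: float = 4.0) -> float:
    """Return the output level after static downward compression."""
    if level_db <= threshold_db:
        return level_db                   # below threshold: unchanged
    excess = level_db - threshold_db      # amount over the threshold
    return threshold_db + excess / ratio  # overshoot reduced by the ratio

for level in (-30.0, -18.0, -6.0, 0.0):
    print(f"{level:+6.1f} dB in -> {compress(level):+6.1f} dB out")
```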

Fantasound was a sound reproduction system developed by engineers of Walt Disney studios and RCA for Walt Disney's animated film Fantasia, the first commercial film released in stereo.

<span class="mw-page-title-main">Sound reinforcement system</span> Amplified sound system for public events

A sound reinforcement system is the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.

<span class="mw-page-title-main">Noise gate</span> Audio processing device

A noise gate or simply gate is an electronic device or software that is used to control the volume of an audio signal. Comparable to a compressor, which attenuates signals above a threshold, such as loud attacks from the start of musical notes, noise gates attenuate signals that register below the threshold. However, noise gates attenuate signals by a fixed amount, known as the range. In its simplest form, a noise gate allows a main signal to pass through only when it is above a set threshold: the gate is "open". If the signal falls below the threshold, no signal is allowed to pass: the gate is "closed". A noise gate is used when the level of the "signal" is above the level of the unwanted "noise". The threshold is set above the level of the "noise", and so when there is no main "signal", the gate is closed.
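
The open/closed behaviour described above can be sketched in a few lines of illustrative code; the threshold and range values are assumptions, and the attack, hold, and release smoothing of a real gate is omitted.

```python
# Minimal sketch of a hard noise gate: samples whose level falls below the
# threshold are attenuated by a fixed range, as described above.
def gate(samples, threshold=0.05, range_db=-60.0):
    """Attenuate samples below the threshold by `range_db` decibels."""
    attenuation = 10 ** (range_db / 20)  # e.g. -60 dB -> 0.001 linear
    return [s if abs(s) >= threshold else s * attenuation for s in samples]

print(gate([0.2, 0.01, -0.3, 0.04, 0.5]))
# quiet samples (0.01, 0.04) are pushed far below audibility; louder ones pass
```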

A variable-gain amplifier (VGA) or voltage-controlled amplifier (VCA) is an electronic amplifier that varies its gain depending on a control voltage.

<span class="mw-page-title-main">Fade (audio engineering)</span> Gradual change in level of audio signal

In audio engineering, a fade is a gradual increase or decrease in the level of an audio signal. The term can also be used for film cinematography or theatre lighting in much the same way.
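
As a minimal illustration, not drawn from the source, a linear fade-out can be expressed as a per-sample gain ramp from full level down to silence.

```python
# Minimal sketch of a linear fade-out: each sample is scaled by a gain that
# ramps from 1.0 down to 0.0 over the length of the fade.
def fade_out(samples):
    n = len(samples)
    return [s * (1 - i / max(n - 1, 1)) for i, s in enumerate(samples)]

print(fade_out([0.5, 0.5, 0.5, 0.5, 0.5]))  # [0.5, 0.375, 0.25, 0.125, 0.0]
```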

<span class="mw-page-title-main">Headphone amplifier</span>

A headphone amplifier is a low-powered audio amplifier designed particularly to drive headphones worn on or in the ears, instead of loudspeakers in speaker enclosures. Most commonly, headphone amplifiers are found embedded in electronic devices that have a headphone jack, such as integrated amplifiers, portable music players, and televisions. However, standalone units are used, especially in audiophile markets and in professional audio applications, such as music studios. Headphone amplifiers are available in consumer-grade models, used by hi-fi enthusiasts and audiophiles, and professional audio models, which are used in recording studios.

<span class="mw-page-title-main">Aux-send</span> Electronic signal-routing output

An aux-send is an electronic signal-routing output used on multi-channel sound mixing consoles used in recording and broadcasting settings and on PA system amplifier-mixers used in music concerts. The signal from the auxiliary send is often routed through outboard audio processing effects units and then returned to the mixer using an auxiliary return input jack, thus creating an effects loop. This allows effects to be added to an audio source or channel within the mixing console. Another common use of the aux send mix is to create monitor mixes for the onstage performers' monitor speakers or in-ear monitors. The aux send's monitor mix is usually different from the front of house mix the audience is hearing.

<span class="mw-page-title-main">Loudness war</span> Increasing levels in recorded music

The loudness war is a trend of increasing audio levels in recorded music, which reduces audio fidelity and—according to many critics—listener enjoyment. Increasing loudness was first reported as early as the 1940s, with respect to mastering practices for 7-inch singles. The maximum peak level of analog recordings such as these is limited by varying specifications of electronic equipment along the chain from source to listener, including vinyl and Compact Cassette players. The issue garnered renewed attention starting in the 1990s with the introduction of digital signal processing capable of producing further loudness increases.

<span class="mw-page-title-main">Live sound mixing</span> Blending of multiple sound sources for a live event

Live sound mixing is the blending of multiple sound sources by an audio engineer using a mixing console or software. Sounds that are mixed include those from instruments and voices which are picked up by microphones and pre-recorded material, such as songs on CD or a digital audio player. Individual sources are typically equalised to adjust the bass and treble response and routed to effect processors to ultimately be amplified and reproduced via a loudspeaker system. The live sound engineer listens and balances the various audio sources in a way that best suits the needs of the event.

<span class="mw-page-title-main">In-ear monitor</span> Audio earpiece commonly used in live music and television

In-ear monitors, or simply IEMs or in-ears, are devices used by musicians, audio engineers and audiophiles to listen to music or to hear a personal mix of vocals and stage instrumentation for live performance or recording studio mixing. They are also used by television presenters to receive vocal instructions, information and breaking news announcements from a producer that only the presenter hears. They are often custom-fitted to an individual's ears to provide comfort and a high level of noise reduction from ambient surroundings. Their origins as a tool in live music performance can be traced back to the mid-1980s.

A re-recording mixer in North America, also known as a dubbing mixer in Europe, is a post-production audio engineer who mixes recorded dialogue, sound effects and music to create the final version of a soundtrack for a feature film, television program, or television advertisement. The final mix must achieve a desired sonic balance between its various elements, and must match the director's or sound designer's original vision for the project. For material intended for broadcast, the final mix must also comply with all applicable laws governing sound mixing.

<span class="mw-page-title-main">Stage monitor system</span> Sound reinforcement for performers

A stage monitor system is a set of performer-facing loudspeakers called monitor speakers, stage monitors, floor monitors, wedges, or foldbacks on stage during live music performances in which a sound reinforcement system is used to amplify a performance for the audience. The monitor system allows musicians to hear themselves and fellow band members clearly.

<span class="mw-page-title-main">Audio mixing (recorded music)</span> Audio mixing to yield recorded sound

In sound recording and reproduction, audio mixing is the process of optimizing and combining multitrack recordings into a final mono, stereo or surround sound product. In the process of combining the separate tracks, their relative levels are adjusted and balanced and various processes such as equalization and compression are commonly applied to individual tracks, groups of tracks, and the overall mix. In stereo and surround sound mixing, the placement of the tracks within the stereo field is adjusted and balanced. Audio mixing techniques and approaches vary widely and have a significant influence on the final product.

A mixing engineer is responsible for combining ("mixing") different sonic elements of an auditory piece into a complete rendition, whether in music, film, or any other content of auditory nature. The finished piece, recorded or live, must achieve a good balance of properties, such as volume, pan positioning, and other effects, while resolving any arising frequency conflicts from various sound sources. These sound sources can comprise the different musical instruments or vocals in a band or orchestra, dialogue or foley in a film, and more.

<span class="mw-page-title-main">Equalization (audio)</span> Changing the balance of frequency components in an audio signal

Equalization, or simply EQ, in sound recording and reproduction is the process of adjusting the volume of different frequency bands within an audio signal. The circuit or equipment used to achieve this is called an equalizer.

References

  1. Borwick, John, ed. (1980). Sound Recording Practice: A Handbook. Oxford University Press. p. 293. ISBN 978-0-19-311920-8. Retrieved 2 February 2024.
  2. Ashcroft, Fran (21 February 2023). The Analogue Approach to Digital Recording and Mixing. The Crowood Press. ISBN 978-0-7198-4177-4. Retrieved 2 February 2024.
  3. Jones, Hollin. "Mixing While Recording or Producing: When To Do It & When Not To". Ask.audio. Retrieved 2 February 2024.