In sound recording and reproduction, audio mixing is the process of optimizing and combining multitrack recordings into a final mono, stereo or surround sound product. In the process of combining the separate tracks, their relative levels are adjusted and balanced, and various processes such as equalization and compression are commonly applied to individual tracks, groups of tracks, and the overall mix. In stereo and surround sound mixing, the placement of the tracks within the stereo (or surround) field is adjusted and balanced. Audio mixing techniques and approaches vary widely and have a significant influence on the final product.
Audio mixing techniques largely depend on the music genre and the quality of the sound recordings involved. The process is generally carried out by a mixing engineer, though sometimes the record producer or recording artist may assist. After mixing, a mastering engineer prepares the final product for production.
Audio mixing may be performed on a mixing console or in a digital audio workstation.
In the late 19th century, Thomas Edison and Emile Berliner developed the first recording machines. The recording and reproduction process itself was completely mechanical with little or no electrical parts. Edison's phonograph cylinder system used a small horn terminated in a stretched, flexible diaphragm attached to a stylus, which cut a groove of varying depth into the malleable tin foil of the cylinder. Emile Berliner's gramophone system recorded music by inscribing spiraling lateral cuts onto the surface of a flat disc.
Electronic recording became more widely used during the 1920s. It was based on the principles of electromagnetic transduction. The possibility for a microphone to be connected remotely to a recording machine meant that microphones could be positioned in more suitable places. The process was improved when outputs of the microphones could be mixed before being fed to the disc cutter, allowing greater flexibility in the balance.
Before the introduction of multitrack recording, all sounds and effects that were to be part of a recording were mixed simultaneously during a live performance. If the recorded mix was not satisfactory, or if one musician made a mistake, the selection had to be performed over until the desired balance and performance were obtained. The introduction of multitrack recording changed the recording process into one that generally involves three stages: recording, overdubbing, and mixing.
Modern mixing emerged with the introduction of commercial multi-track tape machines, most notably when 8-track recorders were introduced during the 1960s. The ability to record sounds into separate channels made it possible for recording studios to combine and treat these sounds not only during recording, but afterward during a separate mixing process.
The introduction of the cassette-based Portastudio in 1979 offered multi-track recording and mixing technology that did not require the specialized equipment and expense of commercial recording studios. Bruce Springsteen recorded his 1982 album Nebraska with one, and the Eurythmics topped the charts in 1983 with the song "Sweet Dreams (Are Made of This)", recorded by band member Dave Stewart on a makeshift 8-track recorder. In the mid-to-late 1990s, computers replaced tape-based recording for most home studios, with the Power Macintosh proving popular. At the same time, many professional recording studios began to use digital audio workstations, or DAWs, first used in the mid-1980s, to accomplish recording and mixing previously done with multitrack tape recorders, mixing consoles, and outboard gear.
A mixer (mixing console, mixing desk, mixing board, or software mixer) is the operational heart of the mixing process. Mixers offer a multitude of inputs, each fed by a track from a multitrack recorder. Mixers typically have two main outputs (in the case of two-channel stereo mixing) or six (in the case of 5.1 surround).
Mixers offer three main functionalities: summing incoming signals together, routing signals between inputs, processors, and outputs, and processing the signals themselves (for example, with equalization or dynamics).
Mixing consoles can be large and intimidating due to the exceptional number of controls. However, because many of these controls are duplicated (e.g. per input channel), much of the console can be learned by studying one small part of it. The controls on a mixing console will typically fall into one of two categories: processing and configuration. Processing controls are used to manipulate the sound. These can vary in complexity, from simple level controls, to sophisticated outboard reverberation units. Configuration controls deal with the signal routing from the input to the output of the console through the various processes.
Digital audio workstations (DAW) can perform many mixing features in addition to other processing. An audio control surface gives a DAW the same user interface as a mixing console.
Outboard audio processing units (analog) and software-based audio plug-ins (digital) are used for each track or group to perform various processing techniques. These processes, such as equalization, compression, sidechaining, stereo imaging, and saturation, are used to make each element as audible and sonically appealing as possible. The mix engineer will also use such techniques to balance the space of the final audio signal, removing unnecessary frequencies and volume spikes to minimize interference or clashing between elements.
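To illustrate one such process, the sketch below implements a heavily simplified compressor: samples whose magnitude exceeds a threshold have the excess reduced by a ratio. Real compressors use an envelope follower with attack and release times, so this is a conceptual sketch only; the function name and parameters are illustrative, not any particular product's API.

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Very simplified compressor: any sample whose magnitude exceeds
    the threshold has the excess above it divided by the ratio.
    Omits attack/release smoothing and make-up gain."""
    out = []
    for x in samples:
        mag = abs(x)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)  # restore the sign
    return out
```

With a threshold of 0.5 and a ratio of 4:1, a peak of 0.9 is reduced to 0.6, while samples below the threshold pass through unchanged.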
The frequency response of a signal represents the amount (volume) of every frequency in the human hearing range, which spans, on average, from 20 Hz to 20,000 Hz (20 kHz). A variety of processes are commonly used to edit frequency response in various ways.
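The frequency content of a digital signal can be inspected with a discrete Fourier transform. The following sketch, using NumPy and an assumed 48 kHz sample rate, finds the dominant frequency of a test tone:

```python
import numpy as np

rate = 48000                          # samples per second (assumed)
t = np.arange(rate) / rate            # one second of time points
signal = np.sin(2 * np.pi * 440 * t)  # a 440 Hz sine test tone

# Magnitude spectrum of the real-valued signal, one bin per hertz here.
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / rate)
peak = freqs[np.argmax(spectrum)]     # frequency carrying the most energy
```

For this one-second signal the bin spacing is exactly 1 Hz, so `peak` lands on the 440 Hz bin.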
The mixdown process converts a program with a multiple-channel configuration into a program with fewer channels. Common examples include downmixing from 5.1 surround sound to stereo, and from stereo to mono. Because these are common scenarios, it is common practice to verify the sound of such downmixes during the production process to ensure stereo and mono compatibility.
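A stereo downmix of a 5.1 program is typically a weighted sum of the channels. The sketch below uses the common ITU-style convention of attenuating the centre and surround channels by about 3 dB before adding them to front left and right; actual coefficients vary by standard and product, and the channel naming here is an assumption:

```python
import numpy as np

def downmix_51_to_stereo(ch):
    """ch: dict of equal-length 1-D arrays keyed L, R, C, LFE, Ls, Rs
    (hypothetical naming). Returns a stereo pair."""
    a = 1 / np.sqrt(2)                   # ~0.707, a 3 dB attenuation
    left = ch["L"] + a * ch["C"] + a * ch["Ls"]
    right = ch["R"] + a * ch["C"] + a * ch["Rs"]
    return left, right                   # LFE is often discarded

def downmix_stereo_to_mono(left, right):
    return 0.5 * (left + right)          # average to avoid clipping
```

Summing identical left and right channels to mono reproduces the original signal, which is why centre-panned material survives a mono fold-down while out-of-phase material cancels.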
The alternative channel configuration can be explicitly authored during the production process with multiple channel configurations provided for distribution. For example, on DVD-Audio or Super Audio CD, a separate stereo mix can be included along with the surround mix. Alternatively, the program can be automatically downmixed by the end consumer's audio system. For example, a DVD player or sound card may downmix a surround sound program to stereo for playback through two speakers.
Any console with a sufficient number of mix busses can be used to create a 5.1 surround sound mix, but this may be frustrating if the console is not specifically designed to facilitate signal routing, panning, and processing in a surround sound environment. Whether working in an analog hardware, digital hardware, or DAW mixing environment, the ability to pan mono or stereo sources, place effects in the 5.1 soundscape, and monitor multiple output formats without difficulty can make the difference between a successful mix and a compromised one. Mixing in surround is very similar to mixing in stereo except that there are more speakers, placed to surround the listener. In addition to the horizontal panoramic options available in stereo, mixing in surround lets the mix engineer pan sources within a much wider and more enveloping environment. In a surround mix, sounds can appear to originate from many more or almost any direction depending on the number of speakers used, their placement, and how the audio is processed.
There are two common ways to approach mixing in surround: one keeps the principal sources in a conventional front stereo image and uses the surround channels mainly for ambience and effects, while the other places sources freely around the listener. Naturally, these approaches can be combined in any way the mix engineer sees fit.
A third approach to mixing in surround was later developed by surround mix engineer Unne Liljeblad.
An extension to surround sound is 3D sound, used by formats such as Dolby Atmos. Known as object-based audio, this approach enables additional speakers to represent height channels, with as many as 64 unique speaker feeds. It has applications in concert recordings, movies, video games, and nightclub events.
Dolby Digital, originally synonymous with Dolby AC-3, is the name for a family of audio compression technologies developed by Dolby Laboratories. Called Dolby Stereo Digital until 1995, it is a lossy compression format. The first use of Dolby Digital was to provide digital sound in cinemas from 35 mm film prints. It has since also been used for TV broadcast, radio broadcast via satellite, digital video streaming, DVDs, Blu-ray discs and game consoles.
A mixing console or mixing desk is an electronic device for mixing audio signals, used in sound recording and reproduction and sound reinforcement systems. Inputs to the console include microphones, signals from electric or electronic instruments, or recorded sounds. Mixers may control analog or digital signals. The modified signals are summed to produce the combined output signals, which can then be broadcast, amplified through a sound reinforcement system or recorded.
Ambisonics is a full-sphere surround sound format: in addition to the horizontal plane, it covers sound sources above and below the listener.
A recording studio is a specialized facility for recording and mixing of instrumental or vocal musical performances, spoken words, and other sounds. They range in size from a small in-home project studio large enough to record a single singer-guitarist, to a large building with space for a full orchestra of 100 or more musicians. Ideally, both the recording and monitoring spaces are specially designed by an acoustician or audio engineer to achieve optimum acoustic properties.
Surround sound is a technique for enriching the fidelity and depth of sound reproduction by using multiple audio channels from speakers that surround the listener. Its first application was in movie theaters. Prior to surround sound, theater sound systems commonly had three screen channels of sound that played from three loudspeakers located in front of the audience. Surround sound adds one or more channels from loudspeakers to the side or behind the listener that are able to create the sensation of sound coming from any horizontal direction around the listener.
Multitrack recording (MTR), also known as multitracking, is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources or of sound sources recorded at different times to create a cohesive whole. Multitracking became possible in the mid-1950s when the idea of simultaneously recording different audio channels to separate discrete "tracks" on the same reel-to-reel tape was developed. A "track" was simply a different channel recorded to its own discrete area on the tape whereby their relative sequence of recorded events would be preserved, and playback would be simultaneous or synchronized.
Monaural or monophonic sound reproduction is sound intended to be heard as if it were emanating from one position. This contrasts with stereophonic sound or stereo, which uses two separate audio channels to reproduce sound from two microphones on the right and left side, which is reproduced with two separate loudspeakers to give a sense of the direction of sound sources. In mono, only one loudspeaker is necessary, but, when played through multiple loudspeakers or headphones, identical signals are fed to each speaker, resulting in the perception of one-channel sound "imaging" in one sonic space between the speakers. Monaural recordings, like stereo ones, typically use multiple microphones fed into multiple channels on a recording console, but each channel is "panned" to the center. In the final stage, the various center-panned signal paths are usually mixed down to two identical tracks, which, because they are identical, are perceived upon playback as representing a single unified signal at a single place in the soundstage. In some cases, multitrack sources are mixed to a one-track tape, thus becoming one signal. In the mastering stage, particularly in the days of mono records, the one- or two-track mono master tape was then transferred to a one-track lathe used to produce a master disc intended to be used in the pressing of a monophonic record. Today, however, monaural recordings are usually mastered to be played on stereo and multi-track formats, yet retain their center-panned mono soundstage characteristics.
Dolby Pro Logic is a surround sound processing technology developed by Dolby Laboratories, designed to decode soundtracks encoded with Dolby Surround. The terms Dolby Stereo and LtRt are also used to describe soundtracks that are encoded using this technique.
Matrix decoding is an audio technology where a small number of discrete audio channels are decoded into a larger number of channels on play back. The channels are generally, but not always, arranged for transmission or recording by an encoder, and decoded for playback by a decoder. The function is to allow multichannel audio, such as quadraphonic sound or surround sound to be encoded in a stereo signal, and thus played back as stereo on stereo equipment, and as surround on surround equipment – this is "compatible" multichannel audio.
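The idea can be sketched as a simple sum-and-difference matrix in the style of Dolby Surround: the centre channel is added in phase to both encoded channels and the surround channel out of phase, so a passive decoder can recover them by summing and differencing. Real encoders also band-limit and phase-shift the surround feed; this sketch omits those details, and the function names are illustrative.

```python
import math

A = 1 / math.sqrt(2)  # ~0.707, a 3 dB attenuation

def encode_ltrt(l, c, r, s):
    """Matrix-encode four channels (L, C, R, surround) into two (Lt, Rt):
    centre in phase in both, surround out of phase between them."""
    lt = l + A * c + A * s
    rt = r + A * c - A * s
    return lt, rt

def decode_basic(lt, rt):
    """Passive decode: front L/R pass through; centre is the scaled sum,
    surround the scaled difference, of the encoded pair."""
    return lt, rt, A * (lt + rt), A * (lt - rt)
```

Played on plain stereo equipment, Lt/Rt is an ordinary stereo signal; fed through the decoder, a centre-only or surround-only source comes back out of the matching output, which is the "compatibility" the text describes.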
A sound reinforcement system is the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.
Stereophonic sound, or more commonly stereo, is a method of sound reproduction that recreates a multi-directional, 3-dimensional audible perspective. This is usually achieved by using two independent audio channels through a configuration of two loudspeakers in such a way as to create the impression of sound heard from various directions, as in natural hearing.
Dolby Stereo is a sound format made by Dolby Laboratories. It is a unified brand for two completely different basic systems: the Dolby SVA 1976 system used with optical sound tracks on 35mm film, and Dolby Stereo 70mm noise reduction on 6-channel magnetic soundtracks on 70mm prints.
Professional audio, abbreviated as pro audio, refers to both an activity and a category of high-quality, studio-grade audio equipment. Typically it encompasses sound recording, sound reinforcement system setup and audio mixing, and studio music production by trained sound engineers, audio engineers, record producers, and audio technicians who work in live event support and recording using mixing consoles, recording equipment and sound reinforcement systems. Professional audio is differentiated from consumer- or home-oriented audio, which are typically geared toward listening in a non-commercial environment.
Panning is the distribution of an audio signal into a new stereo or multi-channel sound field determined by a pan control setting. A typical physical recording console has a pan control for each incoming source channel. A pan control, or pan pot, is an analog control with a position indicator which can range continuously from the 7 o'clock position (fully left) to the 5 o'clock position (fully right). Audio mixing software replaces pan pots with on-screen virtual knobs or sliders which function like their physical counterparts.
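A pan control is commonly implemented with a constant-power (sine/cosine) pan law, which keeps perceived loudness roughly steady as a source moves across the field. A minimal sketch, assuming the common −3 dB-at-centre law (consoles also use −4.5 or −6 dB variants):

```python
import math

def constant_power_pan(sample, pan):
    """pan in [-1.0, 1.0]: -1 = fully left, +1 = fully right.
    Maps the pan position onto a quarter circle so that
    left^2 + right^2 stays constant (constant power)."""
    angle = (pan + 1) * math.pi / 4   # map [-1, 1] -> [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)
```

At centre, both channels receive the signal at about 0.707 of full level (−3 dB each); fully left sends the full signal to the left channel and nothing to the right.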
A stage monitor system is a set of performer-facing loudspeakers called monitor speakers, stage monitors, floor monitors, wedges, or foldbacks on stage during live music performances in which a sound reinforcement system is used to amplify a performance for the audience. The monitor system allows musicians to hear themselves and fellow band members clearly.
The sweet spot is a term used by audiophiles and recording engineers to describe the focal point between two speakers, where an individual is fully capable of hearing the stereo audio mix the way it was intended to be heard by the mixer. The sweet spot is the location which creates an equilateral triangle together with the stereo loudspeakers, the stereo triangle. In the case of surround sound, this is the focal point between four or more speakers, i.e., the location at which all wave fronts arrive simultaneously. In international recommendations the sweet spot is referred to as reference listening point.
A mixing engineer is responsible for combining ("mixing") different sonic elements of an auditory piece into a complete rendition, whether in music, film, or any other content of auditory nature. The finished piece, recorded or live, must achieve a good balance of properties, such as volume, pan positioning, and other effects, while resolving any arising frequency conflicts from various sound sources. These sound sources can comprise the different musical instruments or vocals in a band or orchestra, dialogue or foley in a film, and more.
Remote recording, also known as location recording, is the act of making a high-quality complex audio recording of a live concert performance, or any other location recording that uses multitrack recording techniques outside of a recording studio. The multitrack recording is then carefully mixed, and the finished result is called a remote recording or a live album. This is in contrast to a field recording which uses few microphones, recorded onto the same number of channels as the intended product. Remote recording is not the same as remote broadcast for which multiple microphones are mixed live and broadcast during the performance, typically to stereo. Remote recording and remote broadcast may be carried out simultaneously by the same crew using the same microphones.
Out of Phase Stereo (OOPS) is an audio technique which manipulates the phase of a stereo audio track to isolate or remove certain components of the stereo mix. It works on the principle of phase cancellation, in which two identical but inverted waveforms summed together will cancel each other out.
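In practice the technique amounts to subtracting one channel from the other: anything recorded identically in both channels, i.e. panned dead centre, cancels, leaving only the content that differs between left and right. A minimal sketch:

```python
def oops(left, right):
    """Subtract the right channel from the left, sample by sample.
    Centre-panned (identical) material cancels; side content remains."""
    return [l - r for l, r in zip(left, right)]
```

For example, if both channels share a centre-panned component and each carries its own side content, the output contains only the difference between the two channels.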