Stem-mixing is a method of mixing audio material based on creating groups of audio tracks and processing them separately prior to combining them into a final master mix. Stems are also sometimes referred to as submixes, subgroups, or buses.
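For illustration, the following minimal sketch (assuming Python with NumPy, with short bursts of noise standing in for real recordings) shows the basic idea: individual tracks are grouped into stems, each stem is summed, and the stems are then combined into the final mix. The track names and groupings are purely illustrative.

```python
# Minimal sketch of stem-mixing: tracks are grouped into stems, each stem is
# summed, and the stems are combined into a final mix. All names and audio
# data are placeholders, not any particular project's layout.
import numpy as np

sample_rate = 48000
length = sample_rate * 2  # two seconds of audio per track

# Placeholder "recordings": in practice these would be real audio tracks.
tracks = {name: np.random.uniform(-0.1, 0.1, length)
          for name in ["kick", "snare", "hats", "lead_vox", "bgv_1", "bgv_2"]}

# Group tracks into stems (sub-mixes).
stem_groups = {
    "drums":  ["kick", "snare", "hats"],
    "vocals": ["lead_vox", "bgv_1", "bgv_2"],
}

# Each stem is simply the sum of its member tracks.
stems = {stem: sum(tracks[t] for t in members)
         for stem, members in stem_groups.items()}

# The final mix is the sum of the stems.
final_mix = sum(stems.values())
print({s: float(np.max(np.abs(a))) for s, a in stems.items()})
```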
The distinction between a stem and a separation is somewhat unclear. Some consider stem manipulation to be the same as separation mastering, while others consider stems to be sub-mixes to be used along with separation mastering. The distinction depends on how many separate input channels are available for mixing and on how far along they are in the process of being reduced to a final stereo mix.
The technique originated in the 1960s[citation needed], with the introduction of mixing boards equipped with the capability to assign individual inputs to sub-group faders and to work with each sub-group (stem mix) independently of the others. The approach is widely used in recording studios to control, process and manipulate entire groups of instruments such as drums, strings, or backup vocals, in order to streamline and simplify the mixing process. Additionally, as each stem-bus usually has its own inserts, sends and returns, the stem-mix (sub-mix) can be routed independently through its own signal processing chain, to achieve a different effect for each group of instruments. A similar method is also utilised with digital audio workstations (DAWs), where separate groups of audio tracks may be digitally processed and manipulated through discrete chains of plugins.
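A hedged sketch of this per-stem routing, again in Python with NumPy: each stem is given its own chain of "inserts" (here just a gain stage and a trivial smoothing filter standing in for real effects) that is applied before the stems are summed. The effect functions and stem names are illustrative, not any particular console's or DAW's API.

```python
# Sketch of routing each stem through its own processing chain before the
# stems are combined, roughly analogous to per-bus inserts on a console or
# per-group plugin chains in a DAW.
import numpy as np

def gain(db):
    """Return a processor that scales a signal by the given amount in dB."""
    factor = 10 ** (db / 20)
    return lambda x: x * factor

def one_pole_lowpass(alpha=0.2):
    """Return a very simple smoothing filter as a stand-in for an EQ insert."""
    def process(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, sample in enumerate(x):
            acc += alpha * (sample - acc)
            y[i] = acc
        return y
    return process

# Each stem gets its own chain of inserts, applied in order.
chains = {
    "drums":  [gain(-2.0)],
    "vocals": [one_pole_lowpass(0.5), gain(+1.5)],
}

def process_stem(audio, chain):
    for effect in chain:
        audio = effect(audio)
    return audio

# Placeholder stems; the processed stems are then summed into the mix.
stems = {"drums": np.random.uniform(-0.1, 0.1, 48000),
         "vocals": np.random.uniform(-0.1, 0.1, 48000)}
mix = sum(process_stem(a, chains[name]) for name, a in stems.items())
```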
Stem-mastering is a technique derived from stem-mixing. As in stem-mixing, the individual audio tracks are grouped together so that each stem can be controlled, processed and manipulated independently of the others. Many mastering engineers[who?] require music producers to leave at least 3 dB of headroom on each individual track before starting the stem-mastering process. The reason for this is to leave more space in the mix so that the mastered version can sound cleaner and louder[citation needed]. Even though it is not commonly practiced by mastering studios, the approach does have its proponents[who?].
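The headroom requirement can be expressed as a simple peak check: a track has at least 3 dB of headroom if its peak level sits at or below -3 dBFS. Below is a minimal sketch, assuming floating-point audio with full scale at 1.0; the threshold and test signal are illustrative.

```python
# Sketch of the headroom check described above: verify that the peak of each
# track (or stem) sits at or below -3 dBFS before stem mastering.
import numpy as np

def peak_dbfs(audio):
    """Peak level of a float signal (full scale = 1.0) in dBFS."""
    peak = np.max(np.abs(audio))
    return -np.inf if peak == 0 else 20 * np.log10(peak)

def has_headroom(audio, headroom_db=3.0):
    """True if the signal peaks at or below -headroom_db dBFS."""
    return peak_dbfs(audio) <= -headroom_db

# A 440 Hz test tone peaking around 0.6 of full scale.
track = 0.6 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
print(peak_dbfs(track), has_headroom(track))  # about -4.4 dBFS -> True
```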
In audio production, a stem is a group of audio sources mixed together, usually by one person, to be dealt with downstream as one unit. A single stem may be delivered in mono, stereo, or in multiple tracks for surround sound.[1]
In sound mixing for film, the preparation of stems is a common stratagem to facilitate the final mix. Dialogue, music and sound effects, called "D-M-E", are brought to the final mix as separate stems. Using stem mixing, the dialogue can easily be replaced by a foreign-language version, the effects can easily be adapted to different mono, stereo and surround systems, and the music can be changed to fit the desired emotional response. If the music and effects stems are sent to another production facility for foreign dialogue replacement, these non-dialogue stems are called "M&E".[1][2][3] The dialogue stem is used by itself when editing various scenes together to construct a trailer of the film; after this, some music and effects are mixed in to form a cohesive sequence.[4]
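Because the final mix is effectively the sum of the D-M-E stems, a foreign-language version can be produced by replacing only the dialogue stem and reusing the M&E stems unchanged. A minimal sketch of that recombination, with placeholder arrays standing in for the real stems:

```python
# Sketch of the D-M-E workflow: the print mix is the sum of dialogue, music
# and effects stems, so swapping the dialogue stem yields a foreign version
# while the "M&E" stems are reused as-is. All audio here is placeholder noise.
import numpy as np

n = 48000
dialogue_en = np.random.uniform(-0.1, 0.1, n)
dialogue_fr = np.random.uniform(-0.1, 0.1, n)
music = np.random.uniform(-0.1, 0.1, n)
effects = np.random.uniform(-0.1, 0.1, n)

domestic_mix = dialogue_en + music + effects   # original release
foreign_mix = dialogue_fr + music + effects    # dialogue stem replaced,
                                               # M&E stems reused unchanged
```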
In music mixing for recordings and for live sound, stems are subgroups of similar sound sources. When a large project uses more than one person mixing, stems can facilitate the job of the final mix engineer. Such stems may consist of all of the string instruments, a full orchestra, just background vocals, only the percussion instruments, a single drum set, or any other grouping that may ease the task of the final mix. Stems prepared in this fashion may be blended together later in time, as for a recording project or for consumer listening, or they may be mixed simultaneously, as in a live sound performance with multiple elements.[5] For instance, when Barbra Streisand toured in 2006 and 2007, the audio production crew used three people to run three mixing consoles: one to mix strings, one to mix brass, reeds and percussion, and one under main engineer Bruce Jackson's control out in the audience, containing Streisand's microphone inputs and stems from the other two consoles.[6]
Stems may be supplied to a musician in the recording studio so that the musician can adjust a headphones monitor mix by varying the levels of other instruments and vocals relative to the musician's own input. Stems may also be delivered to the consumer so they can listen to a piece of music with a custom blend of the separate elements.
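A minimal sketch of such a custom blend: each stem keeps its own level fader (in decibels) relative to the others, and the monitor or listening mix is the weighted sum. The stem names and fader settings are illustrative.

```python
# Sketch of a personal monitor or listener blend built from stems.
import numpy as np

def db_to_gain(db):
    return 10 ** (db / 20)

# Placeholder stems delivered to a musician or listener.
stems = {name: np.random.uniform(-0.1, 0.1, 48000)
         for name in ["own_guitar", "vocals", "drums", "keys"]}

# A guitarist's headphone mix: own instrument up, drums slightly down.
fader_db = {"own_guitar": +4.0, "vocals": 0.0, "drums": -3.0, "keys": -1.5}

monitor_mix = sum(db_to_gain(fader_db[name]) * audio
                  for name, audio in stems.items())
```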
Audio mixing is the process by which multiple sounds are combined into one or more audio channels. In the process, a source's volume level, frequency content, dynamics, and panoramic position are manipulated or enhanced. This practical, aesthetic, or otherwise creative treatment is done in order to produce a finished version that is appealing to listeners.
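As a small illustration of two of these manipulations, level and panoramic position, the following sketch places a mono source in a stereo field using a constant-power pan law. This is one common pan law among several; the signal and settings are illustrative.

```python
# Sketch of level and pan: a mono source is split between left and right
# channels with constant-power weighting.
import numpy as np

def pan_mono_to_stereo(audio, pan):
    """pan in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right."""
    angle = (pan + 1) * np.pi / 4           # map to [0, pi/2]
    left = np.cos(angle) * audio
    right = np.sin(angle) * audio
    return np.stack([left, right], axis=0)  # shape (2, n): stereo pair

source = np.sin(2 * np.pi * 220 * np.arange(48000) / 48000)
stereo = pan_mono_to_stereo(0.5 * source, pan=-0.3)  # quieter, left of centre
```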
A mixing console or mixing desk is an electronic device for mixing audio signals, used in sound recording and reproduction and sound reinforcement systems. Inputs to the console include microphones, signals from electric or electronic instruments, or recorded sounds. Mixers may control analog or digital signals. The modified signals are summed to produce the combined output signals, which can then be broadcast, amplified through a sound reinforcement system or recorded.
Quadraphonic sound – equivalent to what is now called 4.0 surround sound – uses four audio channels in which speakers are positioned at the four corners of a listening space. The system allows for the reproduction of sound signals that are independent of one another.
A recording studio is a specialized facility for recording and mixing of instrumental or vocal musical performances, spoken words, and other sounds. Studios range in size from a small in-home project studio large enough to record a single singer-guitarist, to a large building with space for a full orchestra of 100 or more musicians. Ideally, both the recording and monitoring spaces are specially designed by an acoustician or audio engineer to achieve optimum acoustic properties.
Multitrack recording (MTR), also known as multitracking, is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources, or of sound sources recorded at different times, to create a cohesive whole. Multitracking became possible in the mid-1950s when the idea of simultaneously recording different audio channels to separate discrete tracks on the same reel-to-reel tape was developed. A track was simply a different channel recorded to its own discrete area on the tape, so that the relative sequence of recorded events would be preserved and playback would be simultaneous or synchronized.
A sound editor is a creative professional responsible for selecting and assembling sound recordings in preparation for the final sound mixing or mastering of a television program, motion picture, video game, or any production involving recorded or synthetic sound. The sound editor works with the supervising sound editor. The supervising sound editor often assigns scenes and reels to the sound editor based on the editor's strengths and area of expertise. Sound editing developed out of the need to fix the incomplete, undramatic, or technically inferior sound recordings of early talkies, and over the decades has become a respected filmmaking craft, with sound editors implementing the aesthetic goals of motion picture sound design.
Monaural sound or monophonic sound is sound intended to be heard as if it were emanating from one position. This contrasts with stereophonic sound or stereo, which uses two separate audio channels, recorded from microphones on the right and left side and reproduced through two separate loudspeakers to give a sense of the direction of sound sources. In mono, only one loudspeaker is necessary, but, when played through multiple loudspeakers or headphones, identical audio signals are fed to each speaker, resulting in the perception of one-channel sound "imaging" in one sonic space between the speakers. Monaural recordings, like stereo ones, typically use multiple microphones fed into multiple channels on a recording console, but each channel is "panned" to the center. In the final stage, the various center-panned signal paths are usually mixed down to two identical tracks, which, because they are identical, are perceived upon playback as representing a single unified signal at a single place in the soundstage. In some cases, multitrack sources are mixed to a one-track tape, thus becoming one signal. In the mastering stage, particularly in the days of mono records, the one- or two-track mono master tape was then transferred to a one-track lathe used to produce a master disc intended to be used in the pressing of a monophonic record. Today, however, monaural recordings are usually mastered to be played on stereo and multi-track formats, yet retain their center-panned mono soundstage characteristics.
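A minimal sketch of the centre-panned mono behaviour described above: several sources are summed to a single signal, and stereo playback simply feeds that identical signal to both channels. Placeholder noise stands in for real sources.

```python
# Sketch of a mono downmix: multiple sources become one signal, and stereo
# playback duplicates that signal on both loudspeakers.
import numpy as np

sources = [np.random.uniform(-0.2, 0.2, 48000) for _ in range(3)]

mono = sum(sources) / len(sources)        # single combined signal
stereo_playback = np.stack([mono, mono])  # identical left and right channels

assert np.array_equal(stereo_playback[0], stereo_playback[1])
```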
A digital audio workstation is an electronic device or application software used for recording, editing and producing audio files. DAWs come in a wide variety of configurations from a single software program on a laptop, to an integrated stand-alone unit, all the way to a highly complex configuration of numerous components controlled by a central computer. Regardless of configuration, modern DAWs have a central interface that allows the user to alter and mix multiple recordings and tracks into a final produced piece.
A sound reinforcement system is the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.
An aux-send is an electronic signal-routing output found on multi-channel sound mixing consoles in recording and broadcasting settings and on the PA system amplifier-mixers used at music concerts. The signal from the auxiliary send is often routed through outboard audio processing effects units and then returned to the mixer via an auxiliary return input jack, thus creating an effects loop. This allows effects to be added to an audio source or channel within the mixing console. Another common use of the aux send mix is to create monitor mixes for the onstage performers' monitor speakers or in-ear monitors. The aux send's monitor mix is usually different from the front of house mix the audience is hearing.
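A hedged sketch of an aux-send effects loop: each channel feeds a shared send bus at its own send level, the summed send is processed by an effect (a crude echo here, standing in for an outboard unit), and the return is added back to the main mix. The channel names, send levels, and effect are illustrative.

```python
# Sketch of an aux-send effects loop: per-channel send levels feed a send
# bus, the bus is processed, and the wet return is blended into the mix.
import numpy as np

def simple_echo(audio, delay_samples=12000, feedback=0.4):
    """Very crude single-tap echo standing in for an outboard effect."""
    out = np.copy(audio)
    out[delay_samples:] += feedback * audio[:-delay_samples]
    return out

channels = {name: np.random.uniform(-0.1, 0.1, 48000)
            for name in ["vocal", "guitar", "keys"]}
send_level = {"vocal": 0.5, "guitar": 0.2, "keys": 0.0}

dry_mix = sum(channels.values())
send_bus = sum(send_level[name] * audio for name, audio in channels.items())
wet_return = simple_echo(send_bus)

main_mix = dry_mix + wet_return   # effects loop blended back into the mix
```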
A DJ mixer is a type of audio mixing console used by disc jockeys (DJs) to control and manipulate multiple audio signals. Some DJs use the mixer to make seamless transitions from one song to another when they are playing records at a dance club. Hip hop DJs and turntablists use the DJ mixer to play record players like a musical instrument and create new sounds. DJs in the disco, house music, electronic dance music and other dance-oriented genres use the mixer to make smooth transitions between different sound recordings as they are playing. The sources are typically record turntables, compact cassettes, CDJs, or DJ software on a laptop. DJ mixers allow the DJ to use headphones to preview the next song before playing it to the audience. Most low- to mid-priced DJ mixers can only accommodate two turntables or CD players, but some mixers can accommodate up to six turntables or CD players. DJs and turntablists in hip hop music and nu metal use DJ mixers to create beats, loops and so-called scratching sound effects.
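One such smooth transition can be modelled as an equal-power crossfade over the overlapping region of two recordings. A minimal sketch, with placeholder noise standing in for the two songs and an illustrative fade length:

```python
# Sketch of a DJ-style transition: the end of one song is faded out while
# the start of the next is faded in, using equal-power (cos/sin) curves.
import numpy as np

def equal_power_crossfade(a, b, fade_len):
    """Fade out the end of a into the start of b over fade_len samples."""
    t = np.linspace(0, np.pi / 2, fade_len)
    fade_out, fade_in = np.cos(t), np.sin(t)
    overlap = a[-fade_len:] * fade_out + b[:fade_len] * fade_in
    return np.concatenate([a[:-fade_len], overlap, b[fade_len:]])

song_a = np.random.uniform(-0.3, 0.3, 10 * 48000)
song_b = np.random.uniform(-0.3, 0.3, 10 * 48000)
transition = equal_power_crossfade(song_a, song_b, fade_len=4 * 48000)
```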
Re-amping is a process often used in multitrack recording in which a recorded signal is routed back out of the editing environment and run through external processing using effects units and then into a guitar amplifier and a guitar speaker cabinet or a reverb chamber. Originally, the technique was used mostly for electric guitars: it facilitates a separation of guitar playing from guitar amplifier processing; a previously recorded audio program is played back and re-recorded at a later time for the purpose of adding effects, ambiance such as reverb or echo, and the tone shaping imbued by certain amps and cabinets. The technique has since evolved over the 2000s to include many other applications. Re-amping can also be applied to other instruments and program material, such as recorded drums, synthesizers, and virtual instruments.
There are a number of well-developed microphone techniques used for recording musical, film, or voice sources or picking up sounds as part of sound reinforcement systems. The choice of technique depends on a number of factors, including the nature of the source and the acoustic environment.
Live sound mixing is the blending of multiple sound sources by an audio engineer using a mixing console or software. Sounds that are mixed include those from instruments and voices which are picked up by microphones and pre-recorded material, such as songs on CD or a digital audio player. Individual sources are typically equalised to adjust the bass and treble response and routed to effect processors to ultimately be amplified and reproduced via a loudspeaker system. The live sound engineer listens and balances the various audio sources in a way that best suits the needs of the event.
A stage monitor system is a set of performer-facing loudspeakers (called monitor speakers, stage monitors, floor monitors, wedges, or foldbacks) placed on stage during live music performances in which a sound reinforcement system is used to amplify the performance for the audience. The monitor system allows musicians to hear themselves and fellow band members clearly.
An audio engineer helps to produce a recording or a live performance, balancing and adjusting sound sources using equalization, dynamics processing and audio effects, mixing, reproduction, and reinforcement of sound. Audio engineers work on the "technical aspect of recording—the placing of microphones, pre-amp knobs, the setting of levels. The physical recording of any project is done by an engineer…"
In sound recording and reproduction, audio mixing is the process of optimizing and combining multitrack recordings into a final mono, stereo or surround sound product. In the process of combining the separate tracks, their relative levels are adjusted and balanced, and various processes such as equalization and compression are commonly applied to individual tracks, groups of tracks, and the overall mix. In stereo and surround sound mixing, the placement of the tracks within the stereo field is adjusted and balanced. Audio mixing techniques and approaches vary widely and have a significant influence on the final product.
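As a small illustration of one of these processes, the sketch below applies a deliberately minimal dynamic range compressor (hard knee, no attack or release smoothing) to a single track before it is combined with the rest of the mix. The threshold and ratio are illustrative.

```python
# Sketch of simple dynamic range compression: signal level above the
# threshold is reduced according to the ratio, sample by sample.
import numpy as np

def compress(audio, threshold_db=-18.0, ratio=4.0):
    level_db = 20 * np.log10(np.maximum(np.abs(audio), 1e-12))
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    gain_db = -over * (1 - 1 / ratio)                 # gain reduction in dB
    return audio * 10 ** (gain_db / 20)

track = np.random.uniform(-0.9, 0.9, 48000)
compressed = compress(track)
```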
A mixing engineer is responsible for combining ("mixing") different sonic elements of an auditory piece into a complete rendition, whether in music, film, or any other content of auditory nature. The finished piece, recorded or live, must achieve a good balance of properties, such as volume, pan positioning, and other effects, while resolving any arising frequency conflicts from various sound sources. These sound sources can comprise the different musical instruments or vocals in a band or orchestra, dialogue or Foley in a film, and more.
In audio engineering, a bus is a signal path that can be used to combine (sum) individual audio signal paths together. It is typically used to group several individual audio tracks, which can then be manipulated, as a group, like another track. This can be achieved by routing the signal physically by way of switches and cable patches on a mixing console, or by manipulating software features on a digital audio workstation (DAW).
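A minimal sketch of a bus in code terms: several tracks are summed into one signal path, which is then treated like a single track, here with one gain control for the whole group. The class and values are illustrative, not any DAW's API.

```python
# Sketch of a bus: member tracks are summed and the result is manipulated
# as a single group signal.
import numpy as np

class Bus:
    def __init__(self, tracks, gain_db=0.0):
        self.tracks = tracks          # list of equal-length signals
        self.gain_db = gain_db        # one fader for the whole group

    def output(self):
        summed = sum(self.tracks)                  # combine (sum) the members
        return summed * 10 ** (self.gain_db / 20)  # then adjust as a group

drum_bus = Bus([np.random.uniform(-0.1, 0.1, 48000) for _ in range(4)],
               gain_db=-2.0)
drum_mix = drum_bus.output()
```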