A sound effect (or audio effect) is an artificially created or enhanced sound, or sound process, used to emphasize artistic or other content of films, television shows, live performance, animation, video games, music, or other media. These are often created with foley. In motion picture and television production, a sound effect is a sound recorded and presented to make a specific storytelling or creative point without the use of dialogue or music. The term often refers to a process applied to a recording, without necessarily referring to the recording itself. In professional motion picture and television production, dialogue, music, and sound effects recordings are treated as separate elements. Dialogue and music recordings are never referred to as sound effects, even though the processes applied to them, such as reverberation or flanging, are often called "sound effects".
The term sound effect dates back to the early days of radio. In its 1931 Year Book, the BBC published a major article about "The Use of Sound Effects". It considers sound effects deeply linked with broadcasting and states: "It would be a great mistake to think of them as analogous to punctuation marks and accents in print. They should never be inserted into a programme already existing. The author of a broadcast play or broadcast construction ought to have used Sound Effects as bricks with which to build, treating them as of equal value with speech and music." It lists six "totally different primary genres of Sound Effect":
According to the author, "It is axiomatic that every Sound Effect, to whatever category it belongs, must register in the listener's mind instantaneously. If it fails to do so its presence could not be justified."
In the context of motion pictures and television, sound effects refers to an entire hierarchy of sound elements, whose production encompasses many different disciplines, including:
Each of these sound effect categories is specialized, with sound editors who specialize in one area of sound effects (e.g. a "car cutter" or "guns cutter").
Foley is another method of adding sound effects. Foley is more of a technique for creating sound effects than a type of sound effect, but it is often used for creating the incidental real world sounds that are very specific to what is going on onscreen, such as footsteps. With this technique the action onscreen is essentially recreated to try to match it as closely as possible. If done correctly it is very hard for audiences to tell what sounds were added and what sounds were originally recorded (location sound).
In the early days of film and radio, foley artists added sounds in real time, or pre-recorded sound effects were played back from analogue discs in real time (while watching the picture). Today, with effects held in digital format, it is easy to sequence any required sounds on any desired timeline.
In the days of silent film, sound effects were added by the operator of a theater organ or photoplayer, both of which also supplied the soundtrack of the film. Theater organ sound effects are usually electric or electro-pneumatic, and activated by a button pressed with the hand or foot. Photoplayer operators activate sound effects either by flipping switches on the machine or pulling "cow-tail" pull-strings, which hang above. Sounds like bells and drums are made mechanically, sirens and horns electronically. Due to its smaller size, a photoplayer usually has fewer special effects than a theater organ, or less complex ones.
The principles involved with modern video game sound effects (since the introduction of sample playback) are essentially the same as those of motion pictures. Typically a game project requires two jobs to be completed: sounds must be recorded or selected from a library and a sound engine must be programmed so that those sounds can be incorporated into the game's interactive environment.
In earlier computers and video game systems, sound effects were typically produced using sound synthesis. In modern systems, increases in storage capacity and playback quality have allowed sampled sound to be used. Modern systems also frequently utilize positional audio, often with hardware acceleration, and real-time audio post-processing, which can also be tied to the 3D graphics development. Based on the game's internal state, various calculations can be made, allowing, for example, realistic sound dampening, echoes, and the Doppler effect.
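The kind of calculation a game sound engine might make from the game's internal state can be sketched briefly. The following is a minimal illustration, not any particular engine's implementation: the function names and the inverse-distance attenuation model are assumptions, though both the attenuation curve and the Doppler formula are common in 3-D audio.

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 °C

def inverse_distance_gain(distance, ref_distance=1.0):
    """Simple inverse-distance attenuation, a model many 3-D audio systems use:
    gain is 1.0 at the reference distance and falls off with distance."""
    return ref_distance / max(distance, ref_distance)

def doppler_frequency(source_freq, source_speed, listener_speed):
    """Observed frequency for motion along the line between source and listener.
    Positive speeds mean source and listener are approaching each other."""
    return source_freq * (SPEED_OF_SOUND + listener_speed) / (SPEED_OF_SOUND - source_speed)

# A 440 Hz siren approaching the listener at 30 m/s is heard noticeably higher:
print(round(doppler_frequency(440.0, 30.0, 0.0), 1))  # 482.2
# At ten times the reference distance, the gain drops to one tenth:
print(inverse_distance_gain(10.0))  # 0.1
```

A real engine would recompute these values every frame from the positions and velocities of sources and the listener, then feed the results to the mixer.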
Historically the simplicity of game environments reduced the required number of sounds needed, and thus only one or two people were directly responsible for the sound recording and design. As the video game business has grown and computer sound reproduction quality has increased, however, the team of sound designers dedicated to game projects has likewise grown and the demands placed on them may now approach those of mid-budget motion pictures.
Some pieces of music use sound effects that are made by a musical instrument or by other means. An early example is the 18th-century Toy Symphony. In the opera Das Rheingold (1869), Richard Wagner lets a choir of anvils introduce the scene of the dwarfs who have to work in the mines, similar to the introduction of the dwarfs in the 1937 Disney movie Snow White. Klaus Doldinger's soundtrack for the 1981 movie Das Boot includes a title score with a sonar sound to reflect the U-boat setting. John Barry integrated a sound representing the beep of a Sputnik-like satellite into the title song of Moonraker (1979).
The most realistic sound effects often originate from original sources; the closest sound to machine-gun fire may be a recording of actual machine guns.
Despite this, real life and actual practice do not always coincide with theory. When recordings of real life do not sound realistic on playback, Foley and f/x are used to create more convincing sounds. For example, the realistic sound of bacon frying can be the crumpling of cellophane, while rain may be recorded as salt falling on a piece of tinfoil.
Less realistic sound effects are digitally synthesized or sampled and sequenced (the same recording played repeatedly using a sequencer). When the producer or content creator demands high-fidelity sound effects, the sound editor usually must augment the available library with new sound effects recorded in the field.
When the required sound effect is of a small subject, such as scissors cutting, cloth ripping, or footsteps, the sound effect is best recorded in a studio, under controlled conditions in a process known as foley. Many sound effects cannot be recorded in a studio, such as explosions, gunfire, and automobile or aircraft maneuvers. These effects must be recorded by a professional audio engineer.
When such "big" sounds are required, the recordist will begin contacting professionals or technicians in the same way a producer may arrange a crew; if the recordist needs an explosion, he may contact a demolition company to see if any buildings are scheduled to be destroyed with explosives in the near future. If the recordist requires a volley of cannon fire, he may contact historical re-enactors or gun enthusiasts.
Depending on the effect, recordists may use several DAT, hard disk, or Nagra recorders and a large number of microphones. During a cannon- and musket-fire recording session for the 2003 film The Alamo, conducted by Jon Johnson and Charles Maynes, two to three DAT machines were used. One machine was stationed near the cannon itself, so it could record the actual firing. Another was stationed several hundred yards away, below the trajectory of the ball, to record the sound of the cannonball passing by. When the crew recorded musket-fire, a set of microphones were arrayed close to the target (in this case a swine carcass) to record the musket-ball impacts.
A counter-example is the common technique for recording an automobile. For recording "Onboard" car sounds (which include the car interiors), a three-microphone technique is common. Two microphones record the engine directly: one is taped to the underside of the hood, near the engine block. The second microphone is covered in a wind screen and tightly attached to the rear bumper, within an inch or so of the tail pipe. The third microphone, which is often a stereo microphone, is stationed inside the car to get the car interior.
Having all of these tracks at once gives a sound designer or audio engineer a great deal of control over how he wants the car to sound. In order to make the car more ominous or low, he can mix in more of the tailpipe recording; if he wants the car to sound like it is running full throttle, he can mix in more of the engine recording and reduce the interior perspective. In cartoons, a pencil being dragged down a washboard may be used to simulate the sound of a sputtering engine.
What is considered today to be the first recorded sound effect was of Big Ben striking 10:30, 10:45, and 11:00. It was recorded on a brown wax cylinder by technicians at Edison House in London on July 16, 1890. This recording is currently in the public domain.
As the car example demonstrates, the ability to make multiple simultaneous recordings of the same subject—through the use of several DAT or multitrack recorders—has made sound recording into a sophisticated craft. The sound effect can be shaped by the sound editor or sound designer, not just for realism, but for emotional effect.
Once the sound effects are recorded or captured, they are usually loaded into a computer integrated with an audio non-linear editing system. This allows a sound editor or sound designer to heavily manipulate a sound to meet his or her needs.
The most common sound design tool is the use of layering to create a new, interesting sound out of two or three old, average sounds. For example, the sound of a bullet impact into a pig carcass may be mixed with the sound of a melon being gouged to add to the "stickiness" or "gore" of the effect. If the effect is featured in a close-up, the designer may also add an "impact sweetener" from his or her library. The sweetener may simply be the sound of a hammer pounding hardwood, equalized so that only the low-end can be heard. The low end gives the three sounds together added weight, so that the audience actually "feels" the weight of the bullet hit the victim.
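The layering technique described above can be sketched in a few lines. This is a toy illustration, not production practice: the sample values, gains, and the crude one-pole low-pass standing in for the "low-end only" equalization are all assumptions.

```python
def lowpass(samples, alpha=0.05):
    """Crude one-pole low-pass filter: keeps the low end of a signal
    and attenuates the highs (a stand-in for 'equalize so only the
    low end is heard')."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def layer(*tracks_with_gain):
    """Mix equal-length tracks sample by sample, each scaled by its gain."""
    length = len(tracks_with_gain[0][0])
    return [sum(gain * t[i] for t, gain in tracks_with_gain) for i in range(length)]

# Hypothetical six-sample snippets of each layer:
impact    = [0.9, -0.8, 0.5, -0.2, 0.1, 0.0]             # bullet into carcass
squish    = [0.3, 0.4, -0.3, 0.2, -0.1, 0.0]             # melon 'gore' layer
sweetener = lowpass([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])   # hammer hit, lows only

mix = layer((impact, 1.0), (squish, 0.6), (sweetener, 0.8))
```

In a real session the editor would do the same thing with full-bandwidth recordings inside a digital audio workstation, riding the gains by ear rather than with fixed numbers.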
If the victim is the villain, and his death is climactic, the sound designer may add reverb to the impact, in order to enhance the dramatic beat. And then, as the victim falls over in slow motion, the sound editor may add the sound of a broom whooshing by a microphone, pitch-shifted down and time-expanded to further emphasize the death. If the film is science-fiction, the designer may apply a phaser to the "whoosh" to give it a more sci-fi feel. (For a list of many sound effects processes available to a sound designer, see the bottom of this article.)
When creating sound effects for films, sound recordists and editors do not generally concern themselves with the verisimilitude or accuracy of the sounds they present. The sound of a bullet entering a person from a close distance may sound nothing like the sound designed in the above example, but since very few people are aware of how such a thing actually sounds, the job of designing the effect is mainly an issue of creating a conjectural sound which feeds the audience's expectations while still suspending disbelief.
In the previous example, the phased 'whoosh' of the victim's fall has no analogue in real life experience, but it is emotionally immediate. If a sound editor uses such sounds in the context of emotional climax or a character's subjective experience, they can add to the drama of a situation in a way visuals simply cannot. If a visual effects artist were to do something similar to the 'whooshing fall' example, it would probably look ridiculous or at least excessively melodramatic.
The "Conjectural Sound" principle applies even to happenstance sounds, such as tires squealing, doorknobs turning or people walking. If the sound editor wants to communicate that a driver is in a hurry to leave, he will cut the sound of tires squealing when the car accelerates from a stop; even if the car is on a dirt road, the effect will work if the audience is dramatically engaged. If a character is afraid of someone on the other side of a door, the turning of the doorknob can take a second or more, and the mechanism of the knob can possess dozens of clicking parts. A skillful Foley artist can make someone walking calmly across the screen seem terrified simply by giving the actor a different gait.
In music and film/television production, typical effects used in recording and amplified performances are:
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.
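The decibel scale mentioned above is a logarithmic ratio; by convention, amplitude ratios use a factor of 20·log10. A small sketch of the conversion (function names are illustrative):

```python
import math

def amplitude_to_db(amplitude, reference=1.0):
    """Convert an amplitude ratio to decibels (20·log10 for amplitude;
    power ratios would use 10·log10 instead)."""
    return 20.0 * math.log10(amplitude / reference)

def db_to_amplitude(db):
    """Inverse conversion: decibels back to an amplitude ratio."""
    return 10.0 ** (db / 20.0)

# Halving the amplitude lowers the level by about 6 dB:
print(round(amplitude_to_db(0.5), 2))  # -6.02
```

Digital processors work on exactly such numeric representations of the signal, while analog processors act on the continuous electrical waveform directly.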
An effects unit or effects pedal is an electronic device that alters the sound of a musical instrument or other audio source through audio signal processing.
Distortion is the alteration of the original shape of something. In communications and electronics it means the alteration of the waveform of an information-bearing signal, such as an audio signal representing sound or a video signal representing images, in an electronic device or communication channel.
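One simple way a waveform's shape can be altered in an electronic device is hard clipping, where any portion of the signal above a threshold is flattened. A minimal sketch (the threshold and sample values are arbitrary):

```python
def hard_clip(samples, threshold=0.5):
    """Hard clipping: any sample beyond ±threshold is flattened to the
    threshold, altering the waveform's shape and adding harmonics."""
    return [max(-threshold, min(threshold, s)) for s in samples]

# The peak of this rising-and-falling shape is flattened at ±0.5:
clean = [0.0, 0.4, 0.8, 1.0, 0.8, 0.4, 0.0]
print(hard_clip(clean))  # [0.0, 0.4, 0.5, 0.5, 0.5, 0.4, 0.0]
```

Guitar distortion pedals use softer, more gradual versions of this transfer curve, but the principle of reshaping the waveform is the same.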
Binaural recording is a method of recording sound that uses two microphones, arranged with the intent to create a 3-D stereo sound sensation for the listener of actually being in the room with the performers or instruments. This effect is often created using a technique known as "dummy head recording", wherein a mannequin head is outfitted with a microphone in each ear. Binaural recording is intended for replay using headphones and will not translate properly over stereo speakers. This idea of a three-dimensional or "internal" form of sound has also informed technologies such as stethoscopes that create "in-head" acoustics and IMAX movies that create a three-dimensional acoustic experience.
A production sound mixer, location sound recordist, location sound engineer, or simply sound mixer is the member of a film crew or television crew responsible for recording all sound on set during the filmmaking or television production using professional audio equipment, for later inclusion in the finished product, or for reference to be used by the sound designer, sound effects editors, or foley artists. This requires choice and deployment of microphones, choice of recording media, and mixing of audio signals in real time.
Audio feedback is a special kind of positive loop gain which occurs when a sound loop exists between an audio input and an audio output. In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting sound is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. For small PA systems the sound is readily recognized as a loud squeal or screech. The principles of audio feedback were first discovered by Danish scientist Søren Absalon Larsen, hence the name Larsen effect.
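The runaway behaviour of the loop can be illustrated with a toy model: each trip around the microphone→amplifier→loudspeaker loop multiplies the level by the loop gain, so any gain above 1 grows without bound while a gain below 1 dies away. The numbers here are hypothetical.

```python
def feedback_level(initial, loop_gain, passes):
    """Signal level after repeated trips around a mic -> amp -> speaker loop.
    With loop gain > 1 the level grows each pass (the familiar squeal);
    with loop gain < 1 the loop decays and no howl develops."""
    level = initial
    for _ in range(passes):
        level *= loop_gain
    return level

print(feedback_level(0.01, 1.5, 10) > 0.01)   # True: runaway growth
print(feedback_level(0.01, 0.8, 10) < 0.01)   # True: the loop decays
```

In practice the growth is limited only when the amplifier or speaker saturates, which is why feedback settles into a sustained squeal rather than growing forever.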
Reverberation, in psychoacoustics and acoustics, is a persistence of sound after the sound is produced. A reverberation, or reverb, is created when a sound or signal is reflected causing numerous reflections to build up and then decay as the sound is absorbed by the surfaces of objects in the space – which could include furniture, people, and air. This is most noticeable when the sound source stops but the reflections continue, their amplitude decreasing, until zero is reached.
Multitrack recording (MTR), also known as multitracking or tracking, is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources or of sound sources recorded at different times to create a cohesive whole. Multitracking became possible in the mid-1950s when the idea of simultaneously recording different audio channels to separate discrete "tracks" on the same reel-to-reel tape was developed. A "track" was simply a different channel recorded to its own discrete area on the tape whereby their relative sequence of recorded events would be preserved, and playback would be simultaneous or synchronized.
Flanging is an audio effect produced by mixing two identical signals together, one signal delayed by a small and gradually changing period, usually smaller than 20 milliseconds. This produces a swept comb filter effect: peaks and notches are produced in the resulting frequency spectrum, related to each other in a linear harmonic series. Varying the time delay causes these to sweep up and down the frequency spectrum. A flanger is an effects unit that creates this effect.
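The swept comb filter can be sketched directly from that description: mix each sample with a copy of itself delayed by a slowly varying amount. This is a simplified illustration (integer-sample delay, no interpolation, arbitrary parameter values), not a production flanger.

```python
import math

def flanger(samples, sample_rate, max_delay_ms=5.0, lfo_hz=0.5):
    """Mix each input sample with a copy delayed by an amount swept by a
    low-frequency oscillator (LFO), producing a swept comb filter."""
    max_delay = int(sample_rate * max_delay_ms / 1000.0)
    out = []
    for n, x in enumerate(samples):
        # LFO sweeps the delay between 0 and max_delay samples
        d = int((max_delay / 2.0) *
                (1.0 + math.sin(2.0 * math.pi * lfo_hz * n / sample_rate)))
        delayed = samples[n - d] if n - d >= 0 else 0.0
        out.append(0.5 * (x + delayed))
    return out
```

At any instant the fixed delay creates evenly spaced peaks and notches in the spectrum; sweeping the delay with the LFO is what moves those notches up and down and produces the characteristic "jet plane" sound.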
An echo chamber is a hollow enclosure used to produce reverberation, usually for recording purposes. For example, the producers of a television or radio program might wish to produce the aural illusion that a conversation is taking place in a large room or a cave; these effects can be accomplished by playing the recording of the conversation inside an echo chamber, with an accompanying microphone to catch the reverberation. Nowadays effects units are more widely used to create such effects, but echo chambers are still used today, such as the famous echo chambers at Capitol Studios.
A sound reinforcement system is the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.
Automatic double-tracking or artificial double-tracking (ADT) is an analogue recording technique designed to enhance the sound of voices or instruments during the mixing process. It uses tape delay to create a delayed copy of an audio signal which is then combined with the original. The effect is intended to simulate the sound of the natural doubling of voices or instruments achieved by double tracking. The technique was originally developed in 1966 by engineers at Abbey Road Studios in London at the request of The Beatles.
A phaser is an electronic sound processor used to filter a signal by creating a series of peaks and troughs in the frequency spectrum. The position of the peaks and troughs of the waveform being affected is typically modulated by an internal low-frequency oscillator so that they vary over time, creating a sweeping effect.
Moogerfooger is the trademark for a series of analog effects pedals manufactured by Moog Music. There are currently eight different pedals produced; however, one of these models is designed for processing control voltages rather than audio signal. Another model, the Analog Delay, was released in a limited edition of 1000 units and has become a collector's item. Moog Music announced on August 28, 2018 that the Moogerfooger, CP-251, Minifooger, Voyager synthesizers, and some other product lines were being built using the remaining parts on hand and discontinued thereafter.
Delay is an audio signal processing technique that records an input signal to a storage medium and then plays it back after a period of time. The delayed signal may be played back multiple times, or fed back into the recording, to create the sound of a repeating, decaying echo.
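A feedback delay of this kind is straightforward to sketch: a portion of the delayed output is fed back into the delay line, so each repeat is quieter than the last. A minimal illustration with assumed parameter values:

```python
def echo(samples, sample_rate, delay_ms=250.0, feedback=0.5, tail_repeats=4):
    """Delay with feedback: the delayed signal is fed back into the delay
    line, producing a train of repeats that decay by `feedback` each time.
    Extra silence is appended so the echo tail is not cut off."""
    d = int(sample_rate * delay_ms / 1000.0)
    out = list(samples) + [0.0] * (d * tail_repeats)
    for n in range(d, len(out)):
        out[n] += feedback * out[n - d]
    return out

# A single unit impulse at 1 kHz with a 10 ms delay leaves echoes at
# samples 10, 20, 30, ... with halving amplitude:
tail = echo([1.0], 1000, delay_ms=10.0, feedback=0.5)
print(tail[10], tail[20])  # 0.5 0.25
```

With feedback below 1.0 the repeats decay toward silence; a feedback of 1.0 or more would repeat forever or grow, which is why hardware delays keep the feedback control under unity.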
Re-amping is a process often used in multitrack recording in which a recorded signal is routed back out of the editing environment and run through external processing using effects units and then into a guitar amplifier and a guitar speaker cabinet or a reverb chamber. Originally, the technique was used mostly for electric guitars: it facilitates a separation of guitar playing from guitar amplifier processing—a previously recorded audio program is played back and re-recorded at a later time for the purpose of adding effects, ambiance such as reverb or echo, and the tone shaping imbued by certain amps and cabinets. The technique has since evolved over the 2000s to include many other applications. Re-amping can also be applied to other instruments and program material, such as recorded drums, synthesizers, and virtual instruments.
In filmmaking, Foley is the reproduction of everyday sound effects that are added to films, videos, and other media in post-production to enhance audio quality. These reproduced sounds, named after sound-effects artist Jack Foley, can be anything from the swishing of clothing and footsteps to squeaky doors and breaking glass. Foley sounds are used to enhance the auditory experience of the movie. Foley can also be used to cover up unwanted sounds captured on the set of a movie during filming, such as overflying airplanes or passing traffic.
In sound recording and reproduction, audio mixing is the process of optimizing and combining multitrack recordings into a final mono, stereo or surround sound product. In the process of combining the separate tracks, their relative levels are adjusted and balanced and various processes such as equalization and compression are commonly applied to individual tracks, groups of tracks, and the overall mix. In stereo and surround sound mixing, the placement of the tracks within the stereo field are adjusted and balanced. Audio mixing techniques and approaches vary widely and have a significant influence on the final product.
A mixing engineer is responsible for combining ("mixing") different sonic elements of an auditory piece into a complete rendition, whether in music, film, or any other content of auditory nature. The finished piece, recorded or live, must achieve a good balance of properties, such as volume, pan positioning, and other effects, while resolving any arising frequency conflicts from various sound sources. These sound sources can comprise the different musical instruments or vocals in a band or orchestra, dialogue or foley in a film, and more.
Calf Studio Gear, often referred to as Calf Plugins, is a set of open source LV2 plugins for the Linux platform. The suite intends to be a complete set of plugins for audio mixing, virtual instruments and mastering. As of version 0.90.0 there are 47 plugins in the suite.