Sound-in-Syncs

Sound-in-Syncs is a method of multiplexing sound and video signals into a channel designed to carry video, in which data representing the sound is inserted into the line synchronising pulses of an analogue television waveform. It is used on point-to-point links within broadcasting networks, including studio-to-transmitter links (STLs). It is not used for broadcasts to the public.

History

The technique was first developed by the BBC in the late 1960s. In 1966, the corporation's Research Department made a feasibility study of the use of pulse-code modulation (PCM) for transmitting television sound during the synchronising period of the video signal. This had several advantages: it removed the need for a separate sound link, reduced the possibility of operational errors, and offered improved sound quality and reliability.[1]

Awards

Sound-in-Syncs and its R&D engineers have won several awards, including an RTS award,[2] a Queen's Award[3] and an Emmy Award.[4]

Versions

Original mono S-i-S

In the original system, as applied to 625-line analogue TV, the audio signal was sampled twice during each television line and each sample converted to 10-bit PCM. Two such samples were inserted into the next line synchronising pulse. At the destination, the audio samples were converted back to analogue form and the video waveform restored to normal. Compandors operating on the signal before encoding and after decoding enabled the required signal-to-noise ratio to be achieved. As the PCM noise was predominantly high-pitched, the compandor only needed to operate on the high frequencies. It also operated only at high audio levels, so that modulation of the noise by the companding would be masked by the relatively loud high-frequency audio components. A pilot tone at half the sampling frequency was transmitted to enable the expander to track the gain adjustment applied by the compressor, even when the latter was limiting.[1]
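The sample-packing arithmetic above can be sketched as follows. This is an illustrative model only, not the BBC hardware: the real system also carried marker and synchronisation bits in each burst, which are omitted here, and the function names are hypothetical.

```python
# Illustrative sketch of mono Sound-in-Syncs sample packing.
# Two 10-bit PCM samples per 625-line TV line are serialised into a
# 20-bit burst carried in the next line synchronising pulse.

LINE_RATE_HZ = 15_625                              # 625 lines x 25 frames/s
SAMPLES_PER_LINE = 2
SAMPLE_RATE_HZ = LINE_RATE_HZ * SAMPLES_PER_LINE   # effective audio rate: 31,250 Hz

def pack_sync_burst(sample_a: int, sample_b: int) -> list[int]:
    """Serialise two 10-bit samples into a 20-bit burst, MSB first."""
    for s in (sample_a, sample_b):
        if not 0 <= s < 1024:
            raise ValueError("samples must be 10-bit (0..1023)")
    word = (sample_a << 10) | sample_b
    return [(word >> (19 - i)) & 1 for i in range(20)]

def unpack_sync_burst(bits: list[int]) -> tuple[int, int]:
    """Recover the two 10-bit samples at the receiving end."""
    word = 0
    for b in bits:
        word = (word << 1) | b
    return (word >> 10) & 0x3FF, word & 0x3FF

burst = pack_sync_burst(700, 123)
print(unpack_sync_burst(burst))   # the original pair is recovered
print(SAMPLE_RATE_HZ)
```

Sampling twice per line ties the audio sampling rate directly to the line frequency, which is why the system needs no separate audio clock at the receiver.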

Following successful trials, in 1971 Pye TVT started to make and sell S-i-S equipment under licence from the BBC. The largest quantities went to the BBC itself, to the EBU and to Canada. Smaller numbers went to other countries, including South Africa, Australia and Japan.[5]

Ruggedised S-i-S

A ruggedised version of the system was developed, providing about 7 kHz audio bandwidth, for use over noisy or difficult microwave paths, such as those often encountered on outside broadcasts.[6]

Stereo S-i-S

Later systems, developed in the 1980s, used 14-bit linear PCM samples, digitally companded into 10-bit samples by means of NICAM-3 lossy compression. These systems could carry two audio channels and were known as stereo Sound-in-Syncs.
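The principle of near-instantaneous companding used here can be sketched as follows. This is a simplification, not the NICAM-3 specification: the real block length, scale-factor coding and protection bits are omitted, and the names are hypothetical. The idea is that each block of 14-bit samples is right-shifted just far enough for its loudest sample to fit in a 10-bit mantissa, and the shift is sent alongside the block.

```python
# Sketch of NICAM-style near-instantaneous companding: 14-bit signed
# samples reduced to 10-bit mantissas, with a per-block shift chosen
# from the loudest sample in the block.

def compress_block(samples14):
    """Compand a block of signed 14-bit samples (-8192..8191) to 10 bits."""
    peak = max((abs(s) for s in samples14), default=0)
    # A signed 10-bit mantissa spans -512..511; shift until the peak fits.
    shift = 0
    while (peak >> shift) > 511 and shift < 4:
        shift += 1
    return shift, [s >> shift for s in samples14]

def expand_block(shift, mantissas10):
    """Approximately restore the 14-bit samples at the decoder."""
    return [m << shift for m in mantissas10]

block = [100, -8000, 4095, -512]
shift, mantissas = compress_block(block)
decoded = expand_block(shift, mantissas)
# The error per sample is bounded by 2**shift, i.e. at most 15 here.
```

Because quiet blocks use a shift of zero, they pass through losslessly; only loud blocks lose their low-order bits, where the quantisation noise is masked by the signal itself.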

ITV S-i-S

The ITV network used coders and decoders produced by RE of Denmark. The two variants of Sound-in-Syncs used by the BBC and ITV were not compatible. The terms DCSIS and DSIS were commonly used within ITV to describe dual-channel Sound-in-Syncs. Very often the material carried was dual mono rather than stereo.

Notes and references

  1. Pawley, E. (1972). BBC Engineering 1922–1972, pp. 506–507, 522. BBC. ISBN 0-563-12127-0.
  2. BBC Research: RTS Awards
  3. BBC Research: Queen's Awards
  4. BBC Research: Emmy Award for Sound in Syncs
  5. Holder, J.E., Spenceley, N.M. and Clementson, C.S. (1984). "A two channel sound in syncs transmission system", IBC 1984, IEE Conference Publication No. 240, p. 345.
  6. Dalton, C.J. (1971). "A P.C.M. Sound-in-Syncs System for Outside Broadcasts", BBC Engineering, No. 86, April 1971, pp. 18–28.
