Time base correction (TBC) is a technique to reduce or eliminate errors caused by mechanical instability present in analog recordings on mechanical media. Without time base correction, a signal from a videotape recorder (VTR) or videocassette recorder (VCR) cannot be mixed with signals from other, more time-stable devices such as character generators and video cameras found in television studios and post-production facilities.
Time base correction counteracts errors by buffering the video signal as it comes off the videotape at an unsteady rate, and releasing it after a delay at a steady rate. A sync generator provides the timing reference for all devices in the system. By adjusting the delay using a waveform monitor, the corrected signal can be made to match the timing of the other devices in the system. If all of the devices in a system are adjusted so their signals meet the video switcher at the same time and at the same rate, the signals can be mixed.
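The buffer-and-delay idea above can be sketched in Python. This is a minimal illustration of the principle, not a real TBC implementation; the class name, its parameters, and the `"black"` fill value are all hypothetical:

```python
from collections import deque

class TimeBaseCorrector:
    """Minimal sketch of a TBC: video lines arrive at the unsteady rate of
    the tape transport and are buffered, then read out at the steady rate
    dictated by a house sync generator."""

    def __init__(self, delay_lines=4):
        self.buffer = deque()
        self.delay_lines = delay_lines  # delay before readout begins

    def write(self, video_line):
        # Called whenever the tape transport delivers a line (jittery timing).
        self.buffer.append(video_line)

    def read(self):
        # Called once per line period by the steady house sync reference.
        # Until the delay buffer has filled, output black so downstream
        # timing stays intact.
        if len(self.buffer) < self.delay_lines:
            return "black"
        return self.buffer.popleft()
```

The delay gives the corrector headroom: input jitter is absorbed as small changes in buffer fill, while the output rate never wavers.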
Though external TBCs are often used, most broadcast-quality VCRs have simple time base correctors built in. Some high-end domestic analog video recorders and camcorders also include a TBC circuit, which typically can be switched off if required.
As far back as 1956, professional reel-to-reel audio tape recorders were mechanically stable enough that pitch distortion could be below an audible level without time base correction. However, the higher sensitivity of video recordings meant that even the best mechanical solutions still resulted in detectable distortion of the video signals and difficulty in synchronizing with other devices.[1] A video signal consists of not only picture information, but also sync and subcarrier signals. Sync allows the image to be framed up square on the monitor and allows the combination and switching of two or more video signals. The subcarrier is involved in reproducing colors accurately.[a]
Implicit in the idea of time base correction is that there must be some target time base that the corrector is aiming for. There are two time bases commonly used.
Some TBCs featured drop-out compensation (DOC), which temporarily concealed videotape flaws caused by oxide defects. The DOC logic required dedicated cabling between the videotape player and the TBC so that irregularities in portions of the video image could be detected. Previously captured and stored lines of video would then be superimposed over the flawed video lines.
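The concealment step can be sketched as follows. This is a hypothetical helper illustrating the idea of substituting the last good stored line for a flawed one, not the actual DOC hardware logic:

```python
def conceal_dropouts(lines, flawed_indices):
    """Sketch of drop-out compensation: each line flagged as flawed is
    replaced by the most recent good line above it.

    lines: list of video lines (each a list of sample values)
    flawed_indices: set of line indices where a dropout was detected
    """
    output = []
    last_good = [0] * len(lines[0]) if lines else []  # fall back to black
    for i, line in enumerate(lines):
        if i in flawed_indices:
            output.append(last_good)  # superimpose the stored good line
        else:
            output.append(line)
            last_good = line
    return output
```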
A variant of the time base corrector is the frame synchronizer, which allows devices that cannot be steered by a sync signal to be time base corrected or timed into a system. Satellites, microwave transmitters and other broadcast signals, as well as consumer VTRs, cannot be sent a sync signal. The synchronizer accomplishes this by writing the incoming digital video [c] into a frame buffer memory using the timing of the sync information contained in that video signal; a frame synchronizer stores at least a full frame of video. Simultaneously, the digital video is read back out of the buffer by an independent timing system that is genlocked to the house timing reference. If the buffer overfills or underfills, the frame synchronizer holds the last good frame of video until another full frame's worth of video is received. Usually, this is undetectable to viewers.
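The frame-synchronizer behaviour described above can be sketched as a small class. This is a hypothetical illustration (real devices operate on continuous video in hardware, and buffering details vary by product):

```python
class FrameSynchronizer:
    """Sketch of a frame synchronizer: frames are written on the source's
    own timing and read on house timing. On underflow the last good frame
    is repeated (a freeze); on overflow the oldest buffered frame is
    dropped."""

    def __init__(self, capacity=2):
        self.frames = []          # frame buffer memory
        self.capacity = capacity  # at least one full frame in practice
        self.last_good = None

    def write(self, frame):
        # Driven by sync recovered from the incoming signal itself.
        if len(self.frames) >= self.capacity:
            self.frames.pop(0)    # overflow: drop the oldest frame
        self.frames.append(frame)

    def read(self):
        # Driven by an independent clock genlocked to house reference.
        if self.frames:
            self.last_good = self.frames.pop(0)
        return self.last_good     # underflow: hold the last good frame
```

Because input and output clocks are unrelated, occasional repeats or drops are inevitable; they are usually a single frame at a time and go unnoticed.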
A fifth and most recent type of TBC, achieved in the late 2010s, is software-defined. The Python-based project LD-Decode[3] (and its extended versions VHS-Decode[4] and CVBS-Decode[5]) implements this "software time base correction" method. The programs take in raw PCM (or FLAC-compressed) radio-frequency captures of analogue media signals. Baseband signals such as composite video are processed directly, while tape formats are first demodulated before the signal is corrected in software. In the common context of tape media preservation, this workflow is called "FM RF Archival".
The decode programs output the corrected signals as ".tbc" and "_chroma.tbc" files, called CVBS-style and S-Video (Y+C)-style file sets respectively, as the data within can be either combined or separated luminance and chrominance. The S-Video style (two files) was implemented for colour-under formats such as VHS and U-matic. The format contains a digital, lossless, 4fsc copy of the signal at 16 bits per sample,[6] not unlike the older D-3 digital videotape format, alongside a JSON file of technical stream data that allows other tools to read and process the files.
ld-analyse, a tool from the LD-Decode project, allows visual frame-by-frame analysis of the TBC file, as well as closed-caption and VITC timecode readout.[7]
TBC files can have their chroma decoded to an uncompressed YUV[d] or RGB video stream via ld-chroma-decoder,[8] then encoded into a video file, typically with lossless codecs such as FFV1 in the MKV container, via tools like FFmpeg or tbc-video-export[9] (a wrapper for the ld-* tools and FFmpeg), ready for use in NLEs. The project's decoder can produce the full 4fsc signal frame or just the active picture area, allowing for better visual-domain preservation than legacy hardware.
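As a minimal illustration of the final colour-space conversion in such a chroma-decode step (not the ld-chroma-decoder implementation itself), a single Y'UV sample can be converted to R'G'B' using the standard BT.601 coefficients:

```python
def yuv_to_rgb(y, u, v):
    """Convert one BT.601 Y'UV sample to R'G'B'.
    y is in [0, 1]; u and v are in [-0.5, 0.5].
    Illustrative only: real decoders operate on whole fields and
    handle filtering, clipping, and quantisation."""
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b
```

A zero-chroma sample decodes to a neutral grey, e.g. `yuv_to_rgb(0.5, 0.0, 0.0)` yields equal R, G and B.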
TBC file streams can also be directly played back to analog TV systems via a DAC.
Sampling NTSC: 4fsc NTSC (14,318,181+9⁄11 Hz) [10]
Sampling PAL: 4fsc PAL (17,734,475 Hz) [10]
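Both 4fsc sampling rates follow exactly from the standard colour subcarrier definitions (NTSC fsc = 315/88 MHz; PAL fsc = 4,433,618.75 Hz), which can be checked with exact rational arithmetic:

```python
from fractions import Fraction

# Exact colour subcarrier frequencies from the analogue TV standards:
fsc_ntsc = Fraction(315_000_000, 88)   # NTSC: 315/88 MHz
fsc_pal = Fraction(443_361_875, 100)   # PAL: 4,433,618.75 Hz

# The .tbc sampling rates are four times the subcarrier (4fsc):
rate_ntsc = 4 * fsc_ntsc               # 14,318,181 + 9/11 Hz
rate_pal = 4 * fsc_pal                 # 17,734,475 Hz exactly
```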