A variety of formats have been, and continue to be, used to transmit and store film and video recordings. These vary in aspect ratio, frame rate, resolution, and file format.
Analog broadcasting systems—PAL/SECAM and NTSC—were historically limited in the set of moving image formats they could transmit and present. PAL/SECAM can transmit 25 Hz and 50 Hz material, and NTSC can only transmit 30 Hz and 60 Hz material (later replaced by 30/1.001 and 60/1.001 Hz). Both systems were also limited to an aspect ratio of 4:3 and a fixed resolution (limited by the available bandwidth). While wider aspect ratios were relatively straightforward to adapt to the 4:3 frame (for instance by letterboxing), frame rate conversion is not straightforward, and in many cases degrades the "fluidity" of motion or the quality of individual frames (especially when either the source or the target of the conversion is interlaced, or inter-frame mixing is involved in the rate conversion).
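The letterboxing mentioned above is simple arithmetic: the wide picture keeps its shape at the full frame width, and the leftover height is split into black bars. A minimal sketch, assuming an illustrative helper function (the name and rounding choice are not from any broadcast specification):

```python
# Letterboxing arithmetic: fit a wide picture into a narrower frame by
# showing it full-width and filling the remaining height with black bars.
def letterbox(frame_w, frame_h, src_aspect):
    """Return (active_height, bar_height) for a src_aspect picture
    shown full-width inside a frame_w x frame_h frame."""
    active_h = round(frame_w / src_aspect)   # picture keeps its shape
    bar_h = (frame_h - active_h) // 2        # split the rest top/bottom
    return active_h, bar_h

# A 16:9 picture inside a 640x480 (4:3) frame:
print(letterbox(640, 480, 16 / 9))  # -> (360, 60)
```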
Material for local TV markets is usually captured at 25 Hz or 50 Hz. Many broadcasters have film archives of 24 frame/s (film speed) content related to news gathering or television production.
Live broadcasts (news, sports, important events) are usually captured at 50 Hz. Presenting live broadcasts at 25 Hz (essentially by de-interlacing) makes them look as if they were taken from an archive, so the practice is usually avoided unless a motion processor is present in the transmission chain.
Material of feature film origin, shot at 24 frame/s, is usually sped up by 4% for presentation at 25 Hz. The speedup also raises the pitch of the sound slightly, but pitch correction circuits are typically used to compensate.
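The 4% figure and the pitch shift both follow directly from the 25/24 playback ratio. A quick sketch of the arithmetic:

```python
from fractions import Fraction
import math

# PAL speedup: 24 frame/s film played back at 25 frame/s.
speedup = Fraction(25, 24)                  # exact playback ratio
percent = (float(speedup) - 1) * 100        # ~4.17 % faster
semitones = 12 * math.log2(25 / 24)         # resulting pitch rise

print(f"{percent:.2f}% faster, pitch up {semitones:.2f} semitones")
# -> 4.17% faster, pitch up 0.71 semitones
```

The pitch rises by roughly two-thirds of a semitone, which is audible on familiar music and is why pitch correction is applied.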
Roughly 30 or 60 Hz material imported from 60 Hz systems is usually adapted for presentation at 50 Hz by adding duplicate frames or dropping excess frames, sometimes also intermixing consecutive frames. Nowadays, digital motion analysis, although complex and expensive, can produce a superior-looking conversion (though not an absolutely perfect one).
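The simplest 60-to-50 Hz conversion is nearest-frame resampling: each output instant picks the closest source frame, which ends up dropping one frame in every six. A minimal sketch (real converters blend fields or use motion compensation instead):

```python
# Nearest-frame resampling between frame rates: for each output frame,
# pick the source frame closest in time. Going 60 -> 50 Hz, one source
# frame in every six is dropped.
def resample_indices(n_out, src_rate, dst_rate):
    return [round(i * src_rate / dst_rate) for i in range(n_out)]

# Five output frames cover six source frames: frame 3 is dropped.
print(resample_indices(5, 60, 50))  # -> [0, 1, 2, 4, 5]
```

This dropped-frame pattern is exactly what produces the periodic judder that motion analysis is meant to remove.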
Because of higher television production budgets in the US, and a preference for the look of film, many prerecorded TV shows were, in fact, captured onto film at 24 Hz.
Source material filmed at 24 Hz is converted to roughly 60 Hz using a technique called 3:2 pulldown, which inserts a varying number of duplicate fields, combined with a slowdown by a factor of 1.001 where needed. Occasionally, inter-frame mixing is used to smooth the judder.
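The pulldown cadence can be sketched directly: film frames are alternately held for 2 and 3 video fields, so 4 film frames fill 10 fields (5 video frames), turning 24 frame/s into 60 field/s; the extra 1.001 slowdown then yields the broadcast 59.94 field/s. An illustrative sketch:

```python
import itertools

# 3:2 (a.k.a. 2:3) pulldown cadence: alternate 2-field and 3-field
# holds map 4 film frames onto 10 video fields (5 interlaced frames).
def pulldown_fields(frames):
    fields = []
    for frame, hold in zip(frames, itertools.cycle([2, 3])):
        fields.extend([frame] * hold)
    return fields

print(pulldown_fields("ABCD"))
# -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

The uneven 2-3-2-3 hold pattern is the source of pulldown judder, and the repeated fields are what inverse pulldown looks for when reconstructing the original 24 Hz frames.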
Live programs are captured at roughly 60 Hz. In the last 15 years, 30 Hz has also become a feasible capture rate when a more "film-like" look is desired but ordinary video cameras are used. Capture on video at the film rate of 24 Hz is an even more recent development, and mostly accompanies HDTV production. Unlike 30 Hz capture, 24 Hz cannot be simulated in post-production; the camera must be natively capable of capturing at 24 Hz during recording. Because ~30 Hz material is more "fluid" than 24 Hz material, the choice between the ~30 and ~60 Hz rates is not as obvious as that between 25 Hz and 50 Hz. When printing 60 Hz video to film, it has always been necessary to convert it to 24 Hz using reverse 3:2 pulldown. The look of the finished product can resemble that of film, but it is not as smooth (particularly if the result is returned to video), and badly done deinterlacing causes the image to shake noticeably in the vertical direction and lose detail.
References to "60 Hz" and "30 Hz" in this context are shorthand, and always refer to the 59.94 Hz or 60 x 1000/1001 rate. Only black and white video and certain HDTV prototypes ever ran at true 60.000 Hz. The US HDTV standard supports both true 60 Hz and 59.94 Hz; the latter is almost always used for better compatibility with NTSC.
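The 59.94 Hz figure is not an approximation of its own: it is exactly the nominal rate divided by 1.001, i.e. 60 × 1000/1001. Expressed as exact fractions:

```python
from fractions import Fraction

# "NTSC" rates are exactly 1000/1001 times the nominal rates.
field_rate = Fraction(60, 1) * Fraction(1000, 1001)   # = 60000/1001
frame_rate = Fraction(30, 1) * Fraction(1000, 1001)   # = 30000/1001

print(field_rate, "=", float(field_rate))   # 60000/1001 ~ 59.94 Hz
print(frame_rate, "=", float(frame_rate))   # 30000/1001 ~ 29.97 Hz
```

Keeping the rates as exact rationals (rather than the rounded 59.94) is how broadcast equipment avoids accumulating timing drift over long programs.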
25 or 50 Hz material, imported from 50 Hz systems, can be adapted to 60 Hz similarly, by dropping or adding frames and intermixing consecutive frames. The best quality for 50 Hz material is provided by digital motion analysis.
Digital video is free of many of the limitations of analog transmission formats and presentation mechanisms (e.g. CRT display) because it decouples the behavior of the capture process from the presentation process. As a result, digital video provides the means to capture, convey and present moving images in their original format, as intended by directors (see article about purists), regardless of variations in video standards.
Frame grabbers that employ MPEG or other compression formats are able to encode moving image sequences in their original aspect ratios, resolution and frame capture rates (24/1.001, 24, 25, 30/1.001, 30, 50, 60/1.001, 60 Hz). MPEG—and other compressed video formats that employ motion analysis—help to mitigate the incompatibilities among the various video formats used around the world.
At the receiving end, a digital display is free to independently present the image sequence at a multiple of its capture rate, thus reducing visible flicker. Most modern displays are "multisync," meaning that they can refresh the image display at a rate most suitable for the image sequence being presented. For example, a multisync display may support a range of vertical refresh rates from 50 to 72 Hz, or from 96 to 120 Hz, so that it can display all standard capture rates by means of an integer rate conversion.
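The integer rate conversion described above amounts to finding a whole-number multiple of the capture rate that falls inside the display's supported range. A sketch with a hypothetical helper (not a real display API):

```python
# Pick a display refresh rate that is an integer multiple of the
# capture rate, so every source frame is shown a whole number of times
# and no judder is introduced.
def pick_refresh(capture_hz, lo, hi):
    for n in range(1, hi // capture_hz + 1):
        if lo <= capture_hz * n <= hi:
            return capture_hz * n
    return None   # no integer multiple fits this display

for rate in (24, 25, 30, 50, 60):
    print(rate, "->", pick_refresh(rate, 50, 72))
# 24 -> 72, 25 -> 50, 30 -> 60, 50 -> 50, 60 -> 60
```

Note that the 50-72 Hz range in the example covers every standard capture rate with an integer multiple, which is exactly why such ranges are chosen.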
60 Hz material captures motion a bit more smoothly than 50 Hz material. The drawback is that it requires approximately one fifth more bandwidth to transmit, if all other parameters of the image (resolution, aspect ratio) are equal. "Approximately", because inter-frame compression techniques such as MPEG are slightly more efficient at higher frame rates, since consecutive frames become more similar.
There are, however, technical and political obstacles to adopting a single worldwide video format. The most important technical problem is that scene lighting is often provided by lamps which flicker at a rate related to the local mains frequency; for instance, the mercury lamps used in stadiums flicker at twice the mains frequency. Capturing video under such conditions must be done at a matching rate, or the colours will flicker badly on the screen. Even an AC incandescent lamp may be a problem for a camera if it is underpowered or near the end of its useful life.
The need to select a single universal video format (for the sake of global material interchange) should, in any case, become irrelevant in the digital age. Directors of video productions would then be free to select the most appropriate format for the job, and the video camera would become a universal instrument (currently the market is very fragmented).
MPEG-2 is a standard for "the generic coding of moving pictures and associated audio information". It describes a combination of lossy video compression and lossy audio data compression methods, which permit storage and transmission of movies using currently available storage media and transmission bandwidth. While MPEG-2 is not as efficient as newer standards such as H.264/AVC and H.265/HEVC, backwards compatibility with existing hardware and software means it is still widely used, for example in over-the-air digital television broadcasting and in the DVD-Video standard.
NTSC is the first American standard for analog television, published and adopted in 1941. In 1961, it was assigned the designation System M. It is also known as EIA standard 170.
Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured consecutively. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the characteristics of the human visual system.
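The two fields of an interlaced frame can be thought of as the even and odd scan lines captured one after the other; "weaving" them back together reconstructs a full frame. A toy sketch, modelling fields as lists of scan lines:

```python
# Weave two consecutively captured fields into one full frame:
# the top field supplies the even lines, the bottom field the odd lines.
def weave(top_field, bottom_field):
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)   # even line from the top field
        frame.append(b)   # odd line from the bottom field
    return frame

print(weave(["line0", "line2"], ["line1", "line3"]))
# -> ['line0', 'line1', 'line2', 'line3']
```

Because the two fields were captured at different instants, weaving only looks clean on static content; moving objects show the familiar "combing" artifacts.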
Telecine is the process of transferring film into video and is performed in a color suite. The term is also used to refer to the equipment used in this post-production process.
Advanced Television Systems Committee (ATSC) standards are an international set of standards for broadcast and digital television transmission over terrestrial, cable and satellite networks. They are largely a replacement for the analog NTSC standard and, like that standard, are used mostly in the United States, Mexico, Canada, South Korea and Trinidad & Tobago. Several former NTSC users, such as Japan, did not adopt ATSC during their digital television transitions, instead choosing other systems such as ISDB, developed by Japan, and DVB, developed in Europe.
Broadcast television systems are the encoding or formatting systems for the transmission and reception of terrestrial television signals.
The refresh rate, also known as vertical refresh rate or vertical scan rate in reference to terminology originating with the cathode-ray tubes (CRTs), is the number of times per second that a raster-based display device displays a new image. This is independent from frame rate, which describes how many images are stored or generated every second by the device driving the display. On CRT displays, higher refresh rates produce less flickering, thereby reducing eye strain. In other technologies such as liquid-crystal displays, the refresh rate affects only how often the image can potentially be updated.
In video technology, 24p refers to a video format that operates at 24 frames per second frame rate with progressive scanning. Originally, 24p was used in the non-linear editing of film-originated material. Today, 24p formats are being increasingly used for aesthetic reasons in image acquisition, delivering film-like motion characteristics. Some vendors advertise 24p products as a cheaper alternative to film acquisition.
Deinterlacing is the process of converting interlaced video into a non-interlaced or progressive form. Interlaced video signals are commonly found in analog television, VHS, Laserdisc, digital television (HDTV) when in the 1080i format, some DVD titles, and a smaller number of Blu-ray discs.
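The simplest deinterlacing method, often called "bob", turns each field into a full frame by line-doubling, restoring full temporal rate at the cost of vertical detail. A toy sketch, again using lists of scan lines in place of images:

```python
# "Bob" deinterlacing: each field becomes a full-height frame by
# repeating (or, in better implementations, interpolating) its lines.
def bob(field):
    frame = []
    for line in field:
        frame.extend([line, line])   # line-double the field
    return frame

print(bob(["a", "b"]))  # -> ['a', 'a', 'b', 'b']
```

The vertical detail lost here is exactly the shaking and softness mentioned earlier when deinterlacing is done badly; better deinterlacers interpolate between lines or switch adaptively between bob and weave per region.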
720p is a progressive HD signal format with 720 lines and 1280 columns of pixels, and an aspect ratio of 16:9 (1.78:1), normally known as widescreen HD. All major HD broadcasting standards include a 720p format, with a resolution of 1280×720.
1080i is a term used in high-definition television (HDTV) and video display technology, denoting a video mode with 1080 lines of vertical resolution; the "i" stands for the interlaced scanning method. Once a standard in HDTV, the format was used particularly for broadcast television because it can deliver high-resolution images without requiring excessive bandwidth. It is used in the SMPTE 292M standard.
576i is a standard-definition digital video mode, originally used for digitizing 625 line analogue television in most countries of the world where the utility frequency for electric power distribution is 50 Hz. Because of its close association with the legacy colour encoding systems, it is often referred to as PAL, PAL/SECAM or SECAM when compared to its 60 Hz NTSC-colour-encoded counterpart, 480i.
High-definition video is video of higher resolution and quality than standard-definition. While there is no standardized meaning for high-definition, generally any video image with considerably more than 480 vertical scan lines (North America) or 576 lines (Europe) is considered high-definition. 480 scan lines is generally the minimum, even though the majority of systems greatly exceed that. Images of standard resolution captured at faster-than-normal rates by a high-speed camera may be considered high-definition in some contexts. Some television series shot on high-definition video are made to look as if they have been shot on film, a technique often known as filmizing.
1080p is a set of HDTV high-definition video modes characterized by 1,920 pixels displayed across the screen horizontally and 1,080 pixels down the screen vertically; the p stands for progressive scan, i.e. non-interlaced. The term usually assumes a widescreen aspect ratio of 16:9, implying a resolution of 2.1 megapixels. It is often marketed as Full HD or FHD, to contrast 1080p with 720p resolution screens. Although 1080p is sometimes referred to as 2K resolution, other sources differentiate between 1080p and (true) 2K resolution.
Progressive segmented frame (PsF) is a scheme designed to acquire, store, modify, and distribute progressive scan video using interlaced equipment.
Three-two pull down is a term used in filmmaking and television production for the post-production process of transferring film to video.
Television standards conversion is the process of changing a television transmission or recording from one video system to another. Converting video between different numbers of lines, frame rates, and color models is a complex technical problem. However, the international exchange of television programming makes standards conversion necessary so that video may be viewed in another nation with a differing standard. Typically, video is fed into a video standards converter, which produces a copy according to a different video standard. One of the most common conversions is between the NTSC and PAL standards.
Rec. 709, also known as Rec.709, BT.709, and ITU 709, is a standard developed by ITU-R for image encoding and signal characteristics of high-definition television.
High-definition television (HDTV) describes a television or video system which provides a substantially higher image resolution than the previous generation of technologies. The term has been used since at least 1933; in more recent times, it refers to the generation following standard-definition television (SDTV). It is the standard video format used in most broadcasts: terrestrial broadcast television, cable television, satellite television.