Kell factor

[Figure] At 0.5 cycles/pixel, the Nyquist limit, signal amplitude depends on phase, as visible in the three medium-gray curves where the signal goes 90° out of phase with the pixels.

[Figure] At 0.33 cycles/pixel, 0.66 times the Nyquist limit, amplitude can largely be maintained regardless of phase; some minor artifacts are still visible.
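The phase dependence the figures describe can be checked numerically. The sketch below (an illustration, not from the original article; the function name and parameters are my own) samples a unit sinusoid at integer pixel positions and reports the peak sampled value:

```python
import math

def sampled_peak(cycles_per_pixel, phase, n_pixels=1000):
    """Peak absolute value of a unit sinusoid sampled at integer pixel positions."""
    return max(abs(math.sin(2 * math.pi * cycles_per_pixel * i + phase))
               for i in range(n_pixels))

# At the Nyquist limit (0.5 cycles/pixel), sampled amplitude depends on phase:
print(sampled_peak(0.5, 0.0))          # samples land on zero crossings, near 0
print(sampled_peak(0.5, math.pi / 2))  # samples land on the peaks, near 1

# At 0.33 cycles/pixel, peak amplitude is largely phase-independent (near 1 both times):
print(sampled_peak(0.33, 0.0))
print(sampled_peak(0.33, math.pi / 2))
```

At exactly 0.5 cycles/pixel every sample equals ±sin(phase), so a 90° phase shift swings the displayed amplitude between zero and full scale; well below Nyquist the samples trace out enough of each cycle that the peak survives any phase.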

The Kell factor, named after RCA engineer Raymond D. Kell,[1] is a parameter used to limit the bandwidth of a sampled image signal in order to avoid beat-frequency patterns when the image is shown on a discrete display device; it is usually taken to be 0.7. The number was first measured in 1934 by Kell and his associates as 0.64, but it has undergone several revisions, since it is based on image perception, and is therefore subjective, and is not independent of the type of display.[2] It was later revised to 0.85, and it can exceed 0.9 when fixed-pixel scanning (e.g., CCD or CMOS sensors) and fixed-pixel displays (e.g., LCD or plasma) are used, or fall to about 0.7 for electron-gun scanning.


From a different perspective, the Kell factor defines the effective resolution of a discrete display device, since the full resolution cannot be used without degrading the viewing experience. The actual sampled resolution depends on the spot size and its intensity distribution. For electron-gun scanning systems, the spot usually has a Gaussian intensity distribution; for CCDs, the distribution is somewhat rectangular and is also affected by the sampling grid and inter-pixel spacing.

Kell factor is sometimes incorrectly stated to exist to account for the effects of interlacing. Interlacing itself does not affect Kell factor, but because interlaced video must be low-pass filtered (i.e., blurred) in the vertical dimension to avoid spatio-temporal aliasing (i.e., flickering effects), the Kell factor of interlaced video is said to be about 70% that of progressive video with the same scan line resolution.
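As a worked example of the figures above (a minimal sketch assuming the commonly quoted values of 0.7 for the Kell factor and 0.7 for the interlace penalty; the function name is my own):

```python
def effective_lines(scan_lines, kell_factor=0.7, interlaced=False):
    """Approximate number of scan lines a viewer can actually resolve."""
    lines = scan_lines * kell_factor
    if interlaced:
        lines *= 0.7  # vertical low-pass filtering penalty for interlaced video
    return round(lines)

print(effective_lines(480))                    # progressive, ~336 resolvable lines
print(effective_lines(1080, kell_factor=0.9))  # fixed-pixel display, ~972
print(effective_lines(576, interlaced=True))   # interlaced, ~282
```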

The beat frequency problem

To understand how the distortion arises, consider an ideal linear process from sampling to display. When a signal is sampled at a rate of at least twice its highest frequency (the Nyquist rate), it can be fully reconstructed by low-pass filtering, since the first repeat spectrum does not overlap the baseband spectrum. In a discrete display, however, the image signal is not low-pass filtered, because the display takes discrete values as input; the displayed signal therefore contains all the repeat spectra. The proximity of the highest frequency of the baseband signal to the lowest frequency of the first repeat spectrum induces the beat-frequency pattern, which on screen can at times resemble a moiré pattern. The Kell factor is the reduction in signal bandwidth necessary so that no beat frequency is perceived by the viewer.
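The beat between the baseband and the first repeat spectrum can be made visible numerically. In this sketch (an illustration under my own naming, not from the article), a 0.45 cycles/pixel signal is sampled at one sample per pixel, so its first repeat spectrum sits at 1 − 0.45 = 0.55 cycles/pixel; the two components beat, and the on-screen magnitudes swell and collapse with a 10-pixel period:

```python
import math

def pixel_values(freq, n=40, phase=0.0):
    """Sample a unit sinusoid at integer pixel positions (freq in cycles/pixel)."""
    return [math.sin(2 * math.pi * freq * i + phase) for i in range(n)]

vals = pixel_values(0.45)
envelope = [abs(v) for v in vals]

# The per-pixel magnitudes trace a slow envelope: since |0.5 - 0.45| = 0.05
# cycles/pixel, the envelope repeats every 1 / (2 * 0.05) = 10 pixels,
# which is the visible beat pattern.
print([round(e, 2) for e in envelope[:12]])
```

Algebraically, sin(0.9πi) = (−1)^(i+1) sin(0.1πi): the fine 0.45 cycles/pixel detail is displayed inside a coarse 20-pixel envelope, which is the pattern the Kell factor exists to suppress.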


History

Source | Kell factor
Kell, Bedford & Trainer (1934) [2] | 0.64
Mertz & Gray (1934) | 0.53
Wheeler & Loughren (1938) | 0.71
Wilson (1938) | 0.82
Kell, Bedford & Fredendall (1940) [1] | 0.85
Baldwin (1940) | 0.70



References

  1. Kell; Bedford; Fredendall (July 1940). "A Determination of Optimum Number of Lines in a Television System". RCA Review. 5 (1): 8–30.
  2. Kell, R. D.; Bedford, A. V.; Trainer, M. A. (November 1934). "An Experimental Television System". Proceedings of the Institute of Radio Engineers. 22 (11): 1246.