Johnson's criteria


Johnson's criteria, or the Johnson criteria, created by John Johnson, describe both spatial-domain and frequency-domain approaches to analyzing the ability of observers to perform visual tasks using image intensifier technology. [1] They were an important breakthrough in evaluating the performance of visual devices and guided the development of later systems. Building on Johnson's criteria, many predictive models for sensor technology have been developed to estimate the performance of sensor systems under different environmental and operational conditions.


History

Night vision systems enabled the measurement of visual thresholds following World War II. The 1950s also marked a time of notable development in the performance modeling of night vision imaging systems. From 1957 to 1958, Johnson, a scientist at the United States Army Night Vision & Electronic Sensors Directorate (NVESD), [2] was working to develop methods of predicting target detection, orientation, recognition, and identification. Working with volunteer observers, Johnson used image intensifier equipment to measure their ability to identify scale model targets under various conditions. His experiments produced the first empirical data on perceptual thresholds expressed in terms of line pairs. At the first Night Vision Image Intensifier Symposium in October 1958, Johnson presented his findings in a paper entitled "Analysis of Image Forming Systems", which contained the list that would later be known as Johnson's criteria.

Criteria

The minimum required resolution according to Johnson's criteria is expressed in terms of line pairs of image resolution across a target, for several discrimination tasks: [3]

- Detection (an object is present): 1.0 ± 0.25 line pairs
- Orientation (the symmetry or orientation of the object can be discerned): 1.4 ± 0.35 line pairs
- Recognition (the class to which the object belongs can be discerned, e.g., tank, truck, person): 4.0 ± 0.8 line pairs
- Identification (the specific object can be described, e.g., a particular type of vehicle): 6.4 ± 1.5 line pairs

These amounts of resolution give a 50 percent probability of an observer discriminating an object to the specified level.

Additionally, the line pairs refer to lines displayed on an interlaced CRT monitor. Each line pair corresponds to two pixels in a film image or in an image displayed on an LCD monitor.
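As an illustration of how the criteria are typically applied, the following minimal sketch (not taken from the cited sources; the function names, the sensor resolution of 3 cycles per milliradian, and the 2.3 m critical dimension are illustrative assumptions) estimates how many line pairs subtend a target of a given critical dimension at a given range, reports which 50 percent probability tasks that resolution supports, and applies the two-pixels-per-line-pair rule above.

```python
# Sketch of a Johnson-criteria check, assuming simple small-angle geometry.
# The thresholds are the classic 50%-probability values listed above.

JOHNSON_THRESHOLDS = {
    "detection": 1.0,
    "orientation": 1.4,
    "recognition": 4.0,
    "identification": 6.4,
}


def line_pairs_on_target(critical_dimension_m: float,
                         range_m: float,
                         cycles_per_mrad: float) -> float:
    """Line pairs (cycles) resolved across the target's critical dimension.

    The target subtends roughly critical_dimension / range radians; multiplying
    by the sensor's resolvable cycles per milliradian gives cycles on target.
    """
    angle_mrad = critical_dimension_m / range_m * 1000.0
    return angle_mrad * cycles_per_mrad


def achievable_tasks(line_pairs: float) -> list[str]:
    """Tasks for which the 50% discrimination criterion is met."""
    return [task for task, needed in JOHNSON_THRESHOLDS.items()
            if line_pairs >= needed]


if __name__ == "__main__":
    # Hypothetical example: a 2.3 m critical dimension viewed at 1 km with a
    # sensor resolving 3 cycles/mrad -> 2.3 mrad * 3 = 6.9 line pairs on target.
    lp = line_pairs_on_target(critical_dimension_m=2.3,
                              range_m=1000.0,
                              cycles_per_mrad=3.0)
    print(f"{lp:.1f} line pairs on target; supports: {achievable_tasks(lp)}")
    # A sampled display needs about 2 pixels per line pair across the target.
    pixels = 2 * JOHNSON_THRESHOLDS["identification"]
    print(f"Approximate pixels across target for identification: {pixels:.0f}")
```

Under these assumed numbers the sensor exceeds the 6.4 line-pair identification threshold; halving the resolution or doubling the range would drop it to the recognition level, which is the kind of trade-off the criteria are used to explore.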

Related Research Articles

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the forms of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

Clementine (spacecraft): American space project

Clementine was a joint space project between the Ballistic Missile Defense Organization and NASA, launched on January 25, 1994. Its objective was to test sensors and spacecraft components in long-term exposure to space and to make scientific observations of both the Moon and the near-Earth asteroid 1620 Geographos.

Circle of confusion: Blurry region in optics

In optics, a circle of confusion (CoC) is an optical spot caused by a cone of light rays from a lens not coming to a perfect focus when imaging a point source. It is also known as disk of confusion, circle of indistinctness, blur circle, or blur spot.

Machine vision: Technology and methods used to provide imaging-based automatic inspection and analysis

Machine vision is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments, such as vehicle guidance.

Night vision: Ability to see in low light conditions

Night vision is the ability to see in low-light conditions, either naturally with scotopic vision or through a night-vision device. Night vision requires both sufficient spectral range and sufficient intensity range. Humans have poor night vision compared to many animals such as cats, dogs, foxes and rabbits, in part because the human eye lacks a tapetum lucidum, tissue behind the retina that reflects light back through the retina thus increasing the light available to the photoreceptors.

Field of view: Extent of the observable world seen at any given moment

The field of view (FOV) is the angular extent of the observable world that is seen at any given moment. In the case of optical instruments or sensors, it is a solid angle through which a detector is sensitive to electromagnetic radiation. It is further relevant in photography.

Night-vision device: Device that allows visualization of images in levels of light approaching total darkness

A night-vision device (NVD), also known as a night optical/observation device (NOD) or night-vision goggle (NVG), is an optoelectronic device that allows visualization of images in low levels of light, improving the user's night vision.

Thermography: Infrared imaging used to reveal temperature

Infrared thermography (IRT), also called thermal video or thermal imaging, is a process in which a thermal camera captures and creates an image of an object using the infrared radiation emitted from the object; it is an example of infrared imaging science. Thermographic cameras usually detect radiation in the long-infrared range of the electromagnetic spectrum and produce images of that radiation, called thermograms. Since infrared radiation is emitted by all objects with a temperature above absolute zero according to the black body radiation law, thermography makes it possible to see one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature; therefore, thermography allows one to see variations in temperature. When viewed through a thermal imaging camera, warm objects stand out well against cooler backgrounds; humans and other warm-blooded animals become easily visible against the environment, day or night. As a result, thermography is particularly useful to the military and other users of surveillance cameras.

An image intensifier or image intensifier tube is a vacuum tube device that increases the intensity of available light in an optical system to allow use under low-light conditions, such as at night; to facilitate visual imaging of low-light processes, such as fluorescence of materials under X-rays or gamma rays; or to convert non-visible light sources, such as near-infrared or short-wave infrared, to visible light. They operate by converting photons of light into electrons, amplifying the electrons, and then converting the amplified electrons back into photons for viewing. They are used in devices such as night-vision goggles.

Visual acuity: Clarity of vision

Visual acuity (VA) commonly refers to the clarity of vision, but technically rates an animal's ability to recognize small details with precision. Visual acuity depends on optical and neural factors. Optical factors of the eye influence the sharpness of an image on its retina. Neural factors include the health and functioning of the retina, of the neural pathways to the brain, and of the interpretative faculty of the brain.

Optical flow: Pattern of motion in a visual scene due to relative motion of the observer

Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow can also be defined as the distribution of apparent velocities of movement of brightness pattern in an image.

Multispectral imaging: Capturing image data across multiple electromagnetic spectrum ranges

Multispectral imaging captures image data within specific wavelength ranges across the electromagnetic spectrum. The wavelengths may be separated by filters or detected with the use of instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible light range. It can allow extraction of additional information the human eye fails to capture with its visible receptors for red, green and blue. It was originally developed for military target identification and reconnaissance. Early space-based imaging platforms incorporated multispectral imaging technology to map details of the Earth related to coastal boundaries, vegetation, and landforms. Multispectral imaging has also found use in document and painting analysis.

Optical resolution describes the ability of an imaging system to resolve detail in the object that is being imaged. An imaging system may have many individual components, including one or more lenses, and/or recording and display components. Each of these contributes to the optical resolution of the system; the environment in which the imaging is done often is a further important factor.

High-speed photography: Photography genre

High-speed photography is the science of taking pictures of very fast phenomena. In 1948, the Society of Motion Picture and Television Engineers (SMPTE) defined high-speed photography as any set of photographs captured by a camera capable of 69 frames per second or greater, and of at least three consecutive frames. High-speed photography can be considered to be the opposite of time-lapse photography.

Infrared vision is the capability of biological or artificial systems to detect infrared radiation. The terms thermal vision and thermal imaging are also commonly used in this context, since the infrared emissions from a body are directly related to its temperature: hotter objects emit more energy in the infrared spectrum than colder ones.

Time-of-flight camera: Range imaging camera system

A time-of-flight camera, also known as time-of-flight sensor, is a range imaging camera system for measuring distances between the camera and the subject for each point of the image based on time-of-flight, the round trip time of an artificial light signal, as provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as the semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers.

AN/PSQ-20: US military night vision goggle

The AN/PSQ-20 Enhanced Night Vision Goggle (ENVG) is a third-generation passive monocular night vision device developed for the United States Armed Forces by ITT Exelis. It fuses image-intensifying and thermal-imaging technologies, enabling vision in conditions with very little light. The two methods can be used simultaneously or individually. The ENVG was selected by the US Army's Program Executive Office Soldier as a supporting device for the Future Force Warrior program in 2004, and is intended to replace the older AN/PVS-7 and AN/PVS-14 systems. Although more expensive and heavier than previous models, US Special Forces began using the goggles in 2008 and the US Army's 10th Mountain Division began fielding the AN/PSQ-20 in 2009. Improvements to the goggles have been attempted to make them lighter, as well as enabling the transmission of digital images to and from the battlefield.

Geometrical–optical illusions are visual illusions, also called optical illusions, in which the geometrical properties of what is seen differ from those of the corresponding objects in the visual field.

IllumiRoom: Microsoft Research project

IllumiRoom is a Microsoft Research project that augments a television screen with images projected onto the wall and surrounding objects. The current proof-of-concept uses a Kinect sensor and video projector. The Kinect sensor captures the geometry and colors of the area of the room that surrounds the television, and the projector displays video around the television that corresponds to a video source on the television, such as a video game or movie.

Enhanced flight vision system: Airborne system with imaging sensors

An enhanced flight vision system (EFVS) is an airborne system that captures an image of the scene and displays it to the pilot so that the scene and objects in it can be better detected. In other words, an EFVS is a system which provides the pilot with an image which is better than unaided human vision. An EFVS includes imaging sensors such as a color camera, infrared camera or radar, and typically a display for the pilot, which can be a head-mounted display or head-up display. An EFVS may be combined with a synthetic vision system to create a combined vision system.

References

  1. Sjaardema, Tracy A.; Smith, Collin S.; Birch, Gabriel Carisle (1 July 2015). "History and Evolution of the Johnson Criteria". Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). doi:10.2172/1222446. Retrieved 19 January 2022.
  2. http://www.nvl.army.mil Archived 2011-05-17 at the Wayback Machine
  3. Norman S. Kopeika (1998). A System Engineering Approach to Imaging. SPIE Press. p. 337. ISBN 978-0-8194-2377-1.
