A video sensor (also written video-sensor) is application software that interprets images, a technique of digital image analysis. Video sensors use programmable algorithms running on a computer.
In the broadest definition, a sensor is a device, module, or subsystem whose purpose is to detect events or changes in its environment and send the information to other electronics, frequently a computer processor. A sensor is always used with other electronics.
Application software is software designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include a word processor, a spreadsheet, an accounting application, a web browser, an email client, a media player, a file viewer, an aeronautical flight simulator, a console game or a photo editor. The collective noun application software refers to all applications collectively. This contrasts with system software, which is mainly involved with running the computer.
A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks. A "complete" computer including the hardware, the operating system, and peripheral equipment required and used for "full" operation can be referred to as a computer system. This term may as well be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.
Video sensors are used to evaluate scenes recorded by a video camera. Objects and their characteristics (size and speed for example) are verified and compared to the pre-set examples or templates. When there is a match between object and model, then the frame and the objects are marked digitally. The operator can recall the digital marked images for further use.
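The comparison step described above can be sketched in Python; the template names, feature fields, and tolerance are illustrative assumptions, not any particular product's interface:

```python
# Illustrative sketch: compare a detected object's characteristics
# (size, speed) against pre-set templates and digitally mark matches.
# All names and values here are hypothetical examples.

def matches_template(obj, template, tolerance=0.2):
    """True if obj's size and speed are within a fractional tolerance
    of the template's values."""
    for key in ("size", "speed"):
        expected = template[key]
        if abs(obj[key] - expected) > tolerance * expected:
            return False
    return True

def mark_frames(detections, templates):
    """Mark every (frame, object) pair that matches a template."""
    marked = []
    for frame_no, obj in detections:
        for name, template in templates.items():
            if matches_template(obj, template):
                marked.append((frame_no, name, obj))
    return marked

templates = {"person": {"size": 1.7, "speed": 1.4},    # metres, m/s
             "vehicle": {"size": 4.5, "speed": 13.0}}
detections = [(12, {"size": 1.65, "speed": 1.5}),
              (30, {"size": 4.2, "speed": 12.0})]
print(mark_frames(detections, templates))
```

The marked frame numbers and template names are what an operator would later recall for review.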
An object in image processing is an identifiable portion of an image that can be interpreted as a single unit.
Video sensors are mostly deployed with video surveillance (CCTV) systems. The commercial use of video sensors is increasing. Two main applications are electronic security and market research.
Closed-circuit television (CCTV), also known as video surveillance, is the use of video cameras to transmit a signal to a specific place, on a limited set of monitors. It differs from broadcast television in that the signal is not openly transmitted, though it may employ point to point (P2P), point to multipoint (P2MP), or mesh wired or wireless links. Though almost all video cameras fit this definition, the term is most often applied to those used for surveillance in areas that may need monitoring such as banks, stores, and other areas where security is needed. Though videotelephony is seldom called 'CCTV', one exception is the use of video in distance education, where it is an important tool.
Security is freedom from, or resilience against, potential harm caused by others. Beneficiaries of security may be persons and social groups, objects and institutions, ecosystems, or any other entity or phenomenon vulnerable to unwanted change by its environment.
Market research is an organized effort to gather information about target markets or customers. It is a very important component of business strategy. The term is commonly interchanged with marketing research; however, expert practitioners may wish to draw a distinction, in that marketing research is concerned specifically with marketing processes, while market research is concerned specifically with markets.
Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.
In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the smallest addressable element in an all points addressable display device; so it is the smallest controllable element of a picture represented on the screen.
Lidar is a surveying method that measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3-D representations of the target. The name lidar, now used as an acronym of light detection and ranging, was originally a portmanteau of light and radar. Lidar sometimes is called 3D laser scanning, a special combination of a 3D scanning and laser scanning. It has terrestrial, airborne, and mobile applications.
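The distance measurement behind pulsed lidar follows directly from the round-trip time of light; a minimal sketch:

```python
# Lidar ranging sketch: a pulse travels to the target and back, so the
# one-way distance is d = c * t / 2 for round-trip time t.

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_return_time(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A pulse returning after 1 microsecond implies a target about 150 m away.
d = distance_from_return_time(1e-6)
print(round(d, 1))  # 149.9
```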
A digital camera or digicam is a camera that captures photographs in digital memory. Most cameras produced today are digital, and while there are still dedicated digital cameras, many more cameras are now being incorporated into mobile devices, portable touchscreen computers, which can, among many other purposes, use their cameras to initiate live videotelephony and directly edit and upload imagery to others. However, high-end, high-definition dedicated cameras are still commonly used by professionals.
High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.
In photography, angle of view (AOV) describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view.
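For a rectilinear lens, angle of view is commonly computed as AOV = 2 * arctan(d / (2 f)), where d is the sensor dimension and f the focal length; a small sketch (the example values are illustrative):

```python
import math

# Standard thin-lens angle-of-view formula: AOV = 2 * arctan(d / (2 f)).

def angle_of_view_deg(sensor_dim_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# Horizontal AOV of a 50 mm lens on a 36 mm wide sensor: about 39.6 degrees.
print(round(angle_of_view_deg(36, 50), 1))
```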
Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with it, in contrast to on-site observation; the term is applied especially to acquiring information about the Earth. Remote sensing is used in numerous fields, including geography, land surveying, and most Earth science disciplines; it also has military, intelligence, commercial, economic, planning, and humanitarian applications.
A video camera is a camera used for electronic motion picture acquisition, initially developed for the television industry but now common in other applications as well.
Motion detection is the process of detecting a change in the position of an object relative to its surroundings or a change in the surroundings relative to an object. Motion detection can be achieved by either mechanical or electronic methods. When motion detection is accomplished by natural organisms, it is called motion perception.
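A minimal sketch of electronic motion detection by frame differencing, assuming grayscale frames represented as plain lists of pixel rows (the thresholds are illustrative):

```python
# Frame-differencing sketch: two grayscale frames are compared pixel by
# pixel; motion is reported when enough pixels change by more than a
# threshold. Thresholds here are arbitrary example values.

def detect_motion(prev_frame, curr_frame, pixel_threshold=25, min_changed=3):
    changed = sum(
        1
        for prev_row, curr_row in zip(prev_frame, curr_frame)
        for p, c in zip(prev_row, curr_row)
        if abs(p - c) > pixel_threshold
    )
    return changed >= min_changed

still = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
moved = [[10, 10, 10], [10, 200, 200], [10, 200, 200]]
print(detect_motion(still, still))  # False
print(detect_motion(still, moved))  # True
```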
A digital watermark is a kind of marker covertly embedded in a noise-tolerant signal such as audio, video or image data. It is typically used to identify ownership of the copyright of such signal. "Watermarking" is the process of hiding digital information in a carrier signal; the hidden information should, but does not need to, contain a relation to the carrier signal. Digital watermarks may be used to verify the authenticity or integrity of the carrier signal or to show the identity of its owners. It is prominently used for tracing copyright infringements and for banknote authentication.
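One simple embedding technique, least-significant-bit substitution, can be sketched as follows; this is an illustrative example of hiding information in a carrier signal, not a production watermarking scheme:

```python
# LSB watermarking sketch: hide one payload bit in the lowest bit of each
# pixel, where the change is imperceptible in a noise-tolerant image.

def embed(pixels, bits):
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n_bits):
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 77, 54, 129, 90]   # illustrative 8-bit pixel values
mark = [1, 0, 1, 1, 0, 0]            # hidden payload bits
stego = embed(cover, mark)
print(extract(stego, len(mark)))     # [1, 0, 1, 1, 0, 0]
```

Real schemes spread the payload robustly so it survives compression and editing; plain LSB embedding is fragile but shows the principle.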
In photography and optics, vignetting (French: vignette) is a reduction of an image's brightness or saturation toward the periphery compared to the image center. The word vignette, from the same root as vine, originally referred to a decorative border in a book. Later, the word came to be used for a photographic portrait that is clear at the center and fades off toward the edges. A similar effect is visible in photographs of projected images or videos off a projection screen, resulting in a so-called "hotspot" effect.
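Natural vignetting is often modeled with the cosine fourth-power law, in which illumination falls off as cos⁴ of the angle from the optical axis; a sketch of that model (a simplification that ignores optical and mechanical vignetting):

```python
import math

# Cosine fourth-power law sketch: relative illumination at an off-axis
# angle theta is approximately cos(theta) ** 4, so the periphery of the
# frame is darker than the center.

def relative_illumination(angle_deg):
    return math.cos(math.radians(angle_deg)) ** 4

for angle in (0, 15, 30, 45):
    print(angle, round(relative_illumination(angle), 3))
```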
A time delay and integration or time delay integration (TDI) charge-coupled device (CCD) is an image sensor for capturing images of moving objects at low light levels. The motion it can capture is similar to that captured by a line-scan CCD which uses a single line of photo-sensitive elements to capture one image strip of a scene that is moving at a right angle to the line of elements. A line-scan CCD needs to have high light levels, however, in order to register the light quickly before the motion causes smearing of the image. The TDI CCD overcomes this illumination limitation by having multiple rows of elements which each shift their partial measurements to the adjacent row synchronously with the motion of the image across the array of elements. This provides high sensitivity for moving images unobtainable using conventional CCD arrays or single-line-scan devices.
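The row-shifting accumulation described above can be simulated in a few lines; this is an idealized model (perfect synchronization with the image motion, no noise), not a device driver:

```python
# TDI sketch: each clock, every stage integrates the scene line currently
# over it, then charge shifts one row toward readout in step with the
# image motion, so each scene line is integrated n_stages times.

def tdi_readout(scene_lines, n_stages):
    stages = [0] * n_stages          # charge packet held at each stage
    output = []
    for t in range(len(scene_lines) + n_stages - 1):
        for j in range(n_stages):
            i = t - j                # scene line over stage j at clock t
            if 0 <= i < len(scene_lines):
                stages[j] += scene_lines[i]
        if t >= n_stages - 1:        # last stage has integrated n times
            output.append(stages[-1])
        stages = [0] + stages[:-1]   # shift charge with the motion
    return output

# Each output line carries n_stages times the single-row signal.
print(tdi_readout([1, 2, 3], n_stages=2))  # [2, 4, 6]
```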
Image noise is random variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that obscures the desired information.
A telecentric lens is a compound lens that has its entrance or exit pupil at infinity; in the former case, this produces an orthographic view of the subject. This means that the chief rays are parallel to the optical axis in front of or behind the system, respectively. The simplest way to make a lens telecentric is to put the aperture stop at one of the lens's focal points.
An image sensor or imager is a sensor that detects and conveys information used to make an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, digital imaging tends to replace analog imaging.
Image stabilization (IS) is a family of techniques that reduce blurring associated with the motion of a camera or other imaging device during exposure.
A tactile sensor is a device that measures information arising from physical interaction with its environment. Tactile sensors are generally modeled after the biological sense of cutaneous touch which is capable of detecting stimuli resulting from mechanical stimulation, temperature, and pain. Tactile sensors are used in robotics, computer hardware and security systems. A common application of tactile sensors is in touchscreen devices on mobile phones and computing.
The Nikon Expeed image/video processors are media processors for Nikon's digital cameras. They perform a large number of tasks: Bayer filtering, demosaicing, image sensor corrections/dark-frame subtraction, image noise reduction, image sharpening, image scaling, gamma correction, image enhancement/Active D-Lighting, colorspace conversion, chroma subsampling, framerate conversion, lens distortion/chromatic aberration correction, image compression/JPEG encoding, video compression, display/video interface driving, digital image editing, face detection, audio processing/compression/encoding and computer data storage/data transmission.
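As an example of one of the listed steps, gamma correction maps each normalized pixel value through a power law; a minimal sketch assuming 8-bit values and a conventional display gamma of 2.2 (not Nikon's actual pipeline):

```python
# Gamma correction sketch: normalize each 8-bit pixel to [0, 1], raise it
# to 1/gamma (brightening midtones for display), and rescale to 8 bits.
# Gamma 2.2 is a common display assumption, not a vendor-specific value.

def gamma_correct(pixels, gamma=2.2):
    return [round(255 * (p / 255) ** (1 / gamma)) for p in pixels]

print(gamma_correct([0, 64, 128, 255]))
```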