A lenslet is a small lens. What distinguishes a lenslet from an ordinary small lens is that it forms part of a lenslet array: a set of lenslets arranged in the same plane, each normally having the same focal length.
Lenslets have many uses. One of their key applications is in integral imaging and light field displays.
Lenslets are commonly found in Shack–Hartmann wavefront sensors and beam homogenization optics for projection systems.[1]
In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. By incorporating index of refraction in its definition, NA has the property that it is constant for a beam as it goes from one material to another, provided there is no refractive power at the interface. The exact definition of the term varies slightly between different areas of optics. Numerical aperture is commonly used in microscopy to describe the acceptance cone of an objective, and in fiber optics, in which it describes the range of angles within which light that is incident on the fiber will be transmitted along it.
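As a concrete illustration of the objective-lens definition (the numbers below are chosen for the example, not taken from this article), the numerical aperture is the product of the refractive index of the working medium and the sine of the half-angle of the accepted cone:

```latex
% Numerical aperture of an objective: n is the refractive index of the
% medium in which the lens works, \theta the half-angle of the maximum
% cone of light the lens can accept or emit.
\[
  \mathrm{NA} = n \sin\theta
\]
% Example: in air (n \approx 1.00), an objective accepting a cone with
% half-angle \theta = 30^\circ has \mathrm{NA} \approx 1.00 \times \sin 30^\circ = 0.5.
```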
A dioptre or diopter is a unit of measurement with dimension of reciprocal length, equivalent to one reciprocal metre, 1 dioptre = 1 m⁻¹. It is normally used to express the optical power of a lens or curved mirror, which is a physical quantity equal to the reciprocal of the focal length, expressed in metres. For example, a 3-dioptre lens brings parallel rays of light to focus at 1⁄3 metre. A flat window has an optical power of zero dioptres, as it does not cause light to converge or diverge. Dioptres are also sometimes used for other reciprocals of distance, particularly radii of curvature and the vergence of optical beams.
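The 3-dioptre example above is simply the reciprocal relation between power and focal length written out:

```latex
\[
  P = \frac{1}{f}
  \quad\Longrightarrow\quad
  f = \frac{1}{P} = \frac{1}{3\ \mathrm{m^{-1}}} = \tfrac{1}{3}\ \mathrm{m} \approx 0.33\ \mathrm{m}
\]
```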
Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of incoming wavefront distortions, typically by deforming a mirror to compensate for the distortion. It is used in astronomical telescopes and laser communication systems to remove the effects of atmospheric distortion, in microscopy, in optical fabrication, and in retinal imaging systems to reduce optical aberrations. Adaptive optics works by measuring the distortions in a wavefront and compensating for them with a device that corrects those errors, such as a deformable mirror or a liquid crystal array.
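The measure-and-compensate loop described above can be sketched in a few lines. This is a schematic illustration only, not the software of any particular instrument; the interaction matrix, reconstructor, gain, and array sizes are made-up placeholders:

```python
import numpy as np

# Schematic adaptive-optics feedback loop: a wavefront sensor produces
# slope measurements, a precomputed reconstruction matrix converts them
# into actuator commands, and the deformable mirror (DM) is driven to
# cancel the measured distortion.

rng = np.random.default_rng(0)
n_slopes, n_actuators = 32, 16

# Hypothetical calibration data: slopes produced per unit actuator poke,
# with its pseudo-inverse used as the reconstructor.
interaction = rng.normal(size=(n_slopes, n_actuators))
reconstructor = np.linalg.pinv(interaction)

true_aberration = rng.normal(size=n_actuators)  # unknown distortion (actuator space)
dm_commands = np.zeros(n_actuators)
gain = 0.5                                      # integrator gain of the loop

for step in range(20):
    residual = true_aberration + dm_commands        # wavefront left after correction
    slopes = interaction @ residual                 # what the sensor measures
    dm_commands -= gain * (reconstructor @ slopes)  # push the mirror against the error

print("residual RMS:", np.linalg.norm(true_aberration + dm_commands))
```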
In photography and optics, vignetting is a reduction of an image's brightness or saturation toward the periphery compared to the image center. The word vignette, from the same root as vine, originally referred to a decorative border in a book. Later, the word came to be used for a photographic portrait that is clear at the center and fades off toward the edges. A similar effect is visible in photographs of projected images or videos off a projection screen, resulting in a so-called "hotspot" effect.
NuSTAR is a NASA space-based X-ray telescope that uses a conical approximation to a Wolter telescope to focus high energy X-rays from astrophysical sources, especially for nuclear spectroscopy, and operates in the range of 3 to 79 keV.
In an optical system, the entrance pupil is the optical image of the physical aperture stop, as 'seen' through the front of the lens system. The corresponding image of the aperture as seen through the back of the lens system is called the exit pupil. If there is no lens in front of the aperture, the entrance pupil's location and size are identical to those of the aperture. Optical elements in front of the aperture will produce a magnified or diminished image that is displaced from the location of the physical aperture. The entrance pupil is usually a virtual image: it lies behind the first optical surface of the system.
In optics, optical power is the degree to which a lens, mirror, or other optical system converges or diverges light. It is equal to the reciprocal of the focal length of the device: P = 1/f. High optical power corresponds to short focal length. The SI unit for optical power is the inverse metre (m⁻¹), which is commonly called the dioptre.
An image sensor or imager is a sensor that detects and conveys information used to form an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.
In Gaussian optics, the cardinal points consist of three pairs of points located on the optical axis of a rotationally symmetric, focal, optical system. These are the focal points, the principal points, and the nodal points. For ideal systems, the basic imaging properties such as image size, location, and orientation are completely determined by the locations of the cardinal points; in fact only four points are necessary: the focal points and either the principal or nodal points. The only ideal system that has been achieved in practice is the plane mirror; however, the cardinal points are widely used to approximate the behavior of real optical systems. Cardinal points provide a way to analytically simplify a system with many components, allowing the imaging characteristics of the system to be approximately determined with simple calculations.
A wavefront curvature sensor is a device for measuring the aberrations of an optical wavefront. Like a Shack–Hartmann wavefront sensor, it uses an array of small lenses to focus the wavefront into an array of spots. Unlike the Shack–Hartmann, which measures the position of the spots, the curvature sensor measures the intensity on either side of the focal plane. If a wavefront has a phase curvature, it will shift the focal spot along the axis of the beam; thus, by measuring the relative intensities at two places, the curvature can be deduced.
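A minimal sketch of that intensity comparison, assuming one intensity sample before focus and one after focus per measurement point (the helper function and numbers are hypothetical, not taken from this article):

```python
import numpy as np

# Curvature-sensing signal: local wavefront curvature shifts the focal
# spot along the beam axis, so the intra-focal plane brightens while the
# extra-focal plane dims (or vice versa). The normalized difference of
# the two intensities is therefore used as the curvature signal.

def curvature_signal(i_before, i_after):
    """Normalized intensity difference per measurement point (illustrative)."""
    i_before = np.asarray(i_before, dtype=float)
    i_after = np.asarray(i_after, dtype=float)
    return (i_before - i_after) / (i_before + i_after)

# Example: three points; only the middle one sees a curved wavefront patch.
print(curvature_signal([1.0, 1.4, 1.0], [1.0, 0.6, 1.0]))  # -> [0.  0.4 0. ]
```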
A microlens is a small lens, generally with a diameter less than a millimetre (mm) and often as small as 10 micrometres (µm). The small size of the lenses means that a simple design can give good optical quality, but sometimes unwanted effects arise due to optical diffraction at the small features. A typical microlens may be a single element with one plane surface and one spherical convex surface to refract the light. Because microlenses are so small, the substrate that supports them is usually thicker than the lens, and this has to be taken into account in the design. More sophisticated lenses may use aspherical surfaces, and others may use several layers of optical material to achieve their design performance.
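To give a feel for the diffraction effects mentioned above, a rough estimate can use the standard Airy-disc diameter for a circular aperture (the specific lens dimensions below are illustrative assumptions, not values from this article):

```python
# Diffraction-limited spot size of a small lens: the Airy-disc diameter
# scales as 2.44 * wavelength * focal_length / aperture_diameter, so for
# a microlens the spot can be a noticeable fraction of the aperture.

def airy_spot_diameter(wavelength_m, focal_length_m, aperture_m):
    """Diameter of the Airy disc (out to the first dark ring), in metres."""
    return 2.44 * wavelength_m * focal_length_m / aperture_m

# Example: a 100 µm diameter microlens with 300 µm focal length at 550 nm.
print(airy_spot_diameter(550e-9, 300e-6, 100e-6))  # about 4.0e-06 m, i.e. ~4 µm
```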
Optics Software for Layout and Optimization (OSLO) is an optical design program originally developed at the University of Rochester in the 1970s. The first commercial version was produced in 1976 by Sinclair Optics. Since then, OSLO has been rewritten several times as computer technology has advanced. In 1993, Sinclair Optics acquired the GENII program for optical design, and many of the features of GENII are now included in OSLO. Lambda Research Corporation purchased the program from Sinclair Optics in 2001.
Precision glass moulding is a replicative process that allows the production of high precision optical components from glass without grinding and polishing. The process is also known as ultra-precision glass pressing. It is used to manufacture precision glass lenses for consumer products such as digital cameras, and high-end products like medical systems. The main advantage over mechanical lens production is that complex lens geometries such as aspheres can be produced cost-efficiently.
A compound refractive lens (CRL) is a series of individual lenses arranged in a linear array in order to achieve focusing of X-rays in the energy range of 5–40 keV. They are an alternative to the KB (Kirkpatrick–Baez) mirror.
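A commonly quoted thin-lens estimate for the focal length of such a stack, given here as a standard textbook approximation rather than a formula from this article, shows why a linear array of many lenses is needed:

```latex
% Thin-lens estimate for a stack of N identical parabolic X-ray lenses
% with apex radius of curvature R, made of a material with refractive
% index n = 1 - \delta, where \delta is the (very small) refractive
% index decrement at hard X-ray energies:
\[
  f \approx \frac{R}{2 N \delta}
\]
% Because \delta is of order 10^{-6}, many lenses (large N) are needed
% to reach a practical focal length, hence the linear array.
```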
A METATOY is a sheet, formed by a two-dimensional array of small, telescopic optical components, that switches the path of transmitted light rays. METATOY is an acronym for "metamaterial for rays", representing a number of analogies with metamaterials; METATOYs even satisfy a few definitions of metamaterials, but are certainly not metamaterials in the usual sense. When seen from a distance, the view through each individual telescopic optical component acts as one pixel of the view through the METATOY as a whole. In the simplest case, the individual optical components are all identical; the METATOY then behaves like a homogeneous, but pixellated, window that can have very unusual optical properties.
Integral field spectrographs (IFS) combine spectrographic and imaging capabilities in the optical or infrared wavelength domains (0.32 μm – 24 μm) to obtain, from a single exposure, spatially resolved spectra over a two-dimensional region. Developed at first for the study of astronomical objects, this technique is now also used in many other fields, such as biomedical science and Earth remote sensing, usually under the name of snapshot hyperspectral imaging.
Snapshot hyperspectral imaging is a method for capturing hyperspectral images during a single integration time of a detector array. No scanning is involved, and the absence of moving parts means that motion artifacts should be avoided. Such instruments typically feature detector arrays with a large number of pixels.
A holographic optical element (HOE) is an optical component (mirror, lens, directional diffuser, etc.) that produces holographic images using principles of diffraction. HOEs are most commonly used in transparent displays, 3D imaging, and certain scanning technologies. The shape and structure of an HOE depend on the piece of hardware it is needed for, and coupled wave theory is a common tool for calculating the diffraction efficiency or grating volume that guides the design of an HOE. Early concepts of the holographic optical element can be traced back to the mid-1900s, coinciding closely with the invention of holography by Dennis Gabor. The application to 3D visualization and displays is ultimately the end goal of the HOE; however, the cost and complexity of the devices have hindered rapid development toward full 3D visualization. HOEs are also used in the development of augmented reality (AR) by companies such as Google with Google Glass, and by research universities that look to use HOEs to create 3D imaging without the use of eyewear or headwear. Furthermore, the ability of HOEs to enable transparent displays has caught the attention of the US military in its development of better head-up displays (HUDs), which are used to present crucial information to aircraft pilots.
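As an illustration of how coupled wave theory enters the design, Kogelnik's classic result for a lossless volume phase transmission grating read out at the Bragg angle is often used to predict diffraction efficiency (quoted here as a standard textbook expression, not as a formula given in this article):

```latex
% Diffraction efficiency of a lossless volume phase transmission grating
% at Bragg incidence (Kogelnik's coupled wave theory): \Delta n is the
% index modulation, d the grating thickness, \lambda the wavelength, and
% \theta the Bragg angle inside the medium.
\[
  \eta = \sin^{2}\!\left( \frac{\pi\, \Delta n\, d}{\lambda \cos\theta} \right)
\]
```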
A 3D display is multiscopic if it projects more than two images out into the world, unlike conventional 3D stereoscopy, which simulates a 3D scene by displaying only two different views of it, each visible to only one of the viewer's eyes. Multiscopic displays can represent the subject as viewed from a series of locations, and allow each image to be visible only from a range of eye locations narrower than the average human interocular distance of 63 mm. As a result, not only does each eye see a different image, but different pairs of images are seen from different viewing locations.