Dual photography is a photographic technique that uses Helmholtz reciprocity to capture the light field of all light paths from a structured illumination source to a camera. [1] Image processing software can then be used to reconstruct the scene as it would have been seen from the viewpoint of the projector.
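Helmholtz reciprocity lets the light-transport relationship between projector and camera be "transposed." As a minimal sketch (using a toy NumPy transport matrix, not real measured data), if T maps projector pixels to camera pixels, then the dual image seen from the projector's viewpoint is obtained from the transpose of T:

```python
import numpy as np

# Toy light-transport matrix T: T[i, j] is how much light from
# projector pixel j reaches camera pixel i (values are illustrative).
rng = np.random.default_rng(0)
n_cam, n_proj = 6, 4
T = rng.random((n_cam, n_proj))

# Primal image: the camera's view under a projector illumination pattern p.
p = np.ones(n_proj)          # projector fully on
camera_image = T @ p

# Dual image: by Helmholtz reciprocity the roles swap, so the scene as
# "seen" from the projector under camera-side illumination c is T^T c.
c = np.ones(n_cam)
dual_image = T.T @ c
```

In practice T is measured one column at a time by illuminating the scene with structured patterns; the transpose step itself is exactly this simple.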
The Helmholtz reciprocity principle describes how a ray of light and its reverse ray undergo matched optical interactions, such as reflections, refractions, and absorptions, in a passive medium or at an interface. It does not apply to moving, non-linear, or magnetic media.
The light field is a vector function that describes the amount of light flowing in every direction through every point in space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by the radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working for several years. The phrase light field was coined by Andrey Gershun in a classic paper on the radiometric properties of light in three-dimensional space (1936).
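The five-dimensional plenoptic function mentioned above can be written explicitly; in this sketch, radiance is parameterized by a position in space and a direction:

```latex
L = L(x, y, z, \theta, \phi)
```

In regions free of occluders, radiance is constant along a ray, so the 5D plenoptic function reduces to the 4D light field commonly parameterized as $L(u, v, s, t)$, the form used by light field cameras.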
Depth of field is the distance between the nearest and the furthest objects that are in acceptably sharp focus in an image. The depth of field is determined by focal length, distance to subject, the acceptable circle of confusion size, and aperture. A particular depth of field may be chosen for technical or artistic purposes. Some post-processing methods, such as focus stacking allow extended depth of field that would be impossible with traditional techniques.
Photography is the art, application and practice of creating durable images by recording light or other electromagnetic radiation, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as photographic film. It is employed in many fields of science, manufacturing, and business, as well as its more direct uses for art, film and video production, recreational purposes, hobby, and mass communication.
A camera is an optical instrument to capture still images or to record moving images, which are stored in a physical medium, either in a digital system or on photographic film. A camera consists of a lens which focuses light from the scene, and a camera body which holds the image capture mechanism.
A pinhole camera is a simple camera without a lens but with a tiny aperture, a pinhole – effectively a light-proof box with a small hole in one side. Light from a scene passes through the aperture and projects an inverted image on the opposite side of the box, which is known as the camera obscura effect.
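The inverted projection follows from similar triangles: rays through the pinhole cross, so an object's image is scaled by the ratio of pinhole-to-screen distance to object distance, and flipped. A minimal sketch (the function name is illustrative):

```python
# Pinhole projection by similar triangles: an object of height h at
# distance d_object projects an image of height h * d_image / d_object
# on the far wall, inverted (hence the negative sign).
def pinhole_image_height(h_object, d_object, d_image):
    return -h_object * d_image / d_object

# A 2 m tall subject 10 m away, box depth 0.1 m -> a 2 cm inverted image.
h = pinhole_image_height(h_object=2.0, d_object=10.0, d_image=0.1)
```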
Astrophotography is photography of astronomical objects, celestial events, and areas of the night sky. The first photograph of an astronomical object was taken in 1840, but it was not until the late 19th century that advances in technology allowed for detailed stellar photography. Besides being able to record the details of extended objects such as the Moon, Sun, and planets, astrophotography has the ability to image objects invisible to the human eye such as dim stars, nebulae, and galaxies. This is done by long exposure times, since both film and digital cameras can accumulate and sum light photons over these long periods.
In photography, shutter speed or exposure time is the length of time that the film or digital sensor inside the camera is exposed to light, that is, the time the camera's shutter is open when taking a photograph. The amount of light that reaches the film or image sensor is proportional to the exposure time: an exposure of 1⁄500 of a second lets in half as much light as 1⁄250.
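Because admitted light is proportional to exposure time, the difference between two shutter speeds in photographic "stops" is a base-2 logarithm of their ratio. A small sketch (the function name is illustrative):

```python
import math

# Each stop doubles or halves the light, so the number of stops between
# two exposure times is log2 of their ratio.
def stops_between(t_from, t_to):
    return math.log2(t_to / t_from)

# Going from 1/250 s to 1/500 s admits half the light: one stop less.
one_stop = stops_between(1 / 250, 1 / 500)
```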
In photography, bokeh is the aesthetic quality of the blur produced in the out-of-focus parts of an image produced by a lens. Bokeh has been defined as "the way the lens renders out-of-focus points of light". Differences in lens aberrations and aperture shape cause some lens designs to blur the image in a way that is pleasing to the eye, while others produce blurring that is unpleasant or distracting. Bokeh occurs for parts of the scene that lie outside the depth of field. Photographers sometimes deliberately use a shallow focus technique to create images with prominent out-of-focus regions.
A camera lens is an optical lens or assembly of lenses used in conjunction with a camera body and mechanism to make images of objects either on photographic film or on other media capable of storing an image chemically or electronically.
Cinematography is the science or art of motion-picture photography by recording light or other electromagnetic radiation, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as film stock.
Underwater photography is the process of taking photographs while under water. It is usually done while scuba diving, but can be done while diving on surface supply, snorkeling, swimming, from a submersible or remotely operated underwater vehicle, or from automated cameras lowered from the surface.
Panoramic photography is a technique of photography, using specialized equipment or software, that captures images with horizontally elongated fields of view. It is sometimes known as wide format photography. The term has also been applied to a photograph that is cropped to a relatively wide aspect ratio, like the familiar letterbox format in wide-screen video.
Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three dimensional scene information which can then be used to produce 3D images, enhanced depth-of-field, and selective de-focusing. Enhanced depth-of-field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
In photography, stopping down refers to increasing the numerical f-stop number, which decreases the size (diameter) of the aperture of a lens and thereby reduces the amount of light passing through it.
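The relationship is geometric: the f-number N equals focal length divided by aperture diameter, so light-gathering area scales as (f/N)², and each one-stop increase multiplies N by √2, roughly halving the light. A sketch under those assumptions:

```python
import math

# f-number N = focal length / aperture diameter, so the aperture's
# light-gathering area is pi * (f / (2N))^2.
def aperture_area(f_mm, n):
    d = f_mm / n                      # aperture diameter in mm
    return math.pi * (d / 2) ** 2

# Stopping down one stop, e.g. f/4 -> f/5.6, admits about half the light.
# (Nominal f-numbers are rounded, so the ratio is ~0.51, not exactly 0.5.)
ratio = aperture_area(50, 5.6) / aperture_area(50, 4.0)
```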
The science of photography refers to the use of science, such as chemistry and physics, in all aspects of photography. This applies to the camera, its lenses, physical operation of the camera, electronic camera internals, and the process of developing film in order to take and develop pictures properly.
Aperture priority, often abbreviated A or Av on a camera mode dial, is a setting on some cameras that allows the user to set a specific aperture value (f-number) while the camera selects a shutter speed to match it that will result in proper exposure based on the lighting conditions as measured by the camera's light meter. This is different from manual mode, where the user must decide both values, shutter priority where the user picks a shutter speed with the camera selecting an appropriate aperture, or program mode where the camera selects both.
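The camera's side of that bargain can be sketched with the standard exposure equation, EV = log2(N² / t) at base ISO: given a metered exposure value and the user's f-number, it solves for the shutter time. (A simplified sketch that ignores ISO and the camera's rounding to standard shutter speeds; the function name is illustrative.)

```python
# Aperture priority: the user fixes the f-number n; the camera solves
# EV = log2(n^2 / t) for the shutter time t (in seconds) at ISO 100.
def shutter_for(ev, n):
    return n ** 2 / 2 ** ev

# A bright-ish scene metered at EV 12, shot at f/8 -> about 1/64 s.
t = shutter_for(ev=12, n=8)
```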
A light field camera, also known as plenoptic camera, captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the direction that the light rays are traveling in space. This contrasts with a conventional camera, which records only light intensity.
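Recording ray direction as well as intensity is what enables refocusing after capture. A common conceptual algorithm is shift-and-add synthetic-aperture refocusing: each sub-aperture view is shifted in proportion to its offset from the center, then the views are averaged. A toy NumPy sketch on a synthetic 4D light field (array layout and names are illustrative):

```python
import numpy as np

# Shift-and-add refocusing of a 4D light field L[u, v, s, t], where
# (u, v) index sub-aperture views and (s, t) index pixels in each view.
def refocus(lf, slope):
    n_u, n_v, n_s, n_t = lf.shape
    out = np.zeros((n_s, n_t))
    for u in range(n_u):
        for v in range(n_v):
            # Shift each view proportionally to its offset from the
            # central view; 'slope' selects the virtual focal plane.
            du = int(round(slope * (u - n_u // 2)))
            dv = int(round(slope * (v - n_v // 2)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (n_u * n_v)

lf = np.random.default_rng(1).random((3, 3, 8, 8))
image = refocus(lf, slope=1.0)
```

With slope 0 this degenerates to a plain average of the views; varying the slope sweeps the focal plane through the scene.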
The following outline is provided as an overview of and topical guide to photography:
Landscape photography shows spaces within the world, sometimes vast and unending, but other times microscopic. Landscape photographs typically capture the presence of nature but can also focus on man-made features or disturbances of landscapes.
Afocal photography, also called afocal imaging or afocal projection is a method of photography where the camera with its lens attached is mounted over the eyepiece of another image forming system such as an optical telescope or optical microscope, with the camera lens taking the place of the human eye.
Lytro, Inc. was an American company founded in 2006 by Ren Ng which developed light-field cameras. Lytro began shipping its first generation pocket-sized camera, capable of refocusing images after they were taken, in 8 GB and 16 GB versions on February 29, 2012. In April 2014, the company announced Lytro Illum, its second generation camera for commercial and experimental photographers. The Lytro Illum was released at $1,600. The Illum has a permanently attached 30–250mm f/2.0 lens and an articulated rear screen. In the fall of 2015, Lytro changed direction, announcing Immerge, a very-high-end VR video capture camera with a companion custom compute server. Immerge was expected to ship in 2016 and be useful to studios trying to combine CGI-based VR with video VR.