Polynomial texture mapping (PTM), also known as Reflectance Transformation Imaging (RTI), is a technique of imaging and interactively displaying objects under varying lighting conditions to reveal surface phenomena. The data acquisition method is Single Camera Multi Light (SCML). [1]
The method was originally developed by Tom Malzbender of HP Labs in order to generate enhanced 3D computer graphics and it has since been adopted for cultural heritage applications. [2]
A series of images is captured in a darkened environment with the camera in a fixed position and the object lit from different angles (Single Camera Multi Light). Interactive software processes and combines the set of images, allowing a user inspecting the object to control a virtual light source. [2] The virtual light source may be manipulated to simulate light from different angles and of different intensities or wavelengths, illuminating the surface of artefacts to reveal details. [2] [3] Open-source tools for processing the captured images and publishing the resulting relightable images on the web are freely available. [4]
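Under the hood, a PTM stores six coefficients per pixel of a biquadratic polynomial that models luminance as a function of the projected light-direction components (lu, lv); relighting is then just polynomial evaluation. The sketch below, assuming NumPy, shows the per-pixel least-squares fit and evaluation; the function names and data layout are illustrative, not taken from any published tool.

```python
import numpy as np

def fit_ptm(light_dirs, luminances):
    """Fit per-pixel PTM coefficients a0..a5 so that
    L(lu, lv) ~= a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.

    light_dirs: (n, 2) array of (lu, lv) light-direction components
    luminances: (n,) observed luminance at one pixel across the n captures
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    basis = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, luminances, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted polynomial for a new virtual light direction."""
    return coeffs @ np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
```

The fit is repeated independently for every pixel, which is why capture requires many images under known light directions (for example, the twenty-four dome lights described below).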
Polynomial texture mapping may be used for detailed recording and documentation, 3D modeling, edge detection, and to aid the study of inscriptions, rock art [5] and other artefacts. [3] [6] It has been applied to hundreds of the Vindolanda tablets by the Centre for the Study of Ancient Documents at the University of Oxford in conjunction with the British Museum. [7] It has also been deployed, by Ben Altshuler of the Institute for Digital Archaeology, to scan the Philae obelisk at Kingston Lacy and the Parian Chronicle at the Ashmolean Museum; in both cases scans revealed significant, previously illegible text. [8] [9] [10] The technique has also been used to identify microscopically worked antler from Star Carr and to record ancient rock art in Armenia. [11]
A 'dome' supporting twenty-four lights has been used to image paintings in the National Gallery and produce polynomial texture maps, providing information on condition phenomena for conservation purposes. [12] Studies of the technique at the National Gallery and Tate concluded that it is an effective tool for documenting changes in the condition of paintings, more easily repeatable than raking light photography, and therefore could be used to assess paintings during structural treatment and before and after loan. [13] Twelve dome-based systems built by the University of Southampton have been used to capture thousands of cuneiform tablets at various museums. [14] [15] [16]
The technique is now also finding uses in the field of forensic science, for example in imaging footprints, tyre marks, and indented writing.
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as a rendering. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, textures, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.
Texture mapping is a method for mapping a texture on a computer-generated graphic. "Texture" in this context can be high frequency detail, surface texture, or color.
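As a minimal illustration of the mapping step, the sketch below (plain Python, illustrative names) looks up a texel by nearest-neighbour sampling of normalized (u, v) coordinates:

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour lookup: map (u, v) in [0, 1] to a texel.

    texture: 2D list of texel values, indexed as texture[row][column]
    """
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)  # clamp so u == 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]
```

Real renderers typically use bilinear or trilinear filtering with mipmaps rather than nearest-neighbour lookup, but the coordinate mapping is the same.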
Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "ray casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978–1980. His paper, "Ray Casting for Modeling Solids", describes modeling solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. In 1979, Roth's ray casting system was used to model a U-Joint from cylinders and blocks combined in a binary tree.
2.5D perspective refers to gameplay or movement in a video game or virtual reality environment that is restricted to a two-dimensional (2D) plane with little to no access to a third dimension in a space that otherwise appears to be three-dimensional and is often simulated and rendered in a 3D digital environment.
In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
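Two of the simplest such techniques can be sketched directly on a NumPy array: maximum intensity projection (keep the brightest sample along each ray) and front-to-back emission/absorption compositing (accumulate colour weighted by the remaining transmittance). A hedged sketch with illustrative names:

```python
import numpy as np

def max_intensity_projection(volume, axis=2):
    """Project a 3D scalar field to a 2D image by keeping the maximum
    sample along each axis-aligned ray (MIP)."""
    return volume.max(axis=axis)

def alpha_composite(colors, alphas):
    """Front-to-back compositing of the samples along one ray:
    C = sum_i c_i * a_i * prod_{j<i} (1 - a_j)."""
    out, transmittance = 0.0, 1.0
    for c, a in zip(colors, alphas):
        out += transmittance * a * c
        transmittance *= 1.0 - a
    return out
```

Production volume renderers cast rays in arbitrary directions with interpolated sampling and transfer functions; the axis-aligned projection above is only the simplest case.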
Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion.
Subsurface scattering (SSS), also known as subsurface light transport (SSLT), is a mechanism of light transport in which light that penetrates the surface of a translucent object is scattered by interacting with the material and exits the surface, potentially at a different point. Light generally penetrates the surface and is scattered a number of times at irregular angles inside the material before passing back out at a different angle than it would have had if it had been reflected directly off the surface.
In computer graphics, reflection mapping or environment mapping is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture. The texture is used to store the image of the distant environment surrounding the rendered object.
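The lookup reduces to reflecting the view direction about the surface normal, r = d - 2(d·n)n, and converting r to texture coordinates. For an equirectangular (latitude/longitude) environment map this can be sketched as follows; the code is plain Python, and the names and axis conventions are illustrative:

```python
import math

def reflect(d, n):
    """Reflect view direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def latlong_uv(r):
    """Map a direction to (u, v) in an equirectangular environment map,
    with y up and -z forward (one common convention)."""
    x, y, z = r
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)   # longitude
    v = math.acos(y) / math.pi                    # latitude (0 = straight up)
    return u, v
```

The same reflected vector can instead index a cube map, which avoids the pole distortion of the latitude/longitude layout.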
3D scanning is the process of analyzing a real-world object or environment to collect three dimensional data of its shape and possibly its appearance. The collected data can then be used to construct digital 3D models.
Spectral imaging is imaging that uses multiple bands across the electromagnetic spectrum. While an ordinary camera captures light across three wavelength bands in the visible spectrum, red, green, and blue (RGB), spectral imaging encompasses a wide variety of techniques that go beyond RGB. Spectral imaging may use the infrared, the visible spectrum, the ultraviolet, x-rays, or some combination of the above. It may include the acquisition of image data in visible and non-visible bands simultaneously, illumination from outside the visible range, or the use of optical filters to capture a specific spectral range. It is also possible to capture hundreds of wavelength bands for each pixel in an image.
Shadow mapping or shadowing projection is a process by which shadows are added to 3D computer graphics. This concept was introduced by Lance Williams in 1978, in a paper entitled "Casting curved shadows on curved surfaces." Since then, it has been used both in pre-rendered and realtime scenes in many console and PC games.
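The algorithm has two passes: render the scene from the light's point of view, keeping only the nearest depth per texel, then, when shading, compare each point's light-space depth against that map (with a small bias to avoid self-shadowing artifacts). A minimal sketch with illustrative names, assuming NumPy:

```python
import numpy as np

def build_shadow_map(occluders, size):
    """Pass 1: 'render' from the light, keeping the nearest depth per texel.

    occluders: list of (x, y, depth) sample points in light space
    """
    shadow_map = np.full((size, size), np.inf)
    for x, y, depth in occluders:
        shadow_map[y, x] = min(shadow_map[y, x], depth)
    return shadow_map

def is_lit(shadow_map, x, y, depth, bias=1e-3):
    """Pass 2: a point is lit if nothing in the map is closer to the light."""
    return depth <= shadow_map[y, x] + bias
```

Real implementations rasterize the depth pass on the GPU and filter multiple map samples per pixel (percentage-closer filtering) to soften the shadow edges.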
3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.
An outline of photography serves as an overview of and topical guide to the subject.
Range imaging is the name for a collection of techniques that are used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device.
Inpainting is a conservation process where damaged, deteriorated, or missing parts of an artwork are filled in to present a complete image. This process is commonly used in image restoration. It can be applied to both physical and digital art mediums such as oil or acrylic paintings, chemical photographic prints, sculptures, or digital images and video.
Computer graphics deals with generating images and art with the aid of computers. Computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.
A digital outcrop model (DOM), also called a virtual outcrop model, is a digital 3D representation of the outcrop surface, mostly in a form of textured polygon mesh.
This is a glossary of terms relating to computer graphics.
Cultural property imaging is a necessary part of long term preservation of cultural heritage. While the physical conditions of objects will change over time, imaging serves as a way to document and represent heritage in a moment in time of the life of the item. Different methods of imaging produce results that are applicable in various circumstances. Not every method is appropriate for every object, and not every object needs to be imaged by multiple methods. In addition to preservation and conservation-related concerns, imaging can also serve to enhance research and study of cultural heritage.
Digital archaeology is the application of information technology and digital media to archaeology. It includes the use of digital photography, 3D reconstruction, virtual reality, and geographical information systems, among other techniques. Computational archaeology, which covers computer-based analytical methods, can be considered a subfield of digital archaeology, as can virtual archaeology.