Neil Dodgson | |
---|---|
Born | Neil Anthony Dodgson, 1966 (age 56–57) |
Alma mater | Massey University; University of Cambridge |
Scientific career | |
Fields | Computer graphics |
Institutions | University of Cambridge |
Thesis | Image resampling (1992) |
Doctoral advisor | Neil Wiseman [2] |
Website | www |
Neil Anthony Dodgson is Professor of Computer Graphics at the Victoria University of Wellington. He was previously (until 2016) Professor of Graphics and Imaging in the Computer Laboratory at the University of Cambridge in England, where he worked in the Rainbow Group on computer graphics and interaction. [3] [1]
Dodgson graduated with a Bachelor of Science degree in Computer Science and Physics from Massey University in 1988 and subsequently worked there as a Junior Lecturer in Computer Science for one year. [4] He was awarded a Cambridge Commonwealth Trust Prince of Wales Scholarship to study at the University of Cambridge, where he worked on image resampling under the supervision of Neil Wiseman, graduating with a PhD in 1992. [5]
Dodgson worked for many years on stereoscopic 3D displays, conducting research principally into autostereoscopic methods. He has contributed to several surveys of the field [6] [7] [8] and has been on the committee of the annual Stereoscopic Displays and Applications conference since 2000, co-chairing the conference four times. [9]
With Malcolm Sabin, Dodgson has worked on subdivision surfaces since 2000. Dodgson's team produced the NURBS-compatible subdivision method in 2009. [10]
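Subdivision surfaces are built by repeatedly refining a coarse control mesh or polygon. As a minimal illustration of the general principle only (a generic curve scheme, not the NURBS-compatible method itself), the sketch below applies Chaikin's corner-cutting subdivision to a closed 2D polygon using NumPy:

```python
import numpy as np

def chaikin_step(points):
    """One round of Chaikin corner cutting on a closed 2D polygon.
    Repeating this refinement converges to a quadratic B-spline curve."""
    nxt = np.roll(points, -1, axis=0)   # successor of each vertex, with wrap-around
    q = 0.75 * points + 0.25 * nxt      # point 1/4 of the way along each edge
    r = 0.25 * points + 0.75 * nxt      # point 3/4 of the way along each edge
    refined = np.empty((2 * len(points), 2))
    refined[0::2] = q                   # interleave the new points
    refined[1::2] = r
    return refined

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
curve = square
for _ in range(4):                      # each round doubles the vertex count
    curve = chaikin_step(curve)
```

Surface schemes such as Catmull–Clark apply the same idea to polygon meshes, generating new face, edge, and vertex points with fixed weights at each refinement step.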
Dodgson has supervised almost twenty research students for PhDs. [11]
Dodgson also takes an interest in abstract art. [12]
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output. The rendering equation, an integral equation describing light transport, underpins physically based rendering and is very computationally demanding to solve.
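In its standard form (Kajiya's formulation), the rendering equation expresses the radiance leaving a surface point $\mathbf{x}$ in direction $\omega_o$ as

$$
L_o(\mathbf{x},\omega_o) \;=\; L_e(\mathbf{x},\omega_o) \;+\; \int_{\Omega} f_r(\mathbf{x},\omega_i,\omega_o)\, L_i(\mathbf{x},\omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i ,
$$

where $L_e$ is emitted radiance, $f_r$ is the surface's BRDF, $L_i$ is incoming radiance, and $\mathbf{n}$ is the surface normal; the recursive integral over the hemisphere $\Omega$ of incoming directions is what makes the equation so expensive to evaluate.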
A point cloud is a discrete set of data points in space. The points may represent a 3D shape or object. Each point position has its set of Cartesian coordinates. Point clouds are generally produced by 3D scanners or by photogrammetry software, which measure many points on the external surfaces of objects around them. As the output of 3D scanning processes, point clouds are used for many purposes, including to create 3D computer-aided design (CAD) models for manufactured parts, for metrology and quality inspection, and for a multitude of visualizing, animating, rendering, and mass customization applications.
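As a minimal sketch (assuming a plain N × 3 NumPy array as the storage convention, which is an illustrative choice rather than any fixed standard), a point cloud can be represented and queried like this:

```python
import numpy as np

# A point cloud is an unordered set of 3D positions: one row per point.
# Random points stand in here for the output of a scanner or photogrammetry tool.
points = np.random.rand(10_000, 3)

centroid = points.mean(axis=0)      # average position of all points
bbox_min = points.min(axis=0)       # corners of the axis-aligned bounding box
bbox_max = points.max(axis=0)

print("centroid:", centroid)
print("bounding box:", bbox_min, "to", bbox_max)
```

Real point clouds often carry per-point attributes such as colour, intensity, or estimated normals, typically stored as extra columns or parallel arrays.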
Stereoscopy is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from Greek στερεός (stereos) 'firm, solid', and σκοπέω (skopeō) 'to look, to see'. Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope.
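The depth cue that stereoscopy exploits is binocular disparity. In the idealised textbook model of two rectified pinhole cameras (or eyes) with focal length $f$ and horizontal baseline $b$, a point at depth $Z$ appears with horizontal disparity

$$
d = \frac{f\,b}{Z}, \qquad\text{equivalently}\qquad Z = \frac{f\,b}{d},
$$

so nearer points have larger disparity. This is a simplified model that ignores vergence and lens accommodation.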
In 3D computer graphics, anisotropic filtering is a method of enhancing the image quality of textures on surfaces that are at oblique viewing angles with respect to the camera, where the projection of the texture onto the screen appears non-orthogonal.
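One common way to realise this is footprint assembly: take several isotropic samples spread along the major axis of the pixel's projected footprint in texture space and average them. The sketch below is a simplified, software-only illustration of that idea; the single-level bilinear sampler stands in for a real mipmapped trilinear lookup, and the function and variable names are illustrative, not any particular API's.

```python
import numpy as np

def sample_bilinear(tex, u, v):
    """Placeholder isotropic sampler: bilinear lookup in one texture level
    (a real implementation would sample a mipmap at an appropriate LOD)."""
    u, v = float(np.clip(u, 0.0, 1.0)), float(np.clip(v, 0.0, 1.0))
    h, w = tex.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def sample_anisotropic(tex, u, v, dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Average several isotropic samples along the major axis of the pixel's
    projected footprint, given the texture-coordinate screen derivatives."""
    len_x = np.hypot(dudx, dvdx)        # footprint extent along screen x
    len_y = np.hypot(dudy, dvdy)        # footprint extent along screen y
    if len_x >= len_y:
        axis, major, minor = (dudx, dvdx), len_x, len_y
    else:
        axis, major, minor = (dudy, dvdy), len_y, len_x
    # The degree of anisotropy decides how many samples to take (clamped).
    n = int(min(max_aniso, max(1, round(major / max(minor, 1e-8)))))
    offsets = (np.arange(n) + 0.5) / n - 0.5    # spread over [-0.5, 0.5)
    samples = [sample_bilinear(tex, u + t * axis[0], v + t * axis[1])
               for t in offsets]
    return np.mean(samples, axis=0)
```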
Martin Edward Newell is a British-born computer scientist specializing in computer graphics who is perhaps best known as the creator of the Utah teapot computer model.
A 3D display is a display device capable of conveying depth to the viewer. Many 3D displays are stereoscopic displays, which produce a basic 3D effect by means of stereopsis, but can cause eye strain and visual fatigue. Newer 3D displays such as holographic and light field displays produce a more realistic 3D effect by combining stereopsis and accurate focal length for the displayed content. Newer 3D displays in this manner cause less visual fatigue than classical stereoscopic displays.
A volumetric display device is a display device that forms a visual representation of an object in three physical dimensions, as opposed to the planar image of traditional screens that simulate depth through a number of different visual effects. One definition offered by pioneers in the field is that volumetric displays create 3D imagery via the emission, scattering, or relaying of illumination from well-defined regions in (x,y,z) space.
3D scanning is the process of analyzing a real-world object or environment to collect three-dimensional data of its shape and possibly its appearance. The collected data can then be used to construct digital 3D models.
In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. In video technology, the magnification of digital material is known as upscaling or resolution enhancement.
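Image resampling, the subject of Dodgson's thesis, in its simplest form reconstructs the image at new sample positions with an interpolation filter. The sketch below is a generic bilinear example for a greyscale image (not his method; function and variable names are illustrative):

```python
import numpy as np

def resize_bilinear(img, new_h, new_w):
    """Resample a greyscale image (2D array) to new_h x new_w by
    bilinear interpolation, one of the simplest reconstruction filters."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)            # source row of each output row
    xs = np.linspace(0, w - 1, new_w)            # source column of each output column
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]                      # fractional offsets as column vector
    fx = (xs - x0)[None, :]                      # fractional offsets as row vector
    top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
    bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

# Example: upscale a small gradient image by a factor of four in each direction.
small = np.arange(64, dtype=float).reshape(8, 8)
large = resize_bilinear(small, 32, 32)
```

Higher-quality resampling replaces the bilinear kernel with bicubic or windowed-sinc filters, at greater computational cost.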
Autostereoscopy is any method of displaying stereoscopic images without requiring the viewer to wear special headgear, glasses, or any other device over the eyes. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". There are two broad approaches currently used to accommodate motion parallax and wider viewing angles: eye tracking, and multiple views so that the display does not need to sense where the viewer's eyes are located. Examples of autostereoscopic display technology include lenticular lenses, parallax barriers, and integral imaging, but notably not volumetric or holographic displays.
Molecular graphics is the discipline and philosophy of studying molecules and their properties through graphical representation. IUPAC limits the definition to representations on a "graphical display device". Ever since Dalton's atoms and Kekulé's benzene, there has been a rich history of hand-drawn atoms and molecules, and these representations have had an important influence on modern molecular graphics.
The Electronic Visualization Laboratory (EVL) is an interdisciplinary research lab and graduate studies program at the University of Illinois at Chicago, bringing together faculty, students and staff primarily from the Art and Computer Science departments of UIC. The primary areas of research are in computer graphics, visualization, virtual and augmented reality, advanced networking, and media art. Graduates of EVL earn either a Master's or a doctoral degree in Computer Science.
A parallax barrier is a device placed in front of an image source, such as a liquid crystal display, to allow it to show a stereoscopic or multiscopic image without the need for the viewer to wear 3D glasses. Placed in front of the normal LCD, it consists of an opaque layer with a series of precisely spaced slits, allowing each eye to see a different set of pixels, so creating a sense of depth through parallax in an effect similar to what lenticular printing produces for printed products and lenticular lenses for other displays. A disadvantage of the method in its simplest form is that the viewer must be positioned in a well-defined spot to experience the 3D effect. However, recent versions of this technology have addressed this issue by using face-tracking to adjust the relative positions of the pixels and barrier slits according to the location of the user's eyes, allowing the user to experience the 3D from a wide range of positions. Another disadvantage is that the horizontal pixel count viewable by each eye is halved, reducing the overall horizontal resolution of the image.
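The barrier geometry follows from similar triangles. The sketch below computes the barrier-to-pixel gap and slit pitch for a simple two-view design; the numeric values are hypothetical, and real designs also correct for refraction in the cover glass, among other effects.

```python
# Simplified geometry of a two-view parallax barrier (illustrative values).
eye_sep = 65e-3        # average interocular distance, metres
view_dist = 0.60       # intended viewing distance from the barrier, metres
pixel_pitch = 0.1e-3   # horizontal pixel pitch of the panel, metres

# A left-eye and right-eye pixel one pitch apart must be seen through the
# same slit by eyes eye_sep apart, so the barrier sits at gap = p * z / e.
gap = pixel_pitch * view_dist / eye_sep

# Adjacent slits serve adjacent pixel pairs (width 2p) seen from z + gap,
# so the slit pitch is slightly less than two pixel pitches.
slit_pitch = 2 * pixel_pitch * view_dist / (view_dist + gap)

print(f"barrier gap ~ {gap * 1e3:.3f} mm, slit pitch ~ {slit_pitch * 1e3:.4f} mm")
```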
3D computer graphics, sometimes called CGI, 3D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later or displayed in real time.
Computer graphics deals with generating images and art with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.
Stereoscopic Displays and Applications (SD&A) is an academic technical conference in the field of stereoscopic 3D imaging. The conference started in 1990 and is held annually. The conference is held as part of the annual Electronic Imaging: Science and Technology Symposium organised by the Society for Imaging Science and Technology (IS&T).
Toby L. J. Howard is an Honorary Reader in the Department of Computer Science at the University of Manchester in the UK. He was Director of undergraduate studies from 2011 to 2019 and retired from the University in July 2020.
A stereoscopic video game is a video game which uses stereoscopic technologies to create depth perception for the player by any form of stereo display. Such games should not be confused with video games that use 3D game graphics on a mono screen, which give the illusion of depth only by monocular cues but lack binocular depth information.
Holly Rushmeier is an American computer scientist and the John C. Malone Professor of Computer Science at Yale University. She is known for her contributions to the field of computer graphics.
Vergence-accommodation conflict (VAC), also known as accommodation-vergence conflict, is a visual phenomenon that occurs when the brain receives mismatching cues between vergence and accommodation of the eye. This commonly occurs in virtual reality devices, augmented reality devices, 3D movies, and other types of stereoscopic displays and autostereoscopic displays. The effect can be unpleasant and cause eye strain.