| Manufacturer | Evans & Sutherland Computer Corp. |
| --- | --- |
| Type | Computer graphics terminal and 3D display processor |
| Release date | October 1973 |
| Display | 256 × 256 pixels |
| Graphics | Raster, black and white |
The Shaded Picture System was a 3D raster computer display processor introduced by Evans & Sutherland in October 1973. [1]
The Shaded Picture System was the first general-purpose, commercially available raster computer graphics display processor capable of real-time, shaded 3D graphics. It displayed only black-and-white graphics at a resolution of 256 by 256 pixels. [2] It was extremely expensive, and very few units were ever sold. [3]
The principles of shaded, hidden-line, true 3D graphics were pioneered at the University of Utah in 1967. [4] However, the algorithm involved was slow and took several minutes to produce an image. In 1970, Gary Watkins developed a FORTRAN simulator of a faster algorithm that could theoretically generate shaded 3D images in real time "if implemented in suitable hardware". [5] [2] The simulator itself was still not capable of real-time shaded 3D rendering. Evans & Sutherland built a functional prototype of this "suitable hardware", which was later sold as the Shaded Picture System in 1973. [2]
About a year earlier, in 1972, Evans & Sutherland had sold the first and only CT1 to Case Western Reserve University. [6] The CT1, or Continuous Tone 1, was a specialized image generator, never intended as a marketable or mass-produced product. At the time, the CT1 and G.E./NASA's upgraded Electronic Scene Generator of 1971 [7] were the only real-time raster graphics systems sold to customers that were comparable to the Shaded Picture System, but both were one-off products built for the specific needs of their customers. [6] The Shaded Picture System, in contrast, was intentionally marketed as a product. [2]
In early 1975, Evans & Sutherland demonstrated a random-access video frame buffer using relatively low-cost semiconductor memory, which was much more capable than the Shaded Picture System. [8] When interfaced with a (non-shaded) E&S Picture System, the frame buffer provided a resolution of 512 by 512 in grayscale, with partial color capability. By the end of 1975, this frame buffer was commercially available. [9]
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as a rendering. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, textures, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.
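As a loose illustration of the idea, the following sketch gathers geometry, a viewpoint, a light, and shading information into one structure and hands it to a stub renderer; the layout and names are purely illustrative and do not correspond to any particular scene-file format.

```python
# A minimal sketch of a scene description handed to a renderer: geometry,
# a viewpoint, a light, and shading parameters gathered in one structure.
# The layout and the `render` stub are illustrative, not any real format.

scene = {
    "camera": {"position": (0.0, 0.0, -5.0), "look_at": (0.0, 0.0, 0.0)},
    "lights": [{"direction": (1.0, 1.0, 1.0), "intensity": 0.8}],
    "objects": [
        {"type": "sphere", "center": (0.0, 0.0, 0.0), "radius": 1.0,
         "material": {"color": (200, 60, 60), "shading": "diffuse"}},
    ],
}

def render(scene, width, height):
    """Stub renderer: a real one would trace or rasterise the objects."""
    return [[(0, 0, 0)] * width for _ in range(height)]

image = render(scene, 320, 240)
print(len(image), "rows of", len(image[0]), "pixels")
```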
Vector graphics are a form of computer graphics in which visual images are created directly from geometric shapes defined on a Cartesian plane, such as points, lines, curves and polygons. The associated mechanisms may include vector display and printing hardware, vector data models and file formats, as well as the software based on these data models. Vector graphics are an alternative to raster or bitmap graphics, with each having advantages and disadvantages in specific situations.
Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed using the intersection of a scanline with the polygons on the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line is advanced down the picture.
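To make the row-by-row structure concrete, the following Python sketch fills a set of 2D polygons one scanline at a time, activating polygons as the scanline reaches their topmost vertex and retiring them once it has passed their bottommost one. Depth comparison between overlapping polygons, the heart of full hidden-surface algorithms such as Watkins', is omitted, and the function names are illustrative rather than taken from any particular system.

```python
# Minimal scanline polygon-fill sketch: polygons are lists of 2D vertices,
# processed row by row with an active-polygon list.

def edge_intersections(poly, y):
    """x coordinates where the horizontal line at `y` crosses the polygon."""
    xs = []
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        if (y0 <= y < y1) or (y1 <= y < y0):      # edge spans this scanline
            t = (y - y0) / (y1 - y0)
            xs.append(x0 + t * (x1 - x0))
    return sorted(xs)

def scanline_fill(polys, width, height):
    image = [[0] * width for _ in range(height)]
    # Sort polygons by the topmost y at which they first appear.
    polys = sorted(polys, key=lambda p: min(v[1] for v in p))
    active, next_poly = [], 0
    for y in range(height):
        # Activate polygons whose top has been reached ...
        while next_poly < len(polys) and min(v[1] for v in polys[next_poly]) <= y:
            active.append(polys[next_poly])
            next_poly += 1
        # ... and discard polygons the scanline has passed entirely.
        active = [p for p in active if max(v[1] for v in p) > y]
        for poly in active:
            xs = edge_intersections(poly, y + 0.5)
            for x_start, x_end in zip(xs[0::2], xs[1::2]):  # fill between pairs
                for x in range(max(0, int(x_start)), min(width, int(x_end) + 1)):
                    image[y][x] = 1
    return image

if __name__ == "__main__":
    triangle = [(10, 5), (50, 20), (5, 40)]
    square = [(30, 30), (60, 30), (60, 55), (30, 55)]
    img = scanline_fill([triangle, square], 64, 64)
    print("\n".join("".join("#" if v else "." for v in row) for row in img))
```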
In computer graphics, rasterisation or rasterization is the task of taking an image described in a vector graphics format (shapes) and converting it into a raster image. The rasterized image may then be displayed on a computer display, video display or printer, or stored in a bitmap file format. Rasterization may refer to the technique of drawing 3D models, or to the conversion of 2D rendering primitives, such as polygons and line segments, into a rasterized format.
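As a small illustration, the sketch below converts one vector primitive, a line segment, into pixel coordinates using a simple DDA step; the function name is illustrative, and real rasterisers typically use integer algorithms such as Bresenham's.

```python
# A minimal sketch of rasterising one vector primitive (a line segment)
# into pixels with a simple DDA step.

def rasterize_line(x0, y0, x1, y1):
    """Return the integer pixel coordinates approximating the segment."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    dx, dy = (x1 - x0) / steps, (y1 - y0) / steps
    return [(round(x0 + i * dx), round(y0 + i * dy)) for i in range(steps + 1)]

# Example: draw the segment into a small raster grid.
if __name__ == "__main__":
    grid = [["."] * 20 for _ in range(10)]
    for x, y in rasterize_line(1, 1, 18, 8):
        grid[y][x] = "#"
    print("\n".join("".join(row) for row in grid))
```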
In computer science, binary space partitioning (BSP) is a method for space partitioning which recursively subdivides a Euclidean space into two convex sets by using hyperplanes as partitions. This process of subdividing gives rise to a representation of objects within the space in the form of a tree data structure known as a BSP tree.
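The sketch below builds a small BSP tree for 2D line segments, as one hypothetical example: each node's segment defines the partitioning line (the 2D analogue of a hyperplane), remaining segments are classified as front or back, and segments that straddle the line are split. All names are illustrative.

```python
# A minimal 2D BSP-tree sketch: each node's segment splits the remaining
# segments into "front" and "back" subtrees.

EPS = 1e-9

def side(plane, point):
    """Signed area test: which side of the infinite line through `plane`."""
    (x0, y0), (x1, y1) = plane
    return (x1 - x0) * (point[1] - y0) - (y1 - y0) * (point[0] - x0)

def split(plane, seg):
    """Split a straddling segment at its intersection with the plane."""
    a, b = seg
    da, db = side(plane, a), side(plane, b)
    t = da / (da - db)
    m = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    front = (a, m) if da > 0 else (m, b)
    back = (m, b) if da > 0 else (a, m)
    return front, back

def build_bsp(segments):
    if not segments:
        return None
    plane, rest = segments[0], segments[1:]
    front, back = [], []
    for seg in rest:
        da, db = side(plane, seg[0]), side(plane, seg[1])
        if da >= -EPS and db >= -EPS:
            front.append(seg)
        elif da <= EPS and db <= EPS:
            back.append(seg)
        else:                                  # straddles the plane: split it
            f, bk = split(plane, seg)
            front.append(f)
            back.append(bk)
    return {"plane": plane, "front": build_bsp(front), "back": build_bsp(back)}

if __name__ == "__main__":
    import pprint
    walls = [((0, 0), (4, 0)), ((4, 0), (4, 4)), ((1, 1), (3, 3))]
    pprint.pprint(build_bsp(walls))
```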
A framebuffer is a portion of random-access memory (RAM) containing a bitmap that drives a video display. It is a memory buffer containing data representing all the pixels in a complete video frame. Modern video cards contain framebuffer circuitry in their cores. This circuitry converts an in-memory bitmap into a video signal that can be displayed on a computer monitor.
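A framebuffer can be pictured as nothing more than a row-major block of memory that the display hardware scans out line by line, as in this minimal Python sketch; the sizes and helper names are illustrative.

```python
# A minimal in-memory framebuffer sketch: a flat, row-major array of pixel
# values with a plotting helper, read back one scanline at a time.

WIDTH, HEIGHT = 32, 16
framebuffer = bytearray(WIDTH * HEIGHT)        # one byte of intensity per pixel

def set_pixel(x, y, value):
    framebuffer[y * WIDTH + x] = value         # row-major addressing

# Plot a simple pattern, then "scan out" the buffer the way a display would:
for x in range(WIDTH):
    set_pixel(x, x % HEIGHT, 255)

for y in range(HEIGHT):                        # one scanline at a time
    row = framebuffer[y * WIDTH:(y + 1) * WIDTH]
    print("".join("#" if v else "." for v in row))
```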
Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "ray casting" was introduced by Scott Roth while at the General Motors Research Labs from 1978 to 1980. His paper, "Ray Casting for Modeling Solids", describes modeling solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. In 1979, for example, Roth's ray casting system was used to render a U-joint modeled from cylinders and blocks combined in a binary tree.
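The sketch below captures the flavour of this approach for a single ray: each primitive reports the interval of ray parameters over which the ray is inside it, and set operators combine those intervals. Only spheres and a single, simplified difference operation are shown; the names are illustrative and not taken from Roth's system.

```python
# A small sketch of ray casting on CSG solids: primitives return the interval
# of ray parameter t where the ray is inside them, and set operators combine
# those intervals.

import math

def sphere_interval(origin, direction, center, radius):
    """Return (t_enter, t_exit) where the ray is inside the sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    s = math.sqrt(disc)
    return ((-b - s) / (2 * a), (-b + s) / (2 * a))

def difference(a, b):
    """Interval of `a` minus interval of `b` (simplified: keeps one piece)."""
    if a is None:
        return None
    if b is None or b[0] > a[1] or b[1] < a[0]:
        return a
    if b[0] <= a[0] and b[1] >= a[1]:
        return None                            # b swallows a entirely
    return (a[0], min(a[1], b[0])) if b[0] > a[0] else (max(a[0], b[1]), a[1])

if __name__ == "__main__":
    origin, direction = (0.0, 0.0, -5.0), (0.0, 0.0, 1.0)
    solid = difference(
        sphere_interval(origin, direction, (0, 0, 0), 2.0),    # main solid
        sphere_interval(origin, direction, (0, 0, -1.5), 1.0), # carved-out hole
    )
    print("first visible surface at t =", None if solid is None else solid[0])
```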
Evans & Sutherland is an American computer graphics firm founded in 1968 by David Evans and Ivan Sutherland. Its current products are used in digital projection environments like planetariums. Its simulation business, which it sold to Rockwell Collins, sold products that were used primarily by the military and large industrial firms for training and simulation.
In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
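As a minimal example, the sketch below marches orthographic rays through a small synthetic scalar field and keeps the maximum sample along each ray (a maximum-intensity projection), one of the simplest volume-rendering techniques; the field and names are illustrative.

```python
# A minimal volume-rendering sketch: orthographic rays through a synthetic
# N x N x N scalar field, taking the maximum value along each ray.

import math

N = 24  # volume is an N x N x N scalar field

def field(x, y, z):
    """A synthetic scalar field: a bright sphere centred in the volume."""
    c = (N - 1) / 2
    r = math.sqrt((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2)
    return max(0.0, 1.0 - r / (N / 2))

volume = [[[field(x, y, z) for z in range(N)] for y in range(N)] for x in range(N)]

# Project along +z: each output pixel is the maximum value along its ray.
image = [[max(volume[x][y][z] for z in range(N)) for x in range(N)] for y in range(N)]

shades = " .:-=+*#%@"
for row in image:
    print("".join(shades[min(int(v * (len(shades) - 1)), len(shades) - 1)] for v in row))
```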
In computer graphics, a shader is a computer program that calculates the appropriate levels of light, darkness, and color during the rendering of a 3D scene—a process known as shading. Shaders have evolved to perform a variety of specialized functions in computer graphics special effects and video post-processing, as well as general-purpose computing on graphics processing units.
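The sketch below shows the kind of calculation a simple fragment shader performs, diffuse (Lambertian) shading from a surface normal and a light direction. It is written as an ordinary Python function purely for illustration; real shaders run on the GPU in shading languages such as GLSL or HLSL.

```python
# A minimal sketch of per-point diffuse (Lambertian) shading.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_shade(normal, light_dir, base_color, ambient=0.1):
    """Scale the surface colour by how directly it faces the light."""
    n, l = normalize(normal), normalize(light_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))   # N . L, clamped
    k = min(1.0, ambient + diffuse)
    return tuple(int(c * k) for c in base_color)

print(lambert_shade(normal=(0, 0, 1), light_dir=(1, 1, 1), base_color=(200, 60, 60)))
```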
An image file format is a file format for a digital image. There are many formats that can be used, such as JPEG, PNG, and GIF. Most formats up until 2022 were for storing 2D images, not 3D ones. The data stored in an image file format may be compressed or uncompressed. If the data is compressed, it may be done so using lossy compression or lossless compression. For graphic design applications, vector formats are often used. Some image file formats support transparency.
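As a concrete example of an uncompressed format, the sketch below writes a small gradient image as a binary PPM file, which is simply a short text header followed by raw RGB bytes; the image contents and file name are illustrative.

```python
# A minimal sketch of writing an uncompressed 2D image: binary PPM (P6)
# stores a text header followed by one raw RGB triple per pixel.

width, height = 64, 32
pixels = bytearray()
for y in range(height):
    for x in range(width):
        pixels += bytes((x * 255 // (width - 1), y * 255 // (height - 1), 128))

with open("gradient.ppm", "wb") as f:
    f.write(f"P6 {width} {height} 255\n".encode("ascii"))  # header
    f.write(pixels)                                         # uncompressed data
```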
3D rendering is the 3D computer graphics process of converting 3D models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles.
3D computer graphics, sometimes called CGI, 3-D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later or displayed in real time.
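At the core of turning 3D data into a 2D image is a projection; the sketch below shows a simple pinhole-camera perspective projection of camera-space points onto an image plane, with an illustrative focal length and example points.

```python
# A minimal sketch of perspective projection: map a camera-space 3D point
# onto a 2D image plane by dividing by depth.

def project(point, focal_length=1.0):
    """Map a camera-space point (x, y, z), z > 0, to image-plane coordinates."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# Points further from the camera project closer to the image centre:
near_corner = (1.0, 1.0, 2.0)
far_corner = (1.0, 1.0, 4.0)
print(project(near_corner))   # (0.5, 0.5)
print(project(far_corner))    # (0.25, 0.25)
```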
A vector monitor, vector display, or calligraphic display is a display device used for computer graphics up through the 1970s. It is a type of CRT, similar to that of an early oscilloscope. In a vector display, the image is composed of drawn lines rather than a grid of glowing pixels as in raster graphics. The electron beam follows an arbitrary path, tracing the connected sloped lines rather than following the same horizontal raster path for all images. The beam skips over dark areas of the image without visiting their points.
Computer graphics deals with generating images and art with the aid of computers. Computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.
InfiniteReality refers to a 3D graphics hardware architecture, and a family of graphics systems implementing that architecture, developed and manufactured by Silicon Graphics from 1996 to 2005. The InfiniteReality was positioned as Silicon Graphics' high-end visualization hardware for their MIPS/IRIX platform and was used exclusively in their Onyx family of visualization systems, which are sometimes referred to as "graphics supercomputers" or "visualization supercomputers". The InfiniteReality was marketed to and used by large organizations such as companies and universities involved in computer simulation, digital content creation, engineering and research.
The history of computer animation began as early as the 1940s and 1950s, when people began to experiment with computer graphics, most notably John Whitney. It was only by the early 1960s, when digital computers had become widely established, that new avenues for innovative computer graphics blossomed. Initially, uses were mainly for scientific, engineering and other research purposes, but artistic experimentation began to make its appearance by the mid-1960s, most notably by Dr. Thomas Calvert. By the mid-1970s, many such efforts were beginning to enter into public media. Much computer graphics at this time involved 2-D imagery, though increasingly, as computer power improved, efforts to achieve 3-D realism became the emphasis. By the late 1980s, photo-realistic 3-D was beginning to appear in feature films, and by the mid-1990s it had developed to the point where 3-D animation could be used for entire feature film production.
Computer-generated imagery (CGI) is a specific technology or application of computer graphics for creating or improving images in art, printed media, simulators, videos and video games. These images are either static or dynamic. CGI refers to both 2D and 3D computer graphics used for designing characters, virtual worlds, or scenes and special effects. The application of CGI for creating or improving animations is called computer animation, or CGI animation.
The Kahlert School of Computing is a school within the College of Engineering at the University of Utah in Salt Lake City, Utah.
This is a glossary of terms relating to computer graphics.