Non-photorealistic rendering (NPR) is an area of computer graphics that focuses on enabling a wide variety of expressive styles for digital art, in contrast to traditional computer graphics, which aims for photorealism. NPR is inspired by other artistic modes such as painting, drawing, technical illustration, and animated cartoons. NPR has appeared in movies and video games in the form of cel-shaded animation (also known as "toon" shading), as well as in scientific visualization, architectural illustration and experimental animation.
The term non-photorealistic rendering is believed to have been coined by the SIGGRAPH 1990 papers committee, who held a session entitled "Non Photo Realistic Rendering". [1] [2]
The term has received some criticism:
The first conference on non-photorealistic animation and rendering, held in 2000, included a discussion of possible alternative names. Among those suggested were "expressive graphics", "artistic rendering", "non-realistic graphics", "art-based rendering", and "psychographics". All of these terms have been used in various research papers on the topic, but the "non-photorealistic" term seems to have nonetheless taken hold.
The first technical meeting dedicated to NPR was the ACM-sponsored Symposium on Non-Photorealistic Rendering and Animation (NPAR) in 2000. NPAR is traditionally co-located with the Annecy Animated Film Festival, [3] running in even-numbered years. From 2007 onward, NPAR began to also run in odd-numbered years, co-located with ACM SIGGRAPH. [4]
Three-dimensional NPR is the style that is most commonly seen in video games and movies. The output from this technique is almost always a 3D model that has been modified from the original input model to portray a new artistic style. In many cases, the geometry of the model is identical to the original geometry, and only the material applied to the surface is modified. With the increased availability of programmable GPUs, shaders have allowed NPR effects to be applied to the rasterised image that is to be displayed to the screen. [5] The majority of NPR techniques applied to 3D geometry are intended to make the scene appear two-dimensional.
NPR techniques for 3D images include cel shading and Gooch shading.
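Both techniques restyle the standard diffuse lighting term N·L: cel shading quantizes it into a few flat bands, while Gooch shading maps it onto a cool-to-warm color ramp. A minimal sketch in Python (the band count and the cool/warm colors below are illustrative choices, not canonical values):

```python
import math

def dot(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def toon_shade(normal, light_dir, bands=3):
    """Cel ("toon") shading: quantize the diffuse term N.L into a few
    flat intensity bands, producing hard color steps instead of a
    smooth falloff."""
    intensity = max(0.0, dot(normal, light_dir))
    return math.ceil(intensity * bands) / bands

def gooch_shade(normal, light_dir,
                cool=(0.0, 0.0, 0.55), warm=(0.3, 0.3, 0.0)):
    """Gooch shading: blend from a cool tone (surfaces facing away from
    the light) to a warm tone (surfaces facing the light), using the
    full [-1, 1] range of N.L."""
    t = (1.0 + dot(normal, light_dir)) / 2.0
    return tuple((1.0 - t) * c + t * w for c, w in zip(cool, warm))
```

In a real renderer these functions would run per fragment in a shader; here they are plain functions over unit normals and light directions for clarity.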
Many methods can be used to draw stylized outlines and strokes from 3D models, including occluding contours and suggestive contours. [6]
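On a polygonal mesh, an occluding-contour edge can be detected by checking whether one of the edge's two adjacent faces points toward the viewer and the other points away. A sketch of that test, assuming face normals and the view direction are given as 3-vectors:

```python
def dot(a, b):
    """Dot product of two 3-vectors given as tuples."""
    return sum(x * y for x, y in zip(a, b))

def is_silhouette_edge(face_normal_a, face_normal_b, view_dir):
    """An edge lies on the occluding contour when its two adjacent
    faces face opposite ways relative to the view direction (one
    front-facing, one back-facing)."""
    return dot(face_normal_a, view_dir) * dot(face_normal_b, view_dir) < 0
```

A stylized-outline renderer would run this test over every edge of the mesh each frame and draw strokes along the edges that pass.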
For enhanced legibility, the most useful technical illustrations for technical communication are not necessarily photorealistic. Non-photorealistic renderings, such as exploded view diagrams, greatly assist in showing placement of parts in a complex system.
The input to a two-dimensional NPR system is typically an image or video. The output is typically an artistic rendering of that input imagery (for example in a watercolor, painterly or sketched style), although some 2D NPR serves non-artistic purposes, e.g. data visualization.
The artistic rendering of images and video (often referred to as image stylization [7] ) traditionally focused upon heuristic algorithms that seek to simulate the placement of brush strokes on a digital canvas. [8]
Arguably, the earliest example of 2D NPR is Paul Haeberli's 'Paint by Numbers' at SIGGRAPH 1990. This technique (and similar interactive ones) provides the user with a canvas that they can "paint" on using the cursor; as the user paints, a stylized version of the image is revealed on the canvas. This lets the user vary the size and placement of brush strokes across different areas of the image.
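The core of such interactive painting is simple: each stroke takes its color from the source image at the cursor position, so painting gradually reveals a stylized version of the image. A minimal sketch, with images as nested lists and a square brush standing in for the richer brush shapes a real system would use:

```python
def place_stroke(canvas, source, x, y, radius):
    """Stamp a square 'brush stroke' onto the canvas, colored by
    sampling the source image at the cursor position (x, y) --
    the Haeberli-style paint-by-numbers interaction."""
    h, w = len(source), len(source[0])
    color = source[y][x]
    for j in range(max(0, y - radius), min(h, y + radius + 1)):
        for i in range(max(0, x - radius), min(w, x + radius + 1)):
            canvas[j][i] = color
    return canvas
```

Repeated strokes with varying radii give the coarse-to-fine buildup characteristic of this family of techniques.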
Subsequently, basic image processing operations using gradient operators [9] or statistical moments [10] were used to automate this process in the late nineties, minimizing user interaction (although artistic control remains with the user via the parameters of the algorithms). This automation enabled the practical application of 2D NPR to video for the first time, in the living paintings of the movie What Dreams May Come (1998).
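Gradient-based automation typically estimates the local intensity gradient (for example with a Sobel operator) and lays strokes perpendicular to it, so strokes follow edges rather than crossing them. A minimal sketch on a grayscale image stored as nested lists:

```python
import math

def sobel_orientation(img, x, y):
    """Estimate a stroke orientation at interior pixel (x, y) using
    the 3x3 Sobel operator: compute the intensity gradient, then
    rotate it 90 degrees so strokes run along edges, not across them."""
    gx = (img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]
          - img[y - 1][x - 1] - 2 * img[y][x - 1] - img[y + 1][x - 1])
    gy = (img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]
          - img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1])
    return math.atan2(gy, gx) + math.pi / 2
```

For a left-to-right intensity ramp the gradient points horizontally, so the stroke orientation comes out vertical, tracing the (vertical) edge structure.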
More sophisticated image abstraction techniques were developed in the early 2000s, harnessing computer vision operators such as image salience [11] or segmentation [12] to drive stroke placement. Around this time, machine learning began to influence image stylization algorithms, notably image analogies, [13] which could learn to mimic the style of an existing artwork.
The advent of deep learning has rekindled activity in image stylization, notably with neural style transfer (NST) algorithms that can mimic a wide gamut of artistic styles from a single visual example. These algorithms underpin mobile apps with the same capability, e.g. Prisma.
In addition to the above stylization methods, a related class of techniques in 2D NPR addresses the simulation of artistic media. These methods include simulating the diffusion of ink through different kinds of paper, and the dispersion of pigments through water to simulate watercolor.
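At their core, such media simulations treat pigment as a scalar field on a grid and let each cell exchange a fraction of pigment with its neighbors every time step. A crude sketch of one explicit diffusion step (real watercolor simulators additionally model paper fibers, evaporation, and fluid advection):

```python
def diffuse(pigment, rate=0.2):
    """One explicit diffusion step over a 2D pigment grid: each cell
    moves toward the average of its four neighbors, a crude stand-in
    for ink or pigment spreading through paper. Total pigment is
    conserved."""
    h, w = len(pigment), len(pigment[0])
    out = [row[:] for row in pigment]
    for y in range(h):
        for x in range(w):
            total, n = 0.0, 0
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    total += pigment[yy][xx]
                    n += 1
            out[y][x] = pigment[y][x] + rate * (total - n * pigment[y][x])
    return out
```

Iterating this step spreads a drop of pigment outward while keeping the total amount constant, which is the qualitative behavior these simulations aim for.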
Artistic rendering is the application of visual art styles to rendering. In photorealistic rendering styles, the emphasis is on accurate reproduction of light and shadow, the surface properties of the depicted objects, composition, and other generic qualities. In interpretive rendering styles, visual information is interpreted by the artist and depicted using the chosen art medium and level of abstraction. In computer graphics, interpretive rendering styles are known as non-photorealistic rendering styles; they may also be used to simplify technical illustrations. Rendering styles that combine photorealism with non-photorealism are known as hyperrealistic rendering styles.
This section lists some seminal uses of NPR techniques in films, games and software. See cel-shaded animation for a list of uses of toon-shading in games and movies.
Short films

Title | Year | Notes
---|---|---
Technological Threat | 1988 | Early use of toon shading together with Tex Avery-style cartoon characters
Gas Planet | 1992 | Pencil-sketch 3D rendering by Eric Darnell
RoadHead | 1998 | Short film created with Rotoshop by Bob Sabiston
Snack and Drink | 1999 | Short film created with Rotoshop by Bob Sabiston
Fishing | 2000 | Watercolor-style 3D rendering by David Gainey
Ryan | 2004 | Nonlinear projection and other distortions of 3D geometry
The Girl Who Cried Flowers | 2008 | Watercolor-style rendering by Auryn

Feature films

Title | Year | Notes
---|---|---
What Dreams May Come | 1998 | Painterly rendering in the "painted world" sequence
Tarzan | 1999 | First use of Disney's "Deep Canvas" system
Waking Life | 2001 | First use of Rotoshop in a feature film
A Scanner Darkly | 2006 | "a 15-month animation process"

Video games and other software

Title | Year | Notes
---|---|---
Jet Set Radio | 2000 | Early use of toon shading in video games
SketchUp | 2000 | Sketch-like modelling software with toon rendering
The Legend of Zelda: The Wind Waker | 2002 | One of the most well-known cel-shaded games
XIII | 2003 | A game made as "comic"-like as possible
Ōkami | 2006 | A game whose visuals emulate the style of sumi-e (Japanese ink wash painting)
Valkyria Chronicles | 2008 | Uses a number of NPR techniques, including a sketch-like shading method
Guilty Gear Xrd | 2014 | Fighting game using cel-shaded 3D characters with limited animation to imitate the look of 2D sprites
Vue Xstream | 2015 | 3D environment creation software featuring an NPR renderer with presets emulating various traditional art styles
Return of the Obra Dinn | 2018 | A 3D game rendered in a unique monochrome, pointillist style
Manifold Garden | 2019 | A 3D puzzle game using impossible geometry, notable for its novel edge-shading techniques [14]