Popping (computer graphics)

An exaggerated example of a 3D object's geometry being reduced using a level of detail technique. LOD0 is the highest-detail version of the object, and each subsequent LOD reduces the quality of the object. A change without intermediate steps from LOD1 to LOD2 will be obvious to the viewer.

In 3D computer graphics, popping refers to an undesirable visual effect that occurs when the transition of a 3D object to a different pre-calculated level of detail (LOD) is abrupt and obvious to the viewer.[1] The LOD algorithm reduces the geometric complexity of a 3D object the further it is from the viewer and restores that lost complexity as the viewer gets closer, causing the object to pop as it suddenly becomes more detailed. LOD algorithms can depend on factors other than distance from the viewer, but distance is often the primary factor considered. Popping is most obvious when switching between different LODs directly, without intermediate steps. Techniques such as geomorphing and LOD blending can reduce popping significantly by making the transitions more gradual.
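The abruptness comes from the fact that the chosen LOD is a step function of viewer distance. The following is a minimal sketch of distance-based LOD selection, not tied to any particular engine; the thresholds of 50 and 100 units and the function names are illustrative assumptions.

```cpp
// Minimal sketch: choosing a discrete LOD purely by viewer distance.
// The thresholds are illustrative assumptions, not values from the article.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns the LOD index for a given viewer distance. Because the result jumps
// from one integer to the next at each threshold, the rendered mesh is swapped
// in a single frame -- the abrupt change the viewer perceives as popping.
static int selectLod(float dist) {
    if (dist < 50.0f)  return 0;  // LOD0: full detail
    if (dist < 100.0f) return 1;  // LOD1: reduced detail
    return 2;                     // LOD2: lowest detail
}

int main() {
    Vec3 object{0.0f, 0.0f, 0.0f};
    for (float d = 40.0f; d <= 110.0f; d += 10.0f) {
        Vec3 viewer{d, 0.0f, 0.0f};
        float dist = distance(viewer, object);
        std::printf("distance %.0f -> LOD%d\n", dist, selectLod(dist));
    }
}
```

The blending and geomorphing techniques described below exist to smooth exactly this step: instead of swapping meshes the moment a threshold is crossed, they spread the change over a short interval of distance or time.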


LOD Blending

An exaggerated example of LOD blending to illustrate how apparent the ghosting effect can be.

Also known as alpha blending or alpha compositing, this technique reduces popping by displaying both LODs of a 3D model simultaneously and blending them together over a short transition period.[2]

During the blending process an alpha value is specified for each LOD, which determines the transparency of the object. At the beginning of the transition, the initial LOD has an alpha value of 1.0 (fully opaque) and the new LOD has an alpha value of 0.0 (fully transparent). As the viewer approaches the 3D object and reaches the distance at which the LOD change would normally occur, the two alpha values gradually trade places until the new LOD has an alpha value of 1.0, at which point the initial LOD is no longer rendered.[3]

It is important to stress that LOD blending only occurs around the distance at which the LOD would normally change, and only over a small range. For example, if during a simulation the LOD change would occur at 100 units of distance, then the blending process would begin at 95 units and be complete by 105 units.
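As a rough illustration of that window, the sketch below maps viewer distance to the alpha value of each LOD using the example figures above (a switch at 100 units, blending between 95 and 105 units). The structure and function names are hypothetical; a real renderer would draw both LODs with these opacities while the blend is in progress.

```cpp
// Minimal sketch of distance-driven LOD blending, using the article's example
// numbers (switch at 100 units, blend from 95 to 105 units). Names are assumed.
#include <algorithm>
#include <cstdio>

struct BlendWeights {
    float nearAlpha;  // opacity of the higher-detail (near) LOD
    float farAlpha;   // opacity of the lower-detail (far) LOD
};

// Maps viewer distance to the alpha of each LOD across the blend window.
static BlendWeights lodBlend(float dist, float blendStart = 95.0f, float blendEnd = 105.0f) {
    float t = std::clamp((dist - blendStart) / (blendEnd - blendStart), 0.0f, 1.0f);
    // t == 0: only the higher-detail LOD is visible (viewer is close).
    // t == 1: only the lower-detail LOD is visible (viewer is far).
    return BlendWeights{1.0f - t, t};
}

int main() {
    const float distances[] = {90.0f, 95.0f, 100.0f, 105.0f, 110.0f};
    for (float d : distances) {
        BlendWeights w = lodBlend(d);
        std::printf("distance %.0f: near LOD alpha %.2f, far LOD alpha %.2f\n",
                    d, w.nearAlpha, w.farAlpha);
    }
}
```

Outside the 95-105 unit window only a single LOD is rendered, so the extra cost of drawing two meshes at once is paid only during the brief transition.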

LOD blending has two major disadvantages. It is expensive in terms of computing power, since both LODs must be rendered simultaneously for the blend to occur; this can be counterproductive, because the reason for using LOD algorithms in the first place is to reduce the cost of rendering the scene. The technique is also ineffective when the viewer is close to the 3D object, since the blending process becomes obviously apparent and results in a visible ghosting effect.

Geomorphing

Geomorphing creates a smoother transition between LOD0 and LOD1 by creating approximated meshes to act as intermediate steps.

Geomorphing is a technique that reduces popping during LOD changes by adding approximations of the 3D model to serve as intermediate steps between two LODs, creating a smooth transition. Edge collapses (removing vertices) and vertex splits (adding vertices) are the primary operations used to modify the 3D model in this method.
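As a rough sketch of the collapse operation (the mesh representation and function names are assumptions, not taken from the article or its references), an edge collapse on an indexed triangle mesh can be expressed as redirecting one endpoint of the edge to the other and discarding any triangles that become degenerate; a vertex split stores enough information to reverse the operation.

```cpp
// Minimal sketch of an edge collapse on an indexed triangle mesh.
// The mesh layout and the example indices are illustrative assumptions.
#include <array>
#include <cstdio>
#include <vector>

using Triangle = std::array<int, 3>;  // three vertex indices

// Collapse vertex `from` onto vertex `to`: every reference to `from` is
// redirected to `to`, and triangles that become degenerate are dropped.
static std::vector<Triangle> collapseEdge(const std::vector<Triangle>& tris, int from, int to) {
    std::vector<Triangle> out;
    for (Triangle t : tris) {
        for (int& idx : t)
            if (idx == from) idx = to;
        if (t[0] != t[1] && t[1] != t[2] && t[0] != t[2])
            out.push_back(t);  // keep only non-degenerate triangles
    }
    return out;
}

int main() {
    // Three triangles; the first two share the edge (1, 2). Collapsing vertex 2
    // onto vertex 1 removes those two and reshapes the third.
    std::vector<Triangle> tris = {{{0, 1, 2}}, {{1, 3, 2}}, {{0, 2, 4}}};
    std::vector<Triangle> coarse = collapseEdge(tris, /*from=*/2, /*to=*/1);
    std::printf("triangles before: %zu, after collapse: %zu\n", tris.size(), coarse.size());
}
```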

Traditional geomorphing creates a sequence of 3D models between two LODs. The sequence cannot be interrupted once it has begun and no modifications can be done to it until the LOD change is complete. Due to this restriction, traditional geomorphing is not suited to interactive simulations because the process cannot be quickly reversed if conditions change unexpectedly.

Real-time geomorphing directly modifies individual vertices of the 3D model to adjust its LOD. This allows changes to be made to the geomorph during any frame, either to halt ongoing morphs or to initiate further morphs of the 3D model. Since multiple vertices can be triggered to morph independently of one another, certain vertices need to be temporarily locked to ensure a smooth transition, which can result in a delayed LOD change. The flexibility of real-time geomorphing makes it an effective solution for interactive simulations.[4]
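A minimal sketch of the per-vertex approach follows, under the assumption that each vertex stores both its fine-LOD position and the position it collapses to in the coarse LOD, plus a morph factor advanced a little each frame; none of these names come from the cited paper. Because the factor lives on the vertex and is updated every frame, a morph can be halted or reversed at any time, which is the flexibility described above.

```cpp
// Minimal sketch of real-time geomorphing with per-vertex morph factors.
// The data layout and step size are illustrative assumptions.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return {a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t};
}

struct MorphVertex {
    Vec3 finePos;    // position in the higher-detail LOD
    Vec3 coarsePos;  // position after the edge collapse (lower-detail LOD)
    float morph;     // 0 = fully fine, 1 = fully coarse
};

// Advance each vertex toward the target morph state by a small per-frame step.
// Changing `target` mid-transition halts or reverses the morph.
static void updateMorphs(std::vector<MorphVertex>& verts, float target, float step) {
    for (MorphVertex& v : verts) {
        if (v.morph < target) v.morph = std::min(v.morph + step, target);
        else                  v.morph = std::max(v.morph - step, target);
    }
}

int main() {
    std::vector<MorphVertex> verts = {
        {{0, 0, 0}, {0, 0, 0}, 0.0f},  // vertex kept by the collapse: it does not move
        {{1, 0, 0}, {0, 0, 0}, 0.0f},  // vertex that gradually collapses onto its neighbour
    };
    for (int frame = 0; frame < 5; ++frame) {
        updateMorphs(verts, /*target=*/1.0f, /*step=*/0.25f);  // morph toward the coarse LOD
        Vec3 p = lerp(verts[1].finePos, verts[1].coarsePos, verts[1].morph);
        std::printf("frame %d: vertex 1 at (%.2f, %.2f, %.2f)\n", frame, p.x, p.y, p.z);
    }
}
```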


References

  1. M. Chover, J. Gumbau, A. Puig-Centelles, O. Ripolles, F. Ramos (June 2009). "Rendering continuous level-of-detail meshes by Masking Strips". Graphical Models, p. 185.
  2. "Definition of alpha blending". PCMAG. Retrieved 2021-08-07.
  3. D. Luebke, M. Reddy, J. D. Cohen, A. Varshney, B. Watson, R. Huebner (2002). Level of Detail for 3D Graphics. Morgan Kaufmann. ISBN 0-321-19496-9.
  4. K. Jeong, S. Lee, L. Markosian, A. Ni (September 2005). "Detail control in line drawings of 3D meshes". Springer-Verlag, p. 700.