Screen tearing

A typical video tearing artifact (simulated image)

Screen tearing[1] is a visual artifact in video display in which a display device shows information from multiple frames in a single screen draw.[2]


The artifact occurs when the video feed to the device is not synchronized with the display's refresh rate. That can be caused by non-matching refresh rates, and the tear line then moves as the phase difference changes (with speed proportional to the difference of frame rates). It can also occur simply from a lack of synchronization between two equal frame rates, and the tear line is then at a fixed location that corresponds to the phase difference. During video motion, screen tearing creates a torn look as the edges of objects (such as a wall or a tree) fail to line up.
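To make the drift concrete, here is a rough sketch of the tear line's vertical position over time, under the simplifying assumption that the source delivers exactly one new frame per source period; the symbols f_s (source frame rate), f_d (display refresh rate), and φ₀ (initial phase offset) are introduced here purely for illustration:

```latex
% Tear-line position y(t), as a fraction of the screen height from the top.
% f_s: source frame rate, f_d: display refresh rate, \phi_0: initial phase.
y(t) = \bigl((f_s - f_d)\,t + \phi_0\bigr) \bmod 1
```

When f_s = f_d, the tear line stays fixed at φ₀; otherwise it drifts at |f_s - f_d| screen heights per second, which is the "speed proportional to the difference of frame rates" described above.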

Tearing can occur with most common display technologies and video cards and is most noticeable in horizontally moving visuals, such as slow camera pans in a movie or classic side-scrolling video games.

Screen tearing is less noticeable when more than two frames finish rendering during the same refresh interval since that means the screen has several narrower tears, instead of a single wider one.

Prevention

Ways to prevent video tearing depend on the display device and video card technology, the software in use, and the nature of the video material. The most common solution is to use multiple buffering.

Most systems use multiple buffering and some means of synchronization of display and video memory refresh cycles. [3]

Option "TearFree" "boolean": disable or enable TearFree updates. This option forces X to perform all rendering to a back buffer before updating the actual display. It requires an extra memory allocation the same size as a framebuffer, the occasional extra copy, and requires Damage tracking. Thus, enabling TearFree requires more memory and is slower (reduced throughput) and introduces a small amount of output latency, but it should not impact input latency. However, the update to the screen is then performed synchronously with the vertical refresh of the display so that the entire update is completed before the display starts its refresh. That is only one frame is ever visible, preventing an unsightly tear between two visible and differing frames. This replicates what the compositing manager should be doing, however, TearFree will redirect the compositor updates (and those of fullscreen games) directly onto the scan out thus incurring no additional overhead in the composited case. Not all compositing managers prevent tearing, and if the outputs are rotated, there will still be tearing without TearFree enabled.

Vertical synchronization

Vertical synchronization is an option, available in most systems, that prevents the video card from altering the visible display memory until the monitor finishes its current refresh cycle.

During the vertical blanking interval, the driver orders the video card to either rapidly copy the off-screen graphics area into the active display area (double buffering), or treat both memory areas as displayable, and simply switch back and forth between them (page flipping).
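A minimal sketch of the page-flipping variant in C, assuming hypothetical platform hooks wait_for_vblank(), set_scanout_buffer(), and render_frame() (no real driver API is implied):

```c
#include <stdint.h>

#define NUM_BUFFERS 2

extern void wait_for_vblank(void);              /* blocks until vertical blanking starts */
extern void set_scanout_buffer(uint32_t *buf);  /* tells the display engine which buffer to scan out */
extern void render_frame(uint32_t *buf);        /* draws the next frame into an off-screen buffer */

static uint32_t buffers[NUM_BUFFERS][1920 * 1080];  /* two complete framebuffers */

void run_loop(void)
{
    int front = 0;                       /* index of the buffer currently being displayed */
    for (;;) {
        int back = 1 - front;
        render_frame(buffers[back]);     /* draw into the buffer the display is NOT showing */
        wait_for_vblank();               /* wait for the current refresh to finish... */
        set_scanout_buffer(buffers[back]); /* ...then flip: the back buffer becomes visible */
        front = back;
    }
}
```

The double-buffering variant would instead copy the back buffer into the single displayable area during the blanking interval, rather than switching which buffer is scanned out.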

Nvidia and AMD video adapters provide an 'Adaptive Vsync' option, which will turn on vertical synchronization only when the frame rate of the software exceeds the display's refresh rate, disabling it otherwise. That eliminates the stutter that occurs as the rendering engine frame rate drops below the display's refresh rate. [4]
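The policy itself is simple. Here is a hedged sketch of it in C, using a hypothetical set_swap_interval() hook (1 = wait for vertical blank before presenting, 0 = present immediately); real implementations make this decision per frame inside the driver:

```c
/* Adaptive vsync policy sketch. set_swap_interval() is a hypothetical
 * stand-in for the driver's internal presentation control. */
extern void set_swap_interval(int interval);

void adaptive_vsync(double measured_fps, double refresh_hz)
{
    if (measured_fps >= refresh_hz)
        set_swap_interval(1);   /* rendering keeps up: synchronize to avoid tearing */
    else
        set_swap_interval(0);   /* rendering too slow: unsynchronize to avoid stutter */
}
```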

Alternatively, technologies like FreeSync [5] and G-Sync [6] reverse the concept and adapt the display's refresh rate to the content coming from the computer. Such technologies require specific support from both the video adapter and the display.

Complications

When vertical synchronization is used, the frame rate of the rendering engine is limited to the video signal's frame rate. That normally improves video quality but involves trade-offs in some cases.

Judder

Vertical synchronization can also cause artifacts in video and movie presentations, since they are generally recorded at frame rates (24–30 frame/s) significantly lower than typical monitor refresh rates. When such a movie is played on a monitor set for a typical 60 Hz refresh rate, the video player misses the monitor's deadline fairly frequently, and the interceding frames are displayed slightly faster than intended, resulting in an effect similar to judder. (See Telecine: Frame rate differences.)
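A worked example makes the mismatch concrete. At 24 frame/s on a 60 Hz display, each source frame should persist for 60/24 = 2.5 refreshes, which is impossible, so frames alternate between 2 and 3 refreshes (the familiar 3:2 cadence) and motion is no longer uniform. The following self-contained C sketch (all names are illustrative) prints that cadence:

```c
#include <stdio.h>

int main(void)
{
    const double source_fps = 24.0, refresh_hz = 60.0;
    double next_deadline = 0.0;   /* measured in display refresh periods */
    int shown_until = 0;          /* refreshes consumed so far */

    for (int frame = 0; frame < 6; frame++) {
        next_deadline += refresh_hz / source_fps;          /* 2.5 refreshes per frame */
        int refreshes = (int)(next_deadline + 0.5) - shown_until;
        shown_until += refreshes;
        printf("frame %d shown for %d refreshes\n", frame, refreshes);
    }
    return 0;                     /* prints 3, 2, 3, 2, 3, 2 */
}
```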

Input lag

Video games, which use a wide variety of rendering engines, tend to benefit visually from vertical synchronization since a rendering engine is normally expected to build each frame in real-time, based on whatever the engine's variables specify at the moment a frame is requested. However, because vertical synchronization causes input lag, it interferes with the interactive nature of games, [7] and particularly interferes with games that require precise timing or fast reaction times.

Benchmarking

Lastly, benchmarking a video card or rendering engine generally implies that the hardware and software render the display as fast as possible, without regard to monitor capabilities or resultant video tearing. Otherwise, the monitor and video card throttle the benchmarking program, causing invalid results.

Other techniques

Some graphics systems let the software perform its memory accesses so that they stay at the same time point relative to the display hardware's refresh cycle, known as raster interrupt or racing the beam. In that case, the software writes to the areas of the display that have just been updated, staying just behind the monitor's active refresh point. That allows for copy routines or rendering engines with less predictable throughput as long as the rendering engine can "catch up" with the monitor's active refresh point when it falls behind.
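A rough sketch of the "stay just behind the beam" idea in C, with hypothetical hooks current_scanline() and draw_line() standing in for hardware access; classic systems did this with raster interrupts rather than busy-waiting, and vertical-blanking bookkeeping is omitted for brevity:

```c
/* "Racing the beam": redraw each line just after the display has scanned
 * it out, so the beam never crosses a half-updated region. */
#define SCREEN_LINES 480

extern int  current_scanline(void);   /* line the display is refreshing right now */
extern void draw_line(int y);         /* renders line y of the next frame in place */

void race_the_beam(void)
{
    for (;;) {                        /* one iteration per displayed frame */
        for (int y = 0; y < SCREEN_LINES; y++) {
            while (current_scanline() <= y)
                ;                     /* busy-wait until the beam has passed line y */
            draw_line(y);             /* safe: line y is not scanned again until
                                         the next refresh */
        }
    }
}
```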

Alternatively, the software can instead stay just ahead of the active refresh point. Depending on how far ahead one chooses to stay, that method may demand code that copies or renders the display at a fixed, constant speed. Too much latency causes the monitor to overtake the software on occasion, leading to rendering artifacts, tearing, etc.

Demo software on classic systems such as the Commodore 64 and ZX Spectrum frequently exploited those techniques because of the predictable nature of their respective video systems to achieve effects that might otherwise be impossible.


References

  1. Shroff, Lisa (October 23, 2022). "What is Screen Tearing? How to Fix IT!". GPUinsiders.
  2. "How to fight tearing". virtualdub.org. 2005-10-31. Archived from the original on 2015-05-30. Retrieved 2015-05-19.
  3. "Vsync to Solve Screen Tearing".
  4. "Adaptive VSync". nvidia.com. Retrieved 2014-01-28.
  5. "FreeSync". AMD. https://www.amd.com/en/technologies/free-sync
  6. "Nvidia G-Sync is a smooth move for PC games".
  7. Wilson, Derek (2009-07-16). "Exploring Input Lag Inside and Out". AnandTech. Retrieved 2012-01-15.