Input lag or input latency is the amount of time that passes between sending an electrical signal and the occurrence of a corresponding action.
In video games, the term often describes any latency between an input and the game engine, monitor, or any other part of the signal chain reacting to that input, though the contributions from each part of the chain are cumulative.
The potential causes of input lag are described below. Each step in the process, however small, adds to the total; the combined result may still be unnoticeable if every contribution is small enough.
A game controller contributes lag of its own. For wired controllers, this lag is normally negligible. For wireless controllers, opinions vary as to its significance: some people claim to notice extra lag when using a wireless controller, while others consider the 4–8 milliseconds of added lag negligible. [1]
A video game console or PC sends out a new frame once it has finished the calculations needed to create it. The number of frames rendered per second, on average, is called the frame rate. Using a common 60 Hz monitor as an example, the maximum theoretical frame rate is 60 FPS (frames per second), which means the minimum theoretical input lag for the overall system is approximately 16.67 ms (1/60 of a second). The monitor is usually the bottleneck for the theoretical maximum FPS, since there is little point in rendering more frames than the monitor can show. When the CPU, GPU, memory, or bus is overloaded, the frame rate can drop below the monitor's refresh rate.
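As a back-of-the-envelope illustration of the arithmetic above, here is a minimal Python sketch; the refresh rates chosen are just examples:

```python
def min_frame_latency_ms(refresh_hz: float) -> float:
    """Theoretical minimum time between frames, in milliseconds."""
    return 1000.0 / refresh_hz

# One frame at 60 Hz takes ~16.67 ms; higher refresh rates shrink
# this lower bound on display-driven input lag.
for hz in (60, 120, 144, 240):
    print(f"{hz:>3} Hz -> {min_frame_latency_ms(hz):.2f} ms per frame")
```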
Individual frames need not be finished within a single refresh interval for frames to be output at an equivalent rate. Game engines often use pipelined architectures to process multiple frames concurrently, allowing more efficient use of the underlying hardware. This exacerbates input lag, especially at low frame rates. [2] [3]
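To see why pipelining hurts latency most at low frame rates, consider a hypothetical engine that keeps several frames in flight; a sketch (the pipeline depth of 3 is an assumption, not a figure from the sources):

```python
def pipeline_latency_ms(fps: float, frames_in_flight: int) -> float:
    """Worst-case input-to-display delay when the engine pipelines
    `frames_in_flight` frames: input sampled for frame N is not
    shown until N has passed through every pipeline stage."""
    return frames_in_flight * 1000.0 / fps

# At 144 FPS a 3-deep pipeline costs ~20.8 ms; at 30 FPS the same
# pipeline costs 100 ms, which is why low frame rates feel worse.
for fps in (144, 60, 30):
    print(f"{fps:>3} FPS, 3 frames in flight: "
          f"{pipeline_latency_ms(fps, 3):.1f} ms")
```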
Display lag is the lag caused by the television or monitor (also called output lag). In addition to the latency imposed by the screen's pixel response time, any image processing (such as upscaling, motion smoothing, or edge smoothing) takes time and therefore adds more input lag. An input lag below 30 ms is generally considered unnoticeable in a television. [4] Once the frame has been processed, the final step is updating the pixels to display the correct color for the new frame. The time this takes is called the pixel response time.
Testing has found that overall input lag (from human input to visible response) of approximately 200 ms is distracting to the user. Excluding display lag, 133 ms appears to be an average game response time, while the most sensitive genres (fighting games, first-person shooters, and rhythm games) achieve response times of 67 ms. [5]
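Because the contributions are cumulative, the end-to-end lag is simply the sum of each stage. A sketch with made-up per-stage numbers, which are illustrative assumptions rather than figures from the cited measurements:

```python
# Hypothetical per-stage latencies in milliseconds; real values vary
# widely by hardware and are not taken from the cited tests.
stages = {
    "controller (wireless)": 6.0,   # within the 4-8 ms range above
    "game engine (1 frame @ 60 FPS)": 16.7,
    "display processing": 20.0,
    "pixel response": 5.0,
}

total = sum(stages.values())
print(f"total input lag: {total:.1f} ms")
print("distracting (>200 ms)?", total > 200)
print("within the 'most sensitive' 67 ms budget?", total <= 67)
```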
A computer monitor is an output device that displays information in pictorial or textual form. A discrete monitor comprises a visual display, support electronics, power supply, housing, electrical connectors, and external user controls.
Frame rate, most commonly expressed in frames per second or FPS, is typically the frequency (rate) at which consecutive images (frames) are captured or displayed. This definition applies to film and video cameras, computer animation, and motion capture systems. In these contexts, frame rate may be used interchangeably with frame frequency and refresh rate, which are expressed in hertz. Additionally, in the context of computer graphics performance, FPS is the rate at which a system, particularly a GPU, is able to generate frames, and refresh rate is the frequency at which a display shows completed frames. In electronic camera specifications, frame rate refers to the maximum possible rate at which frames can be captured; in practice, other settings may reduce the actual capture frequency below that maximum.
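In the computer-graphics sense, frame rate can be estimated by timing how many frames a loop completes in a fixed window; a minimal sketch, with a sleep standing in for real rendering work:

```python
import time

def measure_fps(render_one_frame, sample_seconds: float = 1.0) -> float:
    """Count how many times `render_one_frame` runs in a fixed window."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < sample_seconds:
        render_one_frame()
        frames += 1
    return frames / sample_seconds

# Stand-in workload: sleep ~16.7 ms per "frame", so ~60 FPS expected.
print(f"{measure_fps(lambda: time.sleep(1 / 60)):.1f} FPS")
```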
Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured consecutively. This enhances the perception of motion for the viewer and reduces flicker by taking advantage of the characteristics of the human visual system.
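A sketch of how one frame decomposes into its two fields, under the simplifying assumption that a frame is just a list of scan lines:

```python
def to_fields(frame_lines):
    """Split a progressive frame into its two interlaced fields:
    the even (top) lines and the odd (bottom) lines."""
    return frame_lines[0::2], frame_lines[1::2]

frame = [f"line {i}" for i in range(6)]
top_field, bottom_field = to_fields(frame)
print(top_field)     # ['line 0', 'line 2', 'line 4']
print(bottom_field)  # ['line 1', 'line 3', 'line 5']
```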
Progressive scanning is a format of displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to interlaced video used in traditional analog television systems, where only the odd lines, then the even lines of each frame are drawn alternately, so that only half the number of actual image frames are used to produce video. The system was originally known as "sequential scanning" when it was used in the Baird 240-line television transmissions from Alexandra Palace, United Kingdom in 1936. It was also used in Baird's experimental transmissions using 30 lines in the 1920s. Progressive scanning became universally used in computer screens beginning in the early 21st century.
The refresh rate, also known as vertical refresh rate or vertical scan rate in terminology originating with cathode-ray tubes (CRTs), is the number of times per second that a raster-based display device displays a new image. This is independent of frame rate, which describes how many images are stored or generated every second by the device driving the display. On CRT displays, higher refresh rates produce less flickering, thereby reducing eye strain. In other technologies, such as liquid-crystal displays, the refresh rate affects only how often the image can potentially be updated.
Deinterlacing is the process of converting interlaced video into a non-interlaced or progressive form. Interlaced video signals are commonly found in analog television, VHS, Laserdisc, digital television (HDTV) when in the 1080i format, some DVD titles, and a smaller number of Blu-ray discs.
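One of the simplest deinterlacing strategies is line doubling ("bob"): each field is stretched back to full height by repeating its lines. A sketch, again treating a field as a list of scan lines; a real deinterlacer would interpolate between lines or weave two fields together instead:

```python
def bob_deinterlace(field):
    """Rebuild a full-height frame from a single field by naively
    duplicating each line in place of the missing one."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)  # naive stand-in for the missing line
    return frame

print(bob_deinterlace(["line 0", "line 2"]))
# ['line 0', 'line 0', 'line 2', 'line 2']
```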
High-definition video is video of higher resolution and quality than standard-definition. While there is no standardized meaning for high-definition, generally any video image with considerably more than 480 vertical scan lines (North America) or 576 vertical lines (Europe) is considered high-definition. 480 scan lines is generally the minimum, even though the majority of systems greatly exceed that. Images of standard resolution captured at rates faster than normal by a high-speed camera may be considered high-definition in some contexts. Some television series shot on high-definition video are made to look as if they have been shot on film, a technique often known as filmizing.
1080p is a set of HDTV high-definition video modes characterized by 1,920 pixels displayed across the screen horizontally and 1,080 pixels down the screen vertically; the p stands for progressive scan, i.e. non-interlaced. The term usually assumes a widescreen aspect ratio of 16:9, implying a resolution of 2.1 megapixels. It is often marketed as Full HD or FHD, to contrast 1080p with 720p resolution screens. Although 1080p is sometimes referred to as 2K resolution, other sources differentiate between 1080p and (true) 2K resolution.
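The 2.1-megapixel figure follows directly from the pixel grid; a quick check:

```python
width, height = 1920, 1080
pixels = width * height
print(f"{pixels:,} pixels = {pixels / 1e6:.1f} megapixels")  # 2,073,600 -> 2.1
print(f"aspect ratio: {width / height:.3f}")                 # ~1.778, i.e. 16:9
```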
An active shutter 3D system is a technique of displaying stereoscopic 3D images. It works by only presenting the image intended for the left eye while blocking the right eye's view, then presenting the right-eye image while blocking the left eye, and repeating this so rapidly that the interruptions do not interfere with the perceived fusion of the two images into a single 3D image.
Screen tearing is a visual artifact in video display where a display device shows information from multiple frames in a single screen draw, typically because frame delivery is not synchronized with the display's refresh cycle.
High-motion is the characteristic of video or film footage displayed at a sufficiently high frame rate that moving images do not blur or strobe even when tracked closely by the eye. The most common forms of high motion are NTSC and PAL video at their native display rates. Movie film does not portray high motion even when shown on television monitors.
Display motion blur, also called HDTV blur and LCD motion blur, refers to several visual artifacts that are frequently found on modern consumer high-definition television sets and flat-panel displays for computers.
Uncompressed video is digital video that either has never been compressed or was generated by decompressing previously compressed digital video. It is commonly used by video cameras, video monitors, video recording devices, and in video processors that perform functions such as image resizing, image rotation, deinterlacing, and text and graphics overlay. It is conveyed over various types of baseband digital video interfaces, such as HDMI, DVI, DisplayPort and SDI. Standards also exist for the carriage of uncompressed video over computer networks.
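The bandwidth cost of uncompressed video follows from resolution, frame rate, and bits per pixel; a sketch using common 1080p parameters as assumptions, ignoring blanking intervals and audio:

```python
def uncompressed_bitrate_gbps(width, height, fps, bits_per_pixel=24):
    """Raw video bitrate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

# 1080p at 60 FPS with 8 bits per RGB channel: roughly 3 Gbit/s,
# which is why these interfaces need so much link bandwidth.
print(f"{uncompressed_bitrate_gbps(1920, 1080, 60):.2f} Gbit/s")
```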
In computing, lag is the delay (latency) between a user's action (input) and the reaction of the server supporting the task, which must then be sent back to the client.
Display lag is a phenomenon associated with most types of liquid-crystal displays (LCDs), such as those in smartphones and computers, and nearly all types of high-definition televisions (HDTVs). It refers to latency, or the lag between when a signal is sent to the display and when the display starts to show that signal. This lag time has been measured as high as 68 ms, the equivalent of roughly four frames on a 60 Hz display. Display lag is not to be confused with pixel response time, which is the amount of time it takes a pixel to change from one brightness value to another. Currently, the majority of manufacturers quote the pixel response time but neglect to report display lag.
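Expressing display lag in refresh intervals makes the 68 ms figure concrete; a quick conversion:

```python
def lag_in_frames(lag_ms: float, refresh_hz: float) -> float:
    """How many refresh intervals a given display lag spans."""
    return lag_ms * refresh_hz / 1000.0

print(f"{lag_in_frames(68, 60):.1f} frames")  # ~4.1 frames at 60 Hz
```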
Netcode is a blanket term most commonly used by gamers for networking in online games, often referring to synchronization issues between clients and servers. Players often blame "bad netcode" when they experience lag or when their inputs are dropped. Common causes of such issues include high latency between server and client, packet loss, and network congestion, as well as factors independent of network quality, such as frame rendering time or inconsistent frame rates. Netcode may be designed to uphold a synchronous and seamless experience between users despite these networking challenges.
G-Sync is a proprietary adaptive sync technology developed by Nvidia, aimed primarily at eliminating screen tearing and the need for software alternatives such as Vsync. G-Sync allows a video display's refresh rate to adapt to the frame rate of the outputting device, rather than the outputting device adapting to the display. Traditionally, a display could refresh halfway through the output of a frame, resulting in screen tearing, with two or more frames shown at once. To use G-Sync, a display must contain a proprietary G-Sync module sold by Nvidia. AMD has released a similar technology called FreeSync, which serves the same function as G-Sync but is royalty-free.
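A toy model of the adaptive-sync behavior described above: instead of refreshing on a fixed clock, the display refreshes when a frame arrives, clamped between its minimum and maximum refresh intervals. A sketch; the interval bounds here are illustrative assumptions, not any vendor's specification:

```python
def next_refresh(frame_ready_ms, last_refresh_ms,
                 min_interval_ms=1000 / 144, max_interval_ms=1000 / 30):
    """When an adaptive-sync display refreshes next: as soon as a
    frame is ready, but no sooner than min_interval and no later
    than max_interval after the previous refresh."""
    earliest = last_refresh_ms + min_interval_ms
    latest = last_refresh_ms + max_interval_ms
    return min(max(frame_ready_ms, earliest), latest)

# Frame arrives 12 ms after the last refresh: the 144 Hz minimum
# interval (~6.9 ms) has passed, so it is shown immediately at 12 ms.
print(f"{next_refresh(frame_ready_ms=12, last_refresh_ms=0):.1f} ms")
```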
FreeSync is an adaptive synchronization technology for LCD and OLED displays that support a variable refresh rate aimed at avoiding tearing and reducing stuttering caused by misalignment between the screen's refresh rate and the content's frame rate.
GPUOpen is a middleware software suite originally developed by AMD's Radeon Technologies Group that offers advanced visual effects for computer games. It was released in 2016. GPUOpen serves as an alternative to, and a direct competitor of Nvidia GameWorks. GPUOpen is similar to GameWorks in that it encompasses several different graphics technologies as its main components that were previously independent and separate from one another. However, GPUOpen is partially open source software, unlike GameWorks which is proprietary and closed.