Netcode is a blanket term most commonly used by gamers to describe the networking of online games, often in reference to synchronization issues between clients and servers. Players often blame "bad netcode" when they experience lag or when their inputs are dropped. Common causes of such issues include high latency between server and client, packet loss, and network congestion, as well as factors independent of network quality such as frame rendering time or inconsistent frame rates. [1] [2] Netcode may be designed to uphold a synchronous and seamless experience between users despite these networking challenges.
Unlike a local game, in which the inputs of all players are executed instantly in the same simulation or instance of the game, an online game runs several parallel simulations (one for each player). In each simulation the local player's inputs are received instantly, while the inputs for the same frame from other players arrive with a certain delay (greater or lesser depending on the physical distance between the players, the quality and speed of the players' network connections, and so on). [3] During an online match, games must receive and process players' inputs within a certain time for each frame (approximately 16.67 ms per frame at 60 FPS), and if a remote player's input for a particular frame (for example, frame number 10) arrives only when a later frame is already running (for example, frame number 20, roughly 166.7 ms later), the player simulations desynchronize. There are two main solutions to this conflict that keep the game running smoothly:
The classic solution to this problem is the use of a delay-based netcode. When the inputs of a remote player arrive late, the game delays the inputs of the local player by the same amount so that the two sets of inputs can be synchronized and run simultaneously. This added delay can be disruptive for players (especially when latency is high), but overall the change may not be very noticeable. These delays can nonetheless be inconsistent due to sudden fluctuations in latency: should the latency between players exceed the established buffer window for the remote player, the game must wait, causing the screens to "freeze". This occurs because a delay-based netcode does not allow the simulation to continue until it receives the inputs from all the players for the frame in question. [4] This variable delay creates an inconsistent and unresponsive experience compared to offline play (or to a LAN game), and can negatively affect player performance in timing-sensitive and fast-paced genres such as fighting games. [5]
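The scheme can be illustrated with a minimal sketch, assuming a fixed-timestep loop and a hypothetical simulate(inputs) step; all names are illustrative rather than taken from any particular engine.

```python
# A minimal sketch of delay-based input handling (illustrative names only).

INPUT_DELAY = 3                           # local inputs are scheduled this many frames ahead
players = ("local", "remote")
input_buffer = {p: {} for p in players}   # player -> {frame number: input}
current_frame = 0

def on_local_input(button):
    # Local input is not applied immediately; it is scheduled INPUT_DELAY frames ahead,
    # giving the matching remote input time to arrive over the network.
    input_buffer["local"][current_frame + INPUT_DELAY] = button

def on_remote_input(frame, button):
    # Remote inputs arrive over the network already tagged with their frame number.
    input_buffer["remote"][frame] = button

def try_advance_frame(simulate):
    global current_frame
    # The simulation may only advance once inputs from *every* player for the
    # current frame are available; otherwise the game stalls ("freezes").
    if all(current_frame in input_buffer[p] for p in players):
        inputs = {p: input_buffer[p].pop(current_frame) for p in players}
        simulate(inputs)
        current_frame += 1
        return True
    return False
```

The key property sits in try_advance_frame: the frame cannot advance until every player's input for it is present, which is exactly what produces the "freeze" described above when latency exceeds the buffer window.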
An alternative system to the previous netcode is rollback netcode. This system runs the inputs of the local player immediately (so they are not delayed as with delay-based netcode), as if it were an offline game, and predicts the inputs of the remote player or players instead of waiting for them (assuming they will make the same input as in the previous tick). Once these remote inputs arrive (say, 45 ms later), the game can act in two ways: if the prediction was correct, the game continues as-is, in a completely seamless way; if the prediction was incorrect, the game state is reverted and gameplay continues from the corrected state, which appears as a "jump" to the other player or players (equivalent to 45 ms, following the example). [1] Some games use a hybrid solution to disguise these "jumps" (which can become problematic as latency between players grows, since there is less and less time to react to other players' actions), applying a small fixed input delay and then using rollback beyond it. Rollback is quite effective at concealing lag spikes and other inconsistencies in the users' connections, because predictions are often correct and players do not even notice them. Nevertheless, this system can be troublesome whenever a client's game slows down (usually due to overheating), since the machines then exchange ticks at unequal rates. This produces visual glitches that interrupt the gameplay of the players who receive inputs at a slower pace, while the player whose game has slowed down gains an advantage over the rest by receiving inputs from others at the normal rate (this is known as one-sided rollback). [6] To address this uneven input flow (and, consequently, an uneven frame flow), there are standard solutions such as waiting for the late inputs to arrive on all machines (similar to the delay-based model) or more ingenious[citation needed] solutions such as the one currently used in Skullgirls, which consists of systematically omitting one frame out of every seven so that, when the game encounters the problem in question, it can recover the skipped frames and gradually resynchronize the instances of the game on the various machines. [7]
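The core of the technique can be sketched as follows, assuming a deterministic step(state, inputs) function and a game state that can be copied; the names are illustrative and do not correspond to any specific engine or library.

```python
import copy

# A minimal rollback sketch: run local inputs immediately, predict remote inputs,
# and rewind/resimulate when a prediction turns out to be wrong.

saved_states = {}        # frame -> snapshot of the game state at the start of that frame
confirmed = {}           # frame -> confirmed remote input
predicted = {}           # frame -> remote input we guessed (repeat of the last one used)
local_inputs = {}        # frame -> local input
state = {"frame": 0}     # stand-in for the real game state
current_frame = 0

def advance(step, local_input):
    """Run the local input immediately; predict the remote input."""
    global state, current_frame
    saved_states[current_frame] = copy.deepcopy(state)
    local_inputs[current_frame] = local_input
    guess = confirmed.get(current_frame, predicted.get(current_frame - 1))
    predicted[current_frame] = guess
    state = step(state, {"local": local_input, "remote": guess})
    current_frame += 1

def on_remote_input(step, frame, remote_input):
    """When the real remote input arrives, roll back and resimulate if we guessed wrong."""
    global state
    confirmed[frame] = remote_input
    if frame < current_frame and predicted.get(frame) != remote_input:
        state = copy.deepcopy(saved_states[frame])   # rewind to the mispredicted frame
        for f in range(frame, current_frame):        # resimulate up to the present
            saved_states[f] = copy.deepcopy(state)   # refresh the snapshot with corrected data
            guess = confirmed.get(f, remote_input)   # use real inputs where known
            state = step(state, {"local": local_inputs[f], "remote": guess})
```

The two essential requirements are visible in the sketch: every frame's state must be cheap to snapshot, and the simulation must be deterministic so that resimulating the same inputs reproduces the same result.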
Rollback netcode requires the game engine to be able to roll back its state, which demands modifications to many existing engines; the implementation of this system can therefore be problematic and expensive in AAA games (which usually have a solid engine and high network traffic), as noted by Dragon Ball FighterZ producer Tomoko Hiroki, among others. [8]
Although this system is often associated with a peer-to-peer architecture and fighting games, there are forms of rollback networking that are also commonly used in client-server architectures (for instance, aggressive schedulers found in database management systems include rollback functionality) and in other video game genres. [1]
There is a popular MIT-licensed library named GGPO designed to help implement rollback networking in games (mainly fighting games). [9]
Latency is unavoidable in online games, and the quality of the player's experience is strictly tied to it (the more latency there is between players, the greater the feeling that the game is not responsive to their inputs). [1] The latency of the players' network connections (which is largely out of a game's control) is not the only factor; latency is also inherent in the way the game simulations are run. There are several lag compensation methods used to disguise or cope with latency (especially with high latency values). [10]
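One widely used lag compensation method is server-side rewind for hit detection, in which the server briefly rewinds other players to where the shooter saw them. A minimal sketch, assuming a hypothetical hit_test function and illustrative names:

```python
import bisect

# Server-side rewind sketch: the server keeps a short history of player positions
# and evaluates a shot against the positions the shooter actually saw when firing.

HISTORY_SECONDS = 1.0

class PositionHistory:
    """Keeps a short rolling window of (timestamp, positions) snapshots on the server."""
    def __init__(self):
        self.times = []
        self.snapshots = []

    def record(self, now, positions):
        self.times.append(now)
        self.snapshots.append(dict(positions))
        while self.times and now - self.times[0] > HISTORY_SECONDS:
            self.times.pop(0)
            self.snapshots.pop(0)

    def rewind_to(self, t):
        # Return the snapshot closest to (but not after) the requested time.
        i = max(0, bisect.bisect_right(self.times, t) - 1)
        return self.snapshots[i] if self.snapshots else {}

def resolve_shot(history, hit_test, shooter, fire_time, shooter_latency):
    # Rewind other players to where the shooter saw them when firing,
    # run the hit test against that past state, then continue with the present.
    past_positions = history.rewind_to(fire_time - shooter_latency)
    return hit_test(shooter, past_positions)
```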
A single update of a game simulation is known as a tick. The rate at which the simulation is run on a server is often referred to as the server's tickrate; this is essentially the server equivalent of a client's frame rate, absent any rendering system. [11] Tickrate is limited by the length of time it takes to run the simulation, and is often intentionally limited further to reduce instability introduced by a fluctuating tickrate and to reduce CPU and data-transmission costs. A lower tickrate increases latency in the synchronization of the game simulation between the server and clients. [12] Tickrates for first-person shooters range from 128 ticks per second (as in Valorant), to 64 ticks per second (in games such as Counter-Strike: Global Offensive and Overwatch), to 30 ticks per second (as in Fortnite and Battlefield V's console edition), [13] down to 20 ticks per second (the controversial cases of Call of Duty: Modern Warfare, Call of Duty: Warzone and Apex Legends). [14] [15] A lower tickrate also naturally reduces the precision of the simulation, [11] which itself might cause problems if taken too far, or if the client and server simulations are running at significantly different rates.
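The relationship between tickrate and per-tick time budget is straightforward; a minimal sketch of a fixed-tickrate server loop, with illustrative function names and a tickrate taken from the examples above:

```python
import time

# A fixed-tickrate server loop sketch. `update_simulation` and `broadcast_state`
# are illustrative placeholders for the game's own functions.

TICKRATE = 64                      # ticks per second
TICK_INTERVAL = 1.0 / TICKRATE     # ~15.6 ms of real time available per tick

def run_server(update_simulation, broadcast_state, running=lambda: True):
    next_tick = time.perf_counter()
    while running():
        update_simulation(TICK_INTERVAL)   # advance the authoritative simulation one tick
        broadcast_state()                  # send updates to clients (possibly at a lower rate)
        next_tick += TICK_INTERVAL
        # Sleep off whatever budget is left; if the simulation took longer than the
        # budget, the loop falls behind and the effective tickrate drops.
        time.sleep(max(0.0, next_tick - time.perf_counter()))
```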
Because of limitations in the amount of available bandwidth and in the CPU time taken by network communication, some games prioritize certain vital communications while limiting the frequency and priority of less important information. As with tickrate, this effectively increases synchronization latency. Game engines may limit how often simulation updates are sent to a particular client (or for particular objects in the game's world), and may reduce the precision of some values sent over the network, to help with bandwidth use. This lack of precision may in some instances be noticeable. [11] [16]
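The reduced-precision idea can be illustrated with a small sketch in which a position within a known world range is quantized to 16 bits before transmission; the world bounds and packet layout are assumptions made for the example.

```python
import struct

# Quantization sketch: a position in a known range is packed into 16 bits instead
# of a 32-bit float, trading a small amount of precision for bandwidth.

WORLD_MIN, WORLD_MAX = -1024.0, 1024.0

def quantize(x, bits=16):
    span = WORLD_MAX - WORLD_MIN
    levels = (1 << bits) - 1
    return round((x - WORLD_MIN) / span * levels)

def dequantize(q, bits=16):
    span = WORLD_MAX - WORLD_MIN
    levels = (1 << bits) - 1
    return WORLD_MIN + q / levels * span

# A full-precision float costs 4 bytes; the quantized value fits in 2, at the cost
# of roughly 0.016 units of worst-case positional error over this range.
packet = struct.pack("<H", quantize(313.37))
assert abs(dequantize(*struct.unpack("<H", packet)) - 313.37) < 0.05
```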
Various simulation synchronization errors between machines can also fall under the "netcode issues" blanket. These may include bugs which cause the simulation to proceed differently on one machine than on another, or which cause some things to not be communicated when the user perceives that they ought to be. [2] Traditionally, real-time strategy games (such as Age of Empires) have used lockstep protocol peer-to-peer networking models where it is assumed the simulation will run exactly the same on all clients; if, however, one client falls out of step for any reason, the desynchronization may compound and be unrecoverable. [11] [17]
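Lockstep games commonly detect such desynchronization by exchanging checksums of the simulation state each tick; a minimal sketch, assuming an illustrative serializable state and some out-of-band way to exchange the hashes:

```python
import hashlib
import json

# Desync detection sketch for a lockstep model: every client hashes its simulation
# state each tick and compares the hash against those reported by its peers.

def state_checksum(state):
    # The state must serialize identically on every machine (stable key order,
    # no platform-dependent floating-point behaviour) for the hashes to match.
    blob = json.dumps(state, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def verify_tick(tick, local_state, remote_checksums):
    local = state_checksum(local_state)
    for player, remote in remote_checksums.items():
        if remote != local:
            # In a lockstep game there is no authoritative copy to fall back on,
            # so a mismatch here is typically unrecoverable and ends the match.
            raise RuntimeError(f"Desync at tick {tick}: {player} diverged")
```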
A game's choice of transport layer protocol (and its management and coding) can also affect perceived networking issues.
If a game uses the Transmission Control Protocol (TCP), there will be increased latency between players. This protocol is based on a connection established between two machines, over which they can exchange data and read it. These connections are very reliable, stable, ordered and easy to implement. However, they are not well suited to the network speeds that fast-action games require, because this type of protocol automatically groups data into packets, which are not sent until a certain volume of information is reached (unless Nagle's algorithm is disabled) and which travel through the connection established between the machines rather than directly, sacrificing speed for reliability. This type of protocol also tends to respond very slowly whenever a packet is lost, or when packets arrive in an incorrect order or duplicated, which can be very detrimental to a real-time online game (this protocol was not designed for this type of software).
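For reference, disabling Nagle's algorithm is usually a single socket option; a minimal sketch (the host and port passed in are placeholders chosen by the caller):

```python
import socket

def open_low_latency_tcp(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # TCP_NODELAY disables Nagle's algorithm, so small input messages are sent
    # immediately instead of being coalesced into larger packets.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.connect((host, port))
    return sock
```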
If the game instead uses the User Datagram Protocol (UDP), the communication between machines will be much faster, because data is sent and received directly rather than over an established connection. This protocol is much simpler than the previous one, but it lacks TCP's reliability and stability and requires the developer to implement their own code for functions essential to communication between machines that TCP otherwise provides (such as dividing data into packets, automatic packet-loss detection, and so on); this increases the engine's complexity and might itself lead to issues. [18]
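A minimal sketch of the kind of bookkeeping a UDP-based game has to add itself, using a hypothetical 4-byte sequence-number header so the receiver can detect loss, reordering and duplicates:

```python
import struct

# Sequence-number bookkeeping over UDP (illustrative message format).

HEADER = struct.Struct("<I")     # 4-byte little-endian sequence number

def make_sender(sock, addr):
    seq = 0
    def send(payload: bytes):
        nonlocal seq
        sock.sendto(HEADER.pack(seq) + payload, addr)   # prepend the sequence number
        seq += 1
    return send

def make_receiver():
    latest = -1
    def on_datagram(data: bytes):
        nonlocal latest
        seq = HEADER.unpack(data[:HEADER.size])[0]
        payload = data[HEADER.size:]
        if seq <= latest:
            return None              # duplicate or out-of-date datagram: drop it
        lost = seq - latest - 1      # gap in sequence numbers = datagrams lost or still in flight
        latest = seq
        return payload, lost
    return on_datagram
```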