By Nate Otis

Networking in Games

Updated: Jun 7, 2023

Welcome back!


Over the last week, I've spent some time reading up on common networking considerations for video games, and I'm going to attempt to condense that learning here. Right at the top, I want to clarify that what follows is my summary of the various articles and resources I found on the topic of gameplay networking over the past week. While my research so far has been more than cursory, I fully expect that there will be errors in my summary of this information. This is merely meant as a way to showcase my current level of understanding of the topic and, hopefully, to reinforce the concepts I have learned over the past week.


Now, with that long-winded explanation of my lack of expertise out of the way, let's get into it!


Network Architecture


Games are largely divided into two network architectures: peer-to-peer and client-server models.

Peer-to-Peer Networks

In peer-to-peer networks, data can be sent between any pair of players. There are two forms of peer-to-peer network structures that are common.


In the first, a single player acts as the "host": that player has authority over the simulation and is responsible for distributing data to all other players. In the second, all devices connected to the game work together to maintain the network, and each device shares the load of running the simulation.

Some examples of games that use a P2P model are:

  • Super Smash Bros. Ultimate

  • Bloodborne

  • Animal Crossing: New Horizons

  • Halo: Reach

P2P works fine for some games, but it has drawbacks that certain games can't look past:

  • Cheating: There is no "neutral" authority over the game (at least one player has authority), so it is much more difficult to prevent malicious players from manipulating the game

  • Stability: While this is less of a problem in the second P2P model listed above, when a single player acts as the "host," the stability and latency of everyone's connection depend entirely on that player's connection

  • Latency: Assuming that all clients are located physically close to one another, peer-to-peer connections can work well, as the physical distance data must travel is greatly reduced. However, the consumer infrastructure these connections are routed through (often referred to as "last mile" internet) typically has slower upload speeds and higher packet loss on average, so P2P connections can suffer from higher latency.

With these drawbacks in mind, many modern games have adopted a client-server network architecture.


Client-Server Networks

In client-server networks, data is only shared between each client and the server. The server is then responsible for sharing that data with all other clients.


When discussing client-server architectures we often describe them as being either authoritative or non-authoritative (or some hybrid of the two). For a client-server model to be "authoritative" means that the server is the ultimate authority over the state of the simulation and all game logic. In these networks, clients are simply responsible for transmitting player inputs to the server and rendering the results of those validated inputs received from the server.


In a non-authoritative network, each client acts as the authority over its own simulation, receiving updates about the other clients via the server. As with P2P models, though, non-authoritative client-server networks are extremely susceptible to cheating, so most modern competitive games rely on authoritative client-server networks.
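To make the authoritative model concrete, here is a minimal sketch of server-side input validation. Everything here is hypothetical (the `Player` class, `MAX_SPEED`, and `apply_input` are illustration names, not from any real engine): the point is that the server clamps what clients claim rather than trusting them.

```python
from dataclasses import dataclass

MAX_SPEED = 5.0  # hypothetical server-enforced movement limit per tick

@dataclass
class Player:
    x: float = 0.0
    y: float = 0.0

def apply_input(player: Player, dx: float, dy: float) -> None:
    """Authoritative validation: clamp movement instead of trusting the client."""
    dx = max(-MAX_SPEED, min(MAX_SPEED, dx))
    dy = max(-MAX_SPEED, min(MAX_SPEED, dy))
    player.x += dx
    player.y += dy

p = Player()
apply_input(p, 100.0, 0.0)  # a cheating client claims a huge move
print(p.x)                  # the server only allowed 5.0
```

In a real game the validation would be far richer (physics, cooldowns, line of sight), but the shape is the same: inputs in, validated state out.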


Some examples of games that use a client-server model are:

  • Valorant

  • Fortnite

  • Apex Legends

  • Overwatch


Transport Protocol

Now that we've discussed the network architectures that are common to games, we need to discuss how games transport data between clients and the server, also known as the transport protocol.


There are two common internet protocols that are used in games, TCP and UDP. Both of these protocols are built off of IP (Internet Protocol). IP allows for transmission of a "packet" or collection of bytes from one device to another over the internet, but it provides no guarantee that the packet will arrive, that the packet will arrive only once, that the packet is non-corrupted, or that the sequence of bytes in the packet is in the correct order.


TCP

TCP (Transmission Control Protocol) builds off of IP and ensures a reliable, ordered, and error-checked connection between two devices over the internet. However, this reliability comes at the cost of speed. To ensure that all packets are delivered, and delivered in the correct order, the sender waits for an acknowledgment from the receiver that each packet was properly received. If this acknowledgment never comes, the sender sends the packet again.
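As a small illustration of what TCP gives you for free, here is a sketch using Python's standard `socket` module: a loopback connection where the bytes are guaranteed to arrive intact and in order (the `"move:1,0"` payload is just a made-up example message).

```python
import socket

# A TCP stream over loopback: reliable, ordered, error-checked delivery.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _ = server.accept()

client.sendall(b"move:1,0")  # TCP guarantees this arrives, in order
data = conn.recv(1024)
print(data)                  # b'move:1,0'

for s in (client, conn, server):
    s.close()
```

All of that guarantee machinery (acknowledgments, retransmission) happens invisibly below the `sendall`/`recv` calls, which is exactly where the latency cost hides.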


Additionally, because TCP ensures that packets are received in order, if a packet is lost along the way, no further packets can be processed until the lost packet arrives, even if later packets have already reached their destination (a problem known as head-of-line blocking).


The cost of these checks makes TCP very reliable but also quite slow. Because of this, it is not the best fit for networked games, and most modern games have turned away from TCP sockets in favor of UDP. Still, here are some examples of games that use TCP sockets:

  • Minecraft (Java Edition)

  • World of Warcraft

  • Terraria


UDP

UDP (User Datagram Protocol) acts as a thin layer over IP and thus has many of the same faults as IP: packet loss, packet duplication, packet corruption, and incorrect ordering of packet data. In exchange, it offers speed, which is immensely important in networked games.

Because latency matters so much in multiplayer games, UDP is often employed with a custom protocol layered on top that alleviates some of UDP's shortcomings without paying the full latency cost of TCP. For example, some of these custom protocols "mark" certain packets as reliable or unreliable, so that dropped unreliable packets can simply be ignored while reliable packets are retransmitted. Other custom protocols split the data into multiple streams so that lost packets in one stream do not slow down the others (so that a lost chat log packet will not slow down the player input packet, for example). Generally speaking, these custom UDP protocols attempt to add reliability in ways that make more sense in the context of multiplayer video games than TCP's blanket guarantees.
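As a sketch of the "mark packets as reliable or unreliable" idea, here is one possible (entirely hypothetical) header layout built with Python's `struct` module: a sequence number plus a reliability flag prepended to each payload. A real protocol would add acknowledgments and retransmission timers on top; this just shows the framing.

```python
import struct

# Hypothetical custom header over UDP: uint32 sequence number + uint8 flag.
HEADER = struct.Struct("!IB")  # network byte order
RELIABLE, UNRELIABLE = 1, 0

def make_packet(seq: int, reliable: bool, payload: bytes) -> bytes:
    """Prepend the custom header to a payload."""
    return HEADER.pack(seq, RELIABLE if reliable else UNRELIABLE) + payload

def parse_packet(packet: bytes):
    """Split a received datagram back into (sequence, reliable?, payload)."""
    seq, flag = HEADER.unpack_from(packet)
    return seq, flag == RELIABLE, packet[HEADER.size:]

pkt = make_packet(42, True, b"player_input")
seq, reliable, payload = parse_packet(pkt)
print(seq, reliable, payload)  # 42 True b'player_input'
```

The receiver would acknowledge reliable sequence numbers and silently skip gaps in unreliable ones, which is how these protocols avoid TCP-style head-of-line blocking.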


Application Protocol

Now that we've identified how we will send information back and forth between the client and server, we need to figure out what information to send from the client to the server and from the server to the client as well as what format to send that data in. This is referred to as the Application Protocol.


As I mentioned previously, the common structure is that clients send their inputs to the server and the server sends the current game state back to all clients. The server should not attempt to send the entire state of the game to each client, though. That would take up too much bandwidth, and it would make cheating easier: a player in Fortnite does not need to know the exact location of the players on the opposite side of the map, and if that data were sent, a malicious player could sniff the network traffic to gain an unfair advantage.
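This filtering is often called "interest management." A minimal sketch, with a made-up view radius and entity format, might look like this:

```python
import math

VIEW_RADIUS = 100.0  # hypothetical: how far a client is allowed to "see"

def visible_entities(client_pos, entities):
    """Return only the entities this client should be told about."""
    cx, cy = client_pos
    return [e for e in entities
            if math.hypot(e["x"] - cx, e["y"] - cy) <= VIEW_RADIUS]

entities = [{"id": 1, "x": 10.0, "y": 0.0},
            {"id": 2, "x": 500.0, "y": 500.0}]  # far across the map

nearby = visible_entities((0.0, 0.0), entities)
print(nearby)  # only entity 1; entity 2 is never sent to this client
```

Since entity 2's position never leaves the server, there is nothing on the wire for a wallhack-style sniffer to read.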


In addition to limiting how much information the server sends about the state of the game to each client, games commonly reduce the physical size of the data that makes up this game state in an attempt to use as little bandwidth as possible to transmit data between server and client.


Serialization

One of the first steps in the process of getting data ready to transmit from a client to the server or vice versa is to convert the data into a format that is preferable for transmission across the internet, commonly binary because of how compact it is.
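A quick sketch of what binary serialization buys you, using Python's standard `struct` module. The snapshot fields and layout here are invented for illustration: a player id, a 2D position, and a health value packed into a fixed 11-byte layout instead of a much larger text format like JSON.

```python
import struct

# Hypothetical snapshot layout: uint16 id, two float32 coords, uint8 health.
STATE = struct.Struct("!HffB")  # network byte order, 11 bytes total

packed = STATE.pack(7, 12.5, -3.25, 100)
print(len(packed))  # 11 bytes on the wire

player_id, x, y, health = STATE.unpack(packed)
print(player_id, x, y, health)
```

The equivalent JSON (`{"id": 7, "x": 12.5, "y": -3.25, "health": 100}`) is roughly four times the size before you even count quoting and field names, which adds up fast at dozens of snapshots per second.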


Compression

The next step is to attempt to compress the data in order to send more data per packet. Some common compression techniques employed in games are:

  • Bit packing: Using exactly the number of bits needed to represent a value. If you have a data member that only has 8 possible values, use 3 bits to represent that value rather than a whole byte.

  • Quantization: A lossy compression technique that involves only using a subset of values to represent a larger whole. This involves losing some amount of precision but in the case where that precision is negligible, it can provide efficiency.

  • Lossless Compression Algorithms: Some common algorithms for lossless compression are Huffman coding (which was used in Quake 3), zlib, and run-length encoding.

  • Delta Compression: A compression technique that involves sending only the difference in a value since the last game state. This was also utilized in Quake 3, and it reduces data sizes by transmitting only the offset, not the entire value each time.
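Two of the techniques above can be sketched in a few lines (all the specific numbers, ranges, and bit widths here are made up for illustration):

```python
def quantize(value: float, min_v: float, max_v: float, bits: int) -> int:
    """Lossy: map a float in [min_v, max_v] onto an integer of `bits` bits."""
    levels = (1 << bits) - 1
    return round((value - min_v) / (max_v - min_v) * levels)

def dequantize(q: int, min_v: float, max_v: float, bits: int) -> float:
    """Recover an approximation of the original float."""
    levels = (1 << bits) - 1
    return min_v + q / levels * (max_v - min_v)

# Quantization: 10 bits instead of a 32-bit float, so 1024 steps
# across a hypothetical 0-1000 unit map axis.
q = quantize(123.4, 0.0, 1000.0, 10)
approx = dequantize(q, 0.0, 1000.0, 10)
print(q, approx)  # small precision loss, big bandwidth win

# Delta compression: send only the change since the last known state.
last_sent, current = 1000, 1012
delta = current - last_sent          # 12 fits in far fewer bits than 1012
reconstructed = last_sent + delta
print(delta, reconstructed)
```

Quake 3 combined ideas like these: deltas against the last acknowledged snapshot, then Huffman coding over the result.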

Encryption

All modern games also employ some form of encryption on the data they transmit back and forth in order to ensure privacy, prevent cheating, and verify that each player is who they say they are.
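One small, related building block is message authentication. This sketch uses Python's standard `hmac` module (the key and payload are placeholders); an HMAC tag lets the server verify that a packet was produced by someone holding the session key and was not tampered with in transit. Note this provides integrity and authenticity but not privacy; real games pair it with actual encryption.

```python
import hashlib
import hmac

key = b"session-secret"   # hypothetical per-session shared key
payload = b"move:1,0"     # hypothetical packet contents

# Sender attaches a tag computed over the payload.
tag = hmac.new(key, payload, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, payload, hashlib.sha256).digest()
ok = hmac.compare_digest(tag, expected)
print(ok)  # True; any bit flip in payload or a wrong key makes this False
```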


Application Logic

The final step in the networking process is the application logic step, or how to use the data that has been exchanged between the clients and the server to update each of the clients.


Because the server must validate client inputs in a client-server architecture, there will always be some amount of latency between the time of an input on the client and when the client receives the updated game state from the server. And, between game-state updates received from the server, every object in the world would simply freeze while waiting for the next update.


This is obviously an unacceptable player experience so there are a handful of techniques that are used to mitigate latency in client-server connections. I plan to review these techniques in detail in a later blog post so for now I will simply list them here:

  • Client-side prediction

  • Extrapolation (or dead reckoning)

  • Interpolation

  • Lag Compensation
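I'll save the details for that later post, but as a small taste of one of these, interpolation: the client renders entities slightly in the past, blending between the two most recent server snapshots (the tick values below are made up).

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b, with t in [0, 1]."""
    return a + (b - a) * t

# Entity's x position from the two latest server snapshots.
snapshot_old, snapshot_new = 10.0, 20.0

# Render halfway between the two ticks instead of freezing at the old one.
render_x = lerp(snapshot_old, snapshot_new, 0.5)
print(render_x)  # 15.0
```

The result is smooth motion between updates, at the cost of showing each remote entity a fraction of a second behind where the server says it really is.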


The ultimate goal of each of these techniques is to mitigate the latency that players feel while playing networked multiplayer games. That latency cannot be reduced to zero, so the effort is really about reducing it to the point where most players will not notice the difference between a networked experience and a single-player one.


And with that, we have reached the conclusion of this week's blog post! Next week I plan to look deeper into the network solutions implemented in common game engines (Unity and Unreal) in order to understand why they made the decisions that they did.


See you then!




