Traffic generation model

A traffic generation model is a stochastic model of the traffic flows or data sources in a communication network, for example a cellular network or a computer network. A packet generation model is a traffic generation model of the packet flows or data sources in a packet-switched network. For example, a web traffic model is a model of the data that is sent or received by a user's web browser. These models are useful during the development of telecommunication technologies, for analysing the performance and capacity of various protocols, algorithms and network topologies.

Application

Network performance can be analyzed through network traffic measurement in a testbed network, using a network traffic generator such as iperf, bwping or Mausezahn. The traffic generator sends dummy packets, often with a unique packet identifier, making it possible to keep track of packet delivery in the network.
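
As a minimal sketch of this idea (not the behaviour of any particular tool), the following Python snippet sends dummy UDP packets that each carry a sequence number and a send timestamp, so that a receiver could detect loss, reordering and delay. The destination address, packet size and sending rate are illustrative placeholder values.

    # Sketch of a dummy-packet traffic generator (not iperf/bwping/Mausezahn).
    # Each packet carries a sequence number and a send timestamp so that the
    # receiver can track loss, reordering and one-way delay.
    import socket
    import struct
    import time

    DEST = ("192.0.2.1", 5001)   # example destination (TEST-NET-1 address)
    PACKET_SIZE = 1000           # bytes, example value
    RATE_PPS = 100               # packets per second, example value

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    while seq < 1000:
        header = struct.pack("!Id", seq, time.time())              # sequence number + timestamp
        payload = header + b"\x00" * (PACKET_SIZE - len(header))   # pad with dummy bytes
        sock.sendto(payload, DEST)
        seq += 1
        time.sleep(1.0 / RATE_PPS)                                  # constant-rate sender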

Numerical analysis using network simulation is often a less expensive approach.

An analytical approach using queueing theory may be possible for a simplified traffic model but is often too complicated if a realistic traffic model is used.

The greedy source model

A simplified packet data model is the greedy source model. It may be useful in analyzing the maximum throughput for best-effort traffic (without any quality-of-service guarantees). Many traffic generators are greedy sources.
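
A greedy source can be sketched as a sender that always has the next packet ready and hands it to the transport layer as soon as the previous one has been accepted, so the source itself is never the bottleneck. The sketch below uses a blocking TCP socket purely for illustration; the destination address, chunk size and total volume are placeholder values.

    # Sketch of a greedy source: always has data to send, never idles.
    import socket

    DEST = ("192.0.2.1", 5001)   # example destination
    CHUNK = b"\x00" * 1460       # example payload size (roughly one TCP segment)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(DEST)
    sent = 0
    while sent < 100_000_000:     # send 100 MB as fast as the connection allows
        sock.sendall(CHUNK)       # blocks only when the send buffer is full,
        sent += len(CHUNK)        # i.e. the network, not the source, is the bottleneck
    sock.close()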

Poisson traffic model

Another traditional simplified traffic generation model for packet data is the Poisson process, in which the packet inter-arrival times (and/or the packet lengths) are modeled as exponentially distributed random variables. With exponential inter-arrival times and a constant packet size, the system resembles an M/D/1 queue; when both the inter-arrival times and the packet sizes are exponentially distributed, it is an M/M/1 queue. [1]
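
As an illustration of why these queueing models are analytically attractive, their mean delays have simple closed forms. With arrival rate \lambda, service rate \mu and load \rho = \lambda/\mu < 1, the standard results are

    W_{M/M/1} = \frac{1}{\mu - \lambda}, \qquad
    W_q^{M/M/1} = \frac{\rho}{\mu(1 - \rho)}, \qquad
    W_q^{M/D/1} = \frac{\rho}{2\mu(1 - \rho)},

where W denotes the mean time a packet spends in the system (waiting plus transmission) and W_q the mean waiting time in the queue; the mean queueing delay of M/D/1 is exactly half that of M/M/1.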

Long-tail traffic models

However, the Poisson traffic model is memoryless, which means that it does not reflect the bursty nature of packet data, also known as long-range dependence. For a more realistic model, a self-similar process can be used as a long-tail traffic model, for example with inter-arrival times or burst sizes drawn from a heavy-tailed distribution such as the Pareto distribution.
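
For illustration, heavy-tailed inter-arrival times (or burst/file sizes) can be drawn from a Pareto distribution, as in the Python sketch below. The shape and scale parameters are arbitrary example values, not values prescribed by any standard traffic model.

    # Sketch: heavy-tailed (Pareto) inter-arrival times vs. exponential ones.
    import random

    ALPHA = 1.5          # Pareto shape parameter (example; 1 < alpha < 2 gives infinite variance)
    SCALE = 0.01         # minimum inter-arrival time in seconds (example)
    MEAN = SCALE * ALPHA / (ALPHA - 1)   # mean of this Pareto distribution

    pareto_gaps = [SCALE * random.paretovariate(ALPHA) for _ in range(100_000)]
    expo_gaps = [random.expovariate(1.0 / MEAN) for _ in range(100_000)]

    # Both samples have (roughly) the same mean, but the Pareto sample contains
    # occasional very large gaps -- the bursts and silences that a Poisson model misses.
    print(max(pareto_gaps), max(expo_gaps))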

Payload data model

The actual content of the payload data is typically not modeled, but replaced by dummy packets. However, if the payload data is to be analyzed on the receiver side, for example regarding bit-error rate, a Bernoulli process is often assumed, i.e. a random sequence of independent binary numbers. In this case, a channel model reflects channel impairments such as noise, interference and distortion.
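
A minimal sketch of this setup, assuming a Bernoulli(0.5) payload and a simple binary symmetric channel in which each bit is flipped independently with some probability (the value below is an arbitrary example), could look as follows in Python.

    # Sketch: Bernoulli payload model and bit-error-rate measurement over a
    # binary symmetric channel (each bit flipped independently with prob. P_ERROR).
    import random

    N_BITS = 100_000
    P_ERROR = 1e-3                      # example channel bit-flip probability

    tx = [random.randint(0, 1) for _ in range(N_BITS)]          # Bernoulli(0.5) payload
    rx = [b ^ (random.random() < P_ERROR) for b in tx]          # channel impairment
    ber = sum(t != r for t, r in zip(tx, rx)) / N_BITS
    print(f"measured BER = {ber:.2e}")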

3GPP2 model

One of the 3GPP2 models is described in the CDMA2000 Evaluation Methodology. [2] This document describes several types of traffic flows, including HTTP, FTP and WAP traffic as well as voice and mobile network gaming traffic.

The main idea is to partly implement the HTTP, FTP and TCP protocols. For example, an HTTP traffic generator simulates the download of a web page consisting of a number of small objects (such as images). A TCP stream (which is why a TCP generator is required in this model) is used to download these objects according to the HTTP/1.0 or HTTP/1.1 specification, so these models take the details of the protocols' behaviour into account. Voice, WAP and mobile network gaming traffic are modelled in a less complicated way.
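
The overall structure of such a web traffic generator can be sketched as below. The distributions and parameter values are illustrative placeholders chosen for readability, not the parameters specified by the 3GPP2 model in [2].

    # Sketch of a web-page download traffic model: a main object followed by a
    # number of embedded objects, with user "reading time" between pages.
    # Parameter values are illustrative placeholders, not the 3GPP2 parameters.
    import random

    def generate_page():
        main_object = random.lognormvariate(10, 1)          # main object size in bytes
        n_embedded = random.randint(2, 20)                   # number of embedded objects
        embedded = [random.lognormvariate(8, 1) for _ in range(n_embedded)]
        return main_object, embedded

    def generate_session(n_pages=5):
        events = []
        t = 0.0
        for _ in range(n_pages):
            main_obj, embedded = generate_page()
            events.append((t, main_obj, embedded))
            t += random.expovariate(1 / 30.0)                # reading time between pages
        return events

    for t, main_obj, embedded in generate_session():
        print(f"t={t:7.1f}s  main={main_obj:9.0f} B  {len(embedded)} embedded objects")

In a full model the generated objects would then be transferred over simulated TCP connections (one connection per object for HTTP/1.0, persistent connections for HTTP/1.1), so that the packet-level behaviour follows from TCP rather than being drawn directly.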

Related Research Articles

Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.

The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport Layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.

Network throughput refers to the rate of successful message delivery over a communication channel, such as Ethernet or packet radio, in a communication network. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second, and sometimes in data packets per second or data packets per time slot.

A datagram is a basic transfer unit associated with a packet-switched network. Datagrams are typically structured in header and payload sections. Datagrams provide a connectionless communication service across a packet-switched network. The delivery, arrival time, and order of arrival of datagrams need not be guaranteed by the network.

In telecommunications and computer networking, a network packet is a formatted unit of data carried by a packet-switched network. A packet consists of control information and user data; the latter is also known as the payload. Control information provides data for delivering the payload. Typically, control information is found in packet headers and trailers.

Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.

Netfilter is a framework provided by the Linux kernel that allows various networking-related operations to be implemented in the form of customized handlers. Netfilter offers various functions and operations for packet filtering, network address translation, and port translation, which provide the functionality required for directing packets through a network and prohibiting packets from reaching sensitive locations within a network.

In computer networks, a tunneling protocol is a communication protocol which allows for the movement of data from one network to another by exploiting encapsulation, a process through which private network communications can be sent across a public network.

Network emulation is a technique for testing the performance of real applications over a virtual network. This is different from network simulation where virtual models of traffic, network models, channels, and protocols are applied. The aim is to assess performance, predict the impact of change, or otherwise optimize technology decision-making.

A long-tailed or heavy-tailed probability distribution is one that assigns relatively high probabilities to regions far from the mean or median. A more formal mathematical definition is sketched below. In the context of teletraffic engineering a number of quantities of interest have been shown to have a long-tailed distribution. For example, if we consider the sizes of files transferred from a web server, then, to a good degree of accuracy, the distribution is heavy-tailed: a large number of small files are transferred but, crucially, the very large files remain a major component of the volume downloaded.
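
One common formalization (several closely related variants exist in the literature) is the following: a random variable X is heavy-tailed (to the right) if its tail decays more slowly than any exponential,

    \lim_{x \to \infty} e^{\lambda x}\, \Pr[X > x] = \infty \quad \text{for every } \lambda > 0,

and long-tailed if, given that X already exceeds x, it exceeds x + t with probability approaching one,

    \lim_{x \to \infty} \Pr[X > x + t \mid X > x] = 1 \quad \text{for every fixed } t > 0.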

Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.
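
A toy illustration of the AIMD idea, ignoring slow start, timeouts and every other detail of real TCP implementations, and assuming the window grows by one segment per round trip and is halved when a loss is signalled:

    # Toy AIMD illustration: additive increase of the congestion window per RTT,
    # multiplicative decrease on loss.  Not a model of any real TCP stack.
    import random

    cwnd = 1.0                             # congestion window, in segments
    for rtt in range(60):
        loss = random.random() < 0.05      # example: 5% chance of loss per RTT
        if loss:
            cwnd = max(1.0, cwnd / 2)      # multiplicative decrease
        else:
            cwnd += 1.0                    # additive increase
        print(f"RTT {rtt:2d}: cwnd = {cwnd:5.1f} segments{'  (loss)' if loss else ''}")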

A computer network is a set of computers sharing resources located on or provided by network nodes. The computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies, based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.

In computer networks, goodput is the application-level throughput of a communication; i.e. the number of useful information bits delivered by the network to a certain destination per unit of time. The amount of data considered excludes protocol overhead bits as well as retransmitted data packets. This is related to the amount of time from the first bit of the first packet sent until the last bit of the last packet is delivered.
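
As a small worked example with hypothetical numbers: transferring a 20 MB file in 100 seconds gives a goodput of 1.6 Mbit/s, regardless of how many header or retransmitted bits were also sent.

    # Hypothetical example: goodput of a file transfer.
    file_size_bits = 20 * 8 * 1e6        # a 20 MB file = 160e6 useful bits
    transfer_time = 100.0                # seconds, first bit sent to last bit delivered
    goodput = file_size_bits / transfer_time
    print(goodput / 1e6, "Mbit/s")       # 1.6 Mbit/s; protocol overhead and
                                         # retransmissions are excluded by definition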

In computing, Microsoft's Windows Vista and Windows Server 2008 introduced in 2007/2008 a new networking stack named Next Generation TCP/IP stack, to improve on the previous stack in several ways. The stack includes native implementation of IPv6, as well as a complete overhaul of IPv4. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after a change in settings. The new stack, implemented as a dual-stack model, depends on a strong host-model and features an infrastructure to enable more modular components that one can dynamically insert and remove.

A greedy source is a traffic generator in a communication network that generates data at the maximum rate possible and at the earliest opportunity possible. Each source always has data to transmit, and is never in idle state due to congestion avoidance or other local host traffic shaping. One new data-packet is generated when the transmission of previous packet is completed, meaning that the sender side queue is never congested. A greedy session is a time-limited packet flow or data stream at maximum possible rate.

In packet switching networks, traffic flow, packet flow or network flow is a sequence of packets from a source computer to a destination, which may be another host, a multicast group, or a broadcast domain. RFC 2722 defines traffic flow as "an artificial logical equivalent to a call or connection." RFC 3697 defines traffic flow as "a sequence of packets sent from a particular source to a particular unicast, anycast, or multicast destination that the source desires to label as a flow. A flow could consist of all packets in a specific transport connection or a media stream. However, a flow is not necessarily 1:1 mapped to a transport connection." Flow is also defined in RFC 3917 as "a set of IP packets passing an observation point in the network during a certain time interval." Packet flow temporal efficiency can be affected by one-way delay (OWD), which is itself a combination of delay components such as processing, queueing, transmission and propagation delay.

ngrep is a network packet analyzer written by Jordan Ritter. It has a command-line interface, and relies upon the pcap library and the GNU regex library.

Deep content inspection (DCI) is a form of network filtering that examines an entire file or MIME object as it passes an inspection point, searching for viruses, spam, data loss, key words or other content-level criteria. Deep content inspection is considered the evolution of deep packet inspection, with the ability to look at what the actual content contains instead of focusing on individual or multiple packets. Deep content inspection allows services to keep track of content across multiple packets, so that the signatures they may be searching for can cross packet boundaries and still be found. It is an exhaustive form of network traffic inspection in which Internet traffic is examined across all seven OSI layers, and most importantly the application layer.

In digital communications networks, packet processing refers to the wide variety of algorithms that are applied to a packet of data or information as it moves through the various network elements of a communications network. With the increased performance of network interfaces, there is a corresponding need for faster packet processing.

Design of robust and reliable networks and network services relies on an understanding of the traffic characteristics of the network. Throughout history, different models of network traffic have been developed and used for evaluating existing and proposed networks and services.

References

  1. "M/D/1, M/M/1 and M/G/1 queuing" (PDF).
  2. CDMA2000 Evaluation Methodology Version 1.0 (Revision 0). Archived 2006-10-14 at the Wayback Machine.