Internet traffic is the flow of data within the entire Internet, or in certain network links of its constituent networks. Traffic is commonly measured either as total volume, in multiples of the byte, or as a transmission rate in bytes per unit of time.
As the topology of the Internet is not hierarchical, no single point of measurement is possible for total Internet traffic. Traffic data may be obtained from the Tier 1 network providers' peering points for indications of volume and growth. However, such data excludes traffic that remains within a single service provider's network and traffic that crosses private peering points.
As of December 2022, almost half (48%) of mobile Internet traffic was in India and China, while North America and Europe accounted for about a quarter. [1] However, mobile traffic remains a minority of total Internet traffic.
File sharing constitutes a fraction of Internet traffic. [2] The prevalent technology for file sharing is the BitTorrent protocol, a peer-to-peer (P2P) system mediated through indexing sites that provide resource directories. According to Sandvine research in 2013, BitTorrent's share of Internet traffic had decreased by 20% to 7.4% overall, down from a 31% share in 2008. [3]
As of 2023, roughly 65% of all internet traffic came from video sites, [4] up from 51% in 2016. [5]
Internet traffic management is also known as application traffic management. The Internet does not employ any formally centralized facilities for traffic management. Its progenitor networks, especially the ARPANET, established an early backbone infrastructure that carried traffic between major interchange centers, resulting in a tiered, hierarchical system of Internet service providers (ISPs) in which the tier 1 networks provided traffic exchange through settlement-free peering and routed traffic to lower-level tiers of ISPs. The dynamic growth of the worldwide network resulted in ever-increasing interconnections at all peering levels of the Internet, so that a robust system developed that could mediate link failures, bottlenecks, and other congestion at many levels.[citation needed]
Economic traffic management (ETM) is a term sometimes used to describe seeding as a practice that encourages contribution within peer-to-peer file sharing, and the distribution of content in the digital world more generally. [6]
A planned tax on Internet use in Hungary would have introduced a 150-forint (US$0.62, €0.47) tax per gigabyte of data traffic, in a move intended to reduce Internet traffic and to help companies offset corporate income tax against the new levy. [7] Hungary generated 1.15 billion gigabytes of traffic in 2013, with another 18 million gigabytes attributable to mobile devices. Under the new tax, this would have yielded extra revenue of 175 billion forints, according to the consultancy firm eNet. [7]
According to Yahoo News, economy minister Mihály Varga defended the move, saying "the tax was fair as it reflected a shift by consumers to the Internet away from phone lines" and that the levy of "150 forints on each transferred gigabyte of data – was needed to plug holes in the 2015 budget of one of the EU's most indebted nations". [8]
Critics argued that the proposed Internet tax would be disadvantageous to the country's economic development, limit access to information, and hinder freedom of expression. [9] Approximately 36,000 people signed up to take part in a Facebook event to be held outside the Economy Ministry to protest against the possible tax. [8]
In 1998, the United States enacted the Internet Tax Freedom Act (ITFA) to prevent the imposition of direct taxes on internet usage and online activity, such as taxes on email, internet access, bits transferred (a bit tax), or bandwidth. [10] [11] Initially, this law placed a 10-year moratorium on such taxes, which was later extended multiple times and made permanent in 2016. The ITFA's goal was to protect consumers and support the growth of internet traffic by prohibiting recurring and discriminatory taxes that could hinder internet adoption and usage. As a result, the ITFA has played a crucial role in promoting the digital economy and safeguarding consumer interests. According to Pew Research Center, as of 2024, approximately 93% of Americans use the internet, with platforms like YouTube and Facebook being highly popular. [12] [13] [14] Additionally, 90% of U.S. households subscribed to high-speed internet services by 2021. [15] [16] Although the ITFA provides protection against direct internet taxes, ongoing debates about internet regulation and governance continue to shape the landscape of internet traffic and usage in the United States.
Traffic classification describes methods of classifying traffic by passively observing features in the traffic, in line with particular classification goals. Some methods pursue only a coarse classification goal, for example whether the traffic is bulk transfer, peer-to-peer file sharing, or transaction-oriented. Others set a finer-grained goal, for instance identifying the exact application represented by the traffic. Traffic features used for classification include port numbers, application payload, temporal characteristics, packet sizes, and other statistical properties of the traffic. There is a wide range of methods for classifying Internet traffic, including exact matching on, for instance, port (computer networking) number or payload, as well as heuristic and statistical machine-learning approaches.
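As an illustration of the feature types listed above, the following sketch extracts port, packet-size, and timing features from one flow's packets. The packet records and field layout are made up for the example:

```python
# Illustrative sketch (not from any cited work): extracting simple per-flow
# features commonly used by classifiers, such as port numbers, packet sizes,
# and inter-arrival times. The packet records below are hypothetical.

from statistics import mean

# Each record: (timestamp_s, src_port, dst_port, payload_len_bytes)
packets = [
    (0.000, 51234, 443, 1400),
    (0.020, 51234, 443, 1400),
    (0.055, 51234, 443, 600),
]

def extract_features(pkts):
    """Summarize one flow's packets into a feature vector."""
    sizes = [p[3] for p in pkts]
    gaps = [b[0] - a[0] for a, b in zip(pkts, pkts[1:])]
    return {
        "dst_port": pkts[0][2],                              # port-based feature
        "mean_packet_size": mean(sizes),                     # packet-size feature
        "mean_inter_arrival": mean(gaps) if gaps else 0.0,   # temporal feature
        "total_bytes": sum(sizes),
    }

print(extract_features(packets))
```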
Accurate network traffic classification is fundamental to quite a few Internet activities, from security monitoring to accounting, and from quality of service to providing operators with useful forecasts for long-term provisioning. Yet classification schemes are extremely difficult to operate accurately due to the shortage of available knowledge about the network. For example, packet header-related information alone is often insufficient to allow for a precise methodology.
Work [17] involving supervised machine learning to classify network traffic proceeds as follows. Data are hand-classified (based upon flow content) into one of a number of categories. A combination of the hand-assigned categories and descriptions of the classified flows (such as flow length, port numbers, and time between consecutive flows) is used to train the classifier. To give better insight into the technique itself, initial assumptions are made, and two further techniques are applied in practice. One is to improve the quality and separation of the input information, leading to an increase in accuracy of the Naive Bayes classifier technique.
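The following is a hedged sketch of this supervised approach, not the cited study's actual pipeline: hand-labelled flow descriptors train a Naive Bayes classifier via scikit-learn. The feature values, labels, and port choices are synthetic, purely for illustration:

```python
# Sketch of supervised traffic classification with Naive Bayes.
# Training data is synthetic: [flow_bytes, server_port, mean_inter_arrival_ms].

from sklearn.naive_bayes import GaussianNB

X_train = [
    [1_200_000, 80, 12.0],      # bulk web download
    [900_000, 443, 15.0],       # bulk web download
    [4_000, 25, 300.0],         # mail
    [2_500, 25, 450.0],         # mail
    [80_000_000, 6881, 40.0],   # peer-to-peer
    [65_000_000, 6889, 35.0],   # peer-to-peer
]
y_train = ["WWW", "WWW", "MAIL", "MAIL", "P2P", "P2P"]

clf = GaussianNB().fit(X_train, y_train)

# Classify an unseen flow description; with these synthetic values the
# classifier is expected to label it as P2P.
print(clf.predict([[70_000_000, 6882, 38.0]]))
```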
The basis of this categorization work is to classify the type of Internet traffic. This is done by grouping common applications into categories, e.g., "normal" versus "malicious", or by using more complex definitions, e.g., the identification of specific applications or of specific Transmission Control Protocol (TCP) implementations. [18] Adapted from Logg et al. [19]
Traffic classification is a major component of automated intrusion detection systems. [20] [21] It is used to identify patterns, to indicate network resources for priority customers, or to identify customer use of network resources that in some way contravenes the operator's terms of service. Generally deployed Internet Protocol (IP) traffic classification techniques are based approximately on direct inspection of each packet's contents at some point on the network. Successive IP packets with the same 5-tuple of protocol type, source address, source port, destination address, and destination port are considered to belong to a flow whose controlling application we wish to determine. Simple classification infers the controlling application's identity by assuming that most applications consistently use well-known TCP or UDP port numbers. However, many applications increasingly use unpredictable port numbers. As a result, more sophisticated classification techniques infer application types by looking for application-specific data within the TCP or User Datagram Protocol (UDP) payloads. [22]
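A minimal sketch of the two ideas above: packets are grouped into flows by their 5-tuple, and each flow is labelled with the well-known-port heuristic. The packet records and the port table are illustrative, not an exhaustive registry:

```python
# Group packets into flows by 5-tuple, then classify each flow by port number.

from collections import defaultdict

WELL_KNOWN_PORTS = {80: "HTTP", 443: "HTTPS", 25: "SMTP", 53: "DNS"}

# Each packet: (protocol, src_ip, src_port, dst_ip, dst_port, length_bytes)
packets = [
    ("TCP", "10.0.0.5", 51234, "93.184.216.34", 443, 1400),
    ("TCP", "10.0.0.5", 51234, "93.184.216.34", 443, 600),
    ("UDP", "10.0.0.5", 53311, "8.8.8.8", 53, 78),
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for proto, sip, sport, dip, dport, length in packets:
    key = (proto, sip, sport, dip, dport)   # the 5-tuple defining a flow
    flows[key]["packets"] += 1
    flows[key]["bytes"] += length

for key, stats in flows.items():
    proto, _, sport, _, dport = key
    # Port-based classification: assume the server side uses a well-known port.
    app = WELL_KNOWN_PORTS.get(dport) or WELL_KNOWN_PORTS.get(sport, "UNKNOWN")
    print(key, stats, "->", app)
```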
Aggregating from multiple sources and applying usage and bitrate assumptions, Cisco, a major network systems company, has published the following historical Internet Protocol (IP) and Internet traffic figures: [23]
Year | IP traffic (PB/month) | Fixed Internet traffic (PB/month) | Mobile Internet traffic (PB/month) |
---|---|---|---|
1990 | 0.001 | 0.001 | n/a |
1991 | 0.002 | 0.002 | n/a |
1992 | 0.005 | 0.004 | n/a |
1993 | 0.01 | 0.01 | n/a |
1994 | 0.02 | 0.02 | n/a |
1995 | 0.18 | 0.17 | n/a |
1996 | 1.9 | 1.8 | n/a |
1997 | 5.4 | 5.0 | n/a |
1998 | 12 | 11 | n/a |
1999 | 28 | 26 | n/a |
2000 | 84 | 75 | n/a |
2001 | 197 | 175 | n/a |
2002 | 405 | 356 | n/a |
2003 | 784 | 681 | n/a |
2004 | 1,477 | 1,267 | n/a |
2005 | 2,426 | 2,055 | 0.9 |
2006 | 3,992 | 3,339 | 4 |
2007 | 6,430 | 5,219 | 15 |
2008 [24] | 10,174 | 8,140 | 33 |
2009 [25] | 14,686 | 10,942 | 91 |
2010 [26] | 20,151 | 14,955 | 237 |
2011 [27] | 30,734 | 23,288 | 597 |
2012 [28] [29] | 43,570 | 31,339 | 885 |
2013 [30] | 51,168 | 34,952 | 1,480 |
2014 [31] | 59,848 | 39,909 | 2,514 |
2015 [32] | 72,521 | 49,494 | 3,685 |
2016 [33] | 96,054 | 65,942 | 7,201 |
2017 [34] | 122,000 | 85,000 | 12,000 |
"Fixed Internet traffic" refers perhaps to traffic from residential and commercial subscribers to ISPs, cable companies, and other service providers. "Mobile Internet traffic" refers perhaps to backhaul traffic from cellphone towers and providers. The overall "Internet traffic" figures, which can be 30% higher than the sum of the other two, perhaps factors in traffic in the core of the national backbone, whereas the other figures seem to be derived principally from the network periphery.
Cisco also publishes 5-year projections.
Year | Fixed Internet traffic (EB/month) | Mobile Internet traffic (EB/month) |
---|---|---|
2018 | 107 | 19 |
2019 | 137 | 29 |
2020 | 174 | 41 |
2021 | 219 | 57 |
2022 | 273 | 77 |
The following data for the Internet backbone in the US comes from the Minnesota Internet Traffic Studies (MINTS): [35]
Year | Data (TB/month) |
---|---|
1990 | 1 |
1991 | 2 |
1992 | 4 |
1993 | 8 |
1994 | 16 |
1995 | n/a |
1996 | 1,500 |
1997 | 2,500–4,000 |
1998 | 5,000–8,000 |
1999 | 10,000–16,000 |
2000 | 20,000–35,000 |
2001 | 40,000–70,000 |
2002 | 80,000–140,000 |
2003 | n/a |
2004 | n/a |
2005 | n/a |
2006 | 450,000–800,000 |
2007 | 750,000–1,250,000 |
2008 | 1,200,000–1,800,000 |
2009 | 1,900,000–2,400,000 |
2010 | 2,600,000–3,100,000 |
2011 | 3,400,000–4,100,000 |
The Cisco data can be seven times higher than the Minnesota Internet Traffic Studies (MINTS) data not only because the Cisco figures are estimates for the global—not just the domestic US—Internet, but also because Cisco counts "general IP traffic (thus including closed networks that are not truly part of the Internet, but use IP, the Internet Protocol, such as the IPTV services of various telecom firms)". [36] The MINTS estimate of US national backbone traffic for 2004, which may be interpolated as 200 petabytes/month, is a plausible three-fold multiple of the traffic of the US's largest backbone carrier, Level(3) Inc., which claims an average traffic level of 60 petabytes/month. [37]
In the past, Internet bandwidth in telecommunications networks doubled every 18 months, an observation expressed as Edholm's law. [38] This follows the advances in semiconductor technology, such as metal-oxide-semiconductor (MOS) scaling, exemplified by the MOSFET transistor, which has shown similar scaling described by Moore's law. In the 1980s, fiber-optic technology using laser light as the information carrier accelerated the transmission speed and bandwidth of telecommunication circuits. This has led to the bandwidths of communication networks achieving terabit-per-second transmission speeds. [39]
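As a worked illustration of the 18-month doubling rule, bandwidth growing under such a law scales by a factor of 2 raised to the number of 18-month periods elapsed:

```python
# Compound growth under an 18-month doubling period (Edholm's law style).

def growth_factor(months, doubling_period_months=18):
    return 2 ** (months / doubling_period_months)

print(growth_factor(10 * 12))   # over 10 years: ≈ 101.6×
```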
The history of the Internet originated in the efforts of scientists and engineers to build and interconnect computer networks. The Internet Protocol Suite, the set of rules used to communicate between networks and devices on the Internet, arose from research and development in the United States and involved international collaboration, particularly with researchers in the United Kingdom and France.
The Open Systems Interconnection (OSI) model is a reference model from the International Organization for Standardization (ISO) that "provides a common basis for the coordination of standards development for the purpose of systems interconnection." In the OSI reference model, the communications between systems are split into seven different abstraction layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application.
Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.
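As a small worked example with made-up delay samples, two of the metrics named above (packet loss and a simplified notion of jitter) can be computed from per-packet one-way delays:

```python
# Compute packet loss rate and a simplified jitter metric from delay samples.
# Delay values are invented; None marks a lost packet.

delays_ms = [20.1, 22.4, None, 19.8, 25.0, 21.2]

received = [d for d in delays_ms if d is not None]
loss_rate = (len(delays_ms) - len(received)) / len(delays_ms)

# Jitter here is the mean absolute difference between consecutive delays,
# one common simplified definition.
jitter = sum(abs(b - a) for a, b in zip(received, received[1:])) / (len(received) - 1)

print(f"packet loss: {loss_rate:.1%}, jitter: {jitter:.2f} ms")
```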
A router is a computer and networking device that forwards data packets between computer networks, including internetworks such as the global Internet.
In computing, a denial-of-service attack is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to a network. Denial of service is typically accomplished by flooding the targeted machine or resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled. The range of attacks varies widely, from inundating a server with millions of requests to slow its performance, to overwhelming a server with a large amount of invalid data, to submitting requests with a spoofed IP address.
Network address translation (NAT) is a method of mapping an IP address space into another by modifying network address information in the IP header of packets while they are in transit across a traffic routing device. The technique was initially used to bypass the need to assign a new address to every host when a network was moved, or when the upstream Internet service provider was replaced but could not route the network's address space. It has become a popular and essential tool in conserving global address space in the face of IPv4 address exhaustion. One Internet-routable IP address of a NAT gateway can be used for an entire private network.
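The following is a hedged sketch of the mapping a NAT gateway maintains: outbound packets from private hosts are rewritten to share one public address, and a translation table delivers replies back to the right host. All addresses, ports, and function names are illustrative, not any particular implementation:

```python
# Toy NAT translation table (port address translation style).

PUBLIC_IP = "203.0.113.7"

nat_table = {}      # (private_ip, private_port) -> public_port
reverse = {}        # public_port -> (private_ip, private_port)
next_port = 40000

def translate_outbound(private_ip, private_port):
    """Return the (public_ip, public_port) the outbound packet is rewritten to."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        reverse[next_port] = key
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Map a reply arriving at the public address back to the private host."""
    return reverse.get(public_port)

print(translate_outbound("192.168.1.10", 51234))   # ('203.0.113.7', 40000)
print(translate_inbound(40000))                     # ('192.168.1.10', 51234)
```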
Traffic shaping is a bandwidth management technique used on computer networks which delays some or all datagrams to bring them into compliance with a desired traffic profile. Traffic shaping is used to optimize or guarantee performance, improve latency, or increase usable bandwidth for some kinds of packets by delaying other kinds. It is often confused with traffic policing, the distinct but related practice of packet dropping and packet marking.
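A minimal sketch of the delaying behaviour described above: packets arriving faster than a target rate are held back so the output conforms to that rate. No burst allowance is modelled, so this is the simplest possible shaper, not any specific product's algorithm:

```python
# Shape packet release times to a target rate by delaying early arrivals.

def shape(packets, rate_bytes_per_s):
    """packets: list of (arrival_time_s, size_bytes); returns release times."""
    link_free_at = 0.0
    release_times = []
    for arrival, size in packets:
        release = max(arrival, link_free_at)    # wait for arrival and for the shaped link
        release_times.append(release)
        link_free_at = release + size / rate_bytes_per_s
    return release_times

# Three 1500-byte packets arriving 1 ms apart, shaped to ~1 Mbit/s:
print(shape([(0.000, 1500), (0.001, 1500), (0.002, 1500)], 125_000))
# -> [0.0, 0.012, 0.024]: the second and third packets are delayed.
```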
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.
Deep packet inspection (DPI) is a type of data processing that inspects in detail the data being sent over a computer network, and may take actions such as alerting, blocking, re-routing, or logging it accordingly. Deep packet inspection is often used for baselining application behavior, analyzing network usage, troubleshooting network performance, ensuring that data is in the correct format, checking for malicious code, eavesdropping, and internet censorship, among other purposes. There are multiple headers for IP packets; network equipment only needs to use the first of these for normal operation, but use of the second header is normally considered to be shallow packet inspection despite this definition.
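An illustrative sketch of the payload-inspection idea: after the headers, the application payload is matched against byte signatures. The signature set here is a simplified example, not a production rule base:

```python
# Match application payloads against a few well-known byte signatures.

SIGNATURES = {
    b"\x13BitTorrent protocol": "BitTorrent handshake",
    b"GET ": "HTTP request",
    b"\x16\x03": "TLS record",
}

def inspect(payload: bytes) -> str:
    for prefix, label in SIGNATURES.items():
        if payload.startswith(prefix):
            return label
    return "unclassified"

print(inspect(b"GET /index.html HTTP/1.1\r\n"))           # HTTP request
print(inspect(b"\x13BitTorrent protocol" + b"\x00" * 8))  # BitTorrent handshake
```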
Throughput of a network can be measured using various tools available on different platforms. Such measurements raise questions about what the tools actually measure and about the issues that affect the results.
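A minimal sketch of what such a measurement boils down to: transfer a known number of bytes, time the transfer, and divide. The URL below is a placeholder, and real tools account for many effects this ignores (TCP ramp-up, caching, protocol overhead):

```python
# Naive throughput measurement: bytes transferred divided by elapsed time.

import time
import urllib.request

URL = "https://example.com/testfile.bin"   # hypothetical test file

start = time.monotonic()
data = urllib.request.urlopen(URL).read()
elapsed = time.monotonic() - start

throughput_mbit_s = len(data) * 8 / elapsed / 1e6
print(f"{len(data)} bytes in {elapsed:.2f} s -> {throughput_mbit_s:.1f} Mbit/s")
```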
NetFlow is a feature that was introduced on Cisco routers around 1996 that provides the ability to collect IP network traffic as it enters or exits an interface. By analyzing the data provided by NetFlow, a network administrator can determine things such as the source and destination of traffic, class of service, and the causes of congestion. A typical flow monitoring setup consists of three main components: a flow exporter, which aggregates packets into flows and exports flow records; a flow collector, which receives and stores those records; and an analysis application, which analyzes the collected flow data.
In computer networking, link aggregation is the combining of multiple network connections in parallel by any of several methods. Link aggregation increases total throughput beyond what a single connection could sustain, and provides redundancy where all but one of the physical links may fail without losing connectivity. A link aggregation group (LAG) is the combined collection of physical ports.
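One common way a LAG keeps all packets of a flow on the same member link is to hash the flow identifier and select a link by modulo. The hashing policy varies between implementations; the CRC-based choice below is only an assumption for illustration:

```python
# Pick a LAG member link by hashing a flow's 5-tuple.

import zlib

links = ["eth0", "eth1", "eth2", "eth3"]   # member ports of the LAG

def pick_link(flow_tuple):
    key = repr(flow_tuple).encode()
    return links[zlib.crc32(key) % len(links)]

print(pick_link(("TCP", "10.0.0.5", 51234, "93.184.216.34", 443)))
```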
Network address translation traversal is a computer networking technique of establishing and maintaining Internet Protocol connections across gateways that implement network address translation (NAT).
Optical networking is a means of communication that uses signals encoded in light to transmit information in various types of telecommunications networks. These include limited range local-area networks (LAN) or wide area networks (WANs), which cross metropolitan and regional areas as well as long-distance national, international and transoceanic networks. It is a form of optical communication that relies on optical amplifiers, lasers or LEDs and wavelength-division multiplexing (WDM) to transmit large quantities of data, generally across fiber-optic cables. Because it is capable of achieving extremely high bandwidth, it is an enabling technology for the Internet and telecommunication networks that transmit the vast majority of all human and machine-to-machine information.
Bandwidth management is the process of measuring and controlling the communications on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance of the network. Bandwidth is described by bit rate and measured in units of bits per second (bit/s) or bytes per second (B/s).
WAN optimization is a collection of techniques for improving data transfer across wide area networks (WANs). In 2008, the WAN optimization market was estimated at $1 billion and was forecast to grow to $4.4 billion by 2014, according to Gartner, a technology research firm. In 2015, Gartner estimated the WAN optimization market to be a $1.1 billion market.
Peer-to-peer caching is a computer network traffic management technology used by Internet Service Providers (ISPs) to accelerate content delivered over peer-to-peer (P2P) networks while reducing related bandwidth costs.
Data center bridging (DCB) is a set of enhancements to the Ethernet local area network communication protocol for use in data center environments, in particular for use with clustering and storage area networks.
Mobile data offloading is the use of complementary network technologies for delivering data originally targeted for cellular networks. Offloading reduces the amount of data being carried on the cellular bands, freeing bandwidth for other users. It is also used in situations where local cell reception may be poor, allowing the user to connect via wired services with better connectivity.
Traffic classification is an automated process which categorises computer network traffic according to various parameters into a number of traffic classes. Each resulting traffic class can be treated differently in order to differentiate the service implied for the data generator or consumer.