Admission control

Admission control is a validation process in communication systems where a check is performed before a connection is established to see if current resources are sufficient for the proposed connection.
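
As a minimal illustration of such a check (a sketch, not drawn from any particular standard; all names, units, and numbers are assumptions for the example), the following Python snippet models a single link with fixed capacity: a new connection is admitted, and its bandwidth reserved, only if the request fits within the capacity that is still unreserved.

```python
# Illustrative sketch of a basic admission control check: a new connection
# is admitted only if the bandwidth it requests fits within the capacity
# that is still unreserved on the link.

class Link:
    def __init__(self, capacity_mbps: float):
        self.capacity_mbps = capacity_mbps
        self.reserved_mbps = 0.0

    def admit(self, requested_mbps: float) -> bool:
        """Admit the connection and reserve resources, or reject it."""
        if self.reserved_mbps + requested_mbps <= self.capacity_mbps:
            self.reserved_mbps += requested_mbps   # reserve before data flows
            return True
        return False                               # insufficient resources

    def release(self, reserved_mbps: float) -> None:
        """Return resources when an admitted connection ends."""
        self.reserved_mbps = max(0.0, self.reserved_mbps - reserved_mbps)


link = Link(capacity_mbps=1000)
print(link.admit(400))   # True  - 600 Mbit/s still free
print(link.admit(700))   # False - would exceed capacity, request is blocked
```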

Applications

For some applications, dedicated resources (such as a wavelength across an optical network) may be needed, in which case admission control has to verify the availability of such resources before a request can be admitted.[1]
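
A hypothetical sketch of this dedicated-resource case for an optical network follows: a lightpath request is admitted only if some wavelength is free on every link of its route, assuming wavelength continuity and a simple first-fit selection. The data structures and the selection rule are illustrative assumptions, not the protocols compared in [1].

```python
# Hypothetical sketch of admission control for a dedicated resource: a
# lightpath request is admitted only if some wavelength is free on every
# link of the chosen route (wavelength-continuity case).

def admit_lightpath(route, free_wavelengths):
    """route: list of link IDs; free_wavelengths: dict link -> set of free wavelengths.
    Returns the reserved wavelength, or None if the request must be rejected."""
    candidates = set.intersection(*(free_wavelengths[link] for link in route))
    if not candidates:
        return None                               # no wavelength available end to end
    chosen = min(candidates)                      # first-fit selection
    for link in route:
        free_wavelengths[link].remove(chosen)     # reserve on every hop
    return chosen


free = {"A-B": {1, 2, 3}, "B-C": {2, 3}, "C-D": {3, 4}}
print(admit_lightpath(["A-B", "B-C", "C-D"], free))  # 3 (only common wavelength)
print(admit_lightpath(["A-B", "B-C", "C-D"], free))  # None - request is blocked
```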

For more elastic applications, a total volume of resources may be needed prior to some deadline in order to satisfy a new request, in which case admission control needs to verify that sufficient resources will be available before the deadline and perform scheduling to guarantee that an admitted request is satisfied.[2][3][4]
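
The following simplified sketch illustrates such a deadline-aware check under the assumption of discrete time slots with known spare capacity. It is only an illustration of the idea, not the algorithms of [2], [3] or [4]: a transfer is admitted when its volume fits into the spare capacity available before its deadline, and its volume is then reserved in the earliest possible slots.

```python
# Simplified sketch of deadline-aware admission control: a transfer of
# `volume` units must finish within `deadline` time slots. The request is
# admitted only if the spare capacity in those slots can absorb the whole
# volume; on admission the volume is greedily scheduled into the earliest
# slots so the deadline of an admitted transfer is guaranteed.

def admit_transfer(volume, deadline, spare):
    """spare[t] = unreserved capacity in time slot t. Returns True if admitted."""
    if sum(spare[:deadline]) < volume:
        return False                       # cannot meet the deadline: reject
    remaining = volume
    for t in range(deadline):              # schedule as early as possible
        alloc = min(spare[t], remaining)
        spare[t] -= alloc                  # reserve capacity in this slot
        remaining -= alloc
        if remaining == 0:
            break
    return True


slots = [10, 10, 10, 10]                   # spare capacity per time slot
print(admit_transfer(volume=25, deadline=3, spare=slots))  # True
print(admit_transfer(volume=10, deadline=2, spare=slots))  # False - slots 0 and 1 now fully reserved
```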

Admission control systems

Protocols that implement admission control include the Resource Reservation Protocol (RSVP) and the Stream Reservation Protocol (SRP).

The Resource Reservation Protocol (RSVP) is a transport layer protocol designed to reserve resources across a network using the integrated services model. RSVP operates over IPv4 or IPv6 and provides receiver-initiated setup of resource reservations for multicast or unicast data flows. It does not transport application data but is similar to a control protocol, like the Internet Control Message Protocol (ICMP) or the Internet Group Management Protocol (IGMP). RSVP is described in RFC 2205.

<span class="mw-page-title-main">Content delivery network</span> Layer in the internet ecosystem addressing bottlenecks

A content delivery network or content distribution network (CDN) is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance ("speed") by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means for alleviating the performance bottlenecks of the Internet as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of the Internet content today, including web objects, downloadable objects, applications, live streaming media, on-demand streaming media, and social media sites.

In computer networking, a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance, which is the term used by the ITU and ATM Forum.

<span class="mw-page-title-main">Computer network</span> Network that allows computers to share resources and communicate with each other

A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.

WAN optimization is a collection of techniques for improving data transfer across wide area networks (WANs). In 2008, the WAN optimization market was estimated to be $1 billion, and was to grow to $4.4 billion by 2014 according to Gartner, a technology research firm. In 2015 Gartner estimated the WAN optimization market to be a $1.1 billion market.

Stream Reservation Protocol (SRP) is an enhancement to Ethernet that implements admission control. In September 2010 SRP was standardized as IEEE 802.1Qat, which has subsequently been incorporated into IEEE 802.1Q-2011. SRP defines the concept of streams at layer 2 of the OSI model and provides a mechanism for end-to-end management of the streams' resources, to guarantee quality of service (QoS).
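
As a rough illustration of the admission decision an SRP-capable bridge might make (a sketch of per-port bandwidth accounting, not the MSRP signalling defined by the standard; the 75% reservable fraction is a commonly used AVB default and, like all names here, an assumption for the example), a stream reservation is refused whenever it would push reserved traffic above the share of link speed set aside for it.

```python
# Rough illustration of per-port accounting behind a stream-reservation
# admission decision: a stream is admitted only if the bandwidth already
# reserved plus the new stream stays under the fraction of link speed set
# aside for reserved traffic.

class BridgePort:
    def __init__(self, link_speed_mbps, reservable_fraction=0.75):
        self.limit_mbps = link_speed_mbps * reservable_fraction
        self.streams = {}                        # stream_id -> reserved Mbit/s

    def admit_stream(self, stream_id, bandwidth_mbps):
        in_use = sum(self.streams.values())
        if in_use + bandwidth_mbps > self.limit_mbps:
            return False                         # reservation refused
        self.streams[stream_id] = bandwidth_mbps
        return True

    def remove_stream(self, stream_id):
        self.streams.pop(stream_id, None)        # free the reserved bandwidth


port = BridgePort(link_speed_mbps=100)           # 75 Mbit/s reservable
print(port.admit_stream("camera-1", 50))         # True
print(port.admit_stream("camera-2", 40))         # False - would exceed 75 Mbit/s
```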

References

  1. M. Sengupta; et al. (2012). "A comparison of wavelength reservation protocols for WDM optical networks". Journal of Network and Computer Applications. 35 (2): 606–618. doi:10.1016/j.jnca.2011.11.014.
  2. S. Kandula; et al. (2014). "Calendaring for Wide Area Networks" (PDF). ACM SIGCOMM.
  3. H. Zhang; et al. (2015). "Guaranteeing Deadlines for Inter-Datacenter Transfers" (PDF). ACM EuroSys.
  4. M. Noormohammadpour; et al. (2016). "DCRoute: Speeding up Inter-Datacenter Traffic Allocation while Guaranteeing Deadlines". 2016 IEEE 23rd International Conference on High Performance Computing (HiPC). pp. 82–90. arXiv:1707.04011. Bibcode:2017arXiv170704011N. doi:10.1109/HiPC.2016.019. ISBN 978-1-5090-5411-4.