Layered queueing network

In queueing theory, a discipline within the mathematical theory of probability, a layered queueing network (or rendezvous network[1]) is a queueing network model where the service time for each job at each service node is given by the response time of a queueing network (and those service times in turn may also be determined by further nested networks). Resources can be nested and queues form along the nodes of the nesting structure.[2][3] The nesting structure thus defines "layers" within the queueing model.[2]
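
To make the nesting concrete, the following sketch (a minimal illustration with invented parameters, using M/M/1 formulas rather than any particular layered solver) evaluates a two-layer system: the response time of the lower-layer queue becomes part of the service time of the upper-layer queue, which is exactly the layering described above.

    # A minimal two-layer sketch, not a full LQN solver: the lower layer is
    # solved first and its mean response time is folded into the effective
    # service time of the upper layer.  All numbers are illustrative.

    def mm1_response_time(arrival_rate, service_time):
        """Mean response time of an M/M/1 queue (requires utilisation < 1)."""
        rho = arrival_rate * service_time
        if rho >= 1.0:
            raise ValueError("queue is unstable (utilisation >= 1)")
        return service_time / (1.0 - rho)

    # Requests arrive at 2 per second; the upper-layer server does 0.05 s of
    # local work and makes one synchronous call to the lower-layer server,
    # which needs 0.10 s of work per call.
    lam, s_upper, s_lower = 2.0, 0.05, 0.10

    lower_response = mm1_response_time(lam, s_lower)      # nested network on its own
    effective_service = s_upper + lower_response          # upper-layer service time
    upper_response = mm1_response_time(lam, effective_service)

    print(f"lower-layer response time: {lower_response:.3f} s")
    print(f"upper-layer response time: {upper_response:.3f} s")

Here the single lower-layer queue stands in for an arbitrary nested sub-network; in a full layered model that response time would itself be computed from a further nested network.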

Layered queueing networks have applications in a wide range of distributed systems involving master/slave, replicated-service and client-server components: each local node is represented by its own queue, and the evaluation of these queues is then orchestrated across the layers.[2]
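
As an illustration of representing each local node by its own queue, the hypothetical application server and database below are evaluated bottom-up, so every callee's response time becomes part of its caller's service demand. The topology, names and timings are invented, and the M/M/1 approximation stands in for a real layered solver.

    # A hypothetical two-tier server chain, one queue per node, evaluated from
    # the bottom layer upwards.  Not the algorithm of any particular tool.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        name: str
        local_demand: float                 # seconds of local work per request
        callee: Optional["Node"] = None     # at most one synchronous callee in this sketch

    def response_time(node: Node, arrival_rate: float) -> float:
        """Solve the callee's queue first, then this node's queue."""
        demand = node.local_demand
        if node.callee is not None:
            demand += response_time(node.callee, arrival_rate)   # nested response time
        rho = arrival_rate * demand
        if rho >= 1.0:
            raise ValueError(f"{node.name} is saturated (utilisation {rho:.2f})")
        return demand / (1.0 - rho)          # M/M/1 mean response time

    database = Node("database", local_demand=0.08)
    app_server = Node("app-server", local_demand=0.04, callee=database)
    print(f"response time seen by clients: {response_time(app_server, arrival_rate=3.0):.3f} s")

In a realistic model a node could also have several callees, asynchronous calls and server multiplicities; the recursion above only shows how the layer-by-layer evaluation is orchestrated.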

For a large population of jobs, a fluid limit has been shown in PEPA to give a good approximation of performance measures.[4]
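
The sketch below illustrates the general idea of such a fluid approximation without reproducing the model of [4]: for a large, hypothetical population of clients that alternately think and queue at a pool of servers, the expected queue length is tracked by a single ordinary differential equation and integrated numerically. All parameter values are invented.

    # A minimal fluid (mean-field) sketch, not the model of reference [4]:
    # N clients think for a mean time Z, then queue at c servers of rate mu.
    # The expected queue length q(t) evolves as
    #     dq/dt = (N - q)/Z - mu * min(q, c),
    # integrated here with simple Euler steps.
    N, Z, c, mu = 500, 2.0, 20, 10.0     # clients, think time (s), servers, service rate (1/s)
    q, dt = 0.0, 1e-3                    # initial queue length and step size

    for _ in range(int(50 / dt)):        # integrate long enough to reach steady state
        arrivals = (N - q) / Z           # clients finishing their think time
        services = mu * min(q, c)        # servers working in parallel
        q += dt * (arrivals - services)

    print(f"fluid steady state: queue length ~ {q:.1f}, throughput ~ {mu * min(q, c):.1f}/s")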

References

  1. Neilson, J. E.; Woodside, C. M.; Petriu, D. C.; Majumdar, S. (1995). "Software bottlenecking in client-server systems and rendezvous networks". IEEE Transactions on Software Engineering. 21 (9): 776. CiteSeerX 10.1.1.47.4391. doi:10.1109/32.464543.
  2. Franks, G.; Al-Omari, T.; Woodside, M.; Das, O.; Derisavi, S. (2009). "Enhanced Modeling and Solution of Layered Queueing Networks". IEEE Transactions on Software Engineering. 35 (2): 148. doi:10.1109/TSE.2008.74.
  3. Tribastone, M.; Mayer, P.; Wirsing, M. (2010). "Performance Prediction of Service-Oriented Systems with Layered Queueing Networks" (PDF). Leveraging Applications of Formal Methods, Verification, and Validation. LNCS 6416. p. 51. doi:10.1007/978-3-642-16561-0_12. ISBN 978-3-642-16560-3.
  4. Tribastone, M. (2013). "A Fluid Model for Layered Queueing Networks" (PDF). IEEE Transactions on Software Engineering. 39 (6): 744–756. doi:10.1109/TSE.2012.66. Archived from the original (PDF) on 2016-03-03. Retrieved 2015-09-04.