In computer science, transition refers to a paradigm in the context of communication systems that describes the change of communication mechanisms, i.e., functions of a communication system, in particular service and protocol components. In a transition, a communication mechanism within a system is replaced by a functionally comparable mechanism with the aim of ensuring the highest possible quality, e.g., as captured by the quality of service.
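The core idea can be illustrated with a minimal sketch: two functionally comparable transmission mechanisms implement a common interface, and a transition replaces the active mechanism at runtime while the application keeps using the same system object. All names here (`TransmissionMechanism`, `transition_to`, etc.) are illustrative and not part of any standard API.

```python
from abc import ABC, abstractmethod

class TransmissionMechanism(ABC):
    """A functional unit of the communication system, e.g. a link technology."""
    @abstractmethod
    def send(self, payload: bytes) -> None: ...

class WiFiLink(TransmissionMechanism):
    def send(self, payload: bytes) -> None:
        print(f"Wi-Fi: sending {len(payload)} bytes")

class LTELink(TransmissionMechanism):
    def send(self, payload: bytes) -> None:
        print(f"LTE: sending {len(payload)} bytes")

class CommunicationSystem:
    """Holds the currently active mechanism; a transition swaps it for a
    functionally comparable one without interrupting the application."""
    def __init__(self, mechanism: TransmissionMechanism) -> None:
        self.mechanism = mechanism

    def transition_to(self, replacement: TransmissionMechanism) -> None:
        # A real system would also hand over protocol state here.
        self.mechanism = replacement

    def send(self, payload: bytes) -> None:
        self.mechanism.send(payload)
```

The application only ever calls `send`; which concrete mechanism serves the call is decided by transitions.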
Transitions enable communication systems to adapt to changing conditions at runtime. Such a change in conditions can, for example, be a rapid increase in the load on a certain service, caused, e.g., by large gatherings of people with mobile devices. A transition often impacts multiple mechanisms at different layers of a layered communication architecture.
Mechanisms are conceptual elements of a networked communication system and are linked to specific functional units, for example a service or protocol component. In some cases, a mechanism can also comprise an entire protocol; on the transmission layer, for example, LTE can be regarded as such a mechanism. Following this definition, there exist numerous communication mechanisms that are partly equivalent in their basic functionality, such as Wi-Fi, Bluetooth and Zigbee for local wireless networks, and UMTS and LTE for broadband wireless connections. LTE and Wi-Fi, for example, offer equivalent basic functionality but differ significantly in their technological design and operation. Mechanisms affected by transitions are often components of a protocol or service. In video streaming, for instance, different video encodings can be used depending on the available data transmission rate; these changes are controlled and implemented by transitions. A research example is a context-aware video adaptation service to support mobile video applications. [1]

By analyzing the current processes in a communication system, it is possible to determine which transitions need to be executed at which communication layer in order to meet the quality requirements. For communication systems to adapt to the respective conditions, architectural approaches of self-organizing, adaptive systems can be used, such as the MAPE cycle [2] (Monitor-Analyze-Plan-Execute). This central concept of autonomic computing can be used to determine the state of the communication system, to analyze the monitoring data, and to plan and execute the necessary transition(s). A central goal is that users do not consciously perceive a transition while running applications and that the functionality of the used services is perceived as smooth and uninterrupted.
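The MAPE cycle can be sketched as a simple control loop over the system state. The overload threshold and the codec transition below are hypothetical values chosen for illustration, not taken from the literature.

```python
# Hypothetical overload threshold (fraction of capacity); illustrative only.
OVERLOAD_THRESHOLD = 0.8

def monitor(system: dict) -> dict:
    """Monitor: read the current state of the communication system."""
    return {"load": system["load"], "codec": system["codec"]}

def analyze(state: dict) -> bool:
    """Analyze: check whether quality requirements are still met."""
    return state["load"] > OVERLOAD_THRESHOLD

def plan(state: dict) -> dict:
    """Plan: choose a transition to a functionally comparable mechanism,
    here a lower-bitrate video encoding under high load."""
    return {"codec": "h264_low_bitrate"}

def execute(system: dict, transition: dict) -> None:
    """Execute: apply the planned transition to the running system."""
    system.update(transition)

def mape_cycle(system: dict) -> None:
    """One pass of the Monitor-Analyze-Plan-Execute loop."""
    state = monitor(system)
    if analyze(state):
        execute(system, plan(state))
```

In a deployed system this loop would run continuously, and the plan step would select among many candidate transitions across layers rather than a single codec change.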
The study of new and fundamental design methods, models and techniques that enable automated, coordinated and cross-layer transitions between functionally similar mechanisms within a communication system is the main goal of a collaborative research center funded by the German Research Foundation (DFG). The DFG Collaborative Research Center 1053 MAKI (Multi-Mechanisms Adaptation for the Future Internet) focuses on research questions in the following areas: (i) fundamental research on transition methods, (ii) techniques for adapting transition-capable communication systems on the basis of achieved and targeted quality, and (iii) specific and exemplary transitions in communication systems as regarded from different technical perspectives.
A formalization of the concept of transitions that captures the features and relations within a communication system, in order to express and optimize the associated decision-making process, is given in. [3] The associated building blocks comprise (i) dynamic software product lines, (ii) Markov decision processes and (iii) utility design. While dynamic software product lines provide a method to concisely capture a large configuration space and to specify runtime variability of adaptive systems, Markov decision processes provide a mathematical tool to define and plan transitions between available communication mechanisms. Finally, utility functions quantify the performance of individual configurations of the transition-based communication system and provide the means to optimize its performance.
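As a rough illustration of building blocks (ii) and (iii), the following sketch models a two-mechanism system as a small Markov decision process and derives a transition policy by value iteration. The utilities, transition cost and discount factor are invented for illustration and do not come from the cited formalization.

```python
# States are the active mechanisms; actions either keep the current
# mechanism or transition to the other one. All numbers are illustrative.
STATES = ["wifi", "lte"]
ACTIONS = {"wifi": ["stay", "to_lte"], "lte": ["stay", "to_wifi"]}
UTILITY = {"wifi": 1.0, "lte": 0.6}   # per-step utility of each configuration
TRANSITION_COST = 0.3                 # one-off cost of executing a transition
GAMMA = 0.9                           # discount factor

def next_state(state: str, action: str) -> str:
    return state if action == "stay" else ("lte" if state == "wifi" else "wifi")

def reward(state: str, action: str) -> float:
    # Utility of the resulting configuration, minus the transition cost.
    cost = 0.0 if action == "stay" else TRANSITION_COST
    return UTILITY[next_state(state, action)] - cost

def value_iteration(iters: int = 100) -> dict:
    """Compute the long-term value of operating each mechanism."""
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        v = {s: max(reward(s, a) + GAMMA * v[next_state(s, a)]
                    for a in ACTIONS[s])
             for s in STATES}
    return v

def policy(v: dict) -> dict:
    """Pick, per state, the action maximizing discounted utility."""
    return {s: max(ACTIONS[s],
                   key=lambda a: reward(s, a) + GAMMA * v[next_state(s, a)])
            for s in STATES}
```

With these numbers the derived policy stays on Wi-Fi and transitions away from LTE, since Wi-Fi's higher utility outweighs the one-off transition cost over the discounted horizon.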
Applications of the idea of transitions have found their way to wireless sensor networks [4] and mobile networks, [5] distributed reactive programming, [6] [7] WiFi firmware modification, [8] planning of autonomic computing systems, [9] analysis of CDNs, [10] flexible extensions of the ISO OSI stack, [11] 5G mmWave vehicular communications, [12] [13] the analysis of MapReduce-like parallel systems, [14] scheduling of Multipath TCP, [15] adaptivity for beam training in 802.11ad, [16] operator placement in dynamic user environments, [17] DASH video player analysis, [18] adaptive bitrate streaming [19] and complex event processing on mobile devices. [20]