Active networking is a communication pattern that allows packets flowing through a telecommunications network to dynamically modify the operation of the network.
Active network architecture is composed of execution environments (similar to a Unix shell, but able to execute active packets), a node operating system capable of supporting one or more execution environments, and active hardware capable of routing or switching as well as executing the code carried within active packets. This differs from the traditional network architecture, which seeks robustness and stability by removing complexity and the ability to change fundamental operation from underlying network components. Network processors are one means of implementing active networking concepts. Active networks have also been implemented as overlay networks.
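As a minimal illustrative sketch of this architecture (all class and function names here are hypothetical, not drawn from any real active-network implementation), an "active packet" can be modeled as a capsule carrying both a payload and code, which each node's execution environment runs before forwarding:

```python
# Sketch only: an active packet (capsule) carries code that the node's
# execution environment runs, letting the packet itself rewrite its
# payload and choose its next hop.
from dataclasses import dataclass
from typing import Callable
import zlib

@dataclass
class Capsule:
    payload: bytes
    # Code carried by the packet: takes (payload, node_state) and returns
    # the possibly transformed payload plus the next hop to forward to.
    program: Callable[[bytes, dict], tuple[bytes, str]]

class ExecutionEnvironment:
    """Runs the code in each arriving capsule on behalf of the node OS."""
    def __init__(self, node_id: str):
        self.state = {"node_id": node_id, "load": 0}

    def receive(self, capsule: Capsule) -> str:
        # The node does not merely forward; it executes the packet's
        # program, which may change the network's behavior for this packet.
        capsule.payload, next_hop = capsule.program(capsule.payload, self.state)
        return next_hop

# Example capsule program: compress the payload when the node reports
# high load, adapting the data's form to conditions along the path.
def adaptive_program(payload: bytes, node_state: dict) -> tuple[bytes, str]:
    if node_state["load"] > 5:
        payload = zlib.compress(payload)
    return payload, "eth0"

ee = ExecutionEnvironment("node-1")
print(ee.receive(Capsule(b"hello" * 100, adaptive_program)))  # eth0
```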
Active networking allows the possibility of highly tailored and rapid "real-time" changes to the underlying network operation. This enables such ideas as sending code along with packets, allowing the data to change its form (code) to match the channel characteristics. The notion of the smallest program that can generate a given sequence of data is formalized by Kolmogorov complexity. Active networking also enables the use of real-time genetic algorithms within the network to compose network services.
Active networking relates to other networking paradigms primarily in how computing and communication are partitioned in the architecture.
Active networking is an approach to network architecture with in-network programmability. The name derives from a comparison with network approaches advocating minimization of in-network processing, based on design advice such as the "end-to-end argument". Two major approaches were conceived: programmable network elements ("switches") and capsules, a programmability approach that places computation within packets traveling through the network. Treating packets as programs later became known as "active packets". Software-defined networking decouples the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The concept of a programmable control plane originated at the University of Cambridge in the Systems Research Group, where (using virtual circuit identifiers available in Asynchronous Transfer Mode switches) multiple virtual control planes were made available on a single physical switch. Control Plane Technologies (CPT) was founded to commercialize this concept.
Active network research addresses how best to incorporate extremely dynamic capabilities within networks.[1]
To do this, active network research must address the problem of optimally allocating computation versus communication within communication networks.[2] A related problem, the compression of code as a measure of complexity, is addressed by algorithmic information theory.
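To make the compression-as-complexity idea concrete, a compressor's output size is often used as a computable stand-in for algorithmic complexity (which is itself uncomputable); the following toy sketch illustrates this, with no claim of being the method used in any particular active-network system:

```python
# Illustrative only: compressed size as a computable proxy for
# algorithmic (Kolmogorov) complexity. A highly regular sequence
# compresses far more than a random one.
import os
import zlib

regular = b"ab" * 5000        # generated by a tiny "program"
random_ = os.urandom(10000)   # almost certainly has no shorter description

print(len(zlib.compress(regular)))  # small: low apparent complexity
print(len(zlib.compress(random_)))  # ~10000: essentially incompressible
```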
One of the challenges of active networking has been the inability of information theory to mathematically model the active network paradigm and thereby enable active network engineering. This is due to the active nature of the network, in which communication packets contain code that dynamically changes the operation of the network. Fundamental advances in information theory are required to understand such networks.[3]
As the limit in the reduction of transistor size is reached with current technology, active networking concepts are being explored as a more efficient means of accomplishing computation and communication.[5][6] More on this can be found in nanoscale networking.
In algorithmic information theory, the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963 and is a generalization of classical information theory.
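In symbols (a standard formulation, not specific to this article), relative to a fixed universal Turing machine U:

```latex
% Kolmogorov complexity of a string x: the length of a shortest
% program p whose output, when run on U, is x.
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}

% Invariance theorem: switching universal machines changes K(x) by at
% most an additive constant independent of x, so the measure is
% well-defined up to that constant.
\forall x:\; K_{U_1}(x) \le K_{U_2}(x) + c_{U_1,U_2}
```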
Internetworking is the practice of interconnecting multiple computer networks, such that any pair of hosts in the connected networks can exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called an internetwork, or simply an internet.
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic-directing functions between networks and on the global Internet. Data sent through a network, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another through the networks that constitute an internetwork until it reaches its destination node.
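The core of that forwarding decision is longest-prefix matching of the destination address against a routing table. The sketch below illustrates the rule with a linear scan and made-up table entries; real routers use tries or TCAM hardware:

```python
# Longest-prefix match: of all table entries containing the destination
# address, the most specific (longest) prefix decides the output port.
import ipaddress

ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),  # more specific wins
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),    # default route
]

def forward(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, port) for net, port in ROUTING_TABLE if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(forward("10.1.2.3"))  # eth2: the /16 beats the /8
print(forward("8.8.8.8"))   # eth0: only the default route matches
```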
The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense.
Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory.
Dataflow architecture is a dataflow-based computer architecture that directly contrasts with the traditional von Neumann architecture or control flow architecture. Dataflow architectures have no program counter, in concept: the executability and execution of instructions are solely determined by the availability of input arguments to the instructions, so the order of instruction execution may be hard to predict.
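A toy interpreter makes the "no program counter" idea concrete: an instruction fires as soon as all of its operands exist, so execution order emerges from data dependencies alone. This is a sketch under simplified assumptions, not a model of any real dataflow machine:

```python
# Each instruction: (operation, input tokens it waits for, token it produces).
import operator

PROGRAM = [
    (operator.add, ("a", "b"), "s"),
    (operator.mul, ("s", "c"), "p"),
    (operator.sub, ("a", "c"), "d"),
]

def run(initial_tokens: dict) -> dict:
    tokens = dict(initial_tokens)
    pending = list(PROGRAM)
    while pending:
        # Fire every instruction whose inputs are all available;
        # there is no program counter dictating the order.
        ready = [ins for ins in pending if all(i in tokens for i in ins[1])]
        if not ready:
            raise RuntimeError("deadlock: missing input tokens")
        for op, ins, out in ready:
            tokens[out] = op(*(tokens[i] for i in ins))
            pending.remove((op, ins, out))
    return tokens

# 's' and 'd' may fire in either order; 'p' must wait for 's'.
print(run({"a": 2, "b": 3, "c": 4}))
# {'a': 2, 'b': 3, 'c': 4, 's': 5, 'd': -2, 'p': 20}
```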
A network processor is an integrated circuit which has a feature set specifically targeted at the networking application domain.
The BBN Butterfly was a massively parallel computer built by Bolt, Beranek and Newman in the 1980s. It was named for the "butterfly" multi-stage switching network around which it was built. Each machine had up to 512 CPUs, each with local memory, which could be connected to allow every CPU access to every other CPU's memory, although with a substantially greater latency than for its own. The CPUs were commodity microprocessors. The memory address space was shared.
A network on a chip or network-on-chip is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores schematizing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules.
Fair queuing is a family of scheduling algorithms used in some process and network schedulers. The algorithm is designed to achieve fairness when a limited resource is shared, for example to prevent flows with large packets or processes that generate small jobs from consuming more throughput or CPU time than other flows or processes.
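One classic realization stamps each arriving packet with the virtual time at which it would finish under bit-by-bit round-robin and transmits packets in increasing finish-time order. The sketch below is a deliberately simplified single-link, equal-weight model, not a faithful implementation of any production scheduler:

```python
# Fair queuing by virtual finish times: a flow sending large packets
# cannot crowd out a flow sending small ones, because finish times are
# proportional to packet size.
import heapq

class FairQueue:
    def __init__(self):
        self.heap = []           # (finish_time, seq, flow, size)
        self.last_finish = {}    # per-flow virtual finish time
        self.virtual_time = 0.0
        self.seq = 0             # tie-breaker for equal finish times

    def enqueue(self, flow: str, size: int):
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + size    # "cost" proportional to packet size
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, size = heapq.heappop(self.heap)
        self.virtual_time = finish
        return flow, size

q = FairQueue()
for _ in range(3):
    q.enqueue("bulk", 1500)   # flow with large packets
    q.enqueue("voice", 100)   # flow with small packets
# The small voice packets are served early instead of waiting
# behind every queued bulk packet.
while q.heap:
    print(q.dequeue())
```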
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.
Metacomputing encompasses all computing and computing-oriented activity that applies computing knowledge to the research, development, and application of different types of computing. It may also deal with numerous types of computing applications, such as: industry, business, management and human-related management. New emerging fields of metacomputing focus on the methodological and technological aspects of the development of large computer networks/grids, such as the Internet, intranet and other territorially distributed computer networks for special purposes.
Programmable matter is matter which has the ability to change its physical properties in a programmable fashion, based upon user input or autonomous sensing. Programmable matter is thus linked to the concept of a material which inherently has the ability to perform information processing.
A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. They handle jobs which are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. Second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.
A nanonetwork or nanoscale network is a set of interconnected nanomachines, which are able to perform only very simple tasks such as computing, data storing, sensing and actuation. Nanonetworks are expected to expand the capabilities of single nanomachines both in terms of complexity and range of operation by allowing them to coordinate, share and fuse information. Nanonetworks enable new applications of nanotechnology in the biomedical field, environmental research, military technology and industrial and consumer goods applications. Nanoscale communication is defined in IEEE P1906.1.
Software-defined networking (SDN) technology is an approach to network management that enables dynamic, programmatically efficient network configuration in order to improve network performance and monitoring, in a manner more akin to cloud computing than to traditional network management. SDN is meant to address the static architecture of traditional networks and may be employed to centralize network intelligence in one network component by disassociating the forwarding process of network packets from the routing process. The control plane consists of one or more controllers, which are considered the brain of the SDN network, where the whole intelligence is incorporated. However, centralization has certain drawbacks related to security, scalability and elasticity.
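The control/data-plane split can be illustrated with a minimal sketch: a logically centralized controller computes forwarding rules and pushes them to switches, which only match and forward. The rule format and controller logic below are hypothetical simplifications, not the API of any real SDN protocol such as OpenFlow:

```python
class Switch:
    """Data plane: matches packets against installed rules; no local smarts."""
    def __init__(self, name: str):
        self.name = name
        self.flow_table = {}          # destination -> output port

    def install_rule(self, dst: str, port: str):
        self.flow_table[dst] = port   # rule supplied by the controller

    def handle_packet(self, dst: str) -> str:
        # A real switch would punt unmatched packets to the controller.
        return self.flow_table.get(dst, "punt-to-controller")

class Controller:
    """Control plane: global view of the network; decides where traffic goes."""
    def __init__(self, switches: list):
        self.switches = switches

    def compute_and_push(self, routes: dict):
        # Centralized intelligence: one decision, installed everywhere.
        for sw in self.switches:
            for dst, port in routes.items():
                sw.install_rule(dst, port)

s1 = Switch("s1")
Controller([s1]).compute_and_push({"10.0.0.2": "port2"})
print(s1.handle_packet("10.0.0.2"))  # port2
print(s1.handle_packet("10.9.9.9"))  # punt-to-controller
```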
The Microsystems Technology Office (MTO) is one of seven current organizational divisions of DARPA, an agency responsible for the development of new technology for the United States Armed Forces. It is sometimes referred to as the Microelectronics Technology Office.