Spawning networks

Spawning networks are a class of programmable networks that automate the life cycle process for the creation, deployment, and management of virtual network architectures. Rather than relying on the traditional manual and ad hoc process of network deployment, a spawning network can dynamically spawn distinct "child" virtual networks with their own transport, control, and management systems. Each spawned network operates on a subset of its "parent's" network resources and in isolation from other spawned networks, offering controlled access to communities of users with specific connectivity, security, and quality of service requirements. By automating the life cycle process for network architectures, spawning networks aim to overcome the limitations of existing network architectures, allow rapid adaptation to new user needs and requirements, and advance open network control, network programmability, and distributed systems technology.

Genesis Kernel

The Genesis Kernel plays a pivotal role in enabling the creation, deployment, and management of spawning networks. As a virtual network operating system, it can spawn child network architectures that support alternative distributed network algorithms and services. Acting as a resource allocator, it arbitrates between conflicting requests made by spawned virtual networks and thereby supports efficient utilization of network resources. The Genesis Kernel supports a virtual network life cycle process covering the dynamic creation, deployment, and management of virtual network architectures, realized through the interaction of its transport, programming, and life cycle environments. In this way the Genesis Kernel provides the framework and infrastructure needed for the automated and systematic realization of spawning networks.
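
As a rough illustration of the resource-allocator role described above, the following sketch shows a parent kernel granting each spawned child network only a subset of the parent's capacity and refusing requests that would exceed it. This is a minimal sketch under assumed names: the class, its methods, and the bandwidth-only resource model are illustrative and are not the Genesis Kernel's actual interfaces.

```python
# Hypothetical sketch: a parent kernel arbitrating bandwidth requests from
# spawned child virtual networks. Names and the single-resource model are
# illustrative assumptions, not the Genesis Kernel API.

class GenesisKernelSketch:
    def __init__(self, total_bandwidth_mbps):
        self.total = total_bandwidth_mbps
        self.allocations = {}          # child network name -> granted bandwidth

    def available(self):
        return self.total - sum(self.allocations.values())

    def spawn(self, child_name, requested_mbps):
        """Grant the request only if it fits within the parent's spare capacity."""
        if requested_mbps > self.available():
            raise RuntimeError(f"cannot spawn {child_name}: insufficient resources")
        self.allocations[child_name] = requested_mbps
        return child_name

    def release(self, child_name):
        """Return a child's resources to the parent when it is torn down."""
        self.allocations.pop(child_name, None)

# Two children share a 1000 Mb/s parent; a third, oversized request would fail.
kernel = GenesisKernelSketch(1000)
kernel.spawn("childnet-A", 600)
kernel.spawn("childnet-B", 300)
# kernel.spawn("childnet-C", 200)  # would raise: only 100 Mb/s remains
```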

The virtual network life cycle

The virtual network life cycle process involves the dynamic creation, deployment, and management of virtual network architectures. It comprises three key phases:

1. Profiling: This phase captures the blueprint of the virtual network architecture, including addressing, routing, signaling, security, control, and management requirements. It generates an executable profiling script that automates the deployment of programmable virtual networks.

2. Spawning: This phase systematically sets up the network topology, allocates resources, and binds transport, routing, and network management objects to the physical network infrastructure. Based on the profiling script and the available network resources, network objects are created and dispatched to network nodes, dynamically creating a new virtual network architecture.

3. Management: This phase supports virtual network resource management based on per-virtual-network policy, exerting control over multiple spawned network architectures. It also supports virtual network architecting, allowing the network designer to analyze and refine the network objects that characterize the spawned architecture.

Through these phases, the virtual network life cycle process enables the automated and systematic creation, deployment, and management of virtual network architectures, providing a flexible and scalable approach to network customization and adaptation.
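
The three phases can be pictured as a small pipeline in which a declarative profile drives spawning and management. The following sketch is a minimal illustration only, assuming an invented profile format: the field names, node names, and helper functions are hypothetical and are not the actual Genesis profiling-script language.

```python
# Hypothetical illustration of the profiling -> spawning -> management pipeline.
# Profile fields, node names, and functions are assumptions, not the real format.

profile = {
    "name": "childnet-video",
    "addressing": "private-ipv4",
    "routing": "shortest-path",
    "qos": {"min_bandwidth_mbps": 50, "max_delay_ms": 40},
    "nodes": ["routelet-1", "routelet-2", "routelet-3"],
}

def spawn(profile, parent_resources):
    """Spawning phase: check parent resources, then bind objects to each node."""
    need = profile["qos"]["min_bandwidth_mbps"] * len(profile["nodes"])
    if need > parent_resources["bandwidth_mbps"]:
        raise RuntimeError("parent network cannot support this child network")
    return {node: {"routing": profile["routing"], "qos": profile["qos"]}
            for node in profile["nodes"]}

def manage(bindings, policy):
    """Management phase: apply a per-virtual-network policy to every node."""
    for node_objects in bindings.values():
        node_objects["policy"] = policy
    return bindings

bindings = spawn(profile, {"bandwidth_mbps": 1000})
bindings = manage(bindings, {"admission_control": "strict"})
```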

Potential impacts

Spawning networks have the potential to significantly impact the field of programmable networks by addressing key limitations in existing network architectures. By automating the creation, deployment, and management of virtual network architectures, spawning networks offer several benefits:

1. Flexibility and adaptability: Spawning networks enable rapid adaptation to new user needs and requirements, allowing the dynamic spawning of distinct virtual networks with specific connectivity, security, and quality of service requirements.

2. Efficient resource utilization: The automated life cycle process for network architectures facilitates the efficient utilization of network resources, optimizing resource allocation and network performance.

3. Scalability: Spawning networks provide a scalable solution for network customization, allowing controlled access for communities of users with diverse connectivity and service needs.

4. Automation: By automating the network deployment process, spawning networks reduce the manual effort and time required for architecting and deploying new network architectures, leading to improved operational efficiency.

Implementation challenges

The implementation of spawning networks presents several challenges and considerations, encompassing both engineering and research issues:

1. Computational efficiency: Addressing the computational efficiency and performance of spawning networks is crucial, especially in the context of increasing transmission rates. Balancing the computational power needed for routing and congestion control with the requirements of programmable networks is a significant challenge.

2. Performance optimization: Implementing fast-track and cut-through techniques to offset potential performance costs associated with nested virtual networks is essential. This involves optimizing packet forwarding and hierarchical link sharing design to maintain network performance.

3. Complexity of profiling: Profiling network architectures and addressing the complexity associated with this process is a key consideration. Developing efficient profiling mechanisms and tools to capture the blueprint of virtual network architectures is a significant engineering challenge.

4. Inheritance and provisioning: Leveraging existing network objects and architectural components when constructing new child networks introduces challenges related to inheritance and provisioning characteristics. Ensuring efficient inheritance and provisioning of architectural components is a critical research issue.

5. Scalability and flexibility: Ensuring that spawning networks are scalable, flexible, and capable of meeting the diverse communication needs of distinct communities is a significant engineering consideration. This involves designing spawning networks to efficiently support a wide range of network architectures and services.

6. Resource management: Efficiently managing network resources to support the introduction and architecting of spawned virtual networks is a critical challenge. This includes resource partitioning, isolation, and the allocation of resources to spawned virtual networks; a sketch of this kind of partitioning appears below.

Addressing these challenges and considerations is essential for the successful implementation of spawning networks, requiring a combination of engineering innovation and research advancements in the field of programmable networks.
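
One way to picture the resource partitioning, isolation, and hierarchical link sharing issues listed above is a tree of capacity shares, in which every spawned (and nested) child network draws capacity only from its parent's unallocated share. The sketch below is illustrative only; the class and attribute names are assumptions rather than any published design.

```python
# Hypothetical sketch of hierarchical link sharing among spawned networks:
# a child is admitted only if its capacity fits in its parent's spare share,
# so sibling networks stay isolated from one another. Names are illustrative.

class LinkShare:
    def __init__(self, name, capacity_mbps, parent=None):
        self.name = name
        self.capacity = capacity_mbps
        self.children = []
        if parent is not None:
            if capacity_mbps > parent.unallocated():
                raise ValueError(f"{name} exceeds {parent.name}'s spare capacity")
            parent.children.append(self)

    def unallocated(self):
        return self.capacity - sum(c.capacity for c in self.children)

# A parent partitions a 1 Gb/s link between two isolated children; one child
# then spawns a nested grandchild drawn solely from its own 600 Mb/s share.
root = LinkShare("parent", 1000)
a = LinkShare("child-A", 600, parent=root)
b = LinkShare("child-B", 300, parent=root)
nested = LinkShare("grandchild-A1", 200, parent=a)
```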

Research and development

The field of programmable networks has seen significant research and development, with several related works and advancements:

1. Open Signaling (Opensig) Community: The Opensig community has been actively involved in designing and developing programmable network prototypes. Their work focuses on modeling communication hardware using open programmable network interfaces to enable third-party software providers to enter the market for telecommunications software.

2. Active Network Program: The DARPA Active Network Program has contributed to the development of active network technologies, exploring the dynamic deployment of network protocols and services.

3. Cellular IP: Research on Cellular IP has been conducted to address the challenges of mobility and programmability in wireless networks, aiming to provide seamless mobility support and efficient network management.

4. NetScript: The NetScript project has explored a language-based approach to active networks, focusing on the development of programming languages and tools for active network environments.

5. Smart Packets for Active Networks: This work has focused on developing smart packets for active networks, aiming to enhance the programmability and intelligence of network packets to support dynamic network services.

6. Survey of Programmable Networks: A comprehensive survey of programmable networks has been conducted, providing insights into the state of the art, challenges, and future directions in the field.

These research efforts and developments have contributed to the advancement of programmable networks, addressing challenges related to network programmability, transportable software, distributed systems technology, and open network control. They have laid the foundation for the development of spawning networks and other innovative approaches to network customization and management.

Overall, spawning networks have the potential to revolutionize the field of programmable networks by offering a systematic and automated approach to network customization, resource control, and adaptation to evolving user demands.

The concept was introduced in a paper titled "Spawning Networks", published in IEEE Network by a group of researchers from Columbia University, the University of Hamburg, Intel Corporation, Hitachi Limited, and Nortel Networks.

The authors are Andrew T. Campbell,[1] Michael E. Kounavis, and Daniel A. Villela of Columbia University; John B. Vicente of Intel Corporation; Hermann G. De Meer of the University of Hamburg; Kazuho Miki of Hitachi Limited; and Kalai S. Kalaichelvan[2] of Nortel Networks.[3]

A related paper, "The Genesis Kernel: A Programming System for Spawning Network Architectures", was written by Michael E. Kounavis, Andrew T. Campbell, Stephen Chou, Fabien Modoux, John Vicente, and Hao Zhuang.[4]

A first implementation of spawning networks was realized at Columbia University as part of the Ph.D. thesis work of Michael Kounavis. This implementation is based on the design of the Genesis Kernel, a programming system consisting of three layers: a transport environment, which is a collection of programmable virtual routers; a programming environment, which offers open access to the programmable data path; and a life cycle environment, which is responsible for spawning and managing network architectures. One of the concepts used in the design of the Genesis Kernel is the creation of a network architecture from a profiling script that specifies the architecture's components and their interactions.
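
As a loose illustration of a profiling script that specifies "components and their interactions", the sketch below declares a few components, wires them together, and checks that the wiring is consistent. The format and the component and object names are hypothetical stand-ins, not the Genesis Kernel's actual scripting language.

```python
# Hypothetical profiling-script sketch: components plus their interactions.
# Component and object names are invented for illustration only.

architecture = {
    "components": {
        "routing":    {"object": "DistanceVectorRoutelet"},
        "forwarding": {"object": "IPv4Forwarder"},
        "management": {"object": "PolicyManager"},
    },
    "interactions": [
        ("routing", "forwarding"),     # routing installs forwarding entries
        ("management", "routing"),     # management configures routing policy
    ],
}

def validate(arch):
    """Check that every interaction refers to a declared component."""
    names = arch["components"].keys()
    for src, dst in arch["interactions"]:
        if src not in names or dst not in names:
            raise ValueError(f"undeclared component in interaction {src}->{dst}")
    return True

validate(architecture)
```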

Related Research Articles

Execution in computer and software engineering is the process by which a computer or virtual machine reads and acts on the instructions of a computer program. Each instruction of a program is a description of a particular action which must be carried out in order for a specific problem to be solved. Execution involves repeatedly following a 'fetch–decode–execute' cycle for each instruction, carried out by the control unit. As the executing machine follows the instructions, specific effects are produced in accordance with the semantics of those instructions.

Web development is the work involved in developing a website for the Internet or an intranet. Web development can range from developing a simple single static page of plain text to complex web applications, electronic businesses, and social network services. A more comprehensive list of tasks to which Web development commonly refers, may include Web engineering, Web design, Web content development, client liaison, client-side/server-side scripting, Web server and network security configuration, and e-commerce development.

A hypervisor, also known as a virtual machine monitor (VMM) or virtualizer, is a type of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Unlike an emulator, the guest executes most instructions on the native hardware. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system–level virtualization, where all instances must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.

The architecture of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered design that consists of two main components, user mode and kernel mode. It is a preemptive, reentrant multitasking operating system, which has been designed to work with uniprocessor and symmetrical multiprocessor (SMP)-based computers. To process input/output (I/O) requests, it uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O. Starting with Windows XP, Microsoft began making 64-bit versions of Windows available; before this, there were only 32-bit versions of these operating systems.

OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, called containers, zones, virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels, or jails. Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources of that computer. However, programs running inside of a container can only see the container's contents and devices assigned to the container.

Automatically Switched Optical Network (ASON) is a concept for the evolution of transport networks which allows for dynamic, policy-driven control of an optical or SDH network based on signaling between a user and components of the network. Its aim is to automate the resource and connection management within the network. The IETF defines ASON as an alternative/supplement to NMS-based connection management.

In computer science, hierarchical protection domains, often called protection rings, are mechanisms to protect data and functionality from faults and malicious behavior.

OpenVZ is an operating-system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and LXC.

Competence-based strategic management is a way of thinking about how organizations gain high performance for a significant period of time. Established as a theory in the early 1990s, competence-based strategic management theory explains how organizations can develop sustainable competitive advantage in a systematic and structural way. The theory of competence-based strategic management is an integrative strategy theory that incorporates economic, organizational and behavioural concerns in a framework that is dynamic, systemic, cognitive and holistic. This theory defines competence as the ability to sustain the coordinated deployment of resources in ways that help an organization achieve its goals. Competence-based management can be found in areas other than strategic management, namely in human resource management.

User environment management is the management of a computer user's experience within their desktop environment.

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.

Dynamic Infrastructure is an information technology concept related to the design of data centers, whereby the underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. In other words, data center assets such as storage and processing power can be provisioned to meet surges in user's needs. The concept has also been referred to as Infrastructure 2.0 and Next Generation Data Center.

The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. The kernel is also responsible for preventing and mitigating conflicts between different processes. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources e.g. CPU & cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup. It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit.

A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. They handle jobs which are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. Second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.

HP Cloud Service Automation is cloud management software from Hewlett Packard Enterprise (HPE) that is used by companies and government agencies to automate the management of cloud-based IT-as-a-service, from order, to provision, and retirement. HP Cloud Service Automation orchestrates the provisioning and deployment of complex IT services such as of databases, middleware, and packaged applications. The software speeds deployment of application-based services across hybrid cloud delivery platforms and traditional IT environments.

Software-defined networking (SDN) technology is an approach to network management that enables dynamic, programmatically efficient network configuration to improve network performance and monitoring, in a manner more akin to cloud computing than to traditional network management. SDN is meant to address the static architecture of traditional networks and may be employed to centralize network intelligence in one network component by disassociating the forwarding process of network packets from the routing process. The control plane consists of one or more controllers, which are considered the brains of the SDN network, where the whole intelligence is incorporated. However, centralization has certain drawbacks related to security, scalability and elasticity.

Network functions virtualization (NFV) is a network architecture concept that leverages IT virtualization technologies to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create and deliver communication services.

Cisco Prime is a network management software suite consisting of different software applications by Cisco Systems. Most applications are geared towards either Enterprise or Service Provider networks. Cisco Network Registrar is among these applications.

Cloud management is the management of cloud computing products and services.

5G network slicing is a network architecture that enables the multiplexing of virtualized and independent logical networks on the same physical network infrastructure. Each network slice is an isolated end-to-end network tailored to fulfill diverse requirements requested by a particular application.

References

  1. "Home". Andrew T. Campbell.
  2. "EION Inc.- Intelligence in the Air".
  3. Campbell, Andrew T.; Kounavis, Michael E.; Villela, Daniel A.; Vicente, John B.; de Meer, Hermann G.; Miki, Kazuho; Kalaichelvan, Kalai S. (1999). "Spawning Networks". IEEE Network. 13 (4): 16–29. CiteSeerX 10.1.1.28.8539. doi:10.1109/65.777438. ISSN 0890-8044.
  4. Kounavis, M.E.; Campbell, A.T.; Chou, S.; Modoux, F.; Vicente, J.; Zhuang, Hao (2001). "The Genesis Kernel: a programming system for spawning network architectures". IEEE Journal on Selected Areas in Communications. 19 (3): 511–526. doi:10.1109/49.917711.