Network planning and design

Network planning and design is an iterative process, encompassing topological design, network-synthesis, and network-realization, and is aimed at ensuring that a new telecommunications network or service meets the needs of the subscriber and operator. [1] The process can be tailored according to each new network or service. [2]

A network planning methodology

A traditional network planning methodology in the context of business decisions involves five layers of planning, namely:

- business planning
- long-term and medium-term network planning
- short-term network planning
- IT asset sourcing
- operations and maintenance

Each of these layers incorporates plans for different time horizons: the business planning layer determines the planning that the operator must perform to ensure that the network will perform as required for its intended life-span, whereas the operations and maintenance layer examines how the network will run on a day-to-day basis.

The network planning process begins with the acquisition of external information. This includes:

- forecasts of how the new network or service will operate;
- the economic information concerning costs; and
- the technical details of the network's capabilities.

Planning a new network/service involves implementing the new system across the first four layers of the OSI Reference Model. [1] Choices must be made for the protocols and transmission technologies. [1] [2]

The network planning process involves three main steps:

- Topological design: determines where to place the components of the network and how to connect them.
- Network-synthesis: determines the size of the components of the network, taking the Grade of Service (GoS) as a working parameter.
- Network realization: determines how to meet the capacity requirements and ensure reliability within the network.

These steps are performed iteratively in parallel with one another. [1] [2]

The role of forecasting

During the process of Network Planning and Design, estimates are made of the expected traffic intensity and traffic load that the network must support. [1] If a network of a similar nature already exists, traffic measurements of such a network can be used to calculate the exact traffic load. [2] If there are no similar networks, then the network planner must use telecommunications forecasting methods to estimate the expected traffic intensity. [1]

The forecasting process involves several steps: [1]

- definition of the problem;
- data acquisition;
- choice of an appropriate forecasting method;
- analysis and forecasting; and
- documentation and analysis of the results.
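
Once historical traffic measurements are available, a simple trend extrapolation is often the first forecasting method tried. The sketch below is illustrative only (the busy-hour traffic figures are hypothetical): it fits a least-squares line to past measurements and extrapolates one period ahead.

```python
# Illustrative trend forecast: fit y = a + b*x to historical busy-hour
# traffic (x = 0..n-1) and extrapolate to the next period (x = n).
# The sample figures are hypothetical, not from the cited sources.

def forecast_next(history):
    """Least-squares linear fit to `history`, evaluated one step ahead."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    # Least-squares slope and intercept.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a + b * n

# Hypothetical busy-hour traffic (erlangs) over five past quarters.
history = [100.0, 110.0, 120.0, 130.0, 140.0]
print(forecast_next(history))  # → 150.0
```

Real forecasts would of course account for seasonality and saturation effects; the point here is only that a fitted model of past traffic replaces direct measurement when no comparable network exists.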

Dimensioning

Dimensioning a new network determines the minimum capacity requirements that will still allow the teletraffic Grade of Service (GoS) requirements to be met. [1] [2] To do this, dimensioning involves planning for peak-hour traffic, i.e. the hour of the day during which traffic intensity is at its greatest. [1]

The dimensioning process involves determining the network’s topology, routing plan, traffic matrix, and GoS requirements, and using this information to determine the maximum call handling capacity of the switches, and the maximum number of channels required between the switches. [1] This process requires a complex model that simulates the behavior of the network equipment and routing protocols.
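
The interaction between the traffic matrix and the routing plan can be sketched very simply. In the hypothetical three-node example below (figures are illustrative, not from the cited sources), the load each inter-switch link must carry is obtained by accumulating, over every origin-destination pair, the offered traffic on each link its route traverses.

```python
# Hypothetical three-node example: combine a traffic matrix (erlangs
# offered between switch pairs) with a routing plan (the links each
# origin-destination pair traverses) to obtain per-link loads.

traffic_matrix = {          # offered traffic in erlangs
    ("A", "B"): 20.0,
    ("A", "C"): 5.0,
    ("B", "C"): 10.0,
}
routing_plan = {            # links traversed by each origin-destination pair
    ("A", "B"): [("A", "B")],
    ("A", "C"): [("A", "B"), ("B", "C")],   # A reaches C via B
    ("B", "C"): [("B", "C")],
}

link_load = {}
for od, erlangs in traffic_matrix.items():
    for link in routing_plan[od]:
        link_load[link] = link_load.get(link, 0.0) + erlangs

print(link_load)  # {('A', 'B'): 25.0, ('B', 'C'): 15.0}
```

The per-link loads are then the input to dimensioning each link for its GoS target; real tools additionally simulate routing-protocol behavior and equipment limits.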

A basic dimensioning rule is that the planner must ensure that the traffic load never approaches 100 percent of capacity. [1] To calculate the correct dimensioning that complies with this rule, the planner must take ongoing measurements of the network's traffic and continuously maintain and upgrade resources to meet the changing requirements. [1] [2] A further reason for overprovisioning is to ensure that traffic can be rerouted in case a failure occurs in the network.
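
For circuit-switched traffic, the classical tool for this kind of dimensioning is the Erlang B formula, which relates offered traffic, number of channels, and blocking probability. A minimal sketch (the 10-erlang load and 1% GoS target are example figures, not from the cited sources):

```python
# Erlang B dimensioning sketch: find the smallest number of channels
# that keeps the blocking probability within the GoS target.

def erlang_b(channels, erlangs):
    """Blocking probability for `erlangs` of traffic offered to `channels` circuits."""
    b = 1.0
    for n in range(1, channels + 1):
        b = (erlangs * b) / (n + erlangs * b)  # numerically stable recursion
    return b

def channels_needed(erlangs, gos):
    """Smallest channel count whose blocking probability is <= gos."""
    n = 1
    while erlang_b(n, erlangs) > gos:
        n += 1
    return n

# 10 erlangs of busy-hour traffic at a 1% grade of service.
print(channels_needed(10.0, 0.01))  # → 18
```

Note how the answer (18 channels for 10 erlangs) embodies the overprovisioning rule above: capacity is well above the mean offered load so that peak-hour blocking stays within the GoS target.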

Because of its complexity, network dimensioning is typically done using specialized software tools. Whereas researchers typically develop custom software to study a particular problem, network operators typically make use of commercial network planning software.

Traffic engineering

Compared to network engineering, which adds resources such as links, routers, and switches to the network, traffic engineering changes the paths that traffic takes through the existing network in order to alleviate congestion or accommodate more traffic demand.

Traffic engineering is most valuable when the cost of network expansion is prohibitively high and the network load is not optimally balanced: the former provides the financial motivation for traffic engineering, while the latter creates the opportunity to deploy it.
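
A toy illustration of the idea (all figures hypothetical): rerouting part of a demand from a congested link to an underused one lowers the maximum link utilization without adding any capacity.

```python
# Traffic engineering in miniature: moving a 30-unit flow from a
# congested link to an underused one reduces the worst-case utilization.
# Link capacities and loads are hypothetical.

capacity = {"link1": 100.0, "link2": 100.0}

def max_utilization(loads):
    """Utilization of the most heavily loaded link."""
    return max(loads[link] / capacity[link] for link in loads)

before = {"link1": 95.0, "link2": 30.0}   # shortest-path routing only
after  = {"link1": 65.0, "link2": 60.0}   # 30 units rerouted onto link2

print(max_utilization(before))  # → 0.95
print(max_utilization(after))   # → 0.65
```

Production traffic engineering solves this as an optimization problem (e.g. minimizing maximum utilization over all feasible path assignments), but the objective is the same as in this sketch.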

Survivability

Network survivability enables the network to maintain maximum connectivity and quality of service under failure conditions. It has long been one of the critical requirements in network planning and design, and it imposes design requirements on topology, protocols, bandwidth allocation, etc. A topology requirement may be to maintain an at-least-two-connected network against the failure of any single link or node. Protocol requirements include using a dynamic routing protocol to reroute traffic around equipment failures or around changes made while the network is being re-dimensioned. Bandwidth allocation requirements proactively allocate extra bandwidth to avoid traffic loss under failure conditions. This topic has been actively studied in conferences such as the International Workshop on Design of Reliable Communication Networks (DRCN). [3]
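
The two-connectivity requirement can be checked directly, if inefficiently, by simulating every single-node failure. The sketch below (illustrative only) removes each node in turn and verifies that the surviving nodes remain mutually reachable; a ring passes, a star with a single hub does not.

```python
# Brute-force survivability check: a topology tolerates any single-node
# failure iff removing each node still leaves the rest connected.

from collections import deque

def connected(nodes, edges):
    """True if all `nodes` are mutually reachable over `edges`."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:   # ignore edges touching a failed node
            adj[u].add(v)
            adj[v].add(u)
    seen, queue = set(), deque([next(iter(nodes))])
    while queue:                     # breadth-first search
        n = queue.popleft()
        if n not in seen:
            seen.add(n)
            queue.extend(adj[n] - seen)
    return seen == set(nodes)

def survives_single_node_failure(nodes, edges):
    """Fail each node in turn and check the remaining nodes stay connected."""
    return all(connected(set(nodes) - {f}, edges) for f in nodes)

ring = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
star = [("hub", "A"), ("hub", "B"), ("hub", "C")]
print(survives_single_node_failure({"A", "B", "C", "D"}, ring))   # → True
print(survives_single_node_failure({"hub", "A", "B", "C"}, star)) # → False
```

Practical planning tools instead use linear-time articulation-point algorithms and also verify that surviving capacity suffices for rerouted traffic, not just that connectivity is preserved.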

Data-driven network design

More recently, with the increasing role of artificial intelligence technologies in engineering, the idea of using data to create data-driven models of existing networks has been proposed. [4] By analyzing large volumes of network data, even the less desirable behaviors that may occur in real-world networks can be understood, worked around, and avoided in future designs.

Both the design and management of networked systems can be improved by a data-driven paradigm. [5] Data-driven models can also be used at various phases of the service and network management life cycle, such as service instantiation, service provisioning, optimization, monitoring, and diagnostics. [6]

References

  1. Penttinen, A., "Chapter 10 – Network Planning and Dimensioning", Lecture Notes: S-38.145 – Introduction to Teletraffic Theory, Helsinki University of Technology, Fall 1999.
  2. Farr, R.E., Telecommunications Traffic, Tariffs and Costs – An Introduction for Managers, Peter Peregrinus Ltd, 1988.
  3. International Workshop on Design of Reliable Communication Networks (DRCN).
  4. Fortuna, C., De Poorter, E., Škraba, P., Moerman, I., "Data-Driven Wireless Network Design: A Multi-level Modeling Approach", Wireless Personal Communications, May 2016, Volume 88, Issue 1, pp. 63–77.
  5. Jiang, J., Sekar, V., Stoica, I., Zhang, H., "Unleashing the Potential of Data-Driven Networking", Springer LNCS, volume 10340, September 2017.
  6. "An Architecture for Data Model-Driven Network Management: The Network Virtualization Case", IETF draft.