The Recursive InterNetwork Architecture (RINA) is a new computer network architecture proposed as an alternative to the architecture of the currently mainstream Internet protocol suite. The principles behind RINA were first presented by John Day in his 2008 book Patterns in Network Architecture: A Return to Fundamentals.[1] This work is a fresh start, taking into account lessons learned in the 35 years of TCP/IP's existence, as well as the lessons of OSI's failure and of other network technologies of the past few decades, such as CYCLADES, DECnet, and Xerox Network Systems. RINA's fundamental principles are that computer networking is just Inter-Process Communication (IPC), and that layering should be done based on scope/scale, with a single recurring set of protocols, rather than based on function, with specialized protocols. The protocol instances in one layer interface with the protocol instances in the layers above and below via new concepts and entities that effectively reify networking functions currently specific to protocols like BGP, OSPF and ARP. In this way, RINA claims to support features like mobility, multihoming and quality of service without the need for additional specialized protocols like RTP and UDP, and to allow simplified network administration without the need for concepts like autonomous systems and NAT.
RINA is the result of an effort to work out general principles in computer networking that apply in all situations. RINA is the specific architecture, implementation, testing platform and ultimately deployment of the model informally known as the IPC model,[2] although it also deals with concepts and results that apply to any distributed application, not just to networking. Because the model starts from distributed applications, most of its terminology comes from application development rather than networking, which is natural given that RINA's fundamental principle is to reduce networking to IPC.
The basic entity of RINA is the Distributed Application Process or DAP, which frequently corresponds to a process on a host. Two or more DAPs constitute a Distributed Application Facility or DAF, as illustrated in Figure 1. These DAPs communicate using the Common Distributed Application Protocol or CDAP, exchanging structured data in the form of objects. These objects are organized in a Resource Information Base or RIB, which provides them with a naming schema and a logical structure. CDAP provides six basic operations on a remote DAP's objects: create, delete, read, write, start and stop.[citation needed]
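As an illustration only (the class and method names below are hypothetical and not taken from any RINA specification or implementation), a minimal sketch of how the six CDAP operations might act on the named objects of a peer's RIB could look like this:

```python
# Hypothetical sketch of a CDAP-style interface: six operations on the
# objects of a remote DAP's RIB, addressed by name within the RIB's schema.
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class RIB:
    """A Resource Information Base: named, structured objects of a DAP."""
    objects: Dict[str, Any] = field(default_factory=dict)


class CDAPSession:
    """Models the six CDAP operations against a peer DAP's RIB."""

    def __init__(self, peer_rib: RIB):
        self.peer_rib = peer_rib

    def create(self, name: str, value: Any) -> None:
        self.peer_rib.objects[name] = value

    def delete(self, name: str) -> None:
        self.peer_rib.objects.pop(name, None)

    def read(self, name: str) -> Any:
        return self.peer_rib.objects[name]

    def write(self, name: str, value: Any) -> None:
        self.peer_rib.objects[name] = value

    def start(self, name: str) -> None:
        # Start an action associated with the named object (illustrative).
        print(f"starting {name}")

    def stop(self, name: str) -> None:
        print(f"stopping {name}")


# Usage example with a made-up object name:
rib = RIB()
session = CDAPSession(rib)
session.create("daf.management.enrollment", {"status": "requested"})
print(session.read("daf.management.enrollment"))
```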
In order to exchange information, DAPs need an underlying facility whose task is to provide and manage IPC services over a certain scope. This facility is another DAF, called a Distributed IPC Facility or DIF. A DIF enables a DAP to allocate flows to one or more other DAPs by providing just the names of the targeted DAPs and the desired QoS parameters, such as bounds on data loss and latency, ordered or out-of-order delivery, reliability, and so forth. DAPs may not trust the DIF they are using and may therefore protect their data before writing it to the flow via a service data unit (SDU) protection module, for example by encrypting it. The DAPs of a DIF are called IPC Processes or IPCPs. They have the same generic DAP structure shown in Figure 3, plus some specific tasks to provide and manage IPC. These tasks, as shown in Figure 4, can be divided into three categories, in order of increasing complexity and decreasing frequency: data transfer, data transfer control and layer management.[citation needed]
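As a purely illustrative sketch (the data structures and parameter names here are assumptions, not part of any RINA API), the flow-allocation interface a DIF offers to its DAPs might look roughly as follows, including optional SDU protection applied by a DAP that does not trust the DIF:

```python
# Illustrative sketch: a DAP requests a flow by supplying only the
# destination application name and the desired QoS, never an address,
# and may apply its own SDU protection before writing.
from dataclasses import dataclass


@dataclass
class QoSSpec:
    max_loss_rate: float = 0.0      # acceptable fraction of lost SDUs
    max_delay_ms: int = 100         # latency bound
    in_order_delivery: bool = True  # ordered vs. out-of-order delivery
    reliable: bool = True           # retransmit lost SDUs or not


class Flow:
    def __init__(self, dest_app: str, qos: QoSSpec):
        self.dest_app, self.qos = dest_app, qos

    def write(self, sdu: bytes) -> None:
        print(f"sending {len(sdu)} bytes to {self.dest_app}")


class DIF:
    def allocate_flow(self, dest_app_name: str, qos: QoSSpec) -> Flow:
        # The DIF resolves the application name internally; the caller
        # never sees addresses of the layer below.
        return Flow(dest_app_name, qos)


def protect_sdu(sdu: bytes, key: bytes) -> bytes:
    # Placeholder SDU protection: a real policy might encrypt and add
    # integrity checks; the XOR here only marks where protection applies.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(sdu))


dif = DIF()
flow = dif.allocate_flow("video-server", QoSSpec(max_delay_ms=50, reliable=False))
flow.write(protect_sdu(b"hello", key=b"\x42"))
```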
In most contemporary network models, a DAF thus corresponds to the application layer and a DIF to the layer immediately below; the three task categories above cover the vast majority of tasks not only of network operation, but also of network management and even authentication (with some adjustments in responsibility, as will be seen below).[citation needed]
DIFs, being DAFs, in turn use other underlying DIFs themselves, going all the way down to the physical layer DIF controlling the wires and jacks. This is where the recursion of RINA comes from. All RINA layers have the same structure and components and provide the same functions; they differ only in their scopes, configurations or policies (mirroring the separation of mechanism and policy in operating systems). [3] As shown in Figure 2, RINA networks are usually structured in DIFs of increasing scope. Figure 3 shows an example of how the Web could be structured with RINA: the highest layer is the one closest to applications, corresponding to email or websites; the lowest layers aggregate and multiplex the traffic of the higher layers, corresponding to ISP backbones. Multi-provider DIFs (such as the public Internet or others) float on top of the ISP layers. In this model, three types of systems are distinguished: hosts, which contain DAPs; interior routers, internal to a layer; and border routers, at the edges of a layer, where packets go up or down one layer.
In short, RINA keeps the concepts of protocol data unit (PDU) and service data unit (SDU), but instead of layering by function, it layers by scope. Layers correspond not to different responsibilities but to different scales, and the model is specifically designed to be applicable from a single point-to-point Ethernet connection all the way up to the Web. RINA is therefore an attempt to reuse as much theory as possible, eliminate the need for ad-hoc protocol design, and thus reduce the complexity of network construction, management and operation.[citation needed]
As explained above, the IP address is too low-level an identifier on which to base multihoming and mobility efficiently, and it also forces routing tables to be bigger than necessary. RINA literature follows Jerry Saltzer's general theory of addressing and naming. According to Saltzer, four elements need to be identified: applications, nodes, attachment points and paths.[4] An application can run on one or more nodes and should be able to move from one node to another without losing its identity in the network. A node can be connected to a pair of attachment points and should be able to move between them without losing its identity in the network. A directory maps an application name to a node address, and routes are sequences of node addresses and attachment points. These points are illustrated in Figure 4.
Saltzer took his model from operating systems, but the RINA authors concluded it could not be applied cleanly to internetworks, which can have more than one path between the same pair of nodes (let alone between whole networks). Their solution is to model routes as sequences of nodes: at each hop, the respective node chooses the most appropriate attachment point to forward the packet to the next node. RINA therefore routes in a two-step process: first the route is calculated as a sequence of node addresses, and then, for each hop, an appropriate attachment point is selected. These two steps generate the forwarding table; forwarding itself is still performed with a single lookup. Moreover, the second step can be performed more frequently to exploit multihoming for load balancing.[citation needed]
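A minimal sketch of this two-step construction follows (the data structures and the load-balancing rule are illustrative assumptions, not taken from a RINA specification):

```python
# Step 1 computes routes as sequences of node addresses; step 2 builds the
# forwarding table by choosing, per next-hop node, one of the local
# attachment points that reach it. Forwarding remains a single lookup.
from typing import Dict, List

# Step 1: node-level routes from this node (e.g. output of a shortest-path run).
routes: Dict[str, List[str]] = {
    "nodeD": ["nodeB", "nodeD"],   # destination -> path as node addresses
    "nodeC": ["nodeC"],
}

# Local attachment points that reach each neighbouring node (with
# multihoming, a neighbour may be reachable through several of them).
attachment_points: Dict[str, List[str]] = {
    "nodeB": ["if0", "if1"],
    "nodeC": ["if2"],
}


def build_forwarding_table(load: Dict[str, int]) -> Dict[str, str]:
    """Step 2: map each destination to an attachment point for its next hop."""
    table = {}
    for dest, path in routes.items():
        next_hop = path[0]
        # Pick the least-loaded attachment point; rerunning only this step
        # lets multihoming be exploited for load balancing.
        table[dest] = min(attachment_points[next_hop], key=lambda ap: load.get(ap, 0))
    return table


forwarding_table = build_forwarding_table(load={"if0": 7, "if1": 2, "if2": 0})
print(forwarding_table)   # forwarding itself is a single lookup: table[dest]
```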
With this naming structure, mobility and multihoming are inherently supported[5] if the names have carefully chosen properties: application names are location-independent, node addresses are location-dependent but route-independent, and attachment-point addresses are route-dependent.
Applying this naming scheme to RINA with its recursive layers allows the conclusion that mapping application names to node addresses is analogous to mapping node addresses to attachment points. Put simply, at any layer, nodes in the layer above can be seen as applications while nodes in the layer below can be seen as attachment points.
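To make the recursion concrete, here is an illustrative sketch (all identifiers are invented for the example) in which each layer repeats the same mapping: names of the layer above resolve to node addresses of this layer, and each node address is reachable through nodes of the layer below:

```python
# Each layer holds a directory (upper-layer name -> node address here) and
# an attachment map (node address here -> nodes of the layer below).
from typing import Dict, List


class Layer:
    def __init__(self, name: str,
                 directory: Dict[str, str],
                 attachments: Dict[str, List[str]]):
        self.name = name
        self.directory = directory      # upper-layer name -> node address
        self.attachments = attachments  # node address -> lower-layer nodes

    def resolve(self, upper_name: str) -> str:
        return self.directory[upper_name]


# Top layer: application names map to node addresses in a multi-provider DIF.
internet_dif = Layer(
    "multi-provider DIF",
    directory={"web-server-app": "node-17"},
    attachments={"node-17": ["isp1-node-3", "isp2-node-9"]},  # multihomed
)

# The ISP backbone DIF plays the same game one level down.
isp_dif = Layer(
    "ISP backbone DIF",
    directory={"isp1-node-3": "router-a", "isp2-node-9": "router-b"},
    attachments={"router-a": ["eth0"], "router-b": ["eth1"]},
)

addr = internet_dif.resolve("web-server-app")        # application -> node
print(addr, internet_dif.attachments[addr])          # node -> attachment points
print(isp_dif.resolve(internet_dif.attachments[addr][0]))
```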
The Internet protocol suite also generally dictates that protocols be designed in isolation, without regard to whether mechanisms are duplicated across protocols and could therefore be factored out as policies. RINA tries to avoid this by applying to protocol design the separation of mechanism and policy known from operating systems.[6] Each DIF uses different policies to provide different classes of quality of service and to adapt to the characteristics of either the physical media, if the DIF is low-level, or the applications, if the DIF is high-level.
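As a hypothetical illustration of this separation (the policy names and the retransmission-timeout rule are assumptions, not RINA-specified behaviour), a single data-transfer mechanism could be parameterized by pluggable policy objects so that a low-level DIF over a lossy link and a high-level DIF serving applications reuse the same code:

```python
# The mechanism (scheduling a retransmission) is fixed; only the policy
# (how the timeout is chosen) varies between DIFs.
from abc import ABC, abstractmethod


class RetransmissionPolicy(ABC):
    @abstractmethod
    def timeout_ms(self, rtt_ms: float) -> float: ...


class AggressiveRetransmission(RetransmissionPolicy):
    """Suited to a lossy, low-latency link-level DIF."""
    def timeout_ms(self, rtt_ms: float) -> float:
        return 1.5 * rtt_ms


class ConservativeRetransmission(RetransmissionPolicy):
    """Suited to a wide-scope DIF carrying bulk application traffic."""
    def timeout_ms(self, rtt_ms: float) -> float:
        return 4.0 * rtt_ms


class DataTransferMechanism:
    def __init__(self, policy: RetransmissionPolicy):
        self.policy = policy

    def schedule_retransmit(self, rtt_ms: float) -> float:
        return self.policy.timeout_ms(rtt_ms)


link_dif = DataTransferMechanism(AggressiveRetransmission())
app_dif = DataTransferMechanism(ConservativeRetransmission())
print(link_dif.schedule_retransmit(2.0), app_dif.schedule_retransmit(40.0))
```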
RINA uses the theory behind the Delta-t protocol,[7] developed by Richard Watson in 1981. Watson's research suggests that bounding three timers is a sufficient condition for reliable transfer. Delta-t shows how this works in practice: it has no connection setup or tear-down. The same research also notes that TCP already uses these timers in its operation, making Delta-t comparatively simpler. Watson's research also suggests that synchronization and port allocation should be distinct functions, port allocation being part of layer management and synchronization being part of data transfer.
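In the spirit of this result, a sketch of purely timer-based connection-state management might look as follows (the parameter names and the exact quiet-period bound are assumptions for illustration, not the Delta-t specification): once the maximum packet lifetime, the maximum retransmission time and the maximum acknowledgement delay are bounded, connection state can simply be discarded after a quiet period, with no handshake:

```python
# Illustrative timer-based connection management: no setup or tear-down
# messages; state is dropped after a bounded quiet period.
import time


class TimerBasedConnection:
    def __init__(self, mpl_s: float, r_s: float, a_s: float):
        # Quiet period after which no old packet of this connection can
        # still arrive or be retransmitted (illustrative bound).
        self.quiet_period_s = 2 * mpl_s + r_s + a_s
        self.last_activity = time.monotonic()
        self.state = "active"

    def on_packet(self) -> None:
        self.last_activity = time.monotonic()
        self.state = "active"

    def maybe_discard_state(self) -> None:
        if time.monotonic() - self.last_activity > self.quiet_period_s:
            self.state = "closed"   # state discarded: nothing to tear down


conn = TimerBasedConnection(mpl_s=30.0, r_s=10.0, a_s=1.0)
conn.on_packet()
conn.maybe_discard_state()
print(conn.state, conn.quiet_period_s)
```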
To accommodate security, RINA requires each DIF/DAF to specify a security policy, whose functions are shown in Figure 5. This allows securing not just applications, but backbones and switching fabrics themselves. A public network is simply a special case in which the security policy does nothing. This may introduce overhead for smaller networks, but it scales better to larger networks because layers do not need to coordinate their security mechanisms: the current Internet is estimated to require around five times more distinct security entities than RINA.[8] Among other things, the security policy can also specify an authentication mechanism; this makes firewalls and blacklists unnecessary, because a DAP or IPCP that cannot join a DAF or DIF cannot transmit or receive. DIFs also do not expose their IPCP addresses to higher layers, preventing a wide class of man-in-the-middle attacks.
The design of the Delta-t protocol itself, with its emphasis on simplicity, is also a factor. For example, since the protocol has no handshake, there are no corresponding control messages that can be forged and no connection-setup state that can be misused, as in a SYN flood. The synchronization mechanism also makes aberrant behaviour correlate more strongly with intrusion attempts, making attacks far easier to detect.[9]
The starting point for a radically new and different network architecture like RINA is the attempt to solve, or respond to, the following problems, which do not appear to have practical or compromise-free solutions within current network architectures, especially the Internet protocol suite and its functional layering as depicted in Figure 6:
Though these problems are far more acutely visible today, there were precedents and cases almost from the very beginning of the ARPANET, the environment in which the Internet protocol suite was designed:
In 1972, Tinker Air Force Base[15] wanted connections to two different IMPs for redundancy. ARPANET designers realized that they could not support this feature because a host's address was the address of the IMP port to which the host was connected (borrowing from telephony). To the ARPANET, two interfaces of the same host had different addresses; in other words, the address was too low-level to identify a host.
Early versions of TCP performed the error- and flow-control functions (today's TCP) and the relaying and multiplexing functions (today's IP) in the same protocol. In 1978 TCP was split from IP, even though the two layers had the same scope. By 1987, the networking community was well aware of the problems of IP fragmentation, to the point of considering it harmful.[16] However, this was not understood as a symptom that TCP and IP were interdependent.
Richard Watson in 1981 provided a fundamental theory of reliable transport [17] whereby connection management requires only timers bounded by a small factor of the Maximum Packet Lifetime (MPL). Based on this theory, Watson et al. developed the Delta-t protocol [7] which allows a connection's state to be determined simply by bounding three timers, with no handshaking. On the other hand, TCP uses both explicit handshaking as well as more limited timer-based management of the connection's state.
Early in 1972, the International Network Working Group (INWG) was created to bring together the nascent network research community. One of its early accomplishments was voting on an international network transport protocol, which was approved in 1976.[18] Remarkably, the selected option, like all the other candidates, had an architecture composed of three layers of increasing scope: data link (to handle different types of physical media), network (to handle different types of networks) and internetwork (to handle a network of networks), each layer with its own address space. When TCP/IP was introduced, it ran at the internetwork layer, on top of the Host-IMP Protocol when running over the ARPANET. But when NCP was shut down, TCP/IP took over the network role and the internetwork layer was lost.[19] This explains the need for autonomous systems and NAT today, to partition and reuse ranges of the IP address space in order to facilitate administration.
The need for an identifier at a higher level than the IP address had been well understood since the mid-1970s. However, application names were not introduced; instead, DNS was designed and deployed, and well-known ports continued to be used to identify applications. The advent of the web and HTTP created a need for application names, leading to URLs. URLs, however, tie each application instance to a physical interface of a computer and to a specific transport connection, since the URL contains the DNS name of an IP interface and a TCP port number, spilling the multihoming and mobility problems over to applications.[citation needed]
Though the problem of congestion control in datagram networks had been known since the 1970s and early 1980s,[20][21] the congestion collapse of 1986 caught the Internet by surprise. Worse, the congestion control that was adopted (the Ethernet congestion avoidance scheme, with a few modifications) was placed in TCP.
In 1988, the IAB recommended using SNMP as the initial network management protocol for the Internet, with a later transition to the object-oriented approach of CMIP.[22] SNMP was a step backwards in network management, justified as a temporary measure while the required, more sophisticated approaches were implemented, but the transition never happened.
In 1992, the IAB produced a series of recommendations to resolve the scaling problems of the IPv4-based Internet: address space consumption and routing information explosion. Three options were proposed: introduce CIDR to mitigate the problem; design the next version of IP (IPv7) based on CLNP; or continue the research into naming, addressing and routing.[23] CLNP was an OSI-based protocol that addressed nodes instead of interfaces, solving the old multihoming problem dating back to the ARPANET and allowing better aggregation of routing information. CIDR was introduced, but the IETF did not accept an IPv7 based on CLNP. The IAB reconsidered its decision and the IPng process started, culminating in IPv6. One of the rules for IPng was not to change the semantics of the IP address, which continues to name the interface, perpetuating the multihoming problem.[13]
Between the publication of the PNA book in 2008 and 2014, a significant amount of RINA research and development work was done. An informal group known as the Pouzin Society, named after Louis Pouzin,[24] coordinates several international efforts.
The RINA research team at Boston University is led by Professors Abraham Matta, John Day and Lou Chitkushev, and has been awarded a number of grants from the National Science Foundation and the EC in order to continue investigating the fundamentals of RINA, develop an open-source prototype implementation in Java over UDP/IP,[25][26] and experiment with it on top of the GENI infrastructure.[27][28] BU is also a member of the Pouzin Society and an active contributor to the FP7 IRATI and PRISTINE projects. In addition, BU has incorporated RINA concepts and theory into its computer networking courses.
IRATI is an FP7-funded project with 5 partners: i2CAT, Nextworks, iMinds, Interoute and Boston University. It has produced an open source RINA implementation for the Linux OS on top of Ethernet. [29] [30]
PRISTINE is an FP7-funded project with 15 partners: WIT-TSSG, i2CAT, Nextworks, Telefónica I+D, Thales, Nexedi, B-ISDN, Atos, University of Oslo, Juniper Networks, Brno University, IMT-TSP, CREATE-NET, iMinds and UPC. Its main goal is to explore the programmability aspects of RINA to implement innovative policies for congestion control, resource allocation, routing, security and network management.[ citation needed ]
IRINA was funded by the GÉANT3+ open call and is a project with four partners: iMinds, WIT-TSSG, i2CAT and Nextworks. Its main goal is to study the use of the Recursive InterNetwork Architecture (RINA) as the foundation of the next-generation NREN and GÉANT network architectures. IRINA builds on the IRATI prototype and will compare RINA against the current networking state of the art and relevant clean-slate architectures under research, perform a use-case study of how RINA could be used in NREN scenarios, and showcase a laboratory trial of the study.[citation needed]