Future Internet is a general term for research activities on new architectures for the Internet.
While the technical development of the Internet was an extensive research topic from the beginning, growing public awareness of several critical shortcomings in terms of performance, reliability, scalability, security, and many other categories, including societal, economic, and business aspects, led to future Internet research efforts. The time horizon of future Internet studies is typically long term: significant deployments usually take several years to materialize.
Approaches towards a future Internet range from small incremental evolutionary steps to complete redesigns ("clean slate") and new architectural principles, in which the applied technologies are not limited by existing standards or paradigms; client-server networking, for example, might evolve into co-operative peer structures. The fact that an IP address denotes both the identifier and the locator of an end system, sometimes referred to as semantic overload, is an example of a conceptual shortcoming of the Internet protocol suite architecture. "Clean slate" approaches are motivated by the experience that supplementary or late additions to an original, established design gain only limited acceptance and deployment. Technical examples of evolutionary approaches include supplements to existing Internet technology, such as differentiated services, reliable server pooling, SCTP, the Locator/Identifier Separation Protocol, Site Multihoming by IPv6 Intermediation, and Internet Protocol version 6.
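The identifier/locator split that such proposals address can be sketched briefly. The following Python example is a minimal hypothetical illustration in the spirit of (but not implementing) the Locator/Identifier Separation Protocol: a stable endpoint identifier survives while its routable locator changes. All names in it are invented.

    # Hedged sketch: decoupling "who" (identifier) from "where" (locator).
    # In plain IP both roles collapse into one address; a mapping service
    # lets an endpoint keep its identity across moves. Names are invented.

    class MappingSystem:
        """Maps stable endpoint identifiers (EIDs) to routable locators (RLOCs)."""

        def __init__(self):
            self._map = {}  # EID -> current locator

        def register(self, eid: str, locator: str) -> None:
            self._map[eid] = locator  # endpoint (re)announces its attachment point

        def resolve(self, eid: str) -> str:
            return self._map[eid]  # senders look up the current locator

    mapping = MappingSystem()
    mapping.register("host-42", "192.0.2.10")    # attached to one network
    assert mapping.resolve("host-42") == "192.0.2.10"

    mapping.register("host-42", "198.51.100.7")  # host moves; identity unchanged
    assert mapping.resolve("host-42") == "198.51.100.7"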
Non-technical aspects of a future Internet span large areas such as socio-economics,[1] business, and environmental issues. The Organisation for Economic Co-operation and Development held a conference called "Shaping Policies for a Digital World" in 2008. It proposed activities such as publishing recommendations for the future of the Internet economy.[2]
Research areas that could be seen as components of a future Internet include network management,[3][4][5][6] network virtualization, and treating any kind of information as objects, independent of their storage or location.
Elements of cloud computing have been blended into the notion of the future Internet, leading to the concept of cloud networking.
While future Internet research is often associated with the Global Environment for Network Innovations (GENI) initiative of the US National Science Foundation (NSF), other international research programmes have adopted this term. The 100x100 Clean Slate project ran from about 2003 through 2005; its name comes from the goal of an estimated 100 Mbit/s connectivity to about 100 million homes in the US.[7] Another "clean slate" project, hosted at Stanford University, ran from 2007 to 2012 and included faculty such as Nick McKeown, David Cheriton and Dan Boneh.[8][9][10]
The AKARI Project is Japan's "architecture design project for new generation network", with implementation expected by 2015.[11]
Future Internet Research and Experimentation (FIRE) is a research program funded by the European Union to foster research on the future development of Internet technology and services. Two meetings were held in 2007.[12] Some projects were funded in 2008, and more in 2011.[13]
Future Internet testbeds / experimentation between BRazil and Europe (FIBRE) is a research project co-funded by the Brazilian Council for Scientific and Technological Development (CNPq) and the European Commission under the FP7 Cooperation Programme. The main objective of the project is the design, implementation and validation of a shared future Internet research facility.[14] Also in Brazil, the NovaGenesis project, started in 2008, aims to integrate information- and service-centric approaches with mobile-friendly, software-defined, and name-based self-organization.
The EC Future Internet Architecture (FIArch) Experts Reference Group (ERG) wrote a paper about design principles and proposed seeds for new principles.[15]
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.
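As a minimal illustration of the suite's two main transport protocols, the following Python sketch exchanges one message over TCP and one over UDP on the loopback interface, using only the standard socket module; the addresses and payloads are arbitrary choices for the example.

    import socket

    # TCP: connection-oriented, reliable, ordered byte stream.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    server.listen(1)

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())  # handshake completes via the backlog
    conn, _ = server.accept()
    client.sendall(b"hello over TCP")
    print(conn.recv(1024))             # b'hello over TCP'

    # UDP: connectionless datagrams; no delivery or ordering guarantees.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello over UDP", receiver.getsockname())
    print(receiver.recvfrom(1024)[0])  # b'hello over UDP'

    for s in (client, conn, server, sender, receiver):
        s.close()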
Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, and jitter.
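As a small worked example of such measurement, the following Python sketch computes a packet-loss rate and a simple jitter figure from hypothetical per-packet arrival timestamps; the trace data and the particular jitter definition used here (mean absolute deviation of inter-arrival gaps) are illustrative assumptions, not a standardized method.

    # Hedged sketch: two common QoS metrics from a hypothetical trace of
    # (sequence number, arrival time in seconds) records.

    received = [(0, 0.000), (1, 0.021), (2, 0.039), (4, 0.085)]  # seq 3 lost
    sent_count = 5

    loss_rate = 1 - len(received) / sent_count  # fraction never delivered

    # Jitter as mean absolute deviation of inter-arrival gaps from their mean.
    gaps = [t2 - t1 for (_, t1), (_, t2) in zip(received, received[1:])]
    mean_gap = sum(gaps) / len(gaps)
    jitter = sum(abs(g - mean_gap) for g in gaps) / len(gaps)

    print(f"loss: {loss_rate:.0%}, jitter: {jitter * 1000:.1f} ms")
    # loss: 20%, jitter: 11.8 ms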
The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency of the United States Department of Defense.
Leonard Kleinrock is an American computer scientist and Internet pioneer. He is Distinguished Professor Emeritus of Computer Science at UCLA's Henry Samueli School of Engineering and Applied Science. Kleinrock made several important contributions to the field of computer science, in particular to the mathematical foundations of data communication in computer networking. He has received numerous prestigious awards, including the National Medal of Science.
Larry Roberts was an American computer scientist and Internet pioneer who, as a program manager at ARPA, led the design and development of the ARPANET.
Vehicular ad hoc networks (VANETs) are created by applying the principles of mobile ad hoc networks (MANETs) – the spontaneous creation of a wireless network of mobile devices – to the domain of vehicles. VANETs were first mentioned and introduced in 2001 under "car-to-car ad-hoc mobile communication and networking" applications, where networks can be formed and information can be relayed among cars. It was shown that vehicle-to-vehicle and vehicle-to-roadside communications architectures will co-exist in VANETs to provide road safety, navigation, and other roadside services. VANETs are a key part of the intelligent transportation systems (ITS) framework. Sometimes, VANETs are referred to as intelligent transportation networks. They are understood as having evolved into a broader "Internet of vehicles", which itself is expected to ultimately evolve into an "Internet of autonomous vehicles".
Nicholas (Nick) William McKeown FREng is a Senior Fellow at Intel, a professor in the Electrical Engineering and Computer Science departments at Stanford University, and a visiting professor at Oxford University. He has also started technology companies in Silicon Valley.
The Clean Slate Program was an interdisciplinary research program at Stanford University which considered how the Internet could be redesigned with a "clean slate", without the accumulated complexity of existing systems but using the experience gained in their decades of development. Its program director was Nick McKeown.
Future Internet Research and Experimentation (FIRE) is a program funded by the European Union to do research on the Internet, its prospects, and its future, a field known as "future Internet".
Douglas Earl Comer is a professor of computer science at Purdue University, where he teaches courses on operating systems and computer networks. He has written numerous research papers and textbooks, and currently heads several networking research projects. He has been involved in TCP/IP and internetworking since the late 1970s, and is an internationally recognized authority. He designed and implemented X25NET and Cypress networks, and the Xinu operating system. He is director of the Internetworking Research Group at Purdue, editor of Software: Practice and Experience, and a former member of the Internet Architecture Board. Comer completed the original version of Xinu in 1979. Since then, Xinu has been expanded and ported to a wide variety of platforms, including the IBM PC, Macintosh, Digital Equipment Corporation VAX and DECstation 3100, Sun Microsystems Sun-2, Sun-3 and SPARCstations, and Intel Pentium. It has been used as the basis for many research projects. Furthermore, Xinu has been used as an embedded system in products by companies such as Motorola, Mitsubishi, Hewlett-Packard, and Lexmark.
Named Data Networking (NDN) is a proposed future Internet architecture that seeks to address problems in contemporary Internet architectures such as IP. NDN has its roots in an earlier project, Content-Centric Networking (CCN), which Van Jacobson first publicly presented in 2006. The NDN project is investigating Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). The project's stated goal is that this conceptually simple shift will have far-reaching implications for how people design, develop, deploy, and use networks and applications.
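The data-centric idea can be sketched briefly: consumers request content by hierarchical name, and any node holding a matching copy may answer, so requests no longer name hosts. The following Python sketch is a hypothetical simplification of that retrieval model, not the NDN codebase; real NDN additionally involves Interest forwarding state and longest-prefix name matching.

    # Hedged sketch of name-based retrieval in the spirit of NDN/CCN.
    # All class and variable names are invented for illustration.

    class ContentStore:
        """A node's cache of named Data packets."""

        def __init__(self):
            self._store = {}  # hierarchical name -> data bytes

        def insert(self, name: str, data: bytes) -> None:
            self._store[name] = data

        def match(self, interest_name: str):
            # Exact-match lookup; real NDN also does longest-prefix matching.
            return self._store.get(interest_name)

    # A producer publishes Data under a hierarchical name...
    node = ContentStore()
    node.insert("/video/movie1/segment/3", b"...encoded bytes...")

    # ...and a consumer expresses an Interest in that name. The Interest
    # names data, not a host: any node caching the Data can satisfy it.
    print(node.match("/video/movie1/segment/3") is not None)  # True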
The AKARI Architecture Design Project was a project for designing a new generation computer network architecture, supported by the National Institute of Information and Communications Technology (NICT) of Japan. The name comes from the Japanese word akari, meaning "a small light", and expresses the motto "a small light in the dark pointing to the future". Launched in May 2006, the AKARI Project investigated technologies for a new generation network targeted for 2015, developing a network architecture and creating a network design based on that architecture. AKARI is also described as a future Internet project.
Software-defined networking (SDN) is an approach to network management that enables dynamic and programmatically efficient network configuration to improve network performance and monitoring, in a manner more akin to cloud computing than to traditional network management. SDN is meant to improve on the static architecture of traditional networks and may be used to centralize network intelligence in one network component by decoupling the forwarding of network packets (the data plane) from the routing process (the control plane). The control plane consists of one or more controllers, which act as the brain of an SDN network and incorporate its intelligence. However, this centralization has drawbacks related to security, scalability, and elasticity.
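The control/data-plane split can be made concrete with a short sketch. The following Python example models a controller installing match-action rules into a switch's flow table, after which the switch forwards packets without consulting the controller; the class and port names are invented, and no real southbound protocol such as OpenFlow is modeled.

    # Hedged sketch of SDN's control/data-plane separation. Invented names.

    class Switch:
        """Data plane: applies match-action rules; holds no routing logic."""

        def __init__(self):
            self.flow_table = {}  # destination address -> output port

        def forward(self, dst: str) -> str:
            port = self.flow_table.get(dst)
            return port if port else "punt to controller"  # table miss

    class Controller:
        """Control plane: decides routes and pushes rules to switches."""

        def install_rule(self, switch: Switch, dst: str, port: str) -> None:
            switch.flow_table[dst] = port  # programmatic reconfiguration

    sw = Switch()
    print(sw.forward("10.0.0.2"))   # table miss -> "punt to controller"

    Controller().install_rule(sw, "10.0.0.2", "port-3")
    print(sw.forward("10.0.0.2"))   # "port-3": pure data-plane forwarding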
Future Internet testbeds / experimentation between BRazil and Europe (FIBRE) is a research project co-funded by the Conselho Nacional de Desenvolvimento Científico e Tecnológico of Brazil and the European Commission under the seventh of the Framework Programmes for Research and Technological Development (FP7).
Many universities, vendors, institutes and government organizations are investing in cloud computing research.
The Recursive InterNetwork Architecture (RINA) is a new computer network architecture proposed as an alternative to the architecture of the currently mainstream Internet protocol suite. The principles behind RINA were first presented by John Day in his 2008 book Patterns in Network Architecture: A Return to Fundamentals. This work is a fresh start, taking into account lessons learned in the 35 years of TCP/IP's existence, as well as the lessons of OSI's failure and the lessons of other network technologies of the past few decades, such as CYCLADES, DECnet, and Xerox Network Systems. RINA's fundamental principles are that computer networking is just inter-process communication (IPC), and that layering should be done based on scope/scale, with a single recurring set of protocols, rather than based on function, with specialized protocols. The protocol instances in one layer interface with the protocol instances on higher and lower layers via new concepts and entities that effectively reify networking functions currently specific to protocols like BGP, OSPF and ARP. In this way, RINA claims to support features like mobility, multihoming and quality of service without the need for additional specialized protocols like RTP and UDP, as well as to allow simplified network administration without the need for concepts like autonomous systems and NAT.
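RINA's recursive structure can also be sketched briefly: the same IPC-layer abstraction repeats at increasing scope, instead of a fixed stack of functionally specialized layers. The following Python sketch is an invented simplification of that idea, not RINA's actual specification.

    # Hedged sketch of RINA-style recursion: one repeating layer type
    # (a Distributed IPC Facility, DIF) at increasing scope. Illustrative only.

    class DIF:
        """A layer providing IPC over some scope, possibly atop another DIF."""

        def __init__(self, scope, lower=None):
            self.scope = scope
            self.lower = lower  # the DIF whose IPC service this one uses

        def path(self):
            # The same abstraction recurs all the way down.
            return [self.scope] + (self.lower.path() if self.lower else [])

    link = DIF("point-to-point link")              # smallest scope
    network = DIF("provider network", lower=link)  # same layer type, wider scope
    internet = DIF("inter-provider", lower=network)

    print(internet.path())
    # ['inter-provider', 'provider network', 'point-to-point link']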
Martín Casado is a Spanish-born American software engineer, entrepreneur, and investor. He is a general partner at Andreessen Horowitz, was a pioneer of software-defined networking, and was a co-founder and the chief technology officer of Nicira Networks.
Dipankar Raychaudhuri (also known as Ray) was born on January 16, 1955. He is the Director of the Wireless Information Network Laboratory (WINLAB) and a distinguished professor in the Department of Electrical and Computer Engineering at Rutgers University.
The Protocol Wars were a long-running debate in computer science, from the 1970s to the 1990s, over which communication protocol would result in the best and most robust networks, an issue that polarized engineers, organizations and nations. The debate culminated in the Internet–OSI Standards War of the 1980s and early 1990s, ultimately "won" by the Internet protocol suite (TCP/IP), which became dominant by the mid-1990s through the rapid adoption of the Internet.