The history of the Internet has its origin in the efforts of scientists and engineers to build and interconnect computer networks. The Internet Protocol Suite, the set of rules used to communicate between networks and devices on the Internet, arose from research and development in the United States and involved international collaboration, particularly with researchers in the United Kingdom and France. [1] [2] [3]
Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users, and later, the possibility of achieving this over wide area networks. J. C. R. Licklider developed the idea of a universal network at the Information Processing Techniques Office (IPTO) of the United States Department of Defense (DoD) Advanced Research Projects Agency (ARPA). Independently, Paul Baran at the RAND Corporation proposed a distributed network based on data in message blocks in the early 1960s, and Donald Davies conceived of packet switching in 1965 at the National Physical Laboratory (NPL), proposing a national commercial data network in the United Kingdom.
ARPA awarded contracts in 1969 for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. ARPANET adopted the packet switching technology proposed by Davies and Baran. The network of Interface Message Processors (IMPs) was built by a team at Bolt, Beranek, and Newman, with the design and specification led by Bob Kahn. The host-to-host protocol was specified by a group of graduate students at UCLA, led by Steve Crocker, along with Jon Postel and others. The ARPANET expanded rapidly across the United States with connections to the United Kingdom and Norway.
Several early packet-switched networks emerged in the 1970s which researched and provided data networking. Louis Pouzin and Hubert Zimmermann pioneered a simplified end-to-end approach to internetworking at the IRIA. Peter Kirstein put internetworking into practice at University College London in 1973. Bob Metcalfe developed the theory behind Ethernet and the PARC Universal Packet. ARPA initiatives and the International Network Working Group developed and refined ideas for internetworking, in which multiple separate networks could be joined into a network of networks. Vint Cerf, now at Stanford University, and Bob Kahn, now at DARPA, published their research on internetworking in 1974. Through the Internet Experiment Note series and later RFCs this evolved into the Transmission Control Protocol (TCP) and Internet Protocol (IP), two protocols of the Internet protocol suite. The design included concepts pioneered in the French CYCLADES project directed by Louis Pouzin. The development of packet switching networks was underpinned by mathematical work in the 1970s by Leonard Kleinrock at UCLA.
In the late 1970s, national and international public data networks emerged based on the X.25 protocol, designed by Rémi Després and others. In the United States, the National Science Foundation (NSF) funded national supercomputing centers at several universities in the United States, and provided interconnectivity in 1986 with the NSFNET project, thus creating network access to these supercomputer sites for research and academic organizations in the United States. International connections to NSFNET, the emergence of architecture such as the Domain Name System, and the adoption of TCP/IP on existing networks in the United States and around the world marked the beginnings of the Internet. [4] [5] [6] Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia. [7] Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990. [8] The optical backbone of the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic, as traffic transitioned to optical networks managed by Sprint, MCI and AT&T in the United States.
Research at CERN in Switzerland by the British computer scientist Tim Berners-Lee in 1989–90 resulted in the World Wide Web, linking hypertext documents into an information system, accessible from any node on the network. [9] The dramatic expansion of the capacity of the Internet, enabled by the advent of wavelength-division multiplexing (WDM) and the rollout of fiber optic cables in the mid-1990s, had a revolutionary impact on culture, commerce, and technology. This made possible the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, video chat, and the World Wide Web with its discussion forums, blogs, social networking services, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, and, by 2019, 800 Gbit/s. [10] The Internet's takeover of the global communication landscape was rapid in historical terms: it only communicated 1% of the information flowing through two-way telecommunications networks in the year 1993, 51% by 2000, and more than 97% of the telecommunicated information by 2007. [11] The Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking services. However, the future of the global network may be shaped by regional differences. [12]
J. C. R. Licklider, while working at BBN, proposed a computer network in his March 1960 paper "Man-Computer Symbiosis": [18]
A network of such centers, connected to one another by wide-band communication lines [...] the functions of present-day libraries together with anticipated advances in information storage and retrieval and symbiotic functions suggested earlier in this paper
In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication" [19] which was one of the first descriptions of a networked future.
In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within ARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within ARPA to further computer research. He began by writing memos in 1963 describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network". [20]
Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years. [21]
The infrastructure for telephone systems at the time was based on circuit switching, which requires pre-allocation of a dedicated communication line for the duration of the call. Telegram services had developed store and forward telecommunication techniques. Western Union's Automatic Telegraph Switching System Plan 55-A was based on message switching. The U.S. military's AUTODIN network became operational in 1962. These systems, like SAGE and SABRE, still required rigid routing structures that were prone to a single point of failure. [24]
The technology was considered vulnerable for strategic and military use because there were no alternative paths for the communication in case of a broken link. In the early 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war. [25] Information would be transmitted across a "distributed" network, divided into what he called "message blocks". [26] [27] [28] [29] [30]
In addition to being prone to a single point of failure, existing telegraphic techniques were inefficient and inflexible. Beginning in 1965 Donald Davies, at the National Physical Laboratory in the United Kingdom, developed a more advanced proposal of the concept, designed for high-speed computer networking, which he called packet switching, the term that would ultimately be adopted. [31] [32] [33] [34]
Packet switching is a technique for transmitting computer data by splitting it into very short, standardized chunks, attaching routing information to each of these chunks, and transmitting them independently through a computer network. It provides better bandwidth utilization than the traditional circuit switching used for telephony, and enables the connection of computers with different transmission and reception rates. It is distinct from message switching. [35]
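Conceptually, the process can be sketched in a few lines (a minimal illustration in Python, not any historical implementation; the header fields, payload size, and addresses are invented for the example):

```python
# Illustrative sketch of packet switching: a message is split into short,
# standardized chunks, each tagged with routing and sequencing information,
# and reassembled at the destination even if packets arrive out of order.
import random

PACKET_SIZE = 8  # bytes of payload per packet; chosen arbitrarily for the sketch

def packetize(message: bytes, src: str, dst: str) -> list[dict]:
    """Split a message into packets, each carrying a small header."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": message[i:i + PACKET_SIZE]}
        for i in range(0, len(message), PACKET_SIZE)
    ]

def reassemble(packets: list[dict]) -> bytes:
    """Order packets by sequence number and concatenate their payloads."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize(b"HELLO FROM NPL, 1969", src="A", dst="B")
random.shuffle(packets)  # packets may take different routes and arrive out of order
assert reassemble(packets) == b"HELLO FROM NPL, 1969"
```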
Following discussions with J. C. R. Licklider in 1965, Donald Davies became interested in data communications for computer networks. [36] [37] Later that year, at the National Physical Laboratory (NPL) in the United Kingdom, Davies designed and proposed a national commercial data network based on packet switching. [38] The following year, he described the use of "switching nodes" to act as routers in a digital communication network. [39] [40] The proposal was not taken up nationally, but he produced a design for a local network to serve the needs of the NPL and prove the feasibility of packet switching using high-speed data transmission. [41] [42] To deal with packet permutations (due to dynamically updated route preferences) and with datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control", [43] thus inventing what came to be known as the end-to-end principle. In 1967, he and his team were the first to use the term 'protocol' in a modern data communication context. [44]
In 1968, [45] Davies began building the Mark I packet-switched network to meet the needs of his multidisciplinary laboratory and prove the technology under operational conditions. [46] [47] The network's development was described at a 1968 conference. [48] [49] Elements of the network became operational in early 1969, [46] [50] the first implementation of packet switching, [51] [52] and the NPL network was the first to use high-speed links. [53] Many other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design. [36] The Mark II version which operated from 1973 used a layered protocol architecture. [53] In 1976, 12 computers and 75 terminal devices were attached, [54] and more were added. The NPL team carried out simulation work on wide-area packet networks, including datagrams and congestion; and research into internetworking and secure communications. [46] [55] [56] The network was replaced in 1986. [53]
Robert Taylor was promoted to head of the Information Processing Techniques Office (IPTO) at the Defense Advanced Research Projects Agency (DARPA) in 1966. He intended to realize Licklider's ideas of an interconnected networking system. [57] As part of the IPTO's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley, and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). [58] The need for networking became obvious to Taylor from the waste of resources he saw at first hand.
For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them.... I said, oh man, it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet. [58]
Bringing in Larry Roberts from MIT in January 1967, he initiated a project to build such a network. Roberts and Thomas Merrill had been researching computer time-sharing over wide area networks (WANs). [59] Wide area networks emerged during the late 1950s and became established during the 1960s. At the first ACM Symposium on Operating Systems Principles in October 1967, Roberts presented a proposal for the "ARPA net", based on Wesley Clark's idea to use Interface Message Processors (IMP) to create a message switching network. [60] [61] [62] At the conference, Roger Scantlebury presented Donald Davies' work on a hierarchical digital communications network using packet switching and referenced the work of Paul Baran at RAND. Roberts incorporated the packet switching and routing concepts of Davies and Baran into the ARPANET design and upgraded the proposed communications speed from 2.4 kbit/s to 50 kbit/s. [22] [63] [64] [65]
ARPA awarded the contract to build the network to Bolt Beranek & Newman. The "IMP guys", led by Frank Heart and Bob Kahn, developed the routing, flow control, software design and network control. [36] [66] The first ARPANET link was established between the Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science directed by Leonard Kleinrock, and the NLS system at Stanford Research Institute (SRI) directed by Douglas Engelbart in Menlo Park, California at 22:30 hours on October 29, 1969. [67]
"We set up a telephone connection between us and the guys at SRI ...", Kleinrock ... said in an interview: "We typed the L and we asked on the phone,
- "Do you see the L?"
- "Yes, we see the L," came the response.
- We typed the O, and we asked, "Do you see the O?"
- "Yes, we see the O."
- Then we typed the G, and the system crashed ...
By December 1969, a four-node network was connected by adding the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara followed by the University of Utah Graphics Department. [70] In the same year, Taylor helped fund ALOHAnet, a system designed by professor Norman Abramson and others at the University of Hawaiʻi at Mānoa that transmitted data by radio between seven computers on four islands in Hawaii. [71]
Steve Crocker formed the "Network Working Group" in 1969 at UCLA. Working with Jon Postel and others, [72] he initiated and managed the Request for Comments (RFC) process, which is still used today for proposing and distributing contributions. RFC 1, entitled "Host Software", was written by Steve Crocker and published on April 7, 1969. The protocol for establishing links between network sites in the ARPANET, the Network Control Program (NCP), was completed in 1970. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
When Roberts presented the idea of packet switching to communications professionals, he faced anger and hostility. Before the ARPANET was operating, they argued that router buffers would quickly run out. After the ARPANET was operating, they argued that packet switching would never be economical without government subsidy. Baran faced the same rejection and thus failed to convince the military to construct a packet switching network. [73] [74]
Early international collaborations via the ARPANET were sparse. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR), [75] via a satellite link at the Tanum Earth Station in Sweden, and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first international heterogeneous resource sharing network. [76] Throughout the 1970s, Leonard Kleinrock developed the mathematical theory to model and measure the performance of packet-switching technology, building on his earlier work on the application of queueing theory to message switching systems. [77] By 1981, the number of hosts had grown to 213. [78] The ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used.
The Merit Network [79] was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. [80] With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. [81] In October 1972 connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet-attached hosts, and eventually TCP/IP, and additional public universities in Michigan joined the network. [81] [82] All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. [83] In 1972, he began planning the network to explore alternatives to the early ARPANET design and to support internetworking research. First demonstrated in 1973, it was the first network to implement the end-to-end principle conceived by Donald Davies and make the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams. Concepts implemented in this network influenced TCP/IP architecture. [84] [85] [83]
Based on international research initiatives, particularly the contributions of Rémi Després, packet switching network standards were developed by the International Telegraph and Telephone Consultative Committee (ITU-T) in the form of X.25 and related standards. [86] [87] X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET, the United Kingdom's high-speed national research and education network (NREN). The initial ITU Standard on X.25 was approved in March 1976. [88] Existing networks, such as Telenet in the United States, adopted X.25, as did new public data networks such as DATAPAC in Canada and TRANSPAC in France. [86] [87] X.25 was supplemented by the X.75 protocol, which enabled internetworking between national PTT networks in Europe and commercial networks in North America. [89] [90] [91]
The British Post Office, Western Union International, and Tymnet collaborated to create the first international packet-switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure. [92]
Unlike the ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was likewise targeted at enterprise use rather than at the general email system of the ARPANET.
The first public dial-in networks used asynchronous teleprinter (TTY) terminal protocols to reach a concentrator operated in the public network. Some networks, such as Telenet and CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy, which also provided communications, content, and entertainment features. [93] Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet, which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with the nearby University of North Carolina at Chapel Hill. Following the public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and BITNET. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. [94]
Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time) owned both by private individuals and small companies. Sublink Network evolved into one of the first examples of Internet technology coming into use through popular diffusion.
With so many different networking methods seeking interconnection, a method was needed to unify them. Louis Pouzin initiated the CYCLADES project in 1972, [95] building on the work of Donald Davies and the ARPANET. [96] An International Network Working Group formed in 1972; active members included Vint Cerf from Stanford University, Alex McKenzie from BBN, Donald Davies and Roger Scantlebury from NPL, and Louis Pouzin and Hubert Zimmermann from IRIA. [97] [98] [99] Pouzin coined the term catenet for concatenated network. Bob Metcalfe at Xerox PARC outlined the idea of Ethernet and PARC Universal Packet (PUP) for internetworking. Bob Kahn, now at DARPA, recruited Vint Cerf to work with him on the problem. By 1973, these groups had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetworking protocol. Instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. [2] [100]
Cerf and Kahn published their ideas in May 1974, [101] which incorporated concepts implemented by Louis Pouzin and Hubert Zimmermann in the CYCLADES network. [102] The specification of the resulting protocol, the Transmission Control Program, was published as RFC 675 by the Network Working Group in December 1974. [103] It contains the first attested use of the term internet, as a shorthand for internetwork. This software was monolithic in design, using two simplex communication channels for each user session.
With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently from their detailed characteristics, thereby solving the fundamental problems of internetworking. DARPA agreed to fund the development of prototype software. Testing began in 1975 through concurrent implementations at Stanford, BBN and University College London (UCL). [3] After several years of work, the first demonstration of a gateway between the Packet Radio network (PRNET) in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977, a three-network demonstration was conducted connecting the ARPANET, SRI's Packet Radio Van on the Packet Radio Network, and the Atlantic Packet Satellite Network (SATNET), including a node at UCL. [104] [105]
The software was redesigned as a modular protocol stack, using full-duplex channels; between 1976 and 1977, Yogen Dalal and Robert Metcalfe among others, proposed separating TCP's routing and transmission control functions into two discrete layers, [106] [107] which led to the splitting of the Transmission Control Program into the Transmission Control Protocol (TCP) and the Internet Protocol (IP) in version 3 in 1978. [107] [108] Version 4 was described in IETF publications RFC 791 (September 1981), RFC 792, and RFC 793. It was installed on SATNET in 1982 and the ARPANET in January 1983 after the DoD made it standard for all military computer networking. [109] [110] This resulted in a networking model that became known informally as TCP/IP. It was also referred to as the Department of Defense (DoD) model or DARPA model. [111] Cerf credits his graduate students Yogen Dalal, Carl Sunshine, Judy Estrin, Richard Karp, and Gérard Le Lann with important work on the design and testing. [112] DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems.
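The practical consequence of the split is the layered stack that applications still rely on: a program hands a byte stream to TCP, while IP handles the addressing and delivery of individual packets underneath. A minimal sketch using Python's standard socket API (the loopback address is an arbitrary choice for the example, and the OS picks a free port):

```python
# Minimal TCP echo over the loopback interface, using the operating system's
# TCP/IP stack: the application sees a reliable byte stream (TCP), while IP
# handles the addressing and routing of individual packets underneath.
import socket
import threading

ready = threading.Event()
server_port = 0

def echo_server() -> None:
    global server_port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
        server_port = srv.getsockname()[1]
        srv.listen(1)
        ready.set()                        # listening; safe for the client to connect
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo the bytes back

threading.Thread(target=echo_server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", server_port))
    cli.sendall(b"ping")
    print(cli.recv(1024))                  # b'ping'
```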
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. In July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. [113] This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were. Data transmission speeds depended upon the type of connection, the slowest being analog telephone lines and the fastest using optical networking technology.
Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.
In 1981, NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. CSNET played a central role in popularizing the Internet outside the ARPANET. [23]
In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks. [114] The use of NSFNET and the regional networks was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990.
NSFNET was expanded and upgraded to dedicated fiber, optical lasers and optical amplifier systems capable of delivering T3 speeds (45 Mbit/s) in 1991. However, the T3 transition by MCI took longer than expected, allowing Sprint to establish a coast-to-coast long-distance commercial Internet service. When NSFNET was decommissioned in 1995, its optical networking backbones were handed off to several commercial Internet service providers, including MCI, PSI Net and Sprint. [115] As a result, when the handoff was complete, Sprint and its Washington DC Network Access Points began to carry Internet traffic, and by 1996, Sprint was the world's largest carrier of Internet traffic. [116]
The research and academic community continues to develop and use advanced networks such as Internet2 in the United States and JANET in the United Kingdom.
The term "internet" was reflected in the first RFC published on the TCP protocol (RFC 675: [117] Internet Transmission Control Program, December 1974) as a short form of internetworking, when the two terms were used interchangeably. In general, an internet was a collection of networks linked by a common protocol. In the time period when the ARPANET was connected to the newly formed NSFNET project in the late 1980s, the term was used as the name of the network, Internet, being the large and global TCP/IP network. [118]
Opening the Internet and the fiber optic backbone to corporate and consumer use increased demand for network capacity. The expense and delay of laying new fiber led providers to test a fiber bandwidth expansion alternative that had been pioneered in the late 1970s by Optelecom using "interactions between light and matter, such as lasers and optical devices used for optical amplification and wave mixing". [119] This technology became known as wavelength-division multiplexing (WDM). Bell Labs deployed a 4-channel WDM system in 1995. [120] To develop a mass capacity (dense) WDM system, Optelecom and its former head of Light Systems Research, David R. Huber, formed a new venture, Ciena Corp., which deployed the world's first dense WDM system on the Sprint fiber network in June 1996. [120] This was referred to as the real start of optical networking. [121]
As interest in networking grew, driven by the need for collaboration, data exchange, and access to remote computing resources, Internet technologies spread throughout the rest of the world. The hardware-agnostic approach of TCP/IP supported the use of existing network infrastructure, such as the International Packet Switched Service (IPSS) X.25 network, to carry Internet traffic.
Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail. [122]
Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centric architecture which ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space, allowing the use of route aggregation to decrease the size of routing tables. [123]
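The benefit of route aggregation can be illustrated directly (a small sketch using Python's standard ipaddress module; the prefixes are from a reserved documentation range and chosen only for illustration):

```python
# CIDR route aggregation: adjacent prefixes collapse into a single, shorter
# prefix, so a router needs one table entry instead of several.
import ipaddress

routes = [
    ipaddress.ip_network("198.51.100.0/26"),
    ipaddress.ip_network("198.51.100.64/26"),
    ipaddress.ip_network("198.51.100.128/25"),
]

for aggregate in ipaddress.collapse_addresses(routes):
    print(aggregate)  # 198.51.100.0/24 -- three routes become one
```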
The MOS transistor underpinned the rapid growth of telecommunication bandwidth over the second half of the 20th century. [124] To address the need for transmission capacity beyond that provided by radio, satellite and analog copper telephone lines, engineers developed optical communications systems based on fiber optic cables powered by lasers and optical amplifier techniques.
The concept of lasing arose from a 1917 paper by Albert Einstein, "On the Quantum Theory of Radiation." Einstein expanded upon a dialog with Max Planck on how atoms absorb and emit light, part of a thought process that, with input from Erwin Schrödinger, Werner Heisenberg and others, gave rise to Quantum Mechanics. Specifically, in his quantum theory, Einstein mathematically determined that light could be generated not only by spontaneous emission, such as the light emitted by an incandescent light or the Sun, but also by stimulated emission.
Forty years later, on November 13, 1957, Columbia University physics student Gordon Gould first realized how to make light by stimulated emission through a process of optical amplification. He coined the term LASER for this technology—Light Amplification by Stimulated Emission of Radiation. [125] Using Gould's light amplification method (patented as "Optically Pumped Laser Amplifier"), [126] Theodore Maiman made the first working laser on May 16, 1960. [127]
Gould co-founded Optelecom, Inc. in 1973 to commercialize his inventions in optical fiber telecommunications, [128] just as Corning Glass was producing the first commercial fiber optic cable in small quantities. Optelecom configured its own fiber lasers and optical amplifiers into the first commercial optical communication systems, which it delivered to Chevron and the US Army Missile Defense. [129] Three years later, GTE deployed the first optical telephone system in 1977 in Long Beach, California. [130] By the early 1980s, optical networks powered by lasers, LED and optical amplifier equipment supplied by Bell Labs, NTT and Pirelli were used by select universities and long-distance telephone providers.
In 1982, NORSAR/NDRE and Peter Kirstein's research group at University College London (UCL) left the ARPANET and began to use TCP/IP over SATNET. [100] There were 40 British academic research groups using UCL's link to the ARPANET in 1975. [76] [131]
Between 1984 and 1988, CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs, and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989, when a transatlantic connection to Cornell University was established. [132] [133] [134]
The Computer Science Network (CSNET) began operation in 1981 to provide networking connections to institutions that could not connect directly to ARPANET. Its first international connection was to Israel in 1984. Soon after, connections were established to computer science departments in Canada, France, and Germany. [23]
In 1988, the first international connections to NSFNET were established by France's INRIA, [135] [136] and Piet Beertema at the Centrum Wiskunde & Informatica (CWI) in the Netherlands. [137] Daniel Karrenberg, from CWI, visited Ben Segal, CERN's TCP/IP coordinator, looking for advice about the transition of EUnet, the European side of the UUCP Usenet network (much of which ran over X.25 links), over to TCP/IP. The previous year, Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks. The NORDUnet connection to NSFNET was in place soon after, providing open access for university students in Denmark, Finland, Iceland, Norway, and Sweden. [138] In January 1989, CERN opened its first external TCP/IP connections. [139] This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.
The United Kingdom's national research and education network (NREN), JANET, began operation in 1984 using the UK's Coloured Book protocols and connected to NSFNET in 1989. In 1991, JANET adopted Internet Protocol on the existing network. [140] [141] The same year, Dai Davies introduced Internet technology into the pan-European NREN, EuropaNet, which was built on the X.25 protocol. [142] [143] The European Academic and Research Network (EARN) and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992. [132]
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. [98] [144] [145]
South Korea set up a two-node domestic TCP/IP network in 1982, the System Development Network (SDN), adding a third node the following year. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix-Copy); connected to CSNET in December 1984; [23] and formally connected to the NSFNET in 1990. [146] [147] [148]
Japan, which had built the UUCP-based network JUNET in 1984, connected to CSNET, [23] and later to NSFNET in 1989, marking the spread of the Internet to Asia.
In Australia, ad hoc networking to ARPA and between Australian universities formed in the late 1980s, based on various technologies such as X.25 and UUCPNet, and via CSNET. [23] These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia.
New Zealand adopted the UK's Coloured Book protocols as an interim standard and established its first international IP connection to the U.S. in 1989. [149]
While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they built organizations for Internet resource administration and to share operational experience, which enabled more transmission facilities to be put into place.
At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.
In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado (sold in 1997 and now known as Clear Channel Satellite), established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.
In 1996, a USAID funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Ivory Coast and Benin in 1998.
Africa is building an Internet infrastructure. AFRINIC, headquartered in Mauritius, manages IP address allocation for the continent. As with other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists. [153]
There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts. [154]
The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT). [155]
In South Korea, VDSL, a last mile technology developed in the 1990s by NextLevel Communications, connected corporate and consumer copper-based telephone lines to the Internet. [156]
The People's Republic of China established its first TCP/IP college network, Tsinghua University's TUNET, in 1991. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and the Stanford Linear Accelerator Center. However, China went on to create its own digital divide by implementing a country-wide content filter. [157]
Japan hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992. [158]
As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.
Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government body use. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which eventually led to the official barring of UUCPNet from using ARPANET and NSFNET connections.
As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. In 1989, MCI Mail became the first commercial email provider to get an experimental gateway to the Internet. [160] The first commercial dialup ISP in the United States was The World, which opened in 1989. [161]
In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks. [162] [163] This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations. [164]
By 1990, ARPANET's goals had been fulfilled, new networking technologies exceeded its original scope, and the project came to a close. New network service providers including PSINet, Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service. [165] [166] NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to provide support for the supercomputing centers and research and education in the United States. [167]
An event held on 11 January 1994, The Superhighway Summit at UCLA's Royce Hall, was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications". [168]
The invention of the World Wide Web by Tim Berners-Lee at CERN, as an application on the Internet, [169] brought many social and commercial uses to what was, at the time, a network of networks for academic and research institutions. [170] [171] The Web opened to the public in 1991 and began to enter general use in 1993–94, when websites for everyday use started to become available. [172]
During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. To put the period in context: mobile cellular devices ("smartphones" and other cellular devices), which today provide near-universal access, were business tools rather than routine household items owned by parents and children worldwide. Social media in the modern sense had yet to come into existence, laptops were bulky, and most households did not have computers. Data rates were slow, and most people lacked the means to record or digitize video; media storage was transitioning slowly from analog tape to digital optical discs (from tape to DVD, and to an extent still from floppy disc to CD). Enabling technologies of the early 2000s such as PHP, modern JavaScript and Java, AJAX, HTML 4 (and its emphasis on CSS), and the various software frameworks that simplified and sped up web development largely awaited invention and eventual widespread adoption.
The Internet was widely used for mailing lists, email, creating and distributing maps with tools like MapQuest, e-commerce and early popular online shopping (Amazon and eBay, for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards the systems used were static and lacked widespread social engagement. A number of events in the early 2000s were needed before the Internet changed from a communications technology into a key part of global society's infrastructure.
Typical design elements of these "Web 1.0" era websites included: [173]
- static pages instead of dynamic HTML; [174]
- content served from filesystems instead of relational databases;
- pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language;
- HTML 3.2-era structures such as frames and tables to create page layouts;
- online guestbooks;
- overuse of GIF buttons and similar small graphics promoting particular items; [175]
- HTML forms sent via email (support for server-side scripting was rare on shared servers, so the usual feedback mechanism was a mailto form handled by the visitor's email program). [176]
During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which "dot-com" companies (referring to the ".com" top-level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash: the first dot-com bubble. However, this only temporarily slowed enthusiasm and growth, which quickly recovered.
The history of the World Wide Web up to around 2004 was retrospectively named and described by some as "Web 1.0". [177]
In the final stage of IPv4 address exhaustion, the last IPv4 address block was assigned in January 2011 at the level of the regional Internet registries. [178] IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses. [108] IPv4 is in the process of being replaced by IPv6, its successor, which uses 128-bit addresses, providing 2^128 addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456, [179] a vastly increased address space. The shift to IPv6 is expected to take a long time to complete. [178]
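The two address-space sizes follow directly from the address widths and can be checked in a few lines (a quick Python illustration; the sample addresses are from reserved documentation ranges):

```python
import ipaddress

# Address space sizes follow directly from the address widths.
print(2**32)   # 4294967296 IPv4 addresses
print(2**128)  # 340282366920938463463374607431768211456 IPv6 addresses

# The same standard library parses both address families.
print(ipaddress.ip_address("203.0.113.7").version)  # 4
print(ipaddress.ip_address("2001:db8::7").version)  # 6
```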
The rapid technical advances that would propel the Internet into its place as a social system, which has completely transformed the way humans interact with each other, took place during a relatively short period from around 2005 to 2010, coinciding with the point in the late 2000s at which the number of Internet-connected devices surpassed the number of humans alive.
The term "Web 2.0" describes websites that emphasize user-generated content (including user-to-user interaction), usability, and interoperability. It first appeared in a January 1999 article called "Fragmented Future" written by Darcy DiNucci, a consultant on electronic information design, where she wrote: [180] [181] [182] [183]
The term resurfaced during 2002–2004, [184] [185] [186] [187] and gained prominence in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". [188] They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value.
Web 2.0 does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. Web 2.0 describes an approach in which sites focus substantially upon allowing users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to Web sites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking services, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups. [189] Terry Flew, in the third edition of New Media, described what he believed characterized the differences between Web 1.0 and Web 2.0.
This era saw several household names gain prominence through their community-oriented operation – YouTube, Twitter, Facebook, Reddit and Wikipedia being some examples.
Telephone systems have been slowly adopting Voice over IP since 2003. Early experiments showed that voice could be converted to digital packets and sent over the Internet, with the packets collected and converted back to analog voice at the receiving end. [191] [192] [193]
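In miniature, the round trip looks something like the following (an illustrative Python sketch; the sample rate, quantization, and packet size are arbitrary, and real VoIP systems use far more sophisticated codecs and transports such as RTP):

```python
# VoIP in miniature: analog voice is sampled, the samples are carried as
# short packets, and the receiver reconstructs the audio stream.
import math

SAMPLE_RATE = 8000  # samples per second (telephone quality)
samples = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
           for t in range(SAMPLE_RATE // 100)]  # 10 ms of a 440 Hz tone

# Sender: quantize to 16-bit integers and split into 20-sample packets.
encoded = [int(s * 32767) for s in samples]
packets = [encoded[i:i + 20] for i in range(0, len(encoded), 20)]

# Receiver: concatenate payloads and convert back toward analog levels.
decoded = [v / 32767 for packet in packets for v in packet]
assert len(decoded) == len(samples)
```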
The process of change that generally coincided with "Web 2.0" was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared, or used to shop or seek information "on the move" – and used socially, as opposed to as items on a desk at home or just for work.
Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location based) became common, with posts tagged by location, or websites and services becoming location aware. Mobile-targeted websites (such as "m.website.com") became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from only a few years earlier while using far less power became enablers of this stage of Internet development, and the term "app" emerged (short for "application program"), as did the "app store".
This "mobile revolution" has allowed for people to have a nearly unlimited amount of information at all times. With the ability to access the internet from cell phones came a change in the way media was consumed. Media consumption statistics show that over half of media consumption between those aged 18 and 34 were using a smartphone. [194]
The first Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. [195] (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment. [196]
Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s NASA and Google began working on a new network protocol, Delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as standard TCP/IP does. NASA conducted the first field test of what it calls the "deep space internet" in November 2008. [197] Testing of DTN-based communications between the International Space Station and Earth (now termed Disruption-Tolerant Networking) has been ongoing since March 2009, and was scheduled to continue until March 2014. [198]
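The retransmit-instead-of-drop idea can be sketched abstractly (an illustrative Python sketch, not the actual Bundle Protocol; the link model, class, and message names are invented for the example):

```python
# Delay-tolerant store-and-forward, sketched: a node keeps custody of each
# bundle and retries when the link is down, instead of dropping it the way
# an ordinary packet forwarder would.
import collections
import random

class DTNNode:
    def __init__(self) -> None:
        self.queue: collections.deque[str] = collections.deque()

    def send(self, bundle: str) -> None:
        self.queue.append(bundle)  # take custody; never drop

    def try_forward(self, link_up: bool) -> list[str]:
        """Forward queued bundles only while the link is up."""
        delivered = []
        while self.queue and link_up:
            delivered.append(self.queue.popleft())
        return delivered

node = DTNNode()
node.send("telemetry-frame-1")
node.send("telemetry-frame-2")

for step in range(5):
    up = random.random() < 0.5  # e.g. the spacecraft passes behind a planet
    for bundle in node.try_forward(up):
        print(f"step {step}: delivered {bundle}")
```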
This network technology is supposed to ultimately enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks. According to a February 2011 statement by Google's Vint Cerf, the so-called "Bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the Sun) and communication with Earth has been tested at a distance of approximately 80 light seconds. [199]
As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. Each constituent network chooses the technologies and protocols it deploys from the technical standards that are developed by the Internet Engineering Task Force (IETF). [200] However, successful interoperation of many networks requires certain parameters that must be common throughout the network. For managing such parameters, the Internet Assigned Numbers Authority (IANA) oversees the allocation and assignment of various technical identifiers. [201] In addition, the Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System.
The IANA function was originally performed by USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. ISI's Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his death in 1998. [202]
As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI's Paul Mockapetris in 1983. [203] The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. [201] In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc. [204] [205]
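The contrast with the HOSTS.TXT era is visible in any modern resolver call, which queries the distributed DNS hierarchy rather than a locally distributed file (a small Python illustration; example.com is a reserved example domain, and the printed address will vary):

```python
# Name resolution today queries the distributed DNS hierarchy instead of a
# centrally maintained HOSTS.TXT file.
import socket

for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
    "example.com", 80, proto=socket.IPPROTO_TCP
):
    print(family.name, sockaddr[0])  # e.g. AF_INET 93.184.216.34 (will vary)
```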
The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366, [206] which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region. The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group. [207]
Since most of the growth of the Internet at this point was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocation of addresses and the address databases, and awarded the contract to three organizations: Registration Services would be provided by Network Solutions; Directory and Database Services by AT&T; and Information Services by General Atomics. [208]
Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers. [207] Following the examples of RIPE NCC and APNIC, it was recommended that management of the IP address space then administered by the InterNIC be placed under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation, and became the third Regional Internet Registry. [209]
In 1998, both the IANA and the remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority. [210] The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure. [211] ICANN provides central coordination for the DNS system, including policy coordination for the split registry/registrar system, with competition among registry service providers to serve each top-level domain and multiple competing registrars offering DNS services to end-users.
The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF).
The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into Working Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet. [212] [213]
The IETF grew out of quarterly meetings of U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited beginning with the fourth IETF meeting, in October 1986. The concept of Working Groups was introduced at the fifth meeting, in February 1987. The seventh meeting, in July 1987, was the first with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed, and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as approximately 2,000 participants. Typically one in three IETF meetings is held in Europe or Asia. The proportion of non-US attendees is typically around 50%, even at meetings held in the United States. [212]
The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG) [214] and the Internet Architecture Board (IAB). [215] The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues. [212] [216]
RFCs are the main documentation for the work of the IAB, IESG, IETF, and IRTF. [217] The series was originally intended as requests for comments; RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969. These technical memos documented aspects of ARPANET development. They were edited by Jon Postel, the first RFC Editor. [212] [218]
RFCs cover a wide range of information, including proposed standards, draft standards, full standards, best practices, experimental protocols, history, and other informational topics. [219] RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original. [212] [218]
The Internet Society (ISOC) is an international, nonprofit organization founded in 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world. [220]
ISOC provides financial and organizational support to, and promotes the work of, the standards-setting bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision-making. [221]
Since the 1990s, the Internet's governance and organization have been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN to end its relationship with the University of Southern California in 2000, [222] and in September 2009 to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued. [223] [224] [225] Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community. [226]
The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issues Requests for Comments.
In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-up meetings annually thereafter. [227] Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues. [228] [229]
Tim Berners-Lee, inventor of the web, was becoming concerned about threats to the web's future, and in November 2009 at the IGF in Washington DC he launched the World Wide Web Foundation (WWWF) to campaign to make the web a safe and empowering tool for the good of humanity, with access for all. [230] [231] In November 2019 at the IGF in Berlin, Berners-Lee and the WWWF went on to launch the Contract for the Web, a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse", with the warning "If we don't act now - and act together - to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering" (its potential for good). [232]
Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet.
Examples include political activities such as public protest and the canvassing of support and votes.
On April 23, 2014, the Federal Communications Commission (FCC) was reported to be considering a new rule that would permit Internet service providers to offer content providers a faster track to send content, thus reversing its earlier net neutrality position. [233] [234] [235] A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. [236] On May 15, 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunication service, thereby preserving net neutrality. [237] [238] On November 10, 2014, President Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. [239] [240] [241] On January 16, 2015, Republicans presented legislation, in the form of a U.S. Congress H.R. discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers (ISPs). [242] [243] On January 31, 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on February 26, 2015. [244] [245] [246] [247] [248] Adoption of this notion would reclassify Internet service from one of information to one of telecommunications [249] and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. [250] [251] According to The New York Times, the FCC was expected to enforce net neutrality in its vote. [252] [253]
On February 26, 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. [254] [255] [256] The FCC chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." [257]
On March 12, 2015, the FCC released the specific details of the net neutrality rules. [258] [259] [260] On April 13, 2015, the FCC published the final rule on its new "Net Neutrality" regulations. [261] [262]
On December 14, 2017, the FCC repealed its March 12, 2015 net neutrality rules by a 3–2 vote. [263]
Email has often been called the killer application of the Internet. It predates the Internet, and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is undocumented, among the first systems to have such a facility were the System Development Corporation (SDC) Q32 and the Compatible Time-Sharing System (CTSS) at MIT. [264]
The ARPANET computer network made a large contribution to the evolution of electronic mail. Experimental inter-system mail transfer began on the ARPANET shortly after its creation. [265] In 1971 Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names. [266]
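Tomlinson's convention survives to this day: everything after the final @ names the host, everything before it names the mailbox. A toy parser under that simplifying assumption (the real address grammar, RFC 5322, is far richer, and the sample address is invented):

```python
# A toy illustration of the mailbox@host addressing convention.
def split_address(address: str) -> tuple[str, str]:
    mailbox, _, host = address.rpartition("@")   # split on the final "@"
    if not mailbox or not host:
        raise ValueError(f"not a mailbox@host address: {address!r}")
    return mailbox, host

print(split_address("tomlinson@bbn-tenexa"))     # ('tomlinson', 'bbn-tenexa')
```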
A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET email system. Email could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.
In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).
During the early years of the Internet, email and similar mechanisms were also fundamental in allowing people to access resources that were not otherwise available due to the absence of online connectivity. UUCP was often used to distribute files using the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using ftp commands written inside email messages. The file was encoded, broken into pieces and sent by email; the receiver had to reassemble and decode it later, and it was the only way for people living overseas to download items such as early Linux versions over the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
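A sketch of that encode, split, and reassemble workflow might look as follows. The chunk size and part-numbering format are invented for illustration and do not correspond to any particular gateway's conventions:

```python
# A sketch of the FTP-by-email workflow described above: the file is
# encoded as printable text, split into mail-sized pieces, and the
# receiver reassembles and decodes them.
import base64

CHUNK = 4_000   # characters of text per message; kept small for the demo

def to_messages(data: bytes) -> list[str]:
    text = base64.b64encode(data).decode("ascii")
    pieces = [text[i:i + CHUNK] for i in range(0, len(text), CHUNK)]
    # Number each piece so the receiver can restore the order.
    return [f"part {n + 1}/{len(pieces)}\n{p}" for n, p in enumerate(pieces)]

def from_messages(messages: list[str]) -> bytes:
    # Messages may arrive out of order; sort by the part number.
    ordered = sorted(messages, key=lambda m: int(m.split()[1].split("/")[0]))
    return base64.b64decode("".join(m.split("\n", 1)[1] for m in ordered))

payload = b"\x7fELF..." * 1000                 # stand-in for a downloaded file
assert from_messages(to_messages(payload)) == payload
```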
Resource or file sharing has been an important activity on computer networks from well before the Internet was established and was supported in a variety of ways including bulletin board systems (1978), Usenet (1980), Kermit (1981), and many others. The File Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and is still in use today. [267] A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including the Wide Area Information Server (WAIS) in 1991, Gopher in 1991, Archie in 1990, Veronica in 1992, Jughead in 1993, Internet Relay Chat (IRC) in 1988, and eventually the World Wide Web (WWW) in 1991 with Web directories and Web search engines.
In 1999, Napster became the first peer-to-peer file sharing system. [268] Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including: Gnutella, eDonkey2000, and Freenet in 2000, FastTrack, Kazaa, Limewire, and BitTorrent in 2001, and Poisoned in 2003. [269]
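A minimal sketch of that hybrid architecture, with an invented API: the central server only indexes which peers hold which files, while the transfer itself would be a direct peer-to-peer connection.

```python
# A minimal sketch of a Napster-style hybrid architecture: a central
# index maps file names to the peers that hold them; the files
# themselves never pass through the server.
from collections import defaultdict

class CentralIndex:
    def __init__(self):
        self.files = defaultdict(set)    # file name -> set of peer addresses

    def register(self, peer: str, names: list[str]) -> None:
        for name in names:
            self.files[name].add(peer)

    def search(self, name: str) -> set[str]:
        # The server only answers "who has it"; it never stores the file.
        return self.files.get(name, set())

index = CentralIndex()
index.register("10.0.0.5:6699", ["song.mp3"])
peers = index.search("song.mp3")         # -> {"10.0.0.5:6699"}
# The downloader would now open a direct connection to one of `peers`.
print(peers)
```

Later systems such as Gnutella removed even the central index, trading lookup efficiency for greater decentralization.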
All of these tools are general purpose and can be used to share a wide variety of content, but the sharing of music files, software, and later movies and videos has been a major use. [270] While some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or refocus their efforts. [271] [272] The Pirate Bay, founded in Sweden in 2003, continues despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders. [273] File sharing remains contentious and controversial, with charges of theft of intellectual property on the one hand and charges of censorship on the other. [274] [275]
File hosting allowed people to go beyond the limits of their own computers' hard drives and "host" their files on a server. Most file hosting services offer free storage, as well as larger amounts of storage for a fee. These services have greatly expanded the uses of the Internet for business and personal purposes.
Google Drive, launched on April 24, 2012, has become the most popular file hosting service. Google Drive allows users to store, edit, and share files with themselves and other users. Not only does this application allow for file editing, hosting, and sharing, it also provides access to Google's own free office programs, such as Google Docs, Google Slides, and Google Sheets. The service has been a useful tool for university professors and students, as well as others in need of cloud storage. [276] [277]
Dropbox, released in June 2007, is a similar file hosting service that allows users to keep all of their files in a folder on their computer, which is synced with Dropbox's servers. This differs from Google Drive in that it is not web-browser based. Dropbox now focuses on keeping workers and their files in sync.
Mega, with over 200 million users, is an encrypted storage and communication system that offers users free and paid storage, with an emphasis on privacy. [279] As three of the largest file hosting services, Google Drive, Dropbox, and Mega represent the core ideas and values of these services.
One of the earliest widespread forms of online piracy began with the P2P (peer-to-peer) music sharing service Napster, launched in 1999. Services such as LimeWire, The Pirate Bay, and BitTorrent clients allowed anyone to engage in online piracy, sending ripples through the media industry. With online piracy came a change in the media industry as a whole. [280]
Total global mobile data traffic reached 588 exabytes during 2020, [281] a 150-fold increase from 3.86 exabytes/year in 2010. [282] Most recently, smartphones accounted for 95% of this mobile data traffic with video accounting for 66% by type of data. [281] Mobile traffic travels by radio frequency to the closest cell phone tower and its base station where the radio signal is converted into an optical signal that is transmitted over high-capacity optical networking systems that convey the information to data centers. The optical backbones enable much of this traffic as well as a host of emerging mobile services including the Internet of things, 3-D virtual reality, gaming and autonomous vehicles. The most popular mobile phone application is texting, of which 2.1 trillion messages were logged in 2020. [283] The texting phenomenon began on December 3, 1992, when Neil Papworth sent the first text message of "Merry Christmas" over a commercial cell phone network to the CEO of Vodafone. [284]
The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of accessing Internet services on mobile phones was limited until prices for that model came down and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, Research in Motion (now BlackBerry Limited) launched its mobile phone email system for the BlackBerry in America. To make efficient use of the small screen, tiny keypad, and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices: the Wireless Application Protocol (WAP). Most early mobile device Internet services operated using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. [285] Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. The European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries. [286] The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user. [287]
Global Internet traffic continues to grow at a rapid rate, rising 23% from 2020 to 2021, [288] when the number of active Internet users reached 4.66 billion people, representing more than half of the global population. Demand for data, and the capacity to satisfy this demand, were forecast to increase to 717 terabits per second in 2021. [289] This capacity stems from the optical amplification and WDM systems that are the common basis of virtually every metro, regional, national, international and submarine telecommunications network. [290] These optical networking systems have been installed throughout the 5 billion kilometers of fiber optic lines deployed around the world. [291] Continued growth in traffic is expected for the foreseeable future, from a combination of new users, increased mobile phone adoption, machine-to-machine connections, connected homes, 5G devices, and the burgeoning requirement for cloud and Internet services such as Amazon, Facebook, Apple Music and YouTube.
There are nearly insurmountable problems in supplying a historiography of the Internet's development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research. [292] A sense of the difficulty in documenting early developments that led to the Internet can be gathered from the following quote:
"The Arpanet period is somewhat well documented because the corporation in charge – BBN – left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralized process. The record exists in people's basements, in closets. ... So much of what happened was done verbally and on the basis of individual trust."
Notable works on the subject were published by Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins Of The Internet (1996), Roy Rosenzweig, Wizards, Bureaucrats, Warriors, and Hackers: Writing the History of the Internet (1998), and Janet Abbate, Inventing the Internet (2000). [294]
Most scholarship and literature on the Internet lists ARPANET as the prior network that was iterated on and studied to create it, [295] although other early computer networks and experiments existed alongside or before ARPANET. [296]
These histories of the Internet have since been characterized as teleologies or Whig history; that is, they take the present to be the end point toward which history has been unfolding based on a single cause:
In the case of Internet history, the epoch-making event is usually said to be the demonstration of the 4-node ARPANET network in 1969. From that single happening the global Internet developed.
In addition to these characteristics, historians have cited methodological problems arising in their work:
"Internet history" ... tends to be too close to its sources. Many Internet pioneers are alive, active, and eager to shape the histories that describe their accomplishments. Many museums and historians are equally eager to interview the pioneers and to publicize their stories.
— Andrew L. Russell (2012) [298]
The Internet is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing.
Internetworking is the practice of interconnecting multiple computer networks, such that any pair of hosts in the connected networks can exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called an internetwork, or simply an internet.
The Internet Protocol (IP) is the network layer communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet.
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.
A datagram is a basic transfer unit associated with a packet-switched network. Datagrams are typically structured in header and payload sections. Datagrams provide a connectionless communication service across a packet-switched network. The delivery, arrival time, and order of arrival of datagrams need not be guaranteed by the network.
In telecommunications, packet switching is a method of grouping data into short messages in fixed format, i.e. packets, that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.
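To make the header/payload split concrete, the toy packet format below packs an invented fixed header (a destination address and a length field) in front of the payload; the field layout is an assumption for illustration and corresponds to no real protocol.

```python
# A toy packet format illustrating the header/payload structure of a
# datagram: a fixed header carrying destination and payload length,
# followed by the payload bytes.
import struct

HEADER = struct.Struct("!4sH")   # 4-byte destination address, 16-bit length

def encode(dest: bytes, payload: bytes) -> bytes:
    return HEADER.pack(dest, len(payload)) + payload

def decode(packet: bytes) -> tuple[bytes, bytes]:
    dest, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return dest, payload

pkt = encode(b"\x0a\x00\x00\x05", b"hello")   # destination 10.0.0.5
print(decode(pkt))                            # (b'\n\x00\x00\x05', b'hello')
```

Networking hardware along the path reads only the header to route the packet; the payload is opaque until it reaches the destination.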
Voice over Internet Protocol (VoIP), also called IP telephony, is a method and group of technologies for voice calls for the delivery of voice communication sessions over Internet Protocol (IP) networks, such as the Internet.
An Internet service provider (ISP) is an organization that provides myriad services related to accessing, using, managing, or participating in the Internet. ISPs can be organized in various forms, such as commercial, community-owned, non-profit, or otherwise privately owned.
UUCP is a suite of computer programs and protocols allowing remote execution of commands and transfer of files, email and netnews between computers.
The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes, such as gateways and routers, that exist to establish the network, may implement these to improve efficiency but cannot guarantee end-to-end correctness.
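A small sketch of the principle applied to data integrity: whatever a misbehaving relay does in the middle, only a check performed by the receiving endpoint can confirm that the data arrived intact. The "network" function here is a stand-in, and all names are illustrative.

```python
# A sketch of the end-to-end argument applied to integrity: only the
# endpoints can verify correctness, regardless of what relays do.
import hashlib

def send(data: bytes) -> tuple[bytes, str]:
    return data, hashlib.sha256(data).hexdigest()   # digest travels with the data

def flaky_network(data: bytes) -> bytes:
    return data.replace(b"internet", b"intranet")   # a misbehaving relay

def receive(data: bytes, digest: str) -> bytes:
    if hashlib.sha256(data).hexdigest() != digest:
        raise IOError("end-to-end check failed; request retransmission")
    return data

payload, digest = send(b"the internet is a network of networks")
try:
    receive(flaky_network(payload), digest)
except IOError as e:
    print(e)   # only the endpoints detect the corruption
```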
The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency of the United States Department of Defense.
Bob Kahn is an American electrical engineer who, along with Vint Cerf, first proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet.
This article lists communication protocols that are designed for file transfer over a telecommunications network.
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.
A network host is a computer or other device connected to a computer network. A host may work as a server offering information resources, services, and applications to users or other hosts on the network. Hosts are assigned at least one network address.
In computer networking, a port or port number is a number assigned to uniquely identify a connection endpoint and to direct data to a specific service. At the software level, within an operating system, a port is a logical construct that identifies a specific process or a type of network service. A port at the software level is identified for each transport protocol and address combination by the port number assigned to it. The most common transport protocols that use port numbers are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP); those port numbers are 16-bit unsigned numbers.
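As a brief demonstration, the sketch below creates two UDP sockets on one machine that are distinguished only by their port numbers; the loopback address and port value are arbitrary choices for the example.

```python
# Ports as connection endpoints: two UDP sockets on the same host,
# told apart only by their 16-bit port numbers.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))        # service identified by port 50007

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", 50007))

data, addr = receiver.recvfrom(1024)       # addr includes the sender's port
print(data, addr)
sender.close()
receiver.close()
```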
The following outline is provided as an overview of and topical guide to the Internet.
A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.
The Protocol Wars were a long-running debate in computer science that occurred from the 1970s to the 1990s, when engineers, organizations and nations became polarized over the issue of which communication protocol would result in the best and most robust networks. This culminated in the Internet–OSI Standards War in the 1980s and early 1990s, which was ultimately "won" by the Internet protocol suite (TCP/IP) by the mid-1990s when it became the dominant protocol suite through rapid adoption of the Internet.
But the ARPANET itself had now become an island, with no links to the other networks that had sprung up. By the early 1970s, researchers in France, the UK, and the U.S. began developing ways of connecting networks to each other, a process known as internetworking.
We began doing concurrent implementations at Stanford, BBN, and University College London. So effort at developing the Internet protocols was international from the beginning.
many of the milestones that led to the development of the modern Internet are already familiar to many of us: the genesis of the ARPANET, the implementation of the standard network protocol TCP/IP, the growth of LANs (local area networks), the invention of DNS (the Domain Name System), and the adoption of American legislation that funded U.S. Internet expansion—which helped fuel global network access—to name just a few.
As the network continued to grow, the model of central co-ordination by a contractor funded by the US government became unsustainable. Organisations were using IP-based networking even if they were not directly connected to the ARPAnet. They needed to get globally unique IP addresses. The nature of the ARPAnet was also changing as it was no longer limited to organisations working on ARPA-funded contracts. The US National Science Foundation set up a national IP-based backbone network, NSFnet, so that its grant-holders could be interconnected to supercomputer centres, universities and various national/regional academic/research networks, including ARPAnet. That resulting network of networks was the beginning of today's Internet.
in 1960 'time-sharing' as a phrase was much in the air. It was, however, generally used in my sense rather than in John McCarthy's sense of a CTSS-like object.
As Kahn recalls: ... Paul Baran's contributions ... I also think Paul was motivated almost entirely by voice considerations. If you look at what he wrote, he was talking about switches that were low-cost electronics. The idea of putting powerful computers in these locations hadn't quite occurred to him as being cost effective. So the idea of computer switches was missing. The whole notion of protocols didn't exist at that time. And the idea of computer-to-computer communications was really a secondary concern.
There had been a paper written by [Paul Baran] from the Rand Corporation which, in a sense, foreshadowed packet switching in a way for speech networks and voice networks
Baran had put more emphasis on digital voice communications than on computer communications.
[Scantlebury said] Clearly Donald and Paul Baran had independently come to a similar idea albeit for different purposes. Paul for a survivable voice/telex network, ours for a high-speed computer network.
In his first draft dated Nov. 10, 1965 [5], Davies forecast today's "killer app" for his new communication service: "The greatest traffic could only come if the public used this means for everyday purposes such as shopping... People sending enquiries and placing orders for goods of all kinds will make up a large section of the traffic... Business use of the telephone may be reduced by the growth of the kind of service we contemplate."
Computer developments in the distant future might result in one type of network being able to carry speech and digital messages efficiently.
Then in June 1966, Davies wrote a second internal paper, "Proposal for a Digital Communication Network", in which he coined the word packet, a small sub-part of the message the user wants to send, and also introduced the concept of an "interface computer" to sit between the user equipment and the packet network.
Its development was described at a 1968 conference, two years before similar progress on ARPANET, the precursor to the Internet, was demonstrated
The system first went 'live' early in 1969
The first packet-switching network was implemented at the National Physical Laboratories in the United Kingdom. It was quickly followed by the ARPANET in 1969.
Leonard Kleinrock: Donald Davies ... did make a single node packet switch before ARPA did
Roberts' proposal that all host computers would connect to one another directly ... was not endorsed ... Wesley Clark ... suggested to Roberts that the network be managed by identical small computers, each attached to a host computer. Accepting the idea, Roberts named the small computers dedicated to network administration 'Interface Message Processors' (IMPs), which later evolved into today's routers.
W. Clark's message switching proposal (appended to Taylor's letter of April 24, 1967 to Engelbart) were reviewed.
Thus the set of IMP's, plus the telephone lines and data sets would constitute a message switching network
He was very conscious of people's mistaken belief that the work he did at RAND somehow led to the creation of the ARPAnet. It didn't, and he was very honest about that.
In the early 1970s Mr Pouzin created an innovative data network that linked locations in France, Italy and Britain. Its simplicity and efficiency pointed the way to a network that could connect not just dozens of machines, but millions of them. It captured the imagination of Dr Cerf and Dr Kahn, who included aspects of its design in the protocols that now power the internet.
Two main approaches to internetworking have come into existence based upon the virtual circuit and the datagram services. The vast majority of the work on interconnecting networks falls into one of these two approaches: The CCITT X.75 Recommendation; The DoD Internet Protocol (IP).
The authors wish to thank a number of colleagues for helpful comments during early discussions of international network protocols, especially R. Metcalfe, R. Scantlebury, D. Walden, and H. Zimmerman; D. Davies and L. Pouzin who constructively commented on the fragmentation and accounting issues; and S. Crocker who commented on the creation and destruction of associations.
Despite the misgivings of Xerox Corporation (which intended to make PUP the basis of a proprietary commercial networking product), researchers at Xerox PARC, including ARPANET pioneers Robert Metcalfe and Yogen Dalal, shared the basic contours of their research with colleagues at TCP and Internet working group meetings in 1976 and 1977, suggesting the possible benefits of separating TCP's routing and transmission control functions into two discrete layers.
I first heard the phrase 'Web 2.0' in the name of the Web 2.0 conference in 2004.
ICANN, which oversees the Internet's domain name system, is a private nonprofit that reports to the US Department of Commerce. Under a new agreement, that relationship will change, and ICANN's accountability goes global
The Internet was born of a big idea: Messages could be chopped into chunks, sent through a network in a series of transmissions, then reassembled by destination computers quickly and efficiently... The most important institutional force ... was the Pentagon's Advanced Research Projects Agency (ARPA) ... as ARPA began work on a groundbreaking computer network, the agency recruited scientists affiliated with the nation's top universities.