Internet area network

An Internet area network (IAN) is a concept for a communications network [1] that connects voice and data endpoints within a cloud environment over IP, replacing an existing local area network (LAN), wide area network (WAN) or the public switched telephone network (PSTN).

Overview

An IAN securely connects endpoints through the public Internet to communicate and exchange information and data without being tied to a physical location.

Because applications and communications services are virtualized, an IAN has no geographic footprint: endpoints need only a broadband connection to the Internet. By contrast, a LAN interconnects computers within a limited area, such as a home, school, computer laboratory, or office building, and a WAN covers a broad area, such as any telecommunications network that links across metropolitan, regional, or national boundaries over private or public network transports.

Hosted in the cloud by a managed services provider, an IAN platform offers users secure access to information from anywhere, at any time, over an Internet connection. Users can access telephony, voicemail, email, and fax services from any connected endpoint. For businesses, the hosted model reduces IT and communications expenses, protects against data loss and disaster-related downtime, and yields a greater return on invested resources through increased employee productivity and lower telecom costs.
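
As a concrete illustration of this model, the sketch below shows how an endpoint might pull its voicemail index from a cloud-hosted service over the public Internet using only Python's standard library. The service URL, API path, and access token are hypothetical placeholders for illustration, not the interface of any actual IAN product.

    import json
    import urllib.request

    # Hypothetical cloud-hosted IAN service; any endpoint with an Internet
    # connection can reach it, regardless of physical location.
    SERVICE_URL = "https://ian.example.com/api/v1"
    ACCESS_TOKEN = "placeholder-access-token"

    def fetch_voicemail_index():
        """Retrieve the user's voicemail metadata over HTTPS."""
        request = urllib.request.Request(
            SERVICE_URL + "/voicemail",
            headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode("utf-8"))

    if __name__ == "__main__":
        for message in fetch_voicemail_index():
            print(message.get("from"), message.get("received_at"))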

History

The IAN is rooted in the rise of cloud computing. The underlying concept dates back to the 1950s, when large-scale mainframes became available in academia and corporations, accessible via thin clients and terminal computers. [2] Because it was costly to buy a mainframe, it became essential to find ways to get the greatest return on the investment, allowing multiple users to share both physical access to the computer from multiple terminals and the CPU time itself, eliminating periods of inactivity; this became known in the industry as time-sharing. [3]

The increasing demand and use of computers in universities and research labs in the late 1960s generated the need for high-speed interconnections between computer systems. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus" network gave a good indication of the situation. [4]

As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing, experimenting with algorithms to provide the optimal use of the infrastructure, platform, and applications with prioritized access to the CPU and efficiency for the end users.

John McCarthy opined in the 1960s that "computation may someday be organized as a public utility". [5] Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry, and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. [6] Other scholars have shown that cloud computing's roots go back to the 1950s, [7] when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers. [8] Because these powerful computers were expensive, many corporations and other entities could avail themselves of computing capability only through time-sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek, and Newman (BBN), marketed time-sharing as a commercial venture. [3]

The evolution of the Internet from document-centric content, through semantic data, toward an increasing emphasis on services was described as the "Dynamic Web". [9] This contribution focused on the need for better metadata to describe both the implementation details and the conceptual details of model-based applications.

In the 1990s, telecommunications companies that previously offered primarily dedicated point-to-point data circuits began offering virtual private network (VPN) services with comparable quality of service but at a much lower cost. By switching traffic to balance utilization as they saw fit, they were able to optimize their overall network usage. [10] The cloud symbol was used to denote the demarcation point between the provider's responsibility and the users' responsibility. Cloud computing extends this boundary to cover servers and the network infrastructure.

After the dot-com bubble, Amazon played a crucial role in the development of cloud computing by modernizing their data centers, which, like most computer networks, were using as little as 10% of their capacity at any time to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" (teams small enough to be fed with two pizzas [11] ) could add new features faster and more efficiently, Amazon initiated a new product development effort to provide cloud computing to external customers and launched Amazon Web Services (AWS) on a utility computing basis in 2006. [12]

In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds and for the federation of clouds. [13] In the same year, efforts were focused on providing Quality of Service guarantees (as required by real-time interactive applications) to cloud-based infrastructures in the IRMOS European Commission-funded project's framework, resulting in a real-time cloud environment. [14] By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing... will result in dramatic growth in IT products in some areas and significant reductions in other areas." [15]

In 2011, RESERVOIR was established in Europe to create open-source technologies that allow cloud providers to build an advanced cloud by balancing workloads, lowering costs, and moving workloads across geographic locations through a federation of clouds. [16] Also in 2011, IBM announced the Smarter Computing framework in support of a Smarter Planet. [17] Cloud computing is a critical component of the Smarter Computing foundation.

The ubiquitous availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing have led to tremendous growth in cloud computing. Virtual worlds [18] and peer-to-peer architectures have paved the way for the concept of an IAN.

iAreaNet was founded in 1999 by CEO James DeCrescenzo as a company called Internet Area Network, devoted to providing offsite data storage and disaster prevention before the cloud existed in widely deployed commercial form. It pioneered the idea of an IAN.[citation needed] Since then, it has strengthened operations and made significant investments in developing a robust infrastructure that provides businesses with an array of technology solutions, including the patent-pending iAreaOffice, which commercializes the concept of an IAN by eliminating the need for a traditional LAN, WAN, or telephone system for business communications.[citation needed]

Notes

  1. Winkleman, Roy. "Chapter 1 – What is a Network". An Educator's Guide to School Networks. Florida Center for Instructional Technology, College of Education.
  2. Martínez-Mateo, J.; Munoz-Hernandez, S.; Pérez-Rey, D. (May 2010). "A Discussion of Thin Client Technology for Computer Labs". Proceedings of the Multi-Conference on Innovative Developments in ICT. University of Madrid. pp. 119–124. arXiv:1005.2534. doi:10.5220/0002962901190124. ISBN 978-989-8425-15-7. S2CID 5682218.
  3. McCarthy, John. "Reminiscences on the History of Time Sharing". Stanford University. Archived from the original on 2007-10-20. Retrieved 2017-09-25.
  4. Mendicino, Samuel (1972). "Octopus: The Lawrence Radiation Laboratory Network". In Randall Rustin (ed.). Computer Networks. Prentice-Hall. pp. 95–100. ISBN 9780131661080. Archived from the original on 2013-10-20.
  5. Garfinkel, Simson (October 3, 2011). "The Cloud Imperative". MIT Technology Review. Archived from the original on 2013-04-11. Retrieved 2013-05-31.
  6. Parkhill, Douglas F. (1966). The Challenge of the Computer Utility. Addison-Wesley Publishing Company. ISBN 9780201057201.
  7. Deboosere, L.; De Wachter, J.; Simoens, P.; De Turck, F.; Dhoedt, B.; Demeester, P. (2007). "Thin Client Computing Solutions in Low- and High-Motion Scenarios". International Conference on Networking and Services (ICNS '07). p. 38. doi:10.1109/ICNS.2007.115. S2CID 17209120.
  8. Gardner, W. David (April 12, 2005). "Author Of Grosch's Law Going Strong At 87". InformationWeek. Archived from the original on 2012-10-23.
  9. "A History of the Dynamic Web". Pingdom. December 7, 2007. Archived from the original on January 20, 2018. Retrieved May 31, 2013.
  10. "Virtual Private Networks: Managing Telecom's Golden Horde". Billing World. May 1, 1999.
  11. Anders, George (April 2012). "Inside Amazon's Idea Machine: How Bezos Decodes The Customer". Forbes.
  12. Arrington, Michael (November 14, 2006). "Interview with Jeff Bezos On Amazon Web Services". TechCrunch.
  13. "OpenNebula Website". 14 May 2013.
  14. "IRMOS Website". Archived from the original on 2018-10-10. Retrieved 2013-05-31.
  15. Plummer, Daryl (June 2008). "Cloud Computing Confusion Leads to Opportunity". Gartner.
  16. "RESERVOIR Website".
  17. "IBM Smarter Planet Home Page". IBM . October 2015.
  18. Naone, Erica (April 16, 2008). "Peer to Peer Virtual Worlds". MIT Technology Review. Archived from the original on 2012-10-21.

Related Research Articles

<span class="mw-page-title-main">Client–server model</span> Distributed application structure in computing

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may be on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
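
A minimal sketch of that request–response pattern, using Python's standard socket module; the address, port, and message contents are arbitrary choices for the demonstration.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5050  # arbitrary local address for the demo

    def serve_once():
        """Server: await one incoming request and answer it."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.bind((HOST, PORT))
            server.listen(1)
            conn, _addr = server.accept()      # wait for a client to connect
            with conn:
                request = conn.recv(1024)
                conn.sendall(b"reply to: " + request)

    def client_request(message):
        """Client: initiate the session, send a request, read the reply."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
            client.connect((HOST, PORT))
            client.sendall(message)
            return client.recv(1024)

    if __name__ == "__main__":
        threading.Thread(target=serve_once, daemon=True).start()
        time.sleep(0.2)                        # give the server time to bind
        print(client_request(b"hello"))        # prints b'reply to: hello'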

<span class="mw-page-title-main">Thin client</span> Non-powerful computer optimized for remote server access

In computer networking, a thin client, sometimes called slim client or lean client, is a simple (low-performance) computer that has been optimized for establishing a remote connection with a server-based computing environment. They are sometimes known as network computers, or in their simplest form as zero clients. The server does most of the work, which can include launching software programs, performing calculations, and storing data. This contrasts with a rich client or a conventional personal computer; the former is also intended for working in a client–server model but has significant local processing power, while the latter aims to perform its function mostly locally.

<span class="mw-page-title-main">Time-sharing</span> Computing resource shared by concurrent users

In computing, time-sharing is the concurrent sharing of a computing resource among many tasks or users by giving each task or user a small slice of processing time. This quick switch between tasks or users gives the illusion of simultaneous execution. It enables multi-tasking by a single user or enables multiple-user sessions.
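
The slicing idea can be illustrated with a toy round-robin loop in Python; real schedulers are far more sophisticated, and the generator-based tasks here are only a stand-in for processes or users.

    from collections import deque

    def task(name, steps):
        """A toy task that yields control after each small unit of work."""
        for i in range(steps):
            print(name, "step", i)
            yield                      # end of this task's time slice

    def round_robin(tasks):
        """Give each ready task one slice in turn until all are finished."""
        ready = deque(tasks)
        while ready:
            current = ready.popleft()
            try:
                next(current)          # run one time slice
                ready.append(current)  # not finished: back of the queue
            except StopIteration:
                pass                   # task completed

    round_robin([task("A", 3), task("B", 2), task("C", 1)])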

Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.

<span class="mw-page-title-main">Web hosting service</span> Service for hosting websites

A web hosting service is a type of Internet hosting service that hosts websites for clients, i.e. it offers the facilities required for them to create and maintain a site and makes it accessible on the World Wide Web. Companies providing web hosting services are sometimes called web hosts.

In telecommunications networks, a node is either a redistribution point or a communication endpoint.

<span class="mw-page-title-main">Amazon Web Services</span> On-demand cloud computing company

Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered, pay-as-you-go basis. Clients will often use this in combination with autoscaling. These cloud computing web services provide various services related to networking, compute, storage, middleware, IoT and other processing capacity, as well as software tools via AWS server farms. This frees clients from managing, scaling, and patching hardware and operating systems. One of the foundational services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, with extremely high availability, which can be interacted with over the internet via REST APIs, a CLI or the AWS console. AWS's virtual computers emulate most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk (HDD)/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM).
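
As a hedged sketch of that programmatic access, the snippet below launches a single EC2 virtual machine with the boto3 SDK; the region, image ID, and instance type are placeholders, and valid AWS credentials would be required for it to run.

    # Illustrative only: start one EC2 virtual machine through the AWS API.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

    response = ec2.run_instances(
        ImageId="ami-00000000000000000",   # placeholder machine image ID
        InstanceType="t3.micro",           # placeholder instance type
        MinCount=1,
        MaxCount=1,
    )
    print("launched", response["Instances"][0]["InstanceId"])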

Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate. Like other types of on-demand computing, the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility is the packaging of system resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented.
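
A toy calculation of the metered-billing idea follows; the resource names and per-unit rates are invented for illustration.

    # Metered billing: charge per unit actually consumed, not a flat rate.
    # Resource names and rates are invented for the example (USD per unit).
    RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

    def monthly_bill(usage):
        """Sum the metered charges for the resources actually consumed."""
        return sum(RATES[resource] * amount for resource, amount in usage.items())

    print(monthly_bill({"cpu_hours": 720, "gb_stored": 50, "gb_transferred": 120}))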

<span class="mw-page-title-main">Home network</span> Type of computer network

A home network or home area network (HAN) is a type of computer network that facilitates communication among devices within the close vicinity of a home. Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact. These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment.

A clustered file system (CFS) is a file system which is shared by being simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system. Clustered file systems can provide features like location-independent addressing and redundancy which improve reliability or reduce the complexity of the other parts of the cluster. Parallel file systems are a type of clustered file system that spread data across multiple storage nodes, usually for redundancy or performance.

Distributed networking is a distributed computing network system where components of the program and data depend on multiple sources.

<span class="mw-page-title-main">Rackspace Cloud</span> Cloud computing platform

The Rackspace Cloud is a set of cloud computing products and services billed on a utility computing basis from the US-based company Rackspace. Offerings include Cloud Storage, virtual private server, load balancers, databases, backup, and monitoring.

<span class="mw-page-title-main">Cloud computing</span> Form of shared internet-based computing

"Cloud computing is a paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand." according to ISO.

Dynamic Infrastructure is an information technology concept related to the design of data centers, whereby the underlying hardware and software can respond dynamically and more efficiently to changing levels of demand. In other words, data center assets such as storage and processing power can be provisioned to meet surges in users' needs. The concept has also been referred to as Infrastructure 2.0 and Next Generation Data Center.

<span class="mw-page-title-main">Amazon Virtual Private Cloud</span> Cloud-based service

Amazon Virtual Private Cloud (VPC) is a commercial cloud computing service that provides a virtual private cloud, by provisioning a logically isolated section of Amazon Web Services (AWS) Cloud. Enterprise customers can access the Amazon Elastic Compute Cloud (EC2) over an IPsec based virtual private network. Unlike traditional EC2 instances which are allocated internal and external IP numbers by Amazon, the customer can assign IP numbers of their choosing from one or more subnets.

<span class="mw-page-title-main">Virtual private cloud</span> Pool of shared resources allocated within a public cloud environment

A virtual private cloud (VPC) is an on-demand configurable pool of shared resources allocated within a public cloud environment, providing a certain level of isolation between the different organizations using the resources. The isolation between one VPC user and all other users of the same cloud is normally achieved through allocation of a private IP subnet and a virtual communication construct per user. In a VPC, this isolation mechanism is accompanied by a virtual private network (VPN) function that secures, by means of authentication and encryption, the organization's remote access to its VPC resources. With these isolation levels in place, an organization using the service is in effect working on a 'virtually private' cloud, hence the name VPC.
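
A hedged boto3 sketch of the isolation mechanism described in the two entries above: allocate a private IP range as a VPC and carve a subnet out of it. The region and CIDR blocks are example values, and AWS credentials are required.

    # Illustrative only: create an isolated private address range (a VPC)
    # and one subnet inside it. Region and CIDR blocks are example values.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")         # private IP range
    vpc_id = vpc["Vpc"]["VpcId"]

    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    print("created", vpc_id, subnet["Subnet"]["SubnetId"])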

<span class="mw-page-title-main">Wyse</span> American computing system manufacturer

Wyse Technology, Inc., or simply Wyse, was an independent American manufacturer of cloud computing systems. Wyse is best remembered for its video terminal line introduced in the 1980s, which competed with market leader Digital. It also had a successful line of IBM PC compatible workstations in the mid-to-late 1980s, but starting late in the decade, Wyse was outcompeted by companies such as its eventual parent, Dell. Later products included thin client hardware and software as well as desktop virtualization solutions, along with cloud software-supporting desktop computers, laptops, and mobile devices. Dell Cloud Client Computing is partnered with IT vendors such as Citrix, IBM, Microsoft, and VMware.

<span class="mw-page-title-main">Cloud computing architecture</span>

Cloud computing architecture refers to the components and subcomponents required for cloud computing. These components typically consist of a front-end platform, back-end platforms, a cloud-based delivery, and a network. Combined, these components make up cloud computing architecture.

<span class="mw-page-title-main">Teradici</span> Canadian software company

Teradici Corporation was a privately held software company founded in 2004, which was acquired by HP Inc. in October 2021. Teradici initially developed a protocol (PCoIP) for compressing and decompressing images and sound when remotely accessing blade servers, and implemented it in hardware. This technology was later expanded to thin clients/zero clients for general Virtual Desktop Infrastructure. Teradici's protocol or hardware is used by HP, Dell-Wyse, Amulet Hotkey, Samsung, Amazon Web Services, Fujitsu, and VMware.

The third platform is a term coined by marketing firm International Data Corporation (IDC) for a model of a computing platform. It was promoted as inter-dependencies between mobile computing, social media, cloud computing, and information / analytics, and possibly the Internet of things. The term was in use in 2013, and possibly earlier. Gartner claimed that these interdependent trends were "transforming the way people and businesses relate to technology" and have since provided a number of reports on the topic.
