Browser-based computing

Browser-based computing is the use of web browsers to perform computing tasks. Opportunities for computing on the Web were noted as far back as 1997,[1] and computing over the Web was described in 2001.[2] Applications include distributed computing using Web Workers, as illustrated by James (formerly CrowdProcess) and HASH; the use of the browser's stack in QMachine;[3] the embedding of web applications as semantic hypermedia components;[4] and the signaling server in peer-to-peer networks set up via WebRTC.[5] Browser-based computing complements cloud computing because it reduces server-side computational load, often using cloud-hosted, RESTful web services.
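
A minimal sketch of the Web Worker approach in TypeScript (the in-page worker source and the summing task are illustrative assumptions, not the protocol of any particular platform): the page dispatches a work unit to a worker thread and receives the result, which a volunteer-computing client would then report to a server.

```typescript
// Worker body kept inline via a Blob URL so the sketch is self-contained.
const workerSource = `
  self.onmessage = (e) => {
    // Example work unit: sum an array of numbers sent by the page.
    const total = e.data.reduce((acc, n) => acc + n, 0);
    self.postMessage(total);
  };
`;

const worker = new Worker(
  URL.createObjectURL(new Blob([workerSource], { type: "text/javascript" }))
);

worker.onmessage = (e: MessageEvent<number>) => {
  // A volunteer-computing client would POST this result to a task
  // server; logging stands in for that step here.
  console.log("work unit result:", e.data);
};

worker.postMessage([1, 2, 3, 4]); // dispatch a work unit off the main thread
```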

Related Research Articles

Client–server model: Distributed application structure in computing

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
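
The client side of this model can be sketched with the browser's fetch API; the server address is a placeholder. The client initiates the session, and the server, which has been awaiting requests, responds with the requested resource.

```typescript
// A client requests content from a server; the server shares its
// resource (here, the body of a page) in response.
async function requestDocument(): Promise<string> {
  const response = await fetch("https://example.org/index.html");
  if (!response.ok) throw new Error(`server replied ${response.status}`);
  return response.text();
}

requestDocument().then((body) => console.log(body.length, "bytes received"));
```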

Peer-to-peer: Type of decentralized and distributed network architecture

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network, forming a peer-to-peer network of nodes.

In telecommunication, provisioning is the process of preparing and equipping a network so that it can provide new services to its users. In National Security/Emergency Preparedness telecommunications services, "provisioning" equates to "initiation" and includes altering the state of an existing priority service or capability.

Hypermedia, an extension of the term hypertext, is a nonlinear medium of information that includes graphics, audio, video, plain text and hyperlinks. This designation contrasts with the broader term multimedia, which may include non-interactive linear presentations as well as hypermedia. It is also related to the field of electronic literature. The term was first used in a 1965 article written by Ted Nelson.

REST is a software architectural style that was created to guide the design and development of the architecture for the World Wide Web. REST defines a set of constraints for how the architecture of a distributed, Internet-scale hypermedia system, such as the Web, should behave. The REST architectural style emphasises uniform interfaces, independent deployment of components, scalability of interactions between them, and a layered architecture that promotes caching to reduce user-perceived latency, enforce security, and encapsulate legacy systems.
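
The uniform interface can be sketched against a hypothetical /books resource collection; the endpoint and resource shape below are assumptions for illustration. The same small set of HTTP verbs operates on resources identified by URLs, which is what lets intermediaries in the layered architecture cache responses.

```typescript
const base = "https://api.example.org/books"; // hypothetical service

// GET: read the collection (safe and cacheable by intermediaries).
async function listBooks(): Promise<unknown[]> {
  return (await fetch(base)).json();
}

// POST: create a new resource in the collection.
async function createBook(title: string): Promise<Response> {
  return fetch(base, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title }),
  });
}

// DELETE: remove the resource identified by its URL.
async function deleteBook(id: string): Promise<Response> {
  return fetch(`${base}/${id}`, { method: "DELETE" });
}
```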

Datagram Transport Layer Security (DTLS) is a communications protocol providing security to datagram-based applications by allowing them to communicate in a way designed to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security guarantees. DTLS preserves the semantics of the underlying datagram transport: the application does not suffer the delays associated with stream protocols, but because it uses UDP or SCTP, it has to deal with packet reordering, datagram loss, and data larger than a single datagram network packet. Because DTLS uses UDP or SCTP rather than TCP, it also avoids the "TCP meltdown problem" when used to create a VPN tunnel.
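
Page scripts cannot use DTLS directly, but WebRTC data channels run over it, and an unordered channel with no retransmissions exposes exactly the datagram semantics described above; the channel label and payload format are illustrative assumptions.

```typescript
const pc = new RTCPeerConnection();

// ordered: false and maxRetransmits: 0 request datagram-like delivery:
// messages may arrive out of order or not at all.
const channel = pc.createDataChannel("telemetry", {
  ordered: false,
  maxRetransmits: 0,
});

channel.onmessage = (e) => {
  // A sequence number in the payload lets the application detect
  // both reordering and loss, since the transport will not.
  const { seq, value } = JSON.parse(e.data);
  console.log(`sample ${seq}:`, value);
};
```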

A semantic web service, like conventional web services, is the server end of a client–server system for machine-to-machine interaction via the World Wide Web. Semantic services are a component of the semantic web because they use markup which makes data machine-readable in a detailed and sophisticated way.

Edge computing: Distributed computing paradigm

Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. Edge computing is an architecture rather than a specific technology, and a topology- and location-sensitive form of distributed computing.

Carole Goble: British computer scientist

Carole Anne Goble is a British academic who is Professor of Computer Science at the University of Manchester. She is principal investigator (PI) of the myGrid, BioCatalogue and myExperiment projects and co-leads the Information Management Group (IMG) with Norman Paton.

Amit Sheth is a computer scientist at the University of South Carolina in Columbia, South Carolina. He is the founding director of the Artificial Intelligence Institute and a Professor of Computer Science and Engineering. From 2007 to June 2019, he was the LexisNexis Ohio Eminent Scholar, director of the Ohio Center of Excellence in Knowledge-enabled Computing, and a Professor of Computer Science at Wright State University. Sheth's work has been cited by over 48,800 publications. He has an h-index of 106, which puts him among the top 100 computer scientists by h-index. Before founding the Kno.e.sis Center, he served as director of the Large Scale Distributed Information Systems Lab at the University of Georgia in Athens, Georgia.

Cloud computing: Form of shared Internet-based computing

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.

Peer-to-peer web hosting is the use of peer-to-peer networking to distribute access to webpages. It is differentiated from the client–server model, in which web data is distributed between dedicated web servers and user-end client computers. Peer-to-peer web hosting may also take the form of P2P web caches and content delivery networks.

CL-HTTP is a web server, client and proxy written in Common Lisp, based on its own web application framework. It was written by John C. Mallery "in about 10 days" starting in 1994 on a Symbolics Lisp Machine, and was ported to Macintosh Common Lisp in the same year. In 1996 CL-HTTP became the first web server to support the HTTP 1.1 protocol. It runs on Unix, Linux, BSD variants, Mac OS X, Solaris, Symbolics Genera and Microsoft Windows.

WebRTC is a free and open-source project providing web browsers and mobile applications with real-time communication (RTC) via application programming interfaces (APIs). It allows audio and video communication to work inside web pages by allowing direct peer-to-peer communication, eliminating the need to install plugins or download native apps.
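
The offer side of WebRTC session setup can be sketched as follows; sendToSignalingServer stands in for an application-defined signaling transport (often a WebSocket), and the STUN server URL is a placeholder.

```typescript
// Hypothetical application-defined relay to the remote peer.
declare function sendToSignalingServer(msg: object): void;

const peer = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.example.org" }], // placeholder STUN server
});

// Trickle ICE: forward each network candidate as it is discovered.
peer.onicecandidate = (e) => {
  if (e.candidate) sendToSignalingServer({ candidate: e.candidate });
};

// A data channel for direct peer-to-peer messages once connected.
const channel = peer.createDataChannel("compute");
channel.onopen = () => channel.send("hello, peer");

// Create the session description and relay it; the remote peer
// answers through the same signaling channel.
peer
  .createOffer()
  .then((offer) => peer.setLocalDescription(offer))
  .then(() => sendToSignalingServer({ sdp: peer.localDescription }));
```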

A distributed file system for the cloud is a file system that allows many clients to access data and supports operations on that data. Each data file may be partitioned into several parts called chunks, and each chunk may be stored on a different remote machine, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture, and each solution must be suitable for a certain type of application, depending on the application's complexity. Meanwhile, the security of the system must be ensured: confidentiality, availability and integrity are the main requirements for a secure system.
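
The chunking step can be sketched in a few lines of TypeScript; the 64 KiB chunk size is an illustrative assumption, as real systems choose sizes suited to their workloads.

```typescript
const CHUNK_SIZE = 64 * 1024; // assumed chunk size for illustration

// Split a file into fixed-size chunks; each chunk could then be sent
// to a different remote machine, enabling the parallel access above.
function splitIntoChunks(file: Blob): Blob[] {
  const chunks: Blob[] = [];
  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    chunks.push(file.slice(offset, offset + CHUNK_SIZE));
  }
  return chunks;
}
```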

Web tasking is a set of web user interactions in hypermedia conducted purposefully to perform tasks. The term was coined intentionally to contrast with web browsing.

InterPlanetary File System: Content-addressable, peer-to-peer hypermedia distribution protocol

The InterPlanetary File System (IPFS) is a protocol, hypermedia and file sharing peer-to-peer network for storing and sharing data in a distributed file system. IPFS uses content-addressing to uniquely identify each file in a global namespace connecting IPFS hosts.
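
Content addressing can be illustrated with the browser's SubtleCrypto API; IPFS actually encodes multihash-based content identifiers (CIDs), so the plain SHA-256 hex digest here is a simplification.

```typescript
// Derive an identifier from the content itself: identical bytes
// always map to the same address, wherever they are stored.
async function contentAddress(data: Uint8Array): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

contentAddress(new TextEncoder().encode("hello")).then((address) =>
  console.log("content address:", address)
);
```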

Social navigation is a form of social computing introduced by Paul Dourish and Matthew Chalmers in 1994, who defined it as when "movement from one item to another is provoked as an artifact of the activity of another or a group of others". According to later research in 2002, "social navigation exploits the knowledge and experience of peer users of information resources" to guide users in the information space, as it becomes more difficult to navigate and search efficiently through all the digital information available from the World Wide Web and other sources. Studying others' navigational trails and understanding their behavior can improve one's own search strategy by guiding more informed decisions based on the actions of others.

Dew computing is an information technology (IT) paradigm that combines the core concept of cloud computing with the capabilities of end devices. It is used to enhance the experience for the end user in comparison to only using cloud computing. Dew computing attempts to solve major problems related to cloud computing technology, such as reliance on internet access. Dropbox is an example of the dew computing paradigm, as it provides access to the files and folders in the cloud in addition to keeping copies on local devices. This allows the user to access files during times without an internet connection; when a connection is established again, files and folders are synchronized back to the cloud server.
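
The local-copy-plus-resynchronization pattern can be sketched as follows, assuming a hypothetical /sync endpoint on the cloud side.

```typescript
// Keep a local copy so the data stays usable while offline.
function saveLocally(key: string, value: string): void {
  localStorage.setItem(key, value);
}

saveLocally("notes", "draft text"); // usable with or without a connection

// When connectivity returns, push local state back to the cloud.
window.addEventListener("online", async () => {
  for (let i = 0; i < localStorage.length; i++) {
    const key = localStorage.key(i)!;
    await fetch("https://cloud.example.org/sync", { // hypothetical endpoint
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ key, value: localStorage.getItem(key) }),
    });
  }
});
```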

Beaker (web browser): Web browser software

Beaker is a discontinued free and open-source web browser developed by Blue Link Labs. Beaker's peer-to-peer technology allows users to self-publish websites and web apps directly from the browser, without the need to set up and administer a separate web server or host their content on a third-party server. All files and websites are transferred using Dat, a hypermedia peer-to-peer protocol, which allows files to be shared and hosted by several users. The browser also supports the HTTP protocol to connect to traditional servers.

References

  1. Furmanski W (1997). "Petaops and Exaops: Super-computing on the Web". IEEE Internet Computing. 1 (2): 38–46. doi:10.1109/4236.601097.
  2. Fox G (2001). "Introduction to Web computing". Computing in Science & Engineering. 3 (2): 52–53. doi:10.1109/mcise.2001.909002.
  3. Wilkinson SR, Almeida JS (2014). "QMachine: commodity supercomputing in web browsers". BMC Bioinformatics. 15: 176. doi:10.1186/1471-2105-15-176. PMC 4063228. PMID 24913605.
  4. Verborgh R (2014). "Serendipitous web applications through semantic hypermedia" (PDF).
  5. "WebRTC".