Transparency (human–computer interaction)

Any change in a computing system, such as a new feature or new component, is transparent if the changed system adheres to the previous external interface as much as possible while altering its internal behaviour. The purpose is to shield from change all systems (or human users) on the other side of the interface. Confusingly, the term refers to the overall invisibility of the component; it does not refer to the visibility of a component's internals (as in a white box or open system). The term transparent is widely used in computing marketing as a substitute for the term invisible, since invisible has a bad connotation (usually something the user cannot see and has no control over) while transparent has a good connotation (usually associated with not hiding anything). Most of the time, the term transparent is used in a misleading way to refer to the actual invisibility of a computing process, which is also described by the term opaque, especially with regard to data structures.[citation needed] Because of this misleading and counter-intuitive usage, modern computer literature tends to prefer "agnostic" over "transparent".

The term is used particularly often with regard to an abstraction layer that is invisible either from its upper or lower neighbouring layer.

The term was also used, around 1969, in IBM and Honeywell programming manuals,[citation needed] where it referred to a particular programming technique: application code was transparent when it was free of low-level detail (such as device-specific management) and contained only the logic for solving the main problem. This was achieved through encapsulation – putting the code into modules that hid internal details, making them invisible to the main application.

Examples

For example, the Network File System is transparent because it provides access to files stored remotely on a network in a manner uniform with local file-system access: a user might not even notice it while browsing the folder hierarchy. The early File Transfer Protocol (FTP) is considerably less transparent, because it requires each user to learn how to access files through an FTP client.
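To illustrate, a program reading from an NFS mount is indistinguishable from one reading a local file. A minimal Java sketch, assuming a hypothetical mount point /mnt/share:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NfsReadDemo {
    public static void main(String[] args) throws IOException {
        // /mnt/share is a hypothetical NFS mount point; on a machine where an
        // NFS export is mounted there, this code is exactly the same as
        // reading a purely local file: the remote access is transparent.
        Path remote = Path.of("/mnt/share/report.txt");
        Files.readAllLines(remote).forEach(System.out::println);
    }
}
```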

Similarly, some file systems allow transparent compression and decompression of data, enabling users to store more files on a medium without any special knowledge; some file systems encrypt files transparently. This approach does not require running a compression or encryption utility manually.
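The same effect can be approximated in application code with stream decorators: the caller sees an ordinary InputStream, while decompression happens behind the interface. A minimal Java sketch (the file name is a hypothetical example):

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;

public class TransparentRead {
    // Callers receive a plain InputStream; whether the underlying bytes were
    // compressed is hidden behind that common interface.
    static InputStream open(String path) throws IOException {
        InputStream raw = new FileInputStream(path);
        return path.endsWith(".gz") ? new GZIPInputStream(raw) : raw;
    }

    public static void main(String[] args) throws IOException {
        // "data.txt.gz" is a hypothetical example file.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(open("data.txt.gz")))) {
            reader.lines().forEach(System.out::println);
        }
    }
}
```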

In software engineering, it is also considered good practice to develop or use abstraction layers for database access, so that the same application will work with different databases; here, the abstraction layer allows other parts of the program to access the database transparently (see Data Access Object, for example).
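A minimal Java sketch of such an abstraction layer, with hypothetical UserDao and User types: the rest of the application programs against the interface, so the storage backend can be swapped without touching the callers.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical domain type used for illustration.
record User(long id, String name) {}

// The data-access abstraction: callers see only these operations.
interface UserDao {
    Optional<User> findById(long id);
    void save(User user);
}

// One interchangeable backend; a JDBC- or document-store-backed class
// implementing the same interface could replace it without changing callers.
class InMemoryUserDao implements UserDao {
    private final Map<Long, User> store = new HashMap<>();
    public Optional<User> findById(long id) { return Optional.ofNullable(store.get(id)); }
    public void save(User user) { store.put(user.id(), user); }
}

public class DaoDemo {
    public static void main(String[] args) {
        UserDao dao = new InMemoryUserDao(); // the only line naming a backend
        dao.save(new User(1, "Ada"));
        dao.findById(1).ifPresent(u -> System.out.println(u.name()));
    }
}
```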

In object-oriented programming, transparency is facilitated through the use of interfaces that hide actual implementations done with different underlying classes.
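The standard Java collections illustrate this: code written against the List interface runs unchanged whichever concrete class sits behind it.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class InterfaceDemo {
    // Works identically with any List implementation; the concrete class
    // behind the interface is invisible to this method.
    static int countLongWords(List<String> words) {
        return (int) words.stream().filter(w -> w.length() > 6).count();
    }

    public static void main(String[] args) {
        List<String> arrayBacked = new ArrayList<>(List.of("transparent", "opaque"));
        List<String> linked = new LinkedList<>(List.of("visible", "hidden"));
        System.out.println(countLongWords(arrayBacked)); // 1
        System.out.println(countLongWords(linked));      // 1
    }
}
```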

Types of transparency in distributed systems

Transparency means that any form of distributed system should hide its distributed nature from its users, appearing and functioning as a normal centralized system.

There are many types of transparency:

Access transparency – regardless of how resource access and representation have to be performed on each individual computing entity, users of a distributed system should always access resources in a single, uniform way.
Location transparency – users of a distributed system should not have to be aware of where a resource is physically or logically located.
Migration transparency – users should not be aware of whether a resource or computing entity possesses the ability to move to a different physical or logical location.
Relocation transparency – should a resource move while in use, this should not be noticeable to the end user.
Replication transparency – if a resource is replicated among several locations, it should appear to the user as a single resource.
Concurrency transparency – while multiple users may compete for and share a single resource, this should not be apparent to any of them.
Failure transparency – the system should always try to hide any failure and recovery of computing entities and resources.
Persistence transparency – whether a resource lies in volatile or permanent memory should make no difference to the user.

Formal definitions of most of these concepts can be found in RM-ODP, the Open Distributed Processing Reference Model (ISO 10746).

The degree to which these properties can or should be achieved may vary widely. Not every system can or should hide everything from its users. For instance, because the speed of light is fixed and finite, there will always be more latency when accessing resources distant from the user. If one expects real-time interaction with the distributed system, this may be very noticeable.
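As a sketch of this trade-off, a wrapper can provide a degree of failure transparency by retrying a remote call on transient faults, but each retry adds to exactly the latency that cannot be hidden. All names below are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class RetryDemo {
    // A sketch of failure transparency: transient faults in a remote call are
    // masked from the caller by retrying. Assumes attempts >= 1. Each retry
    // adds latency, the one property that cannot be hidden.
    static <T> T callTransparently(Supplier<T> remoteCall, int attempts) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return remoteCall.get();
            } catch (RuntimeException e) {
                last = e; // swallow the transient fault and try again
            }
        }
        throw last; // give up: the failure becomes visible after all
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        // A stand-in for a remote call that fails twice, then succeeds.
        String result = callTransparently(() -> {
            if (calls.getAndIncrement() < 2) throw new RuntimeException("transient fault");
            return "ok";
        }, 5);
        System.out.println(result); // prints "ok"; the two faults were masked
    }
}
```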

Related Research Articles

Client–server model

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
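A minimal Java sketch of the model using sockets: the server binds a port and awaits an incoming request, while the client initiates the session.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ClientServerDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket listener = new ServerSocket(0); // bind a free port first
        Thread server = new Thread(() -> {
            // Server: awaits an incoming request, then answers it.
            try (Socket conn = listener.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        server.start();

        // Client: initiates the communication session.
        try (Socket socket = new Socket("localhost", listener.getLocalPort());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine()); // prints "echo: hello"
        }
        server.join();
        listener.close();
    }
}
```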

The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA uses an object-oriented model, although the systems that use CORBA do not have to be object-oriented. CORBA is an example of the distributed object paradigm.

Middleware in the context of distributed applications is software that provides services beyond those provided by the operating system to enable the various components of a distributed system to communicate and manage data. Middleware supports and simplifies complex distributed applications. It includes web servers, application servers, messaging and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.

In software engineering and computer science, abstraction is the process of removing or generalizing physical, spatial, or temporal details or attributes in the study of objects or systems in order to focus attention on details of greater importance.

The resource fork is a fork or section of a file on Apple's classic Mac OS operating system, which was also carried over to the modern macOS for compatibility. It is used to store structured data along with the unstructured data stored within the data fork.

Hardware abstractions are sets of routines in software that provide programs with access to hardware resources through programming interfaces. The programming interface allows all devices in a particular class C of hardware devices to be accessed through identical interfaces even though C may contain different subclasses of devices that each provide a different hardware interface.
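A hypothetical Java sketch of the idea: every device in a "block device" class is driven through one interface, whatever hardware sits underneath. The types below are illustrative, not a real driver API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical hardware abstraction: all devices in the "block device" class
// share one programming interface, regardless of the underlying hardware.
interface BlockDevice {
    byte[] readBlock(long n);
    void writeBlock(long n, byte[] data);
}

// Stand-in "driver"; a real one would issue SATA or NVMe commands instead.
class RamDisk implements BlockDevice {
    private final Map<Long, byte[]> blocks = new HashMap<>();
    public byte[] readBlock(long n) { return blocks.getOrDefault(n, new byte[512]); }
    public void writeBlock(long n, byte[] data) { blocks.put(n, data.clone()); }
}

public class HalDemo {
    public static void main(String[] args) {
        BlockDevice dev = new RamDisk(); // any BlockDevice works identically here
        dev.writeBlock(0, "boot".getBytes());
        System.out.println(new String(dev.readBlock(0))); // prints "boot"
    }
}
```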

Representational state transfer (REST) is a software architectural style that describes the architecture of the Web. It was derived from the following constraints: a client–server architecture, statelessness, cacheability, a layered system, a uniform interface, and, optionally, code on demand.
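The uniform-interface constraint is visible in ordinary HTTP code: the same stateless GET verb addresses any resource by URI. A minimal Java sketch against a hypothetical endpoint:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestGet {
    public static void main(String[] args) throws Exception {
        // https://api.example.com/users/1 is a hypothetical resource URI;
        // the same uniform GET works for any resource the server exposes.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.example.com/users/1")).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```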

Filesystem in Userspace (FUSE) is a software interface for Unix and Unix-like computer operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running file system code in user space while the FUSE module provides only a bridge to the actual kernel interfaces.

Network transparency, in its most general sense, refers to the ability of a protocol to transmit data over the network in a manner which is not observable to those using the applications that are using the protocol. In this way, users of a particular application may access remote resources in the same manner in which they would access their own local resources. An example of this is cloud storage, where remote files are presented as being locally accessible, and cloud computing where the resource in question is processing.

Distributed File System (DFS) is a set of client and server services that allow an organization using Microsoft Windows servers to organize many distributed SMB file shares into a distributed file system. DFS has two components to its service: Location transparency and Redundancy. Together, these components enable data availability in the case of failure or heavy load by allowing shares in multiple different locations to be logically grouped under one folder, the "DFS root".

Architecture of Windows NT

The architecture of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered design that consists of two main components, user mode and kernel mode. It is a preemptive, reentrant multitasking operating system, which has been designed to work with uniprocessor and symmetrical multiprocessor (SMP)-based computers. To process input/output (I/O) requests, it uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O. Starting with Windows XP, Microsoft began making 64-bit versions of Windows available; before this, there were only 32-bit versions of these operating systems.

Remote File Sharing (RFS) is a Unix operating system component for sharing resources, such as files, devices, and file system directories, across a network, in a network-independent manner, similar to a distributed file system. It was developed at Bell Laboratories of AT&T in the 1980s, and was first delivered with UNIX System V Release 3 (SVR3). RFS relied on the STREAMS Transport Provider Interface feature of this operating system. It was also included in UNIX System V Release 4, but as that also included the Network File System (NFS) which was based on TCP/IP and more widely supported in the computing industry, RFS was little used. Some licensees of AT&T UNIX System V Release 4 did not include RFS support in SVR4 distributions, and Sun Microsystems removed it from Solaris 2.4.

In object-oriented design, the dependency inversion principle is a specific methodology for loosely coupled software modules. When following this principle, the conventional dependency relationships established from high-level, policy-setting modules to low-level, dependency modules are reversed, thus rendering high-level modules independent of the low-level module implementation details. The principle states that high-level modules should not depend on low-level modules, and that both should depend on abstractions; likewise, abstractions should not depend on details, but details should depend on abstractions.
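A short Java sketch of the principle, using the customary textbook names: the high-level Button depends on the Switchable abstraction rather than on a concrete device, and the concrete Lamp implements the same abstraction.

```java
// The abstraction both sides depend on.
interface Switchable {
    void turnOn();
    void turnOff();
}

class Lamp implements Switchable {            // low-level detail
    public void turnOn()  { System.out.println("lamp on"); }
    public void turnOff() { System.out.println("lamp off"); }
}

class Button {                                // high-level policy
    private final Switchable device;          // no reference to Lamp here
    Button(Switchable device) { this.device = device; }
    void press(boolean on) { if (on) device.turnOn(); else device.turnOff(); }
}

public class DipDemo {
    public static void main(String[] args) {
        new Button(new Lamp()).press(true);   // prints "lamp on"
    }
}
```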

The Spring Framework is an application framework and inversion of control container for the Java platform. The framework's core features can be used by any Java application, but there are extensions for building web applications on top of the Java EE platform. Although the framework does not impose any specific programming model, it has become popular in the Java community as an addition to the Enterprise JavaBeans (EJB) model. The Spring Framework is open source.

In computing, a shared resource, or network share, is a computer resource made available from one host to other hosts on a computer network. It is a device or piece of information on a computer that can be remotely accessed from another computer transparently as if it were a resource in the local machine. Network sharing is made possible by inter-process communication over the network.

<span class="mw-page-title-main">User (computing)</span> Person who uses a computer or network service

A user is a person who utilizes a computer or network service.

In computing, virtualization or virtualisation is the act of creating a virtual version of something at the same abstraction level, including virtual computer hardware platforms, storage devices, and computer network resources.

The Handle System is the Corporation for National Research Initiatives's proprietary registry assigning persistent identifiers, or handles, to information resources, and for resolving "those handles into the information necessary to locate, access, and otherwise make use of the resources".

gLite

gLite is a middleware computer software project for grid computing used by the CERN LHC experiments and other scientific domains. It was implemented by collaborative efforts of more than 80 people in 12 different academic and industrial research centers in Europe. gLite provides a framework for building applications tapping into distributed computing and storage resources across the Internet. The gLite services were adopted by more than 250 computing centres, and used by more than 15,000 researchers in Europe and around the world.

A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. It handles jobs that are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. The second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.
