Distributed Computing Environment

The Distributed Computing Environment (DCE) is a software system developed in the early 1990s from the work of the Open Software Foundation (OSF), a consortium founded in 1988 that included Apollo Computer (part of Hewlett-Packard from 1989), IBM, Digital Equipment Corporation, and others. [1] [2] The DCE supplies a framework and a toolkit for developing client/server applications. [3] The framework includes a remote procedure call (RPC) mechanism known as DCE/RPC, a naming (directory) service, a time service, an authentication service, and a distributed file system (DFS) known as DCE/DFS.

The DCE did not achieve commercial success.

History

As part of the formation of OSF, various members contributed many of their ongoing research projects as well as their commercial products. For example, HP/Apollo contributed its Network Computing System (NCS) and CMA Threads products. Siemens Nixdorf contributed its X.500 server and ASN.1 compiler tools. At the time, network computing was quite popular, and many of the companies involved were working on similar RPC-based systems. By integrating security, RPC, and other distributed services in a single distributed computing environment, OSF could offer a major advantage over SVR4, allowing any DCE-supporting system (namely OSF/1) to interoperate in a larger network.

The DCE system was, to a large degree, based on independent developments made by each of the partners. DCE/RPC was derived from the Network Computing System (NCS) created at Apollo Computer. The naming service was derived from work done at Digital. DCE/DFS was based on the Andrew File System (AFS) originally developed at Carnegie Mellon University. The authentication system was based on Kerberos. By combining these features, DCE offers a fairly complete system for network computing. Any machine on the network can authenticate its users, gain access to resources, and call them remotely using a single integrated API.

The rise of the Internet, Java and web services stole much of DCE's mindshare through the mid-to-late 1990s, and competing systems such as CORBA appeared as well.

Among the major uses of DCE today are Microsoft's DCOM and ODBC systems, which use DCE/RPC (in MSRPC) as their network transport layer.

OSF and its projects eventually became part of The Open Group, which released DCE 1.2.2 under a free software license (the LGPL) on 12 January 2005. DCE 1.1 had been available much earlier under the OSF BSD license, which led to FreeDCE being available since 2000. FreeDCE contains an implementation of DCOM.

One of the major systems built on top of DCE was Encina, developed by Transarc (later acquired by IBM). IBM used Encina as a foundation to port its primary mainframe transaction processing system (CICS) to non-mainframe platforms, as IBM TXSeries. (However, later versions of TXSeries have removed the Encina component.)

Architecture

The largest unit of management in DCE is a cell. The highest privileges within a cell are assigned to a role called cell administrator, normally assigned to the "user" cell_admin. Multiple cells can be configured to communicate and share resources with each other. All principals from external cells are treated as "foreign" users, and privileges can be granted to or revoked from them accordingly. In addition, specific users or groups can be assigned privileges on any DCE resource, something that is not possible with the traditional UNIX filesystem, which lacks access control lists (ACLs).

Major components of DCE within every cell are:

  1. The Security Server, which is responsible for authentication;
  2. The Cell Directory Server (CDS), which is the repository of resources and ACLs; and
  3. The Distributed Time Server, which provides an accurate clock for proper functioning of the entire cell.

Modern DCE implementations such as IBM's are fully capable of interoperating with Kerberos as the security server, LDAP for the CDS, and Network Time Protocol (NTP) implementations for the time server.
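
To illustrate how a client typically locates a service through the CDS, the sketch below uses the DCE RPC name-service interface (NSI) to import a server binding from a directory entry. The entry name /.:/subsys/example/math_server is a hypothetical placeholder, and header names, type spellings, and exact signatures can differ between DCE implementations, so treat this as a minimal sketch under those assumptions rather than verified code.

```c
#include <dce/rpc.h>   /* DCE RPC runtime and name-service interface */
#include <stdio.h>

int main(void)
{
    rpc_ns_handle_t      import_context;
    rpc_binding_handle_t binding;
    unsigned32           status;

    /* Begin importing bindings from a (hypothetical) CDS entry.
     * A NULL interface handle imports bindings regardless of interface;
     * normally the IDL-generated interface handle would be passed here. */
    rpc_ns_binding_import_begin(
        rpc_c_ns_syntax_default,
        (unsigned_char_t *)"/.:/subsys/example/math_server",
        NULL,                /* interface handle (IDL-generated in practice) */
        NULL,                /* no specific object UUID */
        &import_context,
        &status);
    if (status != rpc_s_ok)
        return 1;

    /* Fetch one compatible server binding from the namespace. */
    rpc_ns_binding_import_next(import_context, &binding, &status);
    if (status == rpc_s_ok)
        printf("Obtained a server binding from the CDS.\n");

    rpc_ns_binding_import_done(&import_context, &status);
    return 0;
}
```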

DCE/DFS is a DCE-based application that provides a distributed filesystem on top of DCE. DCE/DFS can support replicas of a fileset (the DCE/DFS equivalent of a filesystem) on multiple DFS servers: there is one read-write copy and zero or more read-only copies. Replication is supported between the read-write and the read-only copies. In addition, DCE/DFS supports "backup" filesets, which, if defined for a fileset, store a version of the fileset as it was prior to the last replication.

DCE/DFS is believed to be the world's only distributed filesystem that correctly implements the full POSIX filesystem semantics, including byte-range locking. DCE/DFS was sufficiently reliable and stable to be utilised by IBM to run the back-end filesystem for the 1996 Olympics web site, which was seamlessly and automatically distributed and edited worldwide across different time zones.
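
As an illustration of the byte-range locking semantics mentioned above, the following C sketch uses the standard POSIX fcntl interface to lock a 100-byte region of a file. This is ordinary POSIX application code, not a DCE/DFS-specific API, and the path /dfs/example/data.txt is a hypothetical placeholder.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Open a file that might live on a DFS-backed mount (path is hypothetical). */
    int fd = open("/dfs/example/data.txt", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Request an exclusive (write) lock on bytes 100..199 only. */
    struct flock lk = {0};
    lk.l_type   = F_WRLCK;
    lk.l_whence = SEEK_SET;
    lk.l_start  = 100;
    lk.l_len    = 100;

    if (fcntl(fd, F_SETLKW, &lk) == -1) {   /* wait until the range is free */
        perror("fcntl");
        close(fd);
        return 1;
    }

    /* ... read or modify the locked region here ... */

    lk.l_type = F_UNLCK;                    /* release the byte range */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}
```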

Related Research Articles

In distributed computing, a remote procedure call (RPC) is when a computer program causes a procedure (subroutine) to execute in a different address space, which is written as if it were a normal (local) procedure call, without the programmer explicitly writing the details for the remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote. This is a form of client–server interaction, typically implemented via a request–response message-passing system. In the object-oriented programming paradigm, RPCs are represented by remote method invocation (RMI). The RPC model implies a level of location transparency, namely that calling procedures are largely the same whether they are local or remote, but usually, they are not identical, so local calls can be distinguished from remote calls. Remote calls are usually orders of magnitude slower and less reliable than local calls, so distinguishing them is important.
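
The following self-contained C sketch shows what this location transparency looks like at the call site: the client invokes math_add exactly as it would a local function, and only the implementation behind the prototype decides whether the call is served locally or by a generated stub that ships the arguments across the network. The interface and function names are hypothetical, not taken from any particular system.

```c
#include <stdio.h>

/* In a real RPC system this prototype would come from a client stub
 * generated by an IDL compiler from an interface definition such as:
 *
 *     interface math { long math_add([in] long a, [in] long b); }
 *
 * A local stand-in is provided here so the sketch compiles on its own;
 * replacing it with the generated stub would not change the call site. */
static long math_add(long a, long b)
{
    return a + b;   /* a stub would marshal a and b and call the server */
}

int main(void)
{
    long sum = math_add(2, 3);   /* reads like an ordinary local call */
    printf("math_add(2, 3) = %ld\n", sum);
    return 0;
}
```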

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems (Sun) in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call system. NFS is an open IETF standard defined in a Request for Comments (RFC), allowing anyone to implement the protocol.

Distributed Component Object Model (DCOM) is a proprietary Microsoft technology for communication between software components on networked computers. DCOM, which originally was called "Network OLE", extends Microsoft's COM, and provides the communication substrate under Microsoft's COM+ application server infrastructure.

<span class="mw-page-title-main">Interface description language</span> Computer language used to describe a software components interface

An interface description language or interface definition language (IDL) is a generic term for a language that lets a program or object written in one language communicate with another program written in an unknown language. IDLs are usually used to describe data types and interfaces in a language-independent way, for example, between those written in C++ and those written in Java.

<span class="mw-page-title-main">Universally unique identifier</span> Label used for information in computer systems

A Universally Unique Identifier (UUID) is a 128-bit label used for information in computer systems. The term Globally Unique Identifier (GUID) is also used, mostly in Microsoft systems.
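
For example, a UUID can be generated and printed with the widely available libuuid API on Linux (shown here because it is compact; DCE itself provides its own UUID routines such as uuid_create and uuid_to_string). The sketch assumes libuuid is installed and the program is linked with -luuid.

```c
#include <stdio.h>
#include <uuid/uuid.h>   /* libuuid; link with -luuid */

int main(void)
{
    uuid_t id;
    char   text[37];     /* 36 characters plus terminating NUL */

    uuid_generate(id);       /* create a new 128-bit identifier */
    uuid_unparse(id, text);  /* format as xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx */

    printf("%s\n", text);
    return 0;
}
```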

The Andrew File System (AFS) is a distributed file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. It was developed by Carnegie Mellon University as part of the Andrew Project. Originally named "Vice", "Andrew" refers to Andrew Carnegie and Andrew Mellon. Its primary use is in distributed computing.

<span class="mw-page-title-main">Project Athena</span> 1983 joint project by MIT, IBM and DEC

Project Athena was a joint project of MIT, Digital Equipment Corporation, and IBM to produce a campus-wide distributed computing environment for educational use. It was launched in 1983, and research and development ran until June 30, 1991. As of 2023, Athena is still in production use at MIT. It works as software that turns a machine into a thin client, which downloads educational applications from the MIT servers on demand.

Distributed File System (DFS) is a set of client and server services that allow an organization using Microsoft Windows servers to organize many distributed SMB file shares into a distributed file system. DFS has two components to its service: location transparency and redundancy. Together, these components enable data availability in the case of failure or heavy load by allowing shares in multiple different locations to be logically grouped under one folder, the "DFS root".

In computing, a fileset is a set of computer files linked by a defining property or common characteristic. There are different types of fileset, though the context will usually give the defining characteristic. Sometimes it is necessary to state the fileset type explicitly to avoid ambiguity; for example, the Emacs editor explicitly mentions its Version Control (VC) fileset type to distinguish it from its "named files" fileset type.

Microsoft RPC is a modified version of DCE/RPC. Additions include partial support for UCS-2 strings, implicit handles, and complex calculations in the variable-length string and structure paradigms already present in DCE/RPC.

DCE/RPC, short for "Distributed Computing Environment / Remote Procedure Calls", is the remote procedure call system developed for the Distributed Computing Environment (DCE). This system allows programmers to write distributed software as if it were all working on the same computer, without having to worry about the underlying network code.
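
For a sense of the server side of DCE/RPC, the outline below registers an interface with the RPC runtime and listens for incoming calls. The interface handle would normally come from IDL-generated stub code and is assumed here; the constant names and exact signatures follow common DCE documentation but may differ between releases, so this is a minimal sketch under those assumptions rather than verified code.

```c
#include <dce/rpc.h>

/* Interface handle generated by the IDL compiler for a hypothetical
 * "math" interface; assumed here for illustration. */
extern rpc_if_handle_t math_v1_0_s_ifspec;

int main(void)
{
    unsigned32 status;

    /* Register the interface with the RPC runtime (NULL manager type
     * UUID and NULL EPV select the default, IDL-generated entry points). */
    rpc_server_register_if(math_v1_0_s_ifspec, NULL, NULL, &status);
    if (status != rpc_s_ok)
        return 1;

    /* Accept calls over all protocol sequences supported by this host. */
    rpc_server_use_all_protseqs(rpc_c_protseq_max_reqs_default, &status);
    if (status != rpc_s_ok)
        return 1;

    /* Block and service remote procedure calls until shut down. */
    rpc_server_listen(rpc_c_listen_max_calls_default, &status);
    return (status == rpc_s_ok) ? 0 : 1;
}
```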

DCEThreads is an implementation of POSIX Draft 4 threads. It allows users to create multiple threads of execution in a single process and is based on the pthreads interface.
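
Because DCEThreads follows the pthreads interface, an equivalent of "multiple threads of execution in a single process" can be sketched with modern POSIX threads (standard pthread calls are used here rather than the historical Draft 4 names):

```c
#include <pthread.h>
#include <stdio.h>

/* Work performed by each additional thread of execution. */
static void *worker(void *arg)
{
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t threads[2];
    int ids[2] = {1, 2};

    /* Create two concurrent threads within this single process. */
    for (int i = 0; i < 2; i++)
        pthread_create(&threads[i], NULL, worker, &ids[i]);

    /* Wait for both to finish before the process exits. */
    for (int i = 0; i < 2; i++)
        pthread_join(threads[i], NULL);

    return 0;
}
```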

The DCE Distributed File System (DCE/DFS) is the remote file access protocol used with the Distributed Computing Environment. It is a variant of the Andrew File System (AFS), based on the AFS Version 3.0 protocol that was developed commercially by Transarc Corporation. AFS Version 3.0 was in turn based on the AFS Version 2.0 protocol originally developed at Carnegie Mellon University.

The Network Computing System (NCS) was an implementation of the Network Computing Architecture (NCA). It was created at Apollo Computer in the 1980s. It comprised a set of tools for implementing distributed software applications, or distributed computing.

Transarc Corporation was a private Pittsburgh-based software company founded in 1989 by Jeffrey Eppinger, Michael L. Kazar, Alfred Spector, and Dean Thompson of Carnegie Mellon University.

In computer systems, an access token contains the security credentials for a login session and identifies the user, the user's groups, the user's privileges, and, in some cases, a particular application. In some instances, one may be asked to enter an access token rather than the usual password.

Security Support Provider Interface (SSPI) is a component of the Windows API that performs security-related operations such as authentication.

In computer security, pass the hash is a hacking technique that allows an attacker to authenticate to a remote server or service by using the underlying NTLM or LanMan hash of a user's password, instead of requiring the associated plaintext password as is normally the case. It replaces the need for stealing the plaintext password to gain access with stealing the hash.

Encina was a DCE-based transaction processing system developed by Transarc, which was later acquired by IBM.

References

  1. Weijia Jia; Wanlei Zhou (15 December 2004). Distributed Network Systems: From Concepts to Implementations. Springer Science & Business Media. p. 135. ISBN 978-0-387-23839-5.
  2. Pradeep K. Sinha (1 January 1998). Distributed Operating Systems: Concepts and Design. PHI Learning Pvt. Ltd. p. 35. ISBN 978-81-203-1380-4.
  3. Hans-Arno Jacobsen (30 November 2003). Distributed Infrastructure Support for Electronic Commerce Applications. Springer Science & Business Media. p. 14. ISBN 978-1-4020-7648-0.