The DCE Distributed File System (DCE/DFS) [1] was the remote file access protocol used with the Distributed Computing Environment. It was a variant of the Andrew File System (AFS), based on the AFS Version 3.0 protocol, which was developed commercially by Transarc Corporation. AFS Version 3.0 was in turn based on the AFS Version 2.0 protocol (also used by the Coda disconnected file system), originally developed at Carnegie Mellon University.
DCE/DFS consisted of multiple cooperating components that provided a network file system with strong file system semantics, attempting to mimic the behavior of POSIX local file systems while taking advantage of performance optimizations when possible. A DCE/DFS client system maintained a locally managed cache containing copies (or regions) of the original file. The client coordinated with the server system where the original copy of the file was stored, ensuring that multiple clients accessing the same file would invalidate and re-fetch their cached file data when the original file changed.
The advantage of this approach was very good performance even over slow network connections, because most file access was actually done against the locally cached regions of the file. If the server failed, the client could continue making changes to the file locally, storing it back to the server when it became available again.
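To make the caching behavior concrete, here is a minimal sketch in Python. Everything in it is illustrative: the class and method names are invented, and the real DCE/DFS used a server-granted token scheme rather than the version polling shown here. The sketch only demonstrates the two patterns described above: revalidating cached data against the server, and queuing local writes while the server is unreachable.

```python
class FileServer:
    """Toy stand-in for a DFS file server: keeps versioned file contents."""

    def __init__(self):
        self.files = {}        # path -> (version, data)
        self.available = True  # flip to False to simulate a server failure

    def _check(self):
        if not self.available:
            raise ConnectionError("server unavailable")

    def version(self, path):
        self._check()
        return self.files[path][0]

    def fetch(self, path):
        self._check()
        return self.files[path]

    def store(self, path, data):
        self._check()
        version = self.files.get(path, (0, b""))[0] + 1
        self.files[path] = (version, data)
        return version


class CachingClient:
    """Toy DFS client: reads from a local cache after revalidation,
    and queues writes locally when the server is down."""

    def __init__(self, server):
        self.server = server
        self.cache = {}    # path -> (version, data)
        self.pending = {}  # dirty local writes awaiting the server

    def read(self, path):
        try:
            # Revalidate: re-fetch only if the server's copy has changed.
            if (path not in self.cache
                    or self.cache[path][0] != self.server.version(path)):
                self.cache[path] = self.server.fetch(path)
        except ConnectionError:
            pass  # disconnected: serve whatever is cached locally
        return self.cache[path][1]

    def write(self, path, data):
        try:
            version = self.server.store(path, data)
            self.cache[path] = (version, data)
        except ConnectionError:
            # Server failed: keep changing the file locally.
            self.pending[path] = data
            old_version = self.cache.get(path, (0, b""))[0]
            self.cache[path] = (old_version, data)

    def flush(self):
        # Store queued changes back once the server is available again.
        for path, data in list(self.pending.items()):
            self.cache[path] = (self.server.store(path, data), data)
            del self.pending[path]


server = FileServer()
server.store("/doc/notes", b"first draft")
client = CachingClient(server)
client.read("/doc/notes")                    # miss: fetched from the server
server.available = False
client.write("/doc/notes", b"second draft")  # queued locally
print(client.read("/doc/notes"))             # served from the local cache
server.available = True
client.flush()                               # stored back to the server
```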
DCE/DFS also divorced the concept of logical units of management (Filesets) from the underlying volume on which the fileset was stored. In doing this it allowed administrative control of the location for the fileset in a manner that was transparent to the end user. To support this and other advanced DCE/DFS features, a local journaling file system (DCE/LFS also known as Episode) was developed to provide the full range of support options.
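The separation can be illustrated with a small, entirely hypothetical sketch (it is not the interface of DCE/DFS's actual fileset location database): a locator maps each fileset name to the volume currently holding it, so an administrator can move a fileset between volumes without changing any path a user sees.

```python
class FilesetLocator:
    """Toy fileset location table: fileset name -> (server, volume)."""

    def __init__(self):
        self.locations = {}

    def register(self, fileset, server, volume):
        self.locations[fileset] = (server, volume)

    def move(self, fileset, new_server, new_volume):
        # Administrative relocation; user-visible paths are unaffected.
        self.locations[fileset] = (new_server, new_volume)

    def resolve(self, path):
        # By convention here, the first path component names the fileset.
        fileset = path.split("/", 1)[0]
        return self.locations[fileset]


locator = FilesetLocator()
locator.register("user.alice", "fs1.example.com", "vol07")
print(locator.resolve("user.alice/notes.txt"))  # ('fs1.example.com', 'vol07')
locator.move("user.alice", "fs2.example.com", "vol12")
print(locator.resolve("user.alice/notes.txt"))  # same path, new location
```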
IBM has not maintained DCE/DFS since 2005; an archived copy of the product page is available at https://web.archive.org/web/20071009171709/http://www-306.ibm.com/software/stormgmt/dfs/.
IBM was working on a replacement for DCE/DFS called ADFS (Advanced Distributed File System). One major goal of the project was to decouple DFS from the complexities of DCE's cell directory service (CDS) and security service (secd). Another key feature would have been the elimination of the enctype limitations associated with DCE/RPC. No public mention of this effort has been made since 2005, leading many to believe the project was cancelled.
The DCE Distributed File System (DFS) was adopted by the Open Software Foundation in 1989 as part of its Distributed Computing Environment.
In computing, a cache is a hardware or software component that stores data so that future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation or a copy of data stored elsewhere. A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store; thus, the more requests that can be served from the cache, the faster the system performs.
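As a minimal illustration of the hit/miss distinction (the class and the backing-store callback below are invented for the example), a read-through cache checks its local copy first and only falls back to the slower data source on a miss:

```python
class ReadThroughCache:
    """Tiny read-through cache with hit/miss counters."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # slower source of truth
        self.data = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.data:
            self.hits += 1                  # hit: served from the cache
        else:
            self.misses += 1                # miss: read the slow store
            self.data[key] = self.backing_store(key)
        return self.data[key]


# Stand-in for an expensive computation or a read from slower storage.
cache = ReadThroughCache(lambda key: key.upper())
cache.get("a")
cache.get("a")
cache.get("b")
print(cache.hits, cache.misses)  # 1 2: repeated requests are served faster
```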
In distributed computing, a remote procedure call (RPC) occurs when a computer program causes a procedure (subroutine) to execute in a different address space, coded as if it were a normal (local) procedure call, without the programmer explicitly coding the details of the remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. This is a form of client–server interaction, typically implemented via a request–response message-passing system. In the object-oriented programming paradigm, RPCs are represented by remote method invocation (RMI). The RPC model implies a level of location transparency, namely that calling procedures are largely the same whether they are local or remote; but they are usually not identical, so local calls can be distinguished from remote calls. Remote calls are usually orders of magnitude slower and less reliable than local calls, so distinguishing them is important.
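Python's standard-library XML-RPC modules show this location transparency in a few lines: the client invokes what looks like an ordinary method, while the procedure body actually runs in the server's address space. (The add function, host, and port are arbitrary choices for the example.)

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    """Runs in the server process; the client never executes this body."""
    return a + b

# Server side: expose the procedure over a request-response transport.
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: a stub object makes the remote call look like a local one.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))  # written like a normal call, executed remotely
```

The call still differs from a local one in exactly the ways noted above: it can fail with a network error, and it takes far longer than an in-process addition.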
Samba is a free software re-implementation of the SMB networking protocol, and was originally developed by Andrew Tridgell. Samba provides file and print services for various Microsoft Windows clients and can integrate with a Microsoft Windows Server domain, either as a Domain Controller (DC) or as a domain member. As of version 4, it supports Active Directory and Microsoft Windows NT domains.
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems (Sun) in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call system. NFS is an open IETF standard defined in a Request for Comments (RFC), allowing anyone to implement the protocol.
Rational ClearCase is a family of computer software tools that supports software configuration management (SCM) of source code and other software development assets. It also supports design-data management of electronic design artifacts, thus enabling hardware and software co-development. ClearCase includes revision control and forms the basis for configuration management at large and medium-sized businesses, accommodating projects with hundreds or thousands of developers. It is developed by IBM.
The Andrew File System (AFS) is a distributed file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. It was developed by Carnegie Mellon University as part of the Andrew Project and was originally named "Vice"; the name "Andrew" refers to Andrew Carnegie and Andrew Mellon. Its primary use is in distributed computing.
Network-attached storage (NAS) is a file-level computer data storage server connected to a computer network, providing data access to a heterogeneous group of clients. The term "NAS" can refer to the technology and systems involved, or to a specialized device built for such functionality.
In computing, the Distributed Computing Environment (DCE) software system was developed in the early 1990s from the work of the Open Software Foundation (OSF), a consortium that included Apollo Computer, IBM, Digital Equipment Corporation, and others. The DCE supplies a framework and a toolkit for developing client/server applications. The framework includes: a remote procedure call (RPC) mechanism known as DCE/RPC, a naming (directory) service, a time service, an authentication service, and a distributed file system known as DCE/DFS.
A diskless node is a workstation or personal computer without disk drives, which employs network booting to load its operating system from a server.
Distributed File System (DFS) is a set of client and server services that allow an organization using Microsoft Windows servers to organize many distributed SMB file shares into a distributed file system. DFS has two components to its service: location transparency and redundancy. Together, these components improve data availability in the case of failure or heavy load by allowing shares in multiple different locations to be logically grouped under one folder, the "DFS root".
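A toy model of that grouping (purely illustrative; this is not Microsoft's actual DFS referral protocol or API): the DFS root maps each logical folder to a list of share targets, and a client is referred to the first target that is currently reachable.

```python
# Hypothetical DFS namespace: logical folders under one root, each backed
# by one or more SMB share targets for redundancy.
dfs_root = {
    r"\\corp\dfs\projects": [r"\\fileserver1\projects",
                             r"\\fileserver2\projects"],
    r"\\corp\dfs\home": [r"\\fileserver3\home"],
}

def refer(logical_path, reachable):
    """Return the first reachable target backing a logical DFS folder."""
    for target in dfs_root[logical_path]:
        if reachable(target):
            return target
    raise OSError("no target available for " + logical_path)

# With fileserver1 down, clients are transparently referred to fileserver2.
print(refer(r"\\corp\dfs\projects", lambda t: "fileserver1" not in t))
```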
In computing, a fileset is a set of computer files linked by a defining property or common characteristic. There are different types of fileset, though the context will usually make the defining characteristic clear. Sometimes it is necessary to state the fileset type explicitly to avoid ambiguity; for example, the Emacs editor explicitly distinguishes its Version Control (VC) fileset type from its "named files" fileset type.
Gluster Inc. was a software company that provided an open source platform for scale-out public and private cloud storage. The company was privately funded and headquartered in Sunnyvale, California, with an engineering center in Bangalore, India. Gluster was funded by Nexus Venture Partners and Index Ventures. Gluster was acquired by Red Hat on October 7, 2011.
Transarc Corporation was a private Pittsburgh-based software company founded in 1989 by Jeffrey Eppinger, Michael L. Kazar, Alfred Spector, and Dean Thompson of Carnegie Mellon University.
A web desktop or webtop is a desktop environment embedded in a web browser or similar client application. A webtop integrates web applications, web services, client–server applications, application servers, and applications on the local client into a desktop environment using the desktop metaphor. Web desktops provide an environment similar to that of Windows, Mac, or a graphical user interface on Unix and Linux systems. It is a virtual desktop running in a web browser. In a webtop the applications, data, files, configuration, settings, and access privileges reside remotely over the network. Much of the computing takes place remotely. The browser is primarily used for display and input purposes.
In computing, the Self-certifying File System (SFS) is a global, decentralized, distributed file system for Unix-like operating systems that also provides transparent encryption of communications as well as authentication. It aims to be the universal distributed file system by providing uniform access to any available server; however, its usefulness is limited by the low deployment of SFS clients. It was introduced in David Mazières's June 2000 doctoral thesis.
A clustered file system is a file system which is shared by being simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system. Clustered file systems can provide features like location-independent addressing and redundancy which improve reliability or reduce the complexity of the other parts of the cluster. Parallel file systems are a type of clustered file system that spread data across multiple storage nodes, usually for redundancy or performance.
Microsoft Application Virtualization is an application virtualization and application streaming solution from Microsoft. It was originally developed by Softricity, a company based in Boston, Massachusetts, acquired by Microsoft on July 17, 2006. App-V represents Microsoft's entry to the application virtualization market, alongside their other virtualization technologies such as Hyper-V, Microsoft User Environment Virtualization (UE-V), Remote Desktop Services, and System Center Virtual Machine Manager.
Entera is a middleware product introduced in the mid-1990s by the Open Environment Corporation (OEC), an early implementation of the three-tier client–server development model. Entera viewed business software as a collection of services, rather than as a monolithic application.
Distributed Data Management Architecture (DDM) is IBM's open, published software architecture for creating, managing and accessing data on a remote computer. DDM was initially designed to support record-oriented files; it was extended to support hierarchical directories, stream-oriented files, queues, and system command processing; it was further extended to be the base of IBM's Distributed Relational Database Architecture (DRDA); and finally, it was extended to support data description and conversion. Defined in the period from 1980 to 1993, DDM specifies necessary components, messages, and protocols, all based on the principles of object-orientation. DDM is not, in itself, a piece of software; the implementation of DDM takes the form of client and server products. As an open architecture, products can implement subsets of DDM architecture and products can extend DDM to meet additional requirements. Taken together, DDM products implement a distributed file system.