Self-certifying File System

In computing, the Self-certifying File System (SFS) is a global, decentralized, distributed file system for Unix-like operating systems that also provides transparent encryption and authentication of communications. It aims to be a universal distributed file system by providing uniform access to any available server; in practice, however, its usefulness is limited by the low deployment of SFS clients. It was developed by David Mazières and presented in his June 2000 doctoral thesis.

Implementation

The SFS client daemon implements Sun's Network File System (NFS) protocol for communicating with the operating system, and can therefore run on any operating system that supports NFS, including Windows.[1] The client manages connections to remote file systems as needed, acting as a kind of protocol translation layer. The SFS server works like other distributed file system servers, exporting an existing disk file system over the network using the SFS protocol. On Unix-like systems, SFS file systems typically appear under /sfs/hostname:hostID. When an SFS file system is first accessed through such a path, the client connects to the server and creates the directory ("automounting").
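
The pathname layout described above can be illustrated with a short sketch. The following Python fragment is not SFS source code; the helper names and the example hostID value are assumptions made for illustration. It shows how a client-side component might split a self-certifying path into the server's hostname, its hostID, and the remaining path.

```python
# Illustrative sketch (not SFS code): split a self-certifying pathname of the
# form /sfs/hostname:hostID/... into its parts. Helper names and the example
# hostID below are made up for illustration.
from typing import NamedTuple


class SfsLocation(NamedTuple):
    hostname: str  # DNS name (or address) of the SFS server
    host_id: str   # identifier derived from the server's hostname and public key


def parse_sfs_path(path: str) -> tuple[SfsLocation, str]:
    """Split /sfs/hostname:hostID/... into the server location and the
    path relative to that server's exported file system."""
    if not path.startswith("/sfs/"):
        raise ValueError("not an SFS path: " + path)
    location, _, relative = path[len("/sfs/"):].partition("/")
    hostname, sep, host_id = location.partition(":")
    if not (hostname and sep and host_id):
        raise ValueError("malformed self-certifying location: " + location)
    return SfsLocation(hostname, host_id), "/" + relative


if __name__ == "__main__":
    loc, rel = parse_sfs_path(
        "/sfs/sfs.example.org:vefvsv5wd4hz9isc3rb2x648ish742hy/home/alice")
    print(loc.hostname, loc.host_id, rel)
```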

Differences

The primary motivation behind the file system is to address the shortcomings of hard-wired, administratively configured distributed file systems in large organizations, and of various remote file transfer protocols. It is designed to operate securely across separate administrative realms. For example, with SFS, users could store all of their files on a single remote server and access them securely and transparently from any location, as if they were stored locally, without any special privileges or administrative cooperation (other than running the SFS client daemon). Available file systems are found at the same path regardless of physical location, and are implicitly authenticated by their path names, which include the public-key fingerprint of the server (hence the name "self-certifying").[2]
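
The self-certifying property can be sketched as follows. The real SFS scheme derives the hostID with a specific SHA-1-based construction over the server's location and public key, encoded in base 32; the concatenation and encoding in this Python sketch are simplified assumptions, but the idea is the same: the client recomputes the hash from the public key the server presents and compares it with the hostID embedded in the path, so no certificate authority or out-of-band key distribution is needed.

```python
# Simplified sketch of the self-certifying check. The real SFS hostID uses a
# specific SHA-1-based layout and base-32 alphabet; the concatenation and
# encoding here are assumptions made for illustration only.
import base64
import hashlib


def compute_host_id(hostname: str, public_key: bytes) -> str:
    """Derive the identifier that appears in the self-certifying pathname."""
    digest = hashlib.sha1(hostname.encode() + b"\x00" + public_key).digest()
    return base64.b32encode(digest).decode().lower().rstrip("=")


def server_is_authentic(path_host_id: str, hostname: str, presented_key: bytes) -> bool:
    # The client recomputes the hash from the key the server presents during
    # connection setup and compares it with the hostID taken from the path.
    return compute_host_id(hostname, presented_key) == path_host_id
```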

In addition to this new perspective, SFS addresses some commonly cited limitations of other distributed file systems. For example, NFS and SMB clients must rely on the server for file system security policies, and NFS servers must rely on the client computer for authentication. This often complicates security, as a single compromised computer can breach the security of an entire organization. The NFS and SMB protocols also do not, by themselves, provide confidentiality (encryption) or tamper resistance against other computers on the network without encapsulation layers such as IPsec.

Unlike Coda and AFS, SFS does not provide local caching of remote files and thus is more dependent on network reliability, latency and bandwidth.

References

  1. David Euresti (August 2002). "Self-Certifying File System Implementation for Windows" (PostScript). MIT. Retrieved 2006-12-23.
  2. David Mazières, M. Frans Kaashoek (September 1998). "Escaping the Evils of Centralized Control with Self-certifying Pathnames" (PostScript). Proceedings of the 8th ACM SIGOPS European Workshop: Support for Composing Distributed Applications. Sintra, Portugal: MIT. Retrieved 2006-12-23.