Comparison of distributed file systems

In computing, a distributed file system (DFS) or network file system is any file system that allows access from multiple hosts to files shared via a computer network. This makes it possible for multiple users on multiple machines to share files and storage resources.

Distributed file systems differ in their performance, mutability of content, handling of concurrent writes, handling of permanent or temporary loss of nodes or storage, and their policy of storing content.

Locally managed

FOSS

| Client | Written in | License | Access API | High availability | Shards | Efficient Redundancy | Redundancy Granularity | Initial release year | Memory requirements (GB) |
|---|---|---|---|---|---|---|---|---|---|
| Alluxio (Virtual Distributed File System) | Java | Apache License 2.0 | HDFS, FUSE, HTTP/REST, S3 | hot standby | No | Replication [1] | File [2] | 2013 | |
| Ceph | C++ | LGPL | librados (C, C++, Python, Ruby), S3, Swift, FUSE | Yes | Yes | Pluggable erasure codes [3] | Pool [4] | 2010 | 1 per TB of storage |
| Coda | C | GPL | C | Yes | Yes | Replication | Volume [5] | 1987 | |
| GlusterFS | C | GPLv3 | libglusterfs, FUSE, NFS, SMB, Swift, libgfapi | mirror | Yes | Reed-Solomon [6] | Volume [7] | 2005 | |
| HDFS | Java | Apache License 2.0 | Java and C client, HTTP, FUSE [8] | transparent master failover | No | Reed-Solomon [9] | File [10] | 2005 | |
| IPFS | Go | Apache 2.0 or MIT | HTTP gateway, FUSE, Go client, JavaScript client, command-line tool | Yes | with IPFS Cluster | Replication [11] | Block [12] | 2015 [13] | |
| LizardFS [14] | C++ | GPLv3 | POSIX, FUSE, NFS-Ganesha, Ceph FSAL (via libcephfs) | master | No | Reed-Solomon [15] | File [16] | 2013 | |
| Lustre | C | GPLv2 | POSIX, NFS-Ganesha, NFS, SMB | Yes | Yes | No redundancy [17] [18] | No redundancy [19] [20] | 2003 | |
| MinIO | Go | AGPL 3.0 | AWS S3 API, FTP, SFTP | Yes | Yes | Reed-Solomon [21] | Object [22] | 2014 | |
| MooseFS | C | GPLv2 | POSIX, FUSE | master | No | Replication [23] | File [24] | 2008 | |
| OpenAFS | C | IBM Public License | Virtual file system, Installable File System | | | Replication | Volume [25] | 2000 [26] | |
| OpenIO [27] | C | AGPLv3 / LGPLv3 | Native (Python, C, Java), HTTP/REST, S3, Swift, FUSE (POSIX, NFS, SMB, FTP) | Yes | | Pluggable erasure codes [28] | Object [29] | 2015 | 0.5 |
| Quantcast File System | C | Apache License 2.0 | C++ client, FUSE (C++ server: MetaServer and ChunkServer are both in C++) | master | No | Reed-Solomon [30] | File [31] | 2012 | |
| RozoFS | C, Python | GPLv2 | FUSE, SMB, NFS, key/value | Yes | | Mojette [32] | Volume [33] | 2011 [34] | |
| Tahoe-LAFS | Python | GNU GPL [35] | HTTP (browser or CLI), SFTP, FTP, FUSE via SSHFS, pyfilesystem | | | Reed-Solomon [36] | File [37] | 2007 | |
| XtreemFS | Java, C++ | BSD License | libxtreemfs (Java, C++), FUSE | | | Replication [38] | File [39] | 2009 | |

Proprietary

| Client | Written in | License | Access API |
|---|---|---|---|
| BeeGFS | C / C++ | FRAUNHOFER FS (FhGFS) EULA, [40] GPLv2 client | POSIX |
| ObjectiveFS [41] | C | Proprietary | POSIX, FUSE |
| Spectrum Scale (GPFS) | C, C++ | Proprietary | POSIX, NFS, SMB, Swift, S3, HDFS |
| MapR-FS | C, C++ | Proprietary | POSIX, NFS, FUSE, S3, HDFS, CLI |
| Isilon OneFS | C/C++ | Proprietary | POSIX, NFS, SMB/CIFS, HDFS, HTTP, FTP, SWIFT Object, CLI, REST API |
| Qumulo | C/C++ | Proprietary | POSIX, NFS, SMB/CIFS, CLI, S3, REST API |
| Scality | C | Proprietary | FUSE, NFS, REST, AWS S3 |

Remote access

| Name | Run by | Access API |
|---|---|---|
| Amazon S3 | Amazon.com | HTTP (REST/SOAP) |
| Google Cloud Storage | Google | HTTP (REST) |
| SWIFT (part of OpenStack) | Rackspace, Hewlett-Packard, others | HTTP (REST) |
| Microsoft Azure | Microsoft | HTTP (REST) |
| IBM Cloud Object Storage | IBM (formerly Cleversafe) [42] | HTTP (REST) |
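All of these services expose an HTTP (REST) API in which each object is addressed by a bucket and a key, and plain HTTP verbs (GET, PUT, DELETE) operate on it. As a sketch of how S3-style addressing works, the snippet below builds the two common URL layouts — virtual-hosted style (bucket in the hostname) and path-style (bucket in the path); real requests additionally carry authentication headers (for S3, AWS Signature Version 4), which are omitted here:

```python
# Sketch of S3-style object addressing: the same bucket/key pair can be
# expressed as a virtual-hosted URL or a path-style URL. Authentication
# (request signing) is deliberately left out of this illustration.
from urllib.parse import quote

def object_url(bucket: str, key: str, *, endpoint: str = "s3.amazonaws.com",
               virtual_hosted: bool = True) -> str:
    safe_key = quote(key)                  # keys may contain spaces etc.; '/' is kept
    if virtual_hosted:
        return f"https://{bucket}.{endpoint}/{safe_key}"
    return f"https://{endpoint}/{bucket}/{safe_key}"

print(object_url("my-bucket", "logs/2024/app.log"))
# https://my-bucket.s3.amazonaws.com/logs/2024/app.log
print(object_url("my-bucket", "logs/2024/app.log", virtual_hosted=False))
# https://s3.amazonaws.com/my-bucket/logs/2024/app.log
```

The same bucket/key scheme, with a different endpoint, applies to S3-compatible services such as Google Cloud Storage's XML API and OpenStack Swift's S3 middleware.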

Comparison

Researchers have published a functional and experimental analysis of several distributed file systems, including HDFS, Ceph, GlusterFS, Lustre, and an old (1.6.x) version of MooseFS; however, the study dates from 2013 and much of its information is now outdated (for example, MooseFS had no high availability for its Metadata Server at that time). [43]

The cloud-based remote distributed storage services from major vendors expose different APIs and different consistency models. [44]

References

  1. "Caching: Managing Data Replication in Alluxio".
  2. "Caching: Managing Data Replication in Alluxio".
  3. "Erasure Code Profiles".
  4. "Pools".
  5. Satyanarayanan, Mahadev; Kistler, James J.; Kumar, Puneet; Okasaki, Maria E.; Siegel, Ellen H.; Steere, David C. "Coda: A Highly Available File System for a Distributed Workstation Environment" (PDF).
  6. "Erasure coding implementation". GitHub. 2 November 2021.
  7. "Setting up GlusterFS Volumes".
  8. "MountableHDFS".
  9. "HDFS-7285 Erasure Coding Support inside HDFS".
  10. "Apache Hadoop: setrep".
  11. Erasure coding plan: "Reed-Solomon layer over IPFS #196" and "Erasure Coding Layer #6". GitHub.
  12. "CLI Commands: ipfs bitswap wantlist".
  13. "Why The Internet Needs IPFS Before It's Too Late". 4 October 2015.
  14. "Is LizardFS development still alive?". GitHub.
  15. "Configuring Replication Modes".
  16. "Configuring Replication Modes: Set and show the goal of a file/directory".
  17. "Lustre Operations Manual: What a Lustre File System Is (and What It Isn't)".
  18. Reed-Solomon in progress: "LU-10911 FLR2: Erasure coding".
  19. "Lustre Operations Manual: Lustre Features".
  20. File-level redundancy plan: "File Level Redundancy Solution Architecture".
  21. "MinIO Erasure Code Quickstart Guide".
  22. "MinIO Storage Class Quickstart Guide". GitHub.
  23. Only available in the proprietary version 4.x: "[feature] erasure-coding #8". GitHub.
  24. "mfsgoal(1)".
  25. "Replicating Volumes (Creating Read-only Volumes)".
  26. "OpenAFS".
  27. "OpenIO SDS Documentation". docs.openio.io.
  28. "Erasure Coding".
  29. "Declare Storage Policies".
  30. "The Quantcast File System" (PDF).
  31. "qfs/src/cc/tools/cptoqfs_main.cc". GitHub. 8 December 2021.
  32. "About RozoFS: Mojette Transform".
  33. "Setting up RozoFS: Exportd Configuration File".
  34. "Initial commit". GitHub.
  35. "About Tahoe-LAFS". GitHub. 24 February 2022.
  36. "zfec -- a fast C implementation of Reed-Solomon erasure coding". GitHub. 24 February 2022.
  37. "Tahoe-LAFS Architecture: File Encoding".
  38. "Under the Hood: File Replication".
  39. "Quickstart: Replicate A File".
  40. "FRAUNHOFER FS (FhGFS) END USER LICENSE AGREEMENT". Fraunhofer Society. 2012-02-22.
  41. "ObjectiveFS official website".
  42. "IBM Plans to Acquire Cleversafe for Object Storage in Cloud". www-03.ibm.com. 2015-10-05. Archived from the original on October 8, 2015. Retrieved 2019-05-06.
  43. Séguin, Cyril; Depardon, Benjamin; Le Mahec, Gaël. "Analysis of Six Distributed File Systems" (PDF). HAL.
  44. "Data Consistency Models of Public Cloud Storage Services: Amazon S3, Google Cloud Storage and Windows Azure Storage". SysTutorials. 4 February 2014. Retrieved 19 June 2017.