| Developer(s) | MapR |
|---|---|
| Full name | MapR FS |
| Introduced | 2011 with Linux |
| Structures | |
| Directory contents | B-tree |
| File allocation | Multi-level B-tree |
| Limits | |
| Max volume size | Unlimited |
| Max file size | 16 EiB |
| Max no. of files | Unlimited |
| Features | |
| File system permissions | Standard Unix, access control expressions |
| Transparent compression | Yes |
| Transparent encryption | Yes |
| Other | |
| Supported operating systems | Linux |
The MapR File System (MapR FS) is a clustered file system that supports both very large-scale and high-performance uses.[1] MapR FS supports a variety of interfaces including conventional read/write file access via NFS and a FUSE interface, as well as via the HDFS interface used by many systems such as Apache Hadoop and Apache Spark.[2][3] In addition to file-oriented access, MapR FS supports access to tables and message streams using the Apache HBase and Apache Kafka APIs, as well as via a document database interface.
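Because MapR FS exposes the HDFS interface, programs written against the standard Hadoop `FileSystem` API can read from a MapR cluster without code changes. The following is a minimal sketch, assuming the Hadoop client libraries and a MapR client configuration are on the classpath; the `maprfs:///` URI scheme is the conventional way to address a MapR cluster, and the file path shown is hypothetical:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadViaHdfsApi {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Any program written against the HDFS FileSystem API can point at a
        // MapR cluster simply by using a different URI scheme.
        FileSystem fs = FileSystem.get(URI.create("maprfs:///"), conf);
        // Hypothetical path; substitute a file that exists on your cluster.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/user/alice/data.txt"))))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```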
First released in 2010,[4] MapR FS is now typically described as the MapR Converged Data Platform due to the addition of tabular and messaging interfaces. The same core technology is, however, used to implement all of these forms of persistent data storage and all of the interfaces are ultimately supported by the same server processes. To distinguish the different capabilities of the overall data platform, the term MapR FS is used more specifically to refer to the file-oriented interfaces, MapR DB or MapR JSON DB is used to refer to the tabular interfaces and MapR Streams is used to describe the message streaming capabilities.
MapR FS is a cluster filesystem in that it provides uniform access to files and other objects, such as tables, through a universal namespace accessible from any client of the system. Access control is also provided for files, tables and streams using access control expressions. These are an extension of the more common (and more limited) access control lists: instead of permissions composed only of lists of allowed users or groups, access control expressions allow boolean combinations of user ids and groups.
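The difference between the two models can be illustrated with a small sketch. MapR's actual expression syntax and evaluation engine are proprietary; here an access-control expression is modeled simply as a boolean predicate over a user, which is the essential idea, with all names hypothetical:

```java
import java.util.Set;
import java.util.function.Predicate;

// Illustrative model only: a user has an id and a set of groups, and an
// access-control expression is a boolean predicate over that user. This
// demonstrates why boolean combinations are more expressive than a flat
// allow-list, but is not MapR's actual implementation.
public class AceSketch {
    record User(String id, Set<String> groups) {}

    static Predicate<User> user(String id)    { return u -> u.id().equals(id); }
    static Predicate<User> group(String name) { return u -> u.groups().contains(name); }

    public static void main(String[] args) {
        // A flat ACL can only enumerate who is allowed:
        Predicate<User> acl = user("alice").or(group("admins"));
        // An expression can also combine groups and exclude individuals:
        Predicate<User> ace = group("engineering").and(group("admins"))
                                                  .and(user("bob").negate());

        User bob = new User("bob", Set.of("engineering", "admins"));
        System.out.println(acl.test(bob)); // true: an ACL cannot carve out exceptions
        System.out.println(ace.test(bob)); // false: the expression excludes bob explicitly
    }
}
```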
MapR FS was developed in 2009 by MapR Technologies to extend the capabilities of Apache Hadoop by providing a more performant and stable platform. The design of MapR FS was influenced by various other systems such as the Andrew File System (AFS). The concept of volumes in MapR FS bears a strong similarity to volumes in AFS from the point of view of users, although the implementation is completely different. One major difference between AFS and MapR FS is that the latter uses a strong consistency model while AFS provides only weak consistency.
To meet the original goals of supporting Hadoop programs, MapR FS supports the HDFS API by translating HDFS function calls into an internal API based on a custom remote procedure call (RPC) mechanism. The normal write-once model of HDFS is replaced in MapR FS by a fully mutable file system even when using the HDFS API. The ability to support file mutation allows the implementation of an NFS server that translates NFS operations into internal MapR RPC calls. Similar mechanisms are used to allow a Filesystem in Userspace (FUSE) interface and an approximate emulation of the Apache HBase API.
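Because the file system is fully mutable and is exported over NFS and FUSE, ordinary POSIX-style programs can update files in place, something the write-once HDFS model does not permit. A minimal sketch, assuming the cluster is NFS-mounted under the conventional /mapr/<cluster-name> mount point (the cluster name and file path here are hypothetical):

```java
import java.io.RandomAccessFile;

// Sketch of a random in-place update, which the write-once HDFS model
// does not allow. Assumes the cluster is NFS- or FUSE-mounted at /mapr
// (the conventional mount point; adjust for your installation).
public class InPlaceUpdate {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile f =
                 new RandomAccessFile("/mapr/my.cluster.com/user/alice/log.bin", "rw")) {
            f.seek(4096);   // jump into the middle of the file
            f.writeInt(42); // overwrite 4 bytes in place
        }
    }
}
```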
Files in MapR FS are internally implemented by splitting the file contents into chunks, typically 256 MB each, although the chunk size is specific to each file. Each chunk is written to containers, which are the unit of replication in the cluster. Containers are replicated either in a linear fashion, in which each replica forwards write operations to the next replica in the chain, or in a star pattern, in which the master replica forwards write operations to all other replicas at the same time. Writes are acknowledged by the master replica once all writes to all replicas complete. Internally, containers implement B-trees, which are used at multiple levels, for example to map a file offset to the correct chunk within a file, or to map a file offset to the correct 8 kB block within a chunk.
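The two-level mapping described above can be illustrated with plain arithmetic. In the real system these indices are resolved through B-tree lookups inside containers; integer division stands in for the lookups here, with the sizes taken from the text (256 MB default chunks, 8 kB blocks):

```java
// Sketch of the two-level offset mapping: file offset -> chunk -> 8 kB block.
public class OffsetMapping {
    static final long CHUNK_SIZE = 256L * 1024 * 1024; // 256 MB (per-file default)
    static final long BLOCK_SIZE = 8L * 1024;          // 8 kB block within a chunk

    public static void main(String[] args) {
        long fileOffset = 1_500_000_000L;                // byte offset into the file

        long chunkIndex    = fileOffset / CHUNK_SIZE;    // which chunk of the file
        long offsetInChunk = fileOffset % CHUNK_SIZE;
        long blockIndex    = offsetInChunk / BLOCK_SIZE; // which 8 kB block in the chunk

        System.out.printf("chunk %d, block %d within that chunk%n",
                          chunkIndex, blockIndex);
    }
}
```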
These B-trees are also used to implement directories: a long hash of each file or directory name in the directory is used as the key to find the corresponding child file or directory.
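A sketch of this hash-keyed directory scheme follows. The actual hash function MapR FS uses is not public; SHA-256 truncated to 64 bits is an illustrative stand-in, and an ordered map stands in for the container B-tree:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.TreeMap;

// Sketch of a hash-keyed directory: entry names are hashed to fixed-width
// keys and stored in an ordered map standing in for the container B-tree.
public class HashedDirectory {
    private final TreeMap<Long, String> entries = new TreeMap<>();

    // Illustrative stand-in for the "long hash" of an entry name:
    // the first 8 bytes of a SHA-256 digest, packed into a long.
    static long nameKey(String name) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256")
                                .digest(name.getBytes(StandardCharsets.UTF_8));
        long key = 0;
        for (int i = 0; i < 8; i++) key = (key << 8) | (d[i] & 0xFF);
        return key;
    }

    void add(String name) throws Exception { entries.put(nameKey(name), name); }
    String lookup(String name) throws Exception { return entries.get(nameKey(name)); }

    public static void main(String[] args) throws Exception {
        HashedDirectory dir = new HashedDirectory();
        dir.add("reports");
        dir.add("data.csv");
        System.out.println(dir.lookup("data.csv")); // data.csv
    }
}
```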
A volume is a special data structure similar to a directory in many ways, except that it allows additional access control and management operations. A notable capability of volumes is that the nodes on which a volume may reside within a cluster can be restricted to control performance, particularly in heavily contended multi-tenant systems that are running a wide variety of workloads.
Proprietary technology is used in MapR FS to implement transactions in containers and to achieve consistent crash recovery.
Other features of the filesystem include: [5]
Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems (Sun) in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call system. NFS is an open IETF standard defined in a Request for Comments (RFC), allowing anyone to implement the protocol.
The Andrew File System (AFS) is a distributed file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. It was developed by Carnegie Mellon University as part of the Andrew Project. Originally named "Vice", "Andrew" refers to Andrew Carnegie and Andrew Mellon. Its primary use is in distributed computing.
Google File System is a proprietary distributed file system developed by Google to provide efficient, reliable access to data using large clusters of commodity hardware. Google File System was replaced by Colossus in 2010.
GPFS is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, or a combination of these. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the TOP500 list. For example, it is the filesystem of Summit at Oak Ridge National Laboratory, which was the #1 fastest supercomputer in the world in the November 2019 TOP500 list. Summit is a 200 petaflops system composed of more than 9,000 POWER9 processors and 27,000 NVIDIA Volta GPUs. Its storage filesystem, called Alpine, has 250 PB of storage using Spectrum Scale on IBM ESS storage hardware, capable of approximately 2.5 TB/s of sequential I/O and 2.2 TB/s of random I/O.
Apache Hadoop is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Hadoop was originally designed for computer clusters built from commodity hardware, which is still the common use. It has since also found use on clusters of higher-end hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.
A clustered file system (CFS) is a file system which is shared by being simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system. Clustered file systems can provide features like location-independent addressing and redundancy which improve reliability or reduce the complexity of the other parts of the cluster. Parallel file systems are a type of clustered file system that spread data across multiple storage nodes, usually for redundancy or performance.
Chiron Filesystem is a fault-tolerant replication file system.
Ceph is a free and open-source software-defined storage platform that provides object storage, block storage, and file storage built on a common distributed cluster foundation. Ceph provides completely distributed operation without a single point of failure and scalability to the exabyte level, and is freely available. Since version 12 (Luminous), Ceph does not rely on any other conventional filesystem; it directly manages HDDs and SSDs with its own storage backend, BlueStore, and can expose a POSIX filesystem.
HBase is an open-source non-relational distributed database modeled after Google's Bigtable and written in Java. It is developed as part of Apache Software Foundation's Apache Hadoop project and runs on top of HDFS or Alluxio, providing Bigtable-like capabilities for Hadoop. That is, it provides a fault-tolerant way of storing large quantities of sparse data.
Sector/Sphere is an open source software suite for high-performance distributed data storage and processing. It can be broadly compared to Google's GFS and MapReduce technology. Sector is a distributed file system targeting data storage over a large number of commodity computers. Sphere is the programming architecture framework that supports in-storage parallel data processing for data stored in Sector. Sector/Sphere operates in a wide area network (WAN) setting.
Apache Hive is a data warehouse software project built on top of Apache Hadoop for providing data query and analysis. Hive gives an SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop. Traditional SQL queries must be implemented in the MapReduce Java API to execute SQL applications and queries over distributed data. Hive provides the necessary SQL abstraction to integrate SQL-like queries (HiveQL) into the underlying Java without the need to implement queries in the low-level Java API. Since most data warehousing applications work with SQL-based querying languages, Hive aids the portability of SQL-based applications to Hadoop. While initially developed by Facebook, Apache Hive is used and developed by other companies such as Netflix and the Financial Industry Regulatory Authority (FINRA). Amazon maintains a software fork of Apache Hive included in Amazon Elastic MapReduce on Amazon Web Services.
In computing, a distributed file system (DFS) or network file system is any file system that allows access to files from multiple hosts sharing via a computer network. This makes it possible for multiple users on multiple machines to share files and storage resources.
Apache Drill is an open-source software framework that supports data-intensive distributed applications for interactive analysis of large-scale datasets. Built chiefly by contributions from developers from MapR, Drill is inspired by Google's Dremel system. Drill is an Apache top-level project, designated as such by the Apache Software Foundation in December 2016. Tom Shiran is the founder of the Apache Drill Project.
Oracle NoSQL Database is a NoSQL-type distributed key-value database from Oracle Corporation. It provides transactional semantics for data manipulation, horizontal scalability, and simple administration and monitoring.
Quantcast File System (QFS) is an open-source distributed file system software package for large-scale MapReduce or other batch-processing workloads. It was designed as an alternative to the Apache Hadoop Distributed File System (HDFS), intended to deliver better performance and cost-efficiency for large-scale processing clusters.
A distributed file system for cloud is a file system that allows many clients to have access to data and supports operations on that data. Each data file may be partitioned into several parts called chunks. Each chunk may be stored on different remote machines, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture: each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured. Confidentiality, availability and integrity are the main keys for a secure system.
Apache Spark is an open-source unified analytics engine for large-scale data processing. Spark provides an interface for programming clusters with implicit data parallelism and fault tolerance. Originally developed at the University of California, Berkeley's AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since.
Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Originally designed by Google, the project is now maintained by the Cloud Native Computing Foundation.
LizardFS is an open source distributed file system that is POSIX-compliant and licensed under GPLv3. It was released in 2013 as a fork of MooseFS. LizardFS also offers paid technical support, including cluster configuration and setup and active cluster monitoring.