Object storage

Object storage (also known as object-based storage [1] or blob storage) is a computer data storage approach that manages data as "blobs" or "objects", as opposed to other storage architectures like file systems, which manage data as a file hierarchy, and block storage, which manages data as blocks within sectors and tracks. [2] Each object is typically associated with a variable amount of metadata and a globally unique identifier. Object storage can be implemented at multiple levels, including the device level (object-storage device), the system level, and the interface level. In each case, object storage seeks to enable capabilities not addressed by other storage architectures, such as interfaces that are directly programmable by the application, a namespace that can span multiple instances of physical hardware, and data-management functions like data replication and data distribution at object-level granularity.

Object storage systems allow retention of massive amounts of unstructured data in which data is written once and read once (or many times). [3] Object storage is used for purposes such as storing objects like videos and photos on Facebook, songs on Spotify, or files in online collaboration services, such as Dropbox. [4] One of the limitations with object storage is that it is not intended for transactional data, as object storage was not designed to replace NAS file access and sharing; it does not support the locking and sharing mechanisms needed to maintain a single, accurately updated version of a file. [3]

History

Origins

Jim Starkey coined the term "blob" while working at Digital Equipment Corporation to refer to opaque data entities. The terminology was adopted for Rdb/VMS. "Blob" is often humorously explained as an abbreviation for "binary large object". According to Starkey, this backronym arose when Terry McKiever, working in marketing at Apollo Computer, felt that the term needed to be an abbreviation and began using the expansion "Basic Large Object". This was later eclipsed by the retroactive explanation of blobs as "Binary Large Objects". According to Starkey, "Blob don't stand for nothin'." Rejecting the acronym, he explained his motivation behind the coinage, saying, "A blob is the thing that ate Cincinnatti [sic], Cleveland, or whatever," referring to the 1958 science fiction film The Blob. [5]

In 1995, research led by Garth Gibson on Network-Attached Secure Disks first promoted the concept of splitting less common operations, like namespace manipulations, from common operations, like reads and writes, to optimize the performance and scale of both. [6] In the same year, a Belgian company, FilePool, was established to build the basis for archiving functions. Object storage was proposed at Gibson's Carnegie Mellon University lab as a research project in 1996. [7] Another key concept was abstracting the writes and reads of data to more flexible data containers (objects). Fine-grained access control through object storage architecture [8] was further described by Howard Gobioff, a member of the NASD team who was later one of the inventors of the Google File System. [9]

Other related work includes the Coda filesystem project at Carnegie Mellon, which started in 1987 and spawned the Lustre file system, [10] the OceanStore project at UC Berkeley, [11] which started in 1999, [12] and the Logistical Networking project at the University of Tennessee Knoxville, which started in 1998. [13] In 1999, Gibson founded Panasas to commercialize the concepts developed by the NASD team.

Development

Seagate Technology played a central role in the development of object storage. According to the Storage Networking Industry Association (SNIA), object storage originated in the late 1990s: Seagate specifications from 1999 introduced some of the first object-storage commands and described how the operating system could be effectively removed from low-level management of the storage. [14]

A preliminary version of the "OBJECT BASED STORAGE DEVICES Command Set Proposal", dated October 25, 1999, was submitted by Seagate, edited by Seagate's Dave Anderson, and was the product of work by the National Storage Industry Consortium (NSIC), including contributions by Carnegie Mellon University, Seagate, IBM, Quantum, and StorageTek. [15] The paper was proposed to the T10 committee of INCITS (International Committee for Information Technology Standards) with the goal of forming a working group and designing a specification based on the SCSI interface protocol. It defined objects as abstracted data with unique identifiers and metadata, described how objects relate to file systems, and introduced many other concepts. Anderson presented many of these ideas at the SNIA conference in October 1999. The presentation revealed an IP agreement that had been signed in February 1997 between the original collaborators (with Seagate represented by Anderson and Chris Malakapalli) and covered the benefits of object storage, scalable computing, platform independence, and storage management. [16]

Architecture

[Figure: High-level object storage architecture]

Abstraction of storage

One of the design principles of object storage is to abstract some of the lower layers of storage away from the administrators and applications. Thus, data is exposed and managed as objects instead of blocks or (exclusively) files. Objects contain additional descriptive properties which can be used for better indexing or management. Administrators do not have to perform lower-level storage functions like constructing and managing logical volumes to utilize disk capacity or setting RAID levels to deal with disk failure.

Object storage also allows the addressing and identification of individual objects by more than just file name and file path. Object storage adds a unique identifier within a bucket, or across the entire system, to support much larger namespaces and eliminate name collisions.

Inclusion of rich custom metadata within the object

Object storage explicitly separates file metadata from data to support additional capabilities. As opposed to the fixed metadata of file systems (filename, creation date, type, etc.), object storage provides for full-function, custom, object-level metadata, which can be used to capture application- or user-specific information for richer indexing, to drive data-management policies such as tiering, retention, and replication, to centralize management across many nodes and clusters, and to let metadata storage and indexing be optimized independently of the data itself.

Additionally, in some object-based file-system implementations, clients contact the metadata servers only when a file is opened and afterwards exchange data directly with the object-storage servers, and a file's layout across objects (for example, its stripe width) can be configured on a per-file basis.

Object-based storage devices (OSDs), as well as some software implementations (e.g., DataCore Swarm), manage metadata and data at the storage-device level: instead of a block-oriented interface that reads and writes fixed-size blocks of data, data is organized into flexibly sized containers called objects, each object combines its data with an extensible set of attributes, and the device interface provides commands to create and delete objects, read and write byte ranges within them, and get and set their attributes.
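
To make this model concrete, the sketch below defines a minimal, hypothetical in-memory object type that pairs opaque data with user-defined metadata and a unique identifier. It illustrates the concept only and is not any particular product's API.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class StoredObject:
    """A toy object: opaque data plus an extensible set of user-defined metadata."""
    data: bytes
    metadata: dict = field(default_factory=dict)   # e.g. {"content-type": "image/jpeg"}
    object_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # globally unique identifier

# The store is a flat namespace keyed by object ID -- no directories, volumes, or RAID levels.
store: dict[str, StoredObject] = {}

obj = StoredObject(data=b"\xff\xd8\xff...", metadata={"content-type": "image/jpeg", "camera": "X100"})
store[obj.object_id] = obj

# Metadata travels with the object and can be queried without reading the data itself.
jpegs = [o for o in store.values() if o.metadata.get("content-type") == "image/jpeg"]
```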

Programmatic data management

Object storage provides programmatic interfaces to allow applications to manipulate data. At the base level this includes create, read, update, and delete (CRUD) operations. Some object storage implementations go further, supporting additional functionality such as object/file versioning, object replication, life-cycle management, and movement of objects between different tiers and types of storage. Most API implementations are REST-based, allowing the use of many standard HTTP calls.
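
As an illustration, the widely implemented S3-style REST API exposes these CRUD operations through client libraries such as boto3. The bucket name, object key, and metadata below are purely illustrative, and credentials are assumed to be configured in the environment.

```python
import boto3

# A minimal sketch of S3-style object CRUD with boto3.
s3 = boto3.client("s3")

# Create / update: PUT the object together with custom metadata.
s3.put_object(
    Bucket="example-bucket",
    Key="photos/2024/cat.jpg",
    Body=b"...jpeg bytes...",
    Metadata={"camera": "X100", "album": "pets"},
)

# Read: GET returns the data along with the metadata stored with it.
resp = s3.get_object(Bucket="example-bucket", Key="photos/2024/cat.jpg")
data = resp["Body"].read()
custom_metadata = resp["Metadata"]          # {'camera': 'X100', 'album': 'pets'}

# Delete.
s3.delete_object(Bucket="example-bucket", Key="photos/2024/cat.jpg")
```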

Implementation

Cloud storage

The vast majority of cloud storage available in the market leverages an object-storage architecture. Notable examples are Amazon Web Services S3, which debuted in March 2006; Microsoft Azure Blob Storage; Rackspace Cloud Files, whose code was donated to the OpenStack project in 2010 and released as OpenStack Swift; and Google Cloud Storage, released in May 2010.

Object-based file systems

Some distributed file systems use an object-based architecture, where file metadata is stored in metadata servers and file data is stored in object storage servers. File system client software interacts with the distinct servers, and abstracts them to present a full file system to users and applications.
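
The sketch below is a deliberately simplified model of that read path, assuming a metadata server that maps paths to object identifiers and object servers that hold the bytes; real systems such as Lustre use their own protocols, striping layouts, and caching.

```python
# Toy model of an object-based file system: a metadata server maps paths to the
# IDs of the objects that hold the file's data; object servers store the bytes.
metadata_server = {
    "/home/alice/report.pdf": ["obj-17", "obj-18"],   # file striped across two objects
}
object_servers = {
    "obj-17": b"first stripe of the file ...",
    "obj-18": b"second stripe of the file ...",
}

def read_file(path):
    # 1. Contact the metadata server once (at open time) to learn the file's layout.
    object_ids = metadata_server[path]
    # 2. Fetch the data directly from the object servers, bypassing the metadata server.
    return b"".join(object_servers[oid] for oid in object_ids)

print(read_file("/home/alice/report.pdf"))
```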

Object-storage systems

Some early incarnations of object storage were used for archiving, as implementations were optimized for data services like immutability, not performance. EMC Centera and Hitachi HCP (formerly known as HCAP) are two commonly cited object storage products for archiving. Another example is Quantum ActiveScale Object Storage Platform.

More general-purpose object-storage systems came to market around 2008. Drawn by the rapid growth of "captive" storage systems within web applications like Yahoo Mail and by the early success of cloud storage, object-storage systems promised the scale and capabilities of cloud storage, with the ability to deploy the system within an enterprise or at an aspiring cloud-storage service provider.

Unified file and object storage

A few object-storage systems support unified file and object storage, allowing some clients to store objects on a storage system while, simultaneously, other clients store files on the same system. [17] Other vendors in the area of hybrid cloud storage use cloud storage gateways to provide a file access layer over object storage, implementing file access protocols such as SMB and NFS.

"Captive" object storage

Some large Internet companies developed their own software when object-storage products were not commercially available or their use cases were very specific. Facebook famously developed its own object-storage software, code-named Haystack, to efficiently address its massive-scale photo-management needs. [18]

Object-based storage devices

Object storage at the protocol and device layer was proposed in 1999 and approved for the SCSI command set in 2004 as "Object-based Storage Device Commands" (OSD); [19] however, it was not put into production until the development of the Seagate Kinetic Open Storage platform. [20] [21] The SCSI command set for object storage devices was developed by a working group of the SNIA for the T10 committee of the International Committee for Information Technology Standards (INCITS). [22] T10 is responsible for all SCSI standards.

Market adoption

One of the first object-storage products, Lustre, is used in 70% of the Top 100 supercomputers and about 50% of the Top 500. [23] As of June 16, 2013, this included 7 of the top 10, including the fourth-fastest system on the list, China's Tianhe-2, and the seventh fastest, the Titan supercomputer at the Oak Ridge National Laboratory. [24]

Object-storage systems saw good adoption in the early 2000s as an archive platform, particularly in the wake of compliance laws such as Sarbanes-Oxley. After five years in the market, EMC's Centera product claimed over 3,500 customers and 150 petabytes shipped by 2007. [25] Hitachi's HCP product also claims many petabyte-scale customers. [26] Newer object-storage systems have also gained traction, particularly around very large custom applications like eBay's auction site, where EMC Atmos is used to manage over 500 million objects a day. [27] As of March 3, 2014, EMC claimed to have sold over 1.5 exabytes of Atmos storage. [28] On July 1, 2014, Los Alamos National Lab chose the Scality RING as the basis for a 500-petabyte storage environment, which would be among the largest ever. [29]

"Captive" object storage systems like Facebook's Haystack have scaled impressively. In April 2009, Haystack was managing 60 billion photos and 1.5 petabytes of storage, adding 220 million photos and 25 terabytes a week. [18] Facebook more recently stated that they were adding 350 million photos a day and were storing 240 billion photos. [30] This could equal as much as 357 petabytes. [31]

Cloud storage has become pervasive as many new web and mobile applications choose it as a common way to store binary data. [32] As the storage back-end to many popular applications like SmugMug and Dropbox, Amazon S3 has grown to massive scale, citing over 2 trillion objects stored in April 2013. [33] Two months later, Microsoft claimed that it stored even more objects in Azure, at 8.5 trillion. [34] By April 2014, Azure claimed over 20 trillion objects stored. [35] Windows Azure Storage manages Blobs (user files), Tables (structured storage), and Queues (message delivery) and counts them all as objects. [36]

Market analysis

IDC has begun to assess the object-based-storage market annually using its MarketScape methodology. IDC describes the MarketScape as: "...a quantitative and qualitative assessment of the characteristics that assess a vendor's current and future success in the said market or market segment and provide a measure of their ascendancy to become a Leader or maintain a leadership. IDC MarketScape assessments are particularly helpful in emerging markets that are often fragmented, have several players, and lack clear leaders." [37]

In 2019, IDC rated Dell EMC, Hitachi Data Systems, IBM, NetApp, and Scality as leaders.

Standards

Object-based storage device standards

OSD version 1

In the first version of the OSD standard, [38] objects are specified with a 64-bit partition ID and a 64-bit object ID. Partitions are created and deleted within an OSD, and objects are created and deleted within partitions. There are no fixed sizes associated with partitions or objects; they are allowed to grow subject to physical size limitations of the device or logical quota constraints on a partition.

An extensible set of attributes describe objects. Some attributes are implemented directly by the OSD, such as the number of bytes in an object and the modification time of an object. There is a special policy tag attribute that is part of the security mechanism. Other attributes are uninterpreted by the OSD. These are set on objects by the higher-level storage systems that use the OSD for persistent storage. For example, attributes might be used to classify objects, or to capture relationships among different objects stored on different OSDs.

A list command returns a list of identifiers for objects within a partition, optionally filtered by matches against their attribute values. A list command can also return selected attributes of the listed objects.

Read and write commands can be combined, or piggy-backed, with commands to get and set attributes. This ability reduces the number of times a high-level storage system has to cross the interface to the OSD, which can improve overall efficiency.
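
The toy model below paraphrases the flavor of this interface in Python: partitions contain objects, each object carries data and attributes, reads and writes can piggy-back attribute access, and a list command can filter on attribute values. The class and method names are invented for illustration and do not reflect the standard's actual command encoding.

```python
class ToyOSD:
    """Illustrative model of an OSD-1 style interface: partitions hold objects,
    and objects hold data plus attributes. Names are invented for illustration."""

    def __init__(self):
        # partition_id -> {object_id -> {"data": bytearray, "attrs": dict}}
        self.partitions = {}

    def create_partition(self, pid):
        self.partitions[pid] = {}

    def create_object(self, pid, oid):
        self.partitions[pid][oid] = {"data": bytearray(), "attrs": {}}

    def write(self, pid, oid, offset, data, set_attrs=None):
        """WRITE, optionally piggy-backed with a SET ATTRIBUTES."""
        obj = self.partitions[pid][oid]
        end = offset + len(data)
        if end > len(obj["data"]):
            obj["data"].extend(b"\x00" * (end - len(obj["data"])))
        obj["data"][offset:end] = data
        obj["attrs"]["size"] = len(obj["data"])   # attribute maintained by the device itself
        if set_attrs:
            obj["attrs"].update(set_attrs)        # attributes set by the higher-level system

    def read(self, pid, oid, offset, length, get_attrs=False):
        """READ, optionally piggy-backed with a GET ATTRIBUTES."""
        obj = self.partitions[pid][oid]
        data = bytes(obj["data"][offset:offset + length])
        return (data, dict(obj["attrs"])) if get_attrs else data

    def list_objects(self, pid, attr_filter=None):
        """LIST: object IDs in a partition, optionally filtered by attribute values."""
        objs = self.partitions[pid]
        return [oid for oid, o in objs.items()
                if not attr_filter
                or all(o["attrs"].get(k) == v for k, v in attr_filter.items())]


osd = ToyOSD()
osd.create_partition(1)
osd.create_object(1, 42)
osd.write(1, 42, 0, b"hello object", set_attrs={"class": "demo"})
print(osd.read(1, 42, 0, 5, get_attrs=True))   # (b'hello', {'size': 12, 'class': 'demo'})
print(osd.list_objects(1, {"class": "demo"}))  # [42]
```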

OSD version 2

A second generation of the SCSI command set, "Object-Based Storage Devices - 2" (OSD-2) added support for snapshots, collections of objects, and improved error handling. [39]

A snapshot is a point-in-time copy of all the objects in a partition into a new partition. The OSD can implement a space-efficient copy using copy-on-write techniques so that the two partitions share objects that are unchanged between the snapshots, or the OSD might physically copy the data to the new partition. The standard defines clones, which are writeable, and snapshots, which are read-only.

A collection is a special kind of object that contains the identifiers of other objects. There are operations to add and delete from collections, and there are operations to get or set attributes for all the objects in a collection. Collections are also used for error reporting. If an object becomes damaged by the occurrence of a media defect (i.e., a bad spot on the disk) or by a software error within the OSD implementation, its identifier is put into a special error collection. The higher-level storage system that uses the OSD can query this collection and take corrective action as necessary.
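
A minimal sketch of the copy-on-write idea behind such snapshots follows (a toy model, not the standard's mechanism): the snapshot initially shares every object with the source partition, and an object's bytes are duplicated only when one side changes them.

```python
# Toy copy-on-write snapshot: the new partition initially shares every object
# with the source partition; only objects that are subsequently modified
# stop being shared.
source = {"obj-1": b"alpha", "obj-2": b"beta"}   # partition being snapshotted
snapshot = dict(source)                          # shares references to the same immutable bytes

def write_object(partition, object_id, data):
    # Writing rebinds the object in this partition only; the other partition
    # still references the old bytes, so unchanged objects remain shared.
    partition[object_id] = data

write_object(source, "obj-1", b"alpha v2")

assert snapshot["obj-1"] == b"alpha"             # read-only snapshot still sees the original
assert snapshot["obj-2"] is source["obj-2"]      # unmodified object is physically shared
```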

Differences between key–value and object stores

The border between an object store and a key–value store is blurred, with key–value stores being sometimes loosely referred to as object stores.

A traditional block storage interface uses a series of fixed-size blocks which are numbered starting at 0. Data must be written in chunks of that exact fixed size and is stored in a particular block, which is identified by its logical block number (LBN). Later, one can retrieve that block of data by specifying its unique LBN.

With a key–value store, data is identified by a key rather than an LBN. A key might be "cat" or "olive" or "42". It can be an arbitrary sequence of bytes of arbitrary length. Data (called a value in this parlance) does not need to be a fixed size and can also be an arbitrary sequence of bytes of arbitrary length. One stores data by presenting the key and value to the data store and can later retrieve the value by presenting the key. The same concept appears in programming languages: Python calls them dictionaries, Perl calls them hashes, and Java, Rust, and C++ call them maps. Several data stores also implement key–value stores, such as Memcached, Redis, and CouchDB.
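
In Python, for example, the pattern appears as an ordinary dictionary; the snippet below only fixes the terminology, since a real key–value store adds persistence, networking, and concurrency control.

```python
store = {}                       # an in-memory key-value mapping
store["cat"] = b"meow"           # keys are arbitrary strings (or byte sequences)
store["42"] = b"\x00" * 1024     # values are typically small (kilobytes)

value = store["cat"]             # retrieval is by exact key
```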

Object stores are similar to key–value stores in two respects. First, the object identifier or URL (the equivalent of the key) can be an arbitrary string. [40] Second, data may be of an arbitrary size.

There are, however, a few key differences between key–value stores and object stores. First, object stores also allow one to associate a limited set of attributes (metadata) with each piece of data. The combination of a key, value, and set of attributes is referred to as an object. Second, object stores are optimized for large amounts of data (hundreds of megabytes or even gigabytes), whereas for key–value stores the value is expected to be relatively small (kilobytes). Finally, object stores usually offer weaker consistency guarantees such as eventual consistency, whereas key–value stores offer strong consistency.
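
Putting these differences together, a hypothetical object-store record can be contrasted with a bare key–value pair as follows (the field names are illustrative, not any particular system's schema).

```python
# Key-value store: just a key and a (typically small) value.
kv_pair = ("user:42:avatar", b"...a few kilobytes...")

# Object store: the key is accompanied by the data and a set of attributes
# (metadata), and the data may be hundreds of megabytes or gigabytes.
obj = {
    "key": "videos/2024/keynote.mp4",
    "data": b"...potentially gigabytes of bytes...",
    "attributes": {
        "content-type": "video/mp4",
        "owner": "alice",
        "created": "2024-05-01T12:00:00Z",
    },
}
```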

References

  1. Mesnier, M.; Ganger, G.R.; Riedel, E. (August 2003). "Storage area networking - Object-based storage". IEEE Communications Magazine . 41 (8): 84–90. doi:10.1109/mcom.2003.1222722.
  2. Porter De Leon, Yadin; Tony Piscopo (14 August 2014). "Object Storage versus Block Storage: Understanding the Technology Differences". Druva . Retrieved 19 January 2015.
  3. Erwin, Derek (2022). "Block Storage vs. Object Storage vs. File Storage: What's the Difference?". Qumulo. Retrieved 8 February 2022. Object storage can work well for unstructured data in which data is written once and read once (or many times). Static online content, data backups, image archives, videos, pictures, and music files can be stored as objects.
  4. Chandrasekaran, Arun; Dayley, Alan (11 February 2014). "Critical Capabilities for Object Storage". Gartner Research . Archived from the original on 2014-03-16.
  5. Starkey, James (1997-01-22). "The true story of BLOBs" . Retrieved 2023-11-08.
  6. Garth A. Gibson; Nagle D.; Amiri K.; Chan F.; Feinberg E.; Gobioff H.; Lee C.; Ozceri B.; Riedel E.; Rochberg D.; Zelenka J. "File Server Scaling with Network-Attached Secure Disks" (PDF). Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (Sigmetrics '97). Retrieved 27 October 2013.
  7. Factor, Michael; Meth, K.; Naor, D.; Rodeh, O.; Satran, J. (2005). "Object Storage: The Future Building Block for Storage Systems". pp. 119–123. CiteSeerX   10.1.1.122.3959 .
  8. Gobioff, Howard; Gibson, Garth A.; Tygar, Doug (1 October 1997). "Security for Network Attached Storage Devices (CMU-CS-97-185)". Parallel Data Laboratory. Retrieved 7 November 2013.
  9. Sanjay Ghemawat; Howard Gobioff; Shun-Tak Leung (October 2003). "The Google File System" (PDF). Retrieved 7 November 2013.
  10. Braam, Peter. "Lustre: The intergalactic file system" (PDF). Retrieved 17 September 2013.
  11. "OceanStore". Archived from the original on 8 August 2012. Retrieved 18 September 2013.
  12. Kubiatowicz, John; Wells, Chris; Zhao, Ben; Bindel, David; Chen, Yan; Czerwinski, Steven; Eaton, Patrick; Geels, Dennis; Gummadi, Ramakrishna; Rhea, Sean; Weatherspoon, Hakim (2000). "OceanStore: An architecture for global-scale persistent storage". Proceedings of the ninth international conference on Architectural support for programming languages and operating systems. pp. 190–201. doi:10.1145/378993.379239. ISBN   1581133170.
  13. Plank, James; Beck, Micah; Elwasif, Wael; Moore, Terry; Swany, Martin; Wolski, Rich (October 1999). "The Internet Backplane Protocol: Storage in the Network" (PDF). Netstore 1999. Retrieved 27 January 2021.
  14. Object Storage: What, How and Why?, NSF (Networking Storage Forum), SNIA (Storage Networking Industry Association), Live Webcast February 19, 2020
  15. Anderson, D. (1999). "Object based storage devices: a command set proposal" (PDF). T10. S2CID   59781155.
  16. Object Based Storage: A Vision, slide presentation, Dave Anderson and Seagate Technology, October 13, 1999 https://www.t10.org/ftp/t10/document.99/99-341r0.pdf
  17. Pritchard, Stephen (23 October 2020). "Unified file and object storage: The best of both worlds?". Computer Weekly.
  18. Vajgel, Peter (30 April 2009). "Needle in a haystack: efficient storage of billions of photos". Retrieved 5 October 2021.
  19. Riedel, Erik; Sami Iren (February 2007). "Object Storage and Applications" (PDF). Retrieved 3 November 2013.
  20. "The Seagate Kinetic Open Storage Vision". Seagate. Retrieved 3 November 2013.
  21. Gallagher, Sean (27 October 2013). "Seagate introduces a new drive interface: Ethernet". Ars Technica . Retrieved 3 November 2013.
  22. Corbet, Jonathan (4 November 2008). "Linux and object storage devices". LWN.net. Retrieved 8 November 2013.
  23. Dilger, Andreas. "Lustre Future Development" (PDF). IEEE MSST. Archived from the original (PDF) on 29 October 2013. Retrieved 27 October 2013.
  24. "Datadirect Networks to build world's fastest storage system for Titan, the world's most powerful supercomputer". Archived from the original on 29 October 2013. Retrieved 27 October 2013.
  25. "EMC Marks Five Years of EMC Centera Innovation and Market Leadership". EMC. 18 April 2007. Retrieved 3 November 2013.
  26. "Hitachi Content Platform Supports Multiple Petabytes, Billions of Objects". Techvalidate.com. Archived from the original on 24 September 2015. Retrieved 19 September 2013.
  27. Robb, Drew (11 May 2011). "EMC World Continues Focus on Big Data, Cloud and Flash". Infostor. Retrieved 19 September 2013.
  28. Hamilton, George. "In it for the Long Run: EMC's Object Storage Leadership". Archived from the original on 15 March 2014. Retrieved 15 March 2014.
  29. Mellor, Chris (1 July 2014). "Los Alamos National Laboratory likes it, puts Scality's RING on it". The Register . Retrieved 26 January 2015.
  30. Miller, Rich (13 January 2013). "Facebook Builds Exabyte Data Centers for Cold Storage". Datacenterknowledge.com. Retrieved 6 November 2013.
  31. Leung, Leo (17 May 2014). "How much data does x store?". Techexpectations.org. Archived from the original on 22 May 2014. Retrieved 23 May 2014.
  32. Leung, Leo (January 11, 2012). "Object storage already dominates our days (we just didn't notice)". Archived from the original on 29 September 2013. Retrieved 27 October 2013.
  33. Harris, Derrick (18 April 2013). "Amazon S3 goes exponential, now stores 2 trillion objects". Gigaom . Retrieved 17 September 2013.
  34. Wilhelm, Alex (27 June 2013). "Microsoft: Azure powers 299M Skype users, 50M Office Web Apps users, stores 8.5T objects". thenextweb.com . Retrieved 18 September 2013.
  35. Nelson, Fritz (4 April 2014). "Microsoft Azure's 44 New Enhancements, 20 Trillion Objects". Tom's IT Pro. Archived from the original on 6 May 2014. Retrieved 3 September 2014.
  36. Calder, Brad. "Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency" (PDF). SOSP '11: Proceedings of the Twenty-third ACM SIGOPS Symposium on Operating Systems Principles. Association for Computing Machinery. ISBN   978-1-4503-0977-6 . Retrieved 6 November 2013.
  37. Potnis, Amita. "IDC MarketScape: Worldwide Object-Based Storage 2019 Vendor Assessment". idc.com. IDC. Retrieved 16 Feb 2020.
  38. "INCITS 400-2004". InterNational Committee for Information Technology Standards . Retrieved 8 November 2013.
  39. "INCITS 458-2011". InterNational Committee for Information Technology Standards. 15 March 2011. Retrieved 8 November 2013.
  40. OpenStack Foundation. "Object Storage API overview". OpenStack Documentation. Retrieved 9 June 2017.