Type of site | Cloud storage |
---|---|
Available in | English |
Owner | Amazon.com |
URL | aws |
IPv6 support | Yes |
Commercial | Yes |
Registration | Required (included in free tier) |
Launched | March 14, 2006 |
Current status | Active |
Amazon Simple Storage Service (S3) is a service offered by Amazon Web Services (AWS) that provides object storage through a web service interface. [1] [2] Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its e-commerce network. [3] Amazon S3 can store any type of object, which allows uses like storage for Internet applications, backups, disaster recovery, data archives, data lakes for analytics, and hybrid cloud storage. AWS launched Amazon S3 in the United States on March 14, 2006, [1] [4] then in Europe in November 2007. [5]
Amazon S3 manages data with an object storage architecture [6] which aims to provide scalability, high availability, and low latency with high durability. [3] The basic storage units of Amazon S3 are objects, which are organized into buckets. Each object is identified by a unique, user-assigned key. [7] Buckets can be managed using the console provided by Amazon S3, programmatically with the AWS SDK, or through the REST application programming interface. Objects can be up to five terabytes in size. [8] [9]

Requests are authorized using an access control list associated with each object and bucket, and support versioning, [10] which is disabled by default. [11] Since buckets are typically the size of an entire file system mount in other systems, this access control scheme is very coarse-grained: unique access controls cannot be associated with individual files.[citation needed]

Amazon S3 can be used to replace static web-hosting infrastructure with HTTP client-accessible objects, [12] index document support, and error document support. [13] The Amazon AWS authentication mechanism allows the creation of authenticated URLs that are valid for a specified amount of time. Every item in a bucket can also be served as a BitTorrent feed: the Amazon S3 store can act as a seed host for a torrent, and any BitTorrent client can retrieve the file. This can drastically reduce the bandwidth cost of downloading popular objects. A bucket can be configured to save HTTP log information to a sibling bucket; this can be used in data mining operations. [14]

There are various Filesystem in Userspace (FUSE)–based file systems for Unix-like operating systems (for example, Linux) that can be used to mount an S3 bucket as a file system. The semantics of the Amazon S3 file system are not those of a POSIX file system, so the file system may not behave entirely as expected. [15]
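The time-limited authenticated URLs mentioned above carry their authentication data in the query string, signed with AWS Signature Version 4. The following is a minimal illustrative sketch of that query-string signing scheme using only the Python standard library; the function name, bucket, key, and credentials are hypothetical, and a real application would use an AWS SDK rather than hand-rolling the signature.

```python
import hashlib
import hmac
from urllib.parse import quote

def presign_get_url(bucket, key, region, access_key, secret_key,
                    amz_date, expires_seconds):
    """Sketch of a SigV4 query-string-authenticated GET URL for an S3 object.

    amz_date is a timestamp string such as '20240101T000000Z'; the URL
    is valid for expires_seconds after that moment.
    """
    host = f"{bucket}.s3.{region}.amazonaws.com"
    datestamp = amz_date[:8]
    scope = f"{datestamp}/{region}/s3/aws4_request"
    # Authentication data travels as query parameters.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires_seconds),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items()))
    # Canonical request: method, path, query, headers, signed headers, payload.
    canonical_request = "\n".join([
        "GET",
        "/" + quote(key, safe="/"),
        canonical_query,
        f"host:{host}\n",        # canonical headers (host only)
        "host",                  # signed header list
        "UNSIGNED-PAYLOAD",      # payload hash is unknown for a presigned GET
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Derive the signing key via chained HMAC-SHA256 over date/region/service.
    def _sign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    k = _sign(("AWS4" + secret_key).encode(), datestamp)
    for step in (region, "s3", "aws4_request"):
        k = _sign(k, step)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return (f"https://{host}/{quote(key, safe='/')}"
            f"?{canonical_query}&X-Amz-Signature={signature}")
```

Anyone holding the resulting URL can fetch the object until the expiry elapses, without needing AWS credentials of their own; the service recomputes the signature on receipt and rejects expired or tampered requests.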
Amazon S3 offers nine storage classes with varying levels of durability, availability, and performance. [16]
The Amazon S3 Glacier storage classes are distinct from Amazon Glacier, which is a separate product with its own APIs.
An object in S3 can range from 0 bytes to 5 TB in size; data larger than 5 TB must be divided into chunks and stored as multiple objects. A single upload operation accepts at most 5 GB, so objects larger than 5 GB must be uploaded via the S3 multipart upload API. [18]
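The interaction of these limits can be sketched as a small planning function. This is an illustrative sketch, not AWS code: the function name is hypothetical, and it assumes the documented S3 limits of a 5 GiB single PUT, a 5 TiB maximum object size, and at most 10,000 multipart parts.

```python
MiB = 1024 ** 2
GiB = 1024 ** 3
TiB = 1024 ** 4

MAX_SINGLE_PUT = 5 * GiB   # largest object accepted in one upload operation
MAX_OBJECT = 5 * TiB       # largest single S3 object
MAX_PARTS = 10_000         # multipart upload part-count limit

def plan_upload(size, part_size=100 * MiB):
    """Return ('single', 1) or ('multipart', n_parts) for `size` bytes."""
    if size > MAX_OBJECT:
        raise ValueError("exceeds the 5 TiB object limit; split into multiple objects")
    if size <= MAX_SINGLE_PUT:
        return ("single", 1)
    n_parts = -(-size // part_size)  # ceiling division
    if n_parts > MAX_PARTS:
        raise ValueError("choose a larger part size (at most 10,000 parts)")
    return ("multipart", n_parts)
```

For example, a 6 GiB object with the default 100 MiB part size would be uploaded as 62 multipart parts, while a 1 GiB object fits in a single PUT.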
The broad adoption of Amazon S3 and related tooling has given rise to competing services based on the S3 API. These services use the standard programming interface but are differentiated by their underlying technologies and business models. [29] A standard interface enables better competition from rival providers and allows economies of scale in implementation, among other benefits. [30]
Amazon Web Services introduced Amazon S3 in 2006. [31] [32]
Date | Number of objects stored |
---|---|
October 2007 | 10 billion [33] |
January 2008 | 14 billion [33] |
October 2008 | 29 billion [34] |
March 2009 | 52 billion [35] |
August 2009 | 64 billion [36] |
March 2010 | 102 billion [37] |
April 2013 | 2 trillion [38] |
March 2021 | 100 trillion [39] |
March 2023 | 280 trillion [40] |
November 2024 | 400 trillion [40] |
In November 2017, AWS added default encryption capabilities at the bucket level. [41]
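Default bucket encryption is expressed as a server-side encryption configuration document attached to the bucket. The sketch below shows the shape of that document as a Python dictionary, in the form accepted by boto3's real `put_bucket_encryption` call; the bucket name is hypothetical and the API call itself is shown only as a comment, since it requires live credentials.

```python
# Server-side encryption rule applied as the bucket default.
# "AES256" selects SSE-S3 (Amazon-managed keys); "aws:kms" would
# select SSE-KMS with a customer-chosen KMS key instead.
encryption_config = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
    ]
}

# With boto3 this would be applied roughly as follows (not run here):
# import boto3
# boto3.client("s3").put_bucket_encryption(
#     Bucket="example-bucket",  # hypothetical bucket name
#     ServerSideEncryptionConfiguration=encryption_config,
# )
```

Once such a rule is in place, objects uploaded without an explicit encryption header are encrypted with the bucket's default algorithm.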
Amazon S3 provides a durability guarantee of 99.999999999% ("11 nines"), which primarily addresses data loss from hardware failures. The guarantee does not extend to losses resulting from human error (such as accidental deletion), misconfiguration, third-party failures and subsequent data corruption, natural disasters, force majeure events, or security breaches. Customers are responsible for monitoring SLA compliance and must submit claims for any unmet SLAs within a designated timeframe; they should also understand how deviations from the SLA are calculated, as these percentages and conditions can differ from those of other AWS services. These requirements can impose a significant burden on customers. In cases of data loss due to hardware failure attributable to Amazon, the company does not provide monetary compensation; instead, affected users may receive credits if they meet the eligibility criteria. [42] [43] [44] [45] [46]