Usage of flash memory | |
---|---|
Introduced by: | SanDisk |
Introduction date: | 1991 |
Capacity: | 20 MB (2.5-inch form factor) |
Original concept | |
By: | Storage Technology Corporation |
Conceived: | 1978 |
Capacity: | 45 MB |
As of 2024 | |
Capacity: | Up to 200 TB[citation needed] |
A solid-state drive (SSD) is a type of solid-state storage device that uses integrated circuits to store data persistently. It is sometimes called a semiconductor storage device, a solid-state device, or a solid-state disk. [1] [2]
SSDs rely on non-volatile memory, typically NAND flash, to store data in memory cells. The performance and endurance of SSDs vary depending on the number of bits stored per cell, ranging from high-performing single-level cells (SLC) to more affordable but slower quad-level cells (QLC). In addition to flash-based SSDs, other technologies such as 3D XPoint offer faster speeds and higher endurance through different data storage mechanisms.
Unlike traditional hard disk drives (HDDs), SSDs have no moving parts, allowing them to deliver faster data access speeds, reduced latency, increased resistance to physical shock, lower power consumption, and silent operation.
Often interfaced to a system in the same way as HDDs, SSDs are used in a variety of devices, including personal computers, enterprise servers, and mobile devices. However, SSDs are generally more expensive on a per-gigabyte basis and have a finite number of write cycles, which can lead to data loss over time. Despite these limitations, SSDs are increasingly replacing HDDs, especially in performance-critical applications and as primary storage in many consumer devices.
SSDs come in various form factors and interface types, including SATA, PCIe, and NVMe, each offering different levels of performance. Hybrid storage solutions, such as solid-state hybrid drives (SSHDs), combine SSD and HDD technologies to offer improved performance at a lower cost than pure SSDs.
An SSD stores data in semiconductor cells, with its properties varying according to the number of bits stored in each cell (between 1 and 4). Single-level cells (SLC) store one bit of data per cell and provide higher performance and endurance. In contrast, multi-level cells (MLC), triple-level cells (TLC), and quad-level cells (QLC) store more data per cell but have lower performance and endurance. SSDs using 3D XPoint technology, such as Intel's Optane, store data by changing electrical resistance instead of storing electrical charges in cells, which can provide faster speeds and longer data persistence compared to conventional flash memory. [3] SSDs based on NAND flash slowly leak charge when not powered, and heavily used consumer drives may typically start losing data after one to two years in storage. [4] SSDs have a limited lifetime number of writes, and they also slow down as they approach their full storage capacity.
SSDs also have internal parallelism that allows them to manage multiple operations simultaneously, which enhances their performance. [5]
Unlike HDDs and similar electromechanical magnetic storage, SSDs do not have moving mechanical parts, which provides advantages such as resistance to physical shock, quieter operation, and faster access times. Their lower latency results in higher input/output rates (IOPS) than HDDs. [6]
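As a rough illustration of why lower latency translates into higher IOPS, the throughput of a device at queue depth 1 is bounded by the reciprocal of its access time. A minimal sketch in Python; the latency figures are assumed round numbers for illustration, not vendor specifications:

```python
# Rough queue-depth-1 IOPS estimate: at most one operation completes
# per access time when requests are issued one at a time.
def qd1_iops(access_time_ms: float) -> float:
    return 1000.0 / access_time_ms  # operations per second

print(f"HDD, ~10 ms access:   {qd1_iops(10):>9,.0f} IOPS")    # ~100
print(f"SSD, ~0.05 ms access: {qd1_iops(0.05):>9,.0f} IOPS")  # ~20,000
```

Real devices exceed these queue-depth-1 figures by servicing many requests concurrently, which is where the internal parallelism mentioned above comes in.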
Some SSDs are combined with traditional hard drives in hybrid configurations, such as Intel's Hystor and Apple's Fusion Drive. These drives use both flash memory and spinning magnetic disks to improve the performance of frequently accessed data. [7] [8]
Traditional interfaces (e.g. SATA and SAS) and standard HDD form factors allow such SSDs to be used as drop-in replacements for HDDs in computers and other devices. Newer form factors such as mSATA, M.2, U.2, NF1/M.3/NGSFF, [9] [10] XFM Express (Crossover Flash Memory, form factor XT2) [11] and EDSFF, [12] [13] together with higher-speed interfaces such as NVM Express (NVMe) over PCI Express (PCIe), can further increase performance over HDDs. [3]
Traditional HDD benchmarks tend to focus on performance characteristics such as rotational latency and seek time. As SSDs do not need to spin or seek to locate data, they are vastly superior to HDDs in such tests. However, SSDs have challenges with mixed reads and writes, and their performance may degrade over time. Therefore, SSD testing typically examines the drive when it is full, as a new and empty drive may show much better write performance than it would after only weeks of use. [14]
The reliability of both HDDs and SSDs varies greatly among models. [15] Some field failure rates indicate that SSDs are significantly more reliable than HDDs. [16] [17] However, SSDs are sensitive to sudden power interruption, which can result in aborted writes or even complete loss of the drive. [18]
Most of the advantages of solid-state drives over traditional hard drives are due to their ability to access data completely electronically instead of electromechanically, resulting in superior transfer speeds and mechanical ruggedness. [19] On the other hand, hard disk drives offer significantly higher capacity for their price. [6] [20]
In traditional HDDs, a rewritten file will generally occupy the same location on the disk surface as the original file, whereas in SSDs the new copy will often be written to different NAND cells for the purpose of wear leveling. The wear-leveling algorithms are complex and difficult to test exhaustively. As a result, one major cause of data loss in SSDs is firmware bugs. [21] [22]
Attribute or characteristic | Solid-state drive (SSD) | Hard disk drive (HDD) |
---|---|---|
Price per capacity | SSDs are generally more expensive than HDDs and are expected to remain so. As of early 2018, SSD prices were around $0.30 per gigabyte for 4 TB models. [23] | HDDs, as of early 2018, were priced around $0.02 to $0.03 per gigabyte for 1 TB models. [23] |
Storage capacity | By 2018, SSDs were available in sizes up to 100 TB, [24] though lower-cost models typically ranged from 120 GB to 512 GB. | HDDs of up to 30 TB were available by 2023. [25] |
Reliability – data retention | Worn-out SSDs may start losing data after as little as three months without power, especially at high temperatures. [4] Newer SSDs, depending on usage, may retain data longer. SSDs are generally not suited for long-term archival storage. [26] | HDDs, when stored in a cool, dry environment, can retain data for longer periods without power. However, over time, mechanical parts may fail, such as the inability to spin up after prolonged storage. |
Reliability – longevity | SSDs lack mechanical parts, theoretically making them more reliable than HDDs. However, SSD cells wear out after a limited number of writes. Controllers help mitigate this, allowing for many years of use under normal conditions. [27] | HDDs have moving parts prone to mechanical wear, but the storage medium (magnetic platters) does not degrade from read/write cycles. Studies have suggested HDDs may last 9–11 years. [28] |
Start-up time | SSDs are nearly instantaneous, with no mechanical parts to prepare. | HDDs require several seconds to spin up before data can be accessed. [29] |
Sequential-access performance | Consumer SSDs offer transfer rates between 200 MB/s and 3500 MB/s, depending on the model. [30] | HDDs transfer data at approximately 200 MB/s, depending on the rotational speed and location of data on the disk. Outer tracks allow faster transfer rates. [31] |
Random-access performance | SSD random access times are typically below 0.1 ms. [32] | HDD random access times range from 2.9 ms (high-end) to 12 ms (laptop HDDs). [33] |
Power consumption | High-performance SSDs use about half to a third of the power required by HDDs. [34] | HDDs use between 2 and 5 watts for 2.5-inch drives, while high-performance 3.5-inch drives can require up to 20 watts. [35] |
Acoustic noise | SSDs have no moving parts and are silent. Some SSDs may produce a high-pitched noise during block erasure. [36] | HDDs generate noise from spinning disks and moving heads, which can vary based on the drive's speed. |
Temperature control | SSDs generally tolerate higher operating temperatures and do not require special cooling. [37] | HDDs need cooling in high-temperature environments (above 35 °C (95 °F)) to avoid reliability issues. [38] |
While both memory cards and most SSDs use flash memory, they have very different characteristics, including power consumption, performance, size, and reliability. [39] Originally, solid state drives were shaped and mounted in the computer like hard drives. [39] In contrast, memory cards (such as Secure Digital (SD), CompactFlash (CF), and many others) were originally designed for digital cameras and later found their way into cell phones, gaming devices, GPS units, etc. Most memory cards are physically smaller than SSDs, and designed to be inserted and removed repeatedly. [39]
SSDs have different failure modes from traditional magnetic hard drives. Because solid-state drives contain no moving parts, they are generally not subject to mechanical failures. However, other types of failures can occur. For example, incomplete or failed writes due to sudden power loss may be more problematic than with HDDs, and the failure of a single chip may result in the loss of all data stored on it. Nonetheless, studies indicate that SSDs are generally reliable, often exceeding their manufacturer-stated lifespan [40] [41] and having lower failure rates than HDDs. [40] However, studies also note that SSDs experience higher rates of uncorrectable errors, which can lead to data loss, compared to HDDs. [42]
The endurance of an SSD is typically listed on its datasheet in one of two forms: the total amount of data that can be written over the drive's life (terabytes written, TBW), or the number of full drive writes per day over the warranty period (drive writes per day, DWPD). [43]
For example, a Samsung 970 EVO NVMe M.2 SSD (2018) with 1 TB of capacity has an endurance rating of 600 TBW. [44]
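The two ratings can be converted into each other given the capacity and warranty period. A minimal sketch using the figures above, assuming a five-year warranty period (an assumption for illustration, not a value quoted in this article):

```python
# Convert the TBW rating above to drive writes per day (DWPD),
# assuming a five-year warranty period for the 1 TB drive.
capacity_tb = 1.0        # drive capacity in terabytes
tbw = 600.0              # rated terabytes written
warranty_days = 5 * 365

dwpd = tbw / (capacity_tb * warranty_days)
print(f"{dwpd:.2f} drive writes per day")  # ~0.33 DWPD
```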
Recovering data from SSDs presents challenges due to the non-linear and complex way solid-state drives store data. The internal operations of SSDs vary by manufacturer, and commands such as TRIM and ATA Secure Erase, as well as utilities such as hdparm, can erase and modify the bits of a deleted file.
The JEDEC Solid State Technology Association (JEDEC) has established standards for SSD reliability metrics, which include the unrecoverable bit error ratio (UBER), terabytes written (TBW), and the functional failure requirement (FFR). [45]
In a distributed computing environment, SSDs can be used as a distributed cache layer that temporarily absorbs the large volume of user requests to a slower HDD-based backend storage system. This layer provides much higher bandwidth and lower latency than the storage system itself, and can be managed in a number of forms, such as a distributed key-value database or a distributed file system. On supercomputers, this layer is typically referred to as a burst buffer.
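A minimal sketch of the idea, with an in-memory dict standing in for the SSD tier and another for the slow backend; class and method names are illustrative, not any real burst-buffer API, and real systems are distributed rather than a single process:

```python
# Toy write-back cache: a fast tier absorbs writes and drains them
# to a slower backing store once it fills.
class BurstBuffer:
    def __init__(self, backend: dict, capacity: int):
        self.fast_tier: dict = {}    # stands in for the SSD layer
        self.backend = backend       # stands in for HDD-based storage
        self.capacity = capacity

    def put(self, key, value):
        self.fast_tier[key] = value
        if len(self.fast_tier) >= self.capacity:
            self.flush()

    def get(self, key):
        if key in self.fast_tier:    # served at SSD-like latency
            return self.fast_tier[key]
        return self.backend.get(key)  # falls through to the slow tier

    def flush(self):
        self.backend.update(self.fast_tier)  # drain fast tier to slow tier
        self.fast_tier.clear()
```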
Flash-based solid-state drives can be used to create network appliances from general-purpose personal computer hardware. A write-protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs. Appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware.[citation needed]
SSDs based on an SD card with a live SD operating system are easily write-locked. Combined with a cloud computing environment or other writable medium, an OS booted from a write-locked SD card is reliable, persistent and impervious to permanent corruption.
In 2011, Intel introduced a caching mechanism for their Z68 chipset (and mobile derivatives) called Smart Response Technology, which allows a SATA SSD to be used as a cache (configurable as write-through or write-back) for a conventional, magnetic hard disk drive. [46] A similar technology is available on HighPoint's RocketHybrid PCIe card. [47]
Solid-state hybrid drives (SSHDs) are based on the same principle, but integrate some amount of flash memory on board a conventional drive instead of using a separate SSD. The flash layer in these drives can be accessed independently from the magnetic storage by the host using ATA-8 commands, allowing the operating system to manage it. For example, Microsoft's ReadyDrive technology explicitly stores portions of the hibernation file in the cache of these drives when the system hibernates, making the subsequent resume faster. [48]
Dual-drive hybrid systems combine the use of separate SSD and HDD devices installed in the same computer, with overall performance optimization managed by the computer user or by the computer's operating system. Examples of this type of system are bcache and dm-cache on Linux, [49] and Apple's Fusion Drive.
The primary components of an SSD are the controller and the memory used to store data. Early SSDs used volatile DRAM for storage, but since 2009 most SSDs have used non-volatile NAND flash memory, which retains data even when powered off. [50] [3] Flash-memory SSDs store data in metal–oxide–semiconductor (MOS) integrated circuit chips, using non-volatile floating-gate memory cells. [51]
Every SSD includes a controller, which manages the data flow between the NAND memory and the host computer. The controller is an embedded processor that runs firmware to optimize performance, manage data, and ensure data integrity. [52] [53]
Some of the primary functions performed by the controller are bad block mapping, read and write caching, error detection and correction via ECC, garbage collection, read scrubbing and read-disturb management, encryption, and wear leveling. [54]
The overall performance of an SSD can scale with the number of parallel NAND chips and the efficiency of the controller. For example, controllers that enable parallel processing of NAND flash chips can improve bandwidth and reduce latency. [55]
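A simplified sketch of this parallelism: consecutive logical pages are assigned to different NAND channels so that several chips can be programmed at once. The channel count and page size below are assumptions for illustration, not figures from any specific controller:

```python
# Stripe consecutive logical pages across NAND channels round-robin,
# so several flash chips can be programmed in parallel.
NUM_CHANNELS = 8   # assumed channel count
PAGE_SIZE = 4096   # assumed page size in bytes

def channel_for_page(logical_page: int) -> int:
    return logical_page % NUM_CHANNELS

def split_write(data: bytes) -> dict:
    """Assign each page-sized chunk of a write to a channel queue."""
    queues = {ch: [] for ch in range(NUM_CHANNELS)}
    for page, offset in enumerate(range(0, len(data), PAGE_SIZE)):
        queues[channel_for_page(page)].append(data[offset:offset + PAGE_SIZE])
    return queues
```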
Micron and Intel pioneered faster SSDs by implementing techniques such as data striping and interleaving to enhance read/write speeds. [56] More recently, SandForce introduced controllers that incorporate data compression to reduce the amount of data written to the flash memory, potentially increasing both performance and endurance. [57]
Wear leveling is a technique used in SSDs to ensure that write and erase operations are distributed evenly across all blocks of the flash memory. Without this, specific blocks could wear out prematurely due to repeated use, reducing the overall lifespan of the SSD. The process moves data that is infrequently changed (cold data) from heavily used blocks, so that data that changes more frequently (hot data) can be written to those blocks. This helps distribute wear more evenly across the entire SSD. However, this process introduces additional writes, known as write amplification, which must be managed to balance performance and durability. [58] [59]
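A minimal sketch of a least-worn-block allocation policy of the kind described above; this is a toy model for illustration, not any vendor's actual algorithm:

```python
import heapq

class WearLeveler:
    """Toy allocator: always hand out the free block with the fewest erases."""
    def __init__(self, num_blocks: int):
        self.erase_count = [0] * num_blocks
        self.free = [(0, b) for b in range(num_blocks)]  # (erases, block_id)
        heapq.heapify(self.free)

    def allocate(self) -> int:
        _, block = heapq.heappop(self.free)  # least-worn free block
        return block

    def release(self, block: int):
        # The block must be erased before reuse; these extra erase/rewrite
        # operations are the write amplification that leveling trades off.
        self.erase_count[block] += 1
        heapq.heappush(self.free, (self.erase_count[block], block))
```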
Comparison characteristics | MLC : SLC | NAND : NOR |
---|---|---|
Persistence ratio | 1 : 10 | 1 : 10 |
Sequential write ratio | 1 : 3 | 1 : 4 |
Sequential read ratio | 1 : 1 | 1 : 5 |
Price ratio | 1 : 1.3 | |
Most SSDs use non-volatile NAND flash memory for data storage, primarily due to its cost-effectiveness and ability to retain data without a constant power supply. NAND flash-based SSDs store data in semiconductor cells, with the specific architecture influencing performance, endurance, and cost. [61]
There are various types of NAND flash memory, categorized by the number of bits stored in each cell (the sketch after this list illustrates the arithmetic):

- Single-level cell (SLC): one bit per cell, with the highest performance, endurance, and cost per gigabyte.
- Multi-level cell (MLC): two bits per cell.
- Triple-level cell (TLC): three bits per cell.
- Quad-level cell (QLC): four bits per cell, with the highest density but the lowest performance and endurance. [62]
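The density gain comes from the fact that n bits per cell requires 2^n distinguishable charge levels, which narrows the margins between levels and reduces endurance. The cycle counts below are rough, commonly cited orders of magnitude, not specifications for any particular part:

```python
# Bits per cell vs. charge states, with illustrative endurance figures.
cell_types = {
    "SLC": (1, 100_000),  # (bits per cell, approx. P/E cycles)
    "MLC": (2, 10_000),
    "TLC": (3, 3_000),
    "QLC": (4, 1_000),
}
for name, (bits, cycles) in cell_types.items():
    print(f"{name}: {bits} bit(s), {2**bits} voltage states, "
          f"~{cycles:,} P/E cycles")
```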
Over time, SSD controllers have improved the efficiency of NAND flash, incorporating techniques such as interleaved memory, advanced error correction, and wear leveling to optimize performance and extend the lifespan of the drive. [63] [64] [65] [66] [67] Lower-end SSDs often use QLC or TLC memory, while higher-end drives for enterprise or performance-critical applications may use MLC or SLC. [68]
In addition to the flat (planar) NAND structure, many SSDs now use 3D NAND (or V-NAND), where memory cells are stacked vertically, increasing storage density while improving performance and reducing costs. [69]
Some SSDs use volatile DRAM instead of NAND flash, offering very high-speed data access but requiring a constant power supply to retain data. DRAM-based SSDs are typically used in specialized applications where performance is prioritized over cost or non-volatility. Many SSDs, such as NVDIMM devices, are equipped with backup power sources such as internal batteries or external AC/DC adapters. These power sources ensure data is transferred to a backup system (usually NAND flash or another storage medium) in the event of power loss, preventing data corruption or loss. [70] [71] Similarly, ULLtraDIMM devices use components designed for DIMM modules, but only use flash memory, similar to a DRAM SSD. [72]
DRAM-based SSDs are often used for tasks where data must be accessed at high speeds with low latency, such as in high-performance computing or certain server environments. [73]
3D XPoint is a type of non-volatile memory technology developed by Intel and Micron, announced in 2015. [74] It operates by changing the electrical resistance of materials in its cells, offering much faster access times than NAND flash. 3D XPoint-based SSDs, such as Intel’s Optane drives, provide lower latency and higher endurance than NAND-based drives, although they are more expensive per gigabyte. [75] [76]
Drives known as hybrid drives or solid-state hybrid drives (SSHDs) use a hybrid of spinning disks and flash memory. [77] [78] Some SSDs use magnetoresistive random-access memory (MRAM) for storing data. [79] [80]
Many flash-based SSDs include a small amount of volatile DRAM as a cache, similar to the buffers in hard disk drives. This cache can temporarily hold data while it is being written to the flash memory, and it also stores metadata such as the mapping of logical blocks to physical locations on the SSD. [81]
Some SSD controllers, like those from SandForce, achieve high performance without using an external DRAM cache. These designs rely on other mechanisms, such as on-chip SRAM, to manage data and minimize power consumption. [82]
Additionally, some SSDs use an SLC cache mechanism to temporarily store data in single-level cell (SLC) mode, even on multi-level cell (MLC) or triple-level cell (TLC) SSDs. This improves write performance by allowing data to be written to faster SLC storage before being moved to slower, higher-capacity MLC or TLC storage. [83]
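A sketch of the write path this describes: incoming writes land in a fast pseudo-SLC region and are folded into denser storage later. The region size and function names are hypothetical, chosen only to illustrate the mechanism:

```python
# Toy SLC-cache write path: absorb writes in fast SLC mode,
# fold them into dense TLC blocks in the background.
SLC_CACHE_PAGES = 1024  # assumed size of the pseudo-SLC region

slc_cache: list = []
tlc_store: list = []

def write(page: bytes):
    slc_cache.append(page)           # fast program: 1 bit per cell
    if len(slc_cache) >= SLC_CACHE_PAGES:
        fold()

def fold():
    # Background migration: rewrite cached pages as TLC (3 bits per cell).
    tlc_store.extend(slc_cache)
    slc_cache.clear()
```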
On NVMe SSDs, Host Memory Buffer (HMB) technology allows the SSD to use a portion of the system’s DRAM instead of relying on a built-in DRAM cache, reducing costs while maintaining a high level of performance. [82]
In certain high-end consumer and enterprise SSDs, larger amounts of DRAM are included to cache both file-table mappings and written data, reducing write amplification and enhancing overall performance. [84]
Higher-performing SSDs may include a capacitor or battery, which helps preserve data integrity in the event of an unexpected power loss. The capacitor or battery provides enough power to allow the data in the cache to be written to the non-volatile memory, ensuring no data is lost. [85] [86]
In some SSDs that use multi-level cell (MLC) flash memory, a potential issue known as "lower page corruption" can occur if power is lost while programming an upper page. This can result in previously written data becoming corrupted. To address this, some high-end SSDs incorporate supercapacitors to ensure all data can be safely written during a sudden power loss. [87]
Some consumer SSDs have built-in capacitors to save critical data such as the Flash Translation Layer (FTL) mapping table. Examples include the Crucial M500 and Intel 320 series. [88] Enterprise-class SSDs, such as the Intel DC S3700 series, often come with more robust power-loss protection mechanisms like supercapacitors or batteries. [89]
The host interface of an SSD refers to the physical connector and the signaling methods used to communicate between the SSD and the host system. This interface is managed by the SSD's controller and is often the same as those found in traditional hard disk drives (HDDs). Common interfaces include Serial ATA (SATA), Serial Attached SCSI (SAS), PCI Express (PCIe), USB, and the older Parallel ATA (PATA).
SSDs may support various logical interfaces, which define the command sets used by operating systems to communicate with the SSD. Two common logical interfaces are the Advanced Host Controller Interface (AHCI), typically used with SATA SSDs, and NVM Express (NVMe), designed for PCIe-attached SSDs.
The size and shape of any device are largely driven by the size and shape of the components used to make that device. Traditional HDDs and optical drives are designed around the rotating platter(s) or optical disc along with the spindle motor inside. Since an SSD is made up of various interconnected integrated circuits (ICs) and an interface connector, its shape is no longer limited to the shape of rotating media drives. Some solid-state storage solutions come in a larger chassis that may even be a rack-mount form factor with numerous SSDs inside. They would all connect to a common bus inside the chassis and connect outside the box with a single connector. [3]
For general computer use, the 2.5-inch form factor (typically found in laptops and used for most SATA SSDs) is the most popular, in three thicknesses [98] (7.0 mm, 9.5 mm, and 14.8/15.0 mm, with 12.0 mm also available for some models). For desktop computers with 3.5-inch hard disk drive slots, a simple adapter plate can be used to make such a drive fit. Other form factors are more common in enterprise applications. An SSD can also be completely integrated into the other circuitry of the device, as in the Apple MacBook Air (starting with the fall 2010 model). [99] As of 2014, the mSATA and M.2 form factors had also gained popularity, primarily in laptops.
The benefit of using a current HDD form factor would be to take advantage of the extensive infrastructure already in place to mount and connect the drives to the host system. [3] [100] These traditional form factors are known by the size of the rotating media (i.e., 5.25-inch, 3.5-inch, 2.5-inch or 1.8-inch) and not the dimensions of the drive casing.
For applications where space is at a premium, like for ultrabooks or tablet computers, a few compact form factors were standardized for flash-based SSDs.
There is the mSATA form factor, which uses the PCI Express Mini Card physical layout. It remains electrically compatible with the PCI Express Mini Card interface specification while requiring an additional connection to the SATA host controller through the same connector.
The M.2 form factor, formerly known as the Next Generation Form Factor (NGFF), is a natural transition from mSATA and the physical layout it used to a more usable and more advanced form factor. While mSATA took advantage of an existing form factor and connector, M.2 was designed to maximize usage of the card space while minimizing the footprint. The M.2 standard allows both SATA and PCI Express SSDs to be fitted onto M.2 modules. [101]
Some high-performance, high-capacity drives use the standard PCI Express add-in card form factor to house additional memory chips, permit the use of higher power levels, and allow the use of a large heat sink. There are also adapter boards that convert other form factors, especially M.2 drives with a PCIe interface, into regular add-in cards.
A disk-on-a-module (DOM) is a flash drive with either a 40/44-pin Parallel ATA (PATA) or SATA interface, intended to be plugged directly into the motherboard and used in place of a computer hard disk drive (HDD). DOM devices emulate a traditional hard disk drive, so no special drivers or other specific operating system support are required. DOMs are usually used in embedded systems, which are often deployed in harsh environments where mechanical HDDs would simply fail, or in thin clients because of their small size, low power consumption, and silent operation.
As of 2016, storage capacities range from 4 MB to 128 GB, with different variations in physical layouts, including vertical or horizontal orientation.[citation needed]
Many of the DRAM-based solutions use a box that is often designed to fit in a rack-mount system. The number of DRAM components required to get sufficient capacity to store the data along with the backup power supplies requires a larger space than traditional HDD form factors. [102]
Form factors which were more common to memory modules are now being used by SSDs to take advantage of their flexibility in laying out the components. Some of these include PCIe, mini PCIe, mini-DIMM, MO-297, and many more. [103] The SATADIMM from Viking Technology uses an empty DDR3 DIMM slot on the motherboard to provide power to the SSD with a separate SATA connector to provide the data connection back to the computer. The result is an easy-to-install SSD with a capacity equal to drives that typically take a full 2.5-inch drive bay. [104] At least one manufacturer, Innodisk, has produced a drive that sits directly on the SATA connector (SATADOM) on the motherboard without any need for a power cable. [105] Some SSDs are based on the PCIe form factor and connect both the data interface and power through the PCIe connector to the host. These drives can use either direct PCIe flash controllers [106] or a PCIe-to-SATA bridge device which then connects to SATA flash controllers. [107]
There are also SSDs in the form of PCIe add-in cards; these are sometimes called HHHL (half-height, half-length) or AIC (add-in card) SSDs. [108] [109] [110]
In the early 2000s, a few companies introduced SSDs in Ball Grid Array (BGA) form factors, such as M-Systems' (now SanDisk) DiskOnChip [111] and Silicon Storage Technology's NANDrive [112] [113] (now produced by Greenliant Systems), and Memoright's M1000 [114] for use in embedded systems. The main benefits of BGA SSDs are their low power consumption, small chip package size to fit into compact subsystems, and that they can be soldered directly onto a system motherboard to reduce adverse effects from vibration and shock. [115]
Such embedded drives often adhere to the eMMC and eUFS standards.
The first devices resembling solid-state drives (SSDs) used semiconductor technology, with an early example being the 1978 StorageTek STC 4305. This device was a plug-compatible replacement for the IBM 2305 hard drive, initially using charge-coupled devices for storage and later switching to dynamic random-access memory (DRAM). The STC 4305 was significantly faster than its mechanical counterparts and cost around $400,000 for a 45 MB capacity. [116] Though early SSD-like devices existed, they were not widely used due to their high cost and small storage capacity.
In the late 1980s, companies like Zitel began selling DRAM-based SSD products under the name "RAMDisk." These devices were primarily used in specialized systems like those made by UNIVAC and Perkin-Elmer.
Parameter | Started with | Developed to | Improvement |
---|---|---|---|
Capacity | 20 MB | 100 TB [117] | 5-million-to-one [118] |
Sequential read speed | 49.3 MB/s [119] | 15 GB/s [120] | 304.25-to-one [121] |
Sequential write speed | 80 MB/s [122] [123] | 15.2 GB/s [120] | 190-to-one [124] |
IOPS | 79 [119] | 2,500,000 [120] | 31,645.56-to-one [125] |
Access time | 0.5 ms [119] | 0.045 ms read, 0.013 ms write [126] | Read: 11-to-one, [127] Write: 38-to-one [128] |
Price | US$50,000 per gigabyte [129] | US$0.05 per gigabyte [130] | 1,000,000-to-one [131] |
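The ratios in the table follow directly from its endpoint figures once the units are converted to a common base, as a quick check shows:

```python
# Check the table's improvement ratios from its endpoint figures.
print(100e12 / 20e6)     # capacity: 100 TB / 20 MB           -> 5,000,000.0
print(15_000 / 49.3)     # seq. read: 15,000 MB/s / 49.3 MB/s -> ~304.26
print(2_500_000 / 79)    # IOPS: 2,500,000 / 79               -> ~31,645.57
print(50_000 * 100 / 5)  # price: $50,000/GB over 5 cents/GB  -> 1,000,000.0
```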
Flash memory, a key component in modern SSDs, was invented in 1980 by Fujio Masuoka at Toshiba. [132] [133] Flash-based SSDs were patented in 1989 by the founders of SanDisk, [134] which released its first product in 1991: a 20 MB SSD for IBM laptops. [135] While the storage capacity was limited and the price high (around $1,000), this marked the beginning of a transition to flash memory as an alternative to traditional hard drives. [136]
In the 1990s, new manufacturers of flash memory drives emerged, including STEC, Inc., [137] M-Systems, [138] [139] and BiTMICRO. [140] [141]
As the technology advanced, SSDs saw dramatic improvements in capacity, speed, and affordability. [142] [143] [144] [145] By 2016, commercially available SSDs had more capacity than the largest available HDDs. [146] [147] [148] [149] [150] By 2018, flash-based SSDs had reached capacities of up to 100 TB in enterprise products, with consumer SSDs offering up to 16 TB. [117] These advancements were accompanied by significant increases in read and write speeds, with some high-end consumer models reaching speeds of up to 14.5 GB/s. [120]
In 2021, NVMe 2.0 with Zoned Namespaces (ZNS) was announced. ZNS allows data to be mapped directly to its physical location in memory, providing direct access on an SSD without a flash translation layer. [151] In 2024, Samsung announced what it called the world's first SSD with a hybrid PCIe interface, the Samsung 990 EVO. The hybrid interface runs in either the x4 PCIe 4.0 or x2 PCIe 5.0 modes, a first for an M.2 SSD. [152]
SSD prices have also fallen dramatically, with the cost per gigabyte decreasing from around $50,000 in 1991 to less than $0.05 by 2020. [130]
Enterprise flash drives (EFDs) are designed for high-performance applications requiring fast input/output operations per second (IOPS), reliability, and energy efficiency. EFDs often have higher specifications than consumer SSDs, making them suitable for mission-critical applications. The term was first used by EMC in 2008 to describe SSDs built for enterprise environments. [153] [154]
One example of an EFD is the Intel DC S3700 series, launched in 2012. These drives were notable for their consistent performance, maintaining IOPS variation within a narrow range, which is crucial for enterprise environments. [155]
Another significant product is the Toshiba PX02SS series, launched in 2016. Designed for write-intensive applications like online transaction processing, these drives achieved impressive read and write speeds and high endurance ratings. [156]
In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand. Unlike NAND flash, 3D XPoint uses a different method to store data, offering higher IOPS performance, although sequential read and write speeds remain slower compared to traditional SSDs. [157]
As SSD technology continues to improve, SSDs are increasingly used in ultra-mobile PCs and lightweight laptop systems. The first flash-based SSD PC to become available was the Sony Vaio UX90, announced for pre-order on 27 June 2006; it began shipping in Japan on 3 July 2006 with a 16 GB flash memory hard drive. [158] Another of the first mainstream releases of SSD was the XO Laptop, built as part of the One Laptop Per Child project. Mass production of these computers, built for children in developing countries, began in December 2007. By 2009, Dell, [159] [160] [161] Toshiba, [162] [163] Asus, [164] Apple, [165] and Lenovo [166] had begun producing laptops with SSDs.
By 2010, Apple's MacBook Air line began using solid-state drives as the default. [167] [165] In 2011, Intel's Ultrabooks became the first widely available consumer computers using SSDs aside from the MacBook Air. [168] At present, SSD devices are widely used and distributed by a number of companies, with a small number of companies manufacturing the NAND flash devices within them. [169]
SSD shipments were 11 million units in 2009, [170] 17.3 million units in 2011 [171] for a total of US$5 billion, [172] and 39 million units in 2012; they were expected to rise to 83 million units in 2013, [173] 201.4 million units in 2016, [171] and 227 million units in 2017. [174]
Revenues for the SSD market worldwide totaled $585 million in 2008, rising over 100% from $259 million in 2007. [175]
The same file systems used on hard disk drives can typically also be used on solid state drives. File systems that support SSDs generally also support the TRIM command, which helps the SSD to recycle discarded data. The file system does not need to manage wear leveling or other flash memory characteristics, as they are handled internally by the SSD. Some log-structured file systems (e.g. F2FS, JFFS2) help to reduce write amplification on SSDs, especially in situations where only very small amounts of data are changed, such as when updating file-system metadata.
If an operating system does not support using TRIM on discrete swap partitions, it might be possible to use swap files inside an ordinary file system instead. For example, OS X does not support swap partitions; it only swaps to files within a file system, so it can use TRIM when, for example, swap files are deleted.[ citation needed ]
Since 2010, standard Linux drive utilities have taken care of appropriate partition alignment by default. [176]
Kernel support for the TRIM operation was introduced in version 2.6.33 of the Linux kernel mainline, released on 24 February 2010. [177] The ext4, Btrfs, XFS, JFS, and F2FS file systems include support for the discard (TRIM or UNMAP) function. To make use of TRIM, a file system must be mounted with the discard parameter. Linux swap partitions perform discard operations by default when the underlying drive supports TRIM, with the possibility to turn them off. [178] [179] [180] Support for queued TRIM, a SATA 3.1 feature that results in TRIM commands not disrupting the command queues, was introduced in Linux kernel 3.12, released on 2 November 2013. [181]
An alternative to the kernel-level TRIM operation is to use a user-space utility called fstrim that goes through all of the unused blocks in a filesystem and dispatches TRIM commands for those areas. The fstrim utility is usually run by cron as a scheduled task. [182]
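A minimal sketch of dispatching that batch TRIM from Python rather than directly from cron. The fstrim utility and its -v flag are standard util-linux; the mount point is an assumption, and the command requires root privileges:

```python
import subprocess

# Dispatch TRIM for all unused blocks of a mounted filesystem, as the
# fstrim(8) utility does when run periodically; requires root.
result = subprocess.run(
    ["fstrim", "-v", "/"],   # "/" is an assumed mount point
    capture_output=True, text=True, check=True,
)
print(result.stdout)         # e.g. "/: ... bytes trimmed"
```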
During installation, Linux distributions usually do not configure the installed system to use TRIM, and thus the /etc/fstab file requires manual modification. [183] This is because the current Linux TRIM command implementation might not be optimal. [184] It has been shown to cause performance degradation instead of a performance increase under certain circumstances. [185] [186] As of January 2014, Linux sends an individual TRIM command to each sector, instead of a vectorized list defining a TRIM range as recommended by the TRIM specification. [187]
For performance reasons, it is often recommended to switch the I/O scheduler from the default CFQ (Completely Fair Queuing) to NOOP or Deadline. CFQ was designed for traditional magnetic media and seek optimization, so many of those I/O scheduling efforts are wasted when used with SSDs. As part of their design, SSDs offer much greater parallelism for I/O operations, so it is preferable to leave scheduling decisions to their internal logic, especially for high-end SSDs. [188] [189]
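The scheduler is selected per device through sysfs. A sketch, assuming a device named sda; note that the write requires root, and that newer multi-queue (blk-mq) kernels expose schedulers named none and mq-deadline instead of noop and deadline:

```python
# Switch the I/O scheduler for an assumed device "sda" via sysfs.
dev = "sda"
path = f"/sys/block/{dev}/queue/scheduler"

with open(path) as f:
    print(f.read().strip())  # available schedulers; current one in brackets

with open(path, "w") as f:   # requires root
    f.write("noop")          # use "none" on multi-queue (blk-mq) kernels
```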
A scalable block layer for high-performance SSD storage, known as blk-multiqueue or blk-mq and developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on 19 January 2014. This leverages the performance offered by SSDs and NVMe by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU and hardware-submission queues), thus removing bottlenecks and allowing much higher levels of I/O parallelization. As of version 4.0 of the Linux kernel, released on 12 April 2015, VirtIO block driver, the SCSI layer (which is used by Serial ATA drivers), device mapper framework, loop device driver, unsorted block images (UBI) driver (which implements erase block management layer for flash memory devices) and RBD driver (which exports Ceph RADOS objects as block devices) have been modified to actually use this new interface; other drivers will be ported in the following releases. [190] [191] [192] [193] [194]
Versions since Mac OS X 10.6.8 (Snow Leopard) support TRIM, but only when used with an Apple-purchased SSD. [195] TRIM is not automatically enabled for third-party drives, although it can be enabled by using third-party utilities such as Trim Enabler. The status of TRIM can be checked in the System Information application or with the system_profiler command-line tool.
Versions since OS X 10.10.4 (Yosemite) include sudo trimforce enable as a Terminal command that enables TRIM on non-Apple SSDs. [196] There is also a technique to enable TRIM in versions earlier than Mac OS X 10.6.8, although it remains uncertain whether TRIM is actually utilized properly in those cases. [197]
Prior to version 7, Microsoft Windows did not take any specific measures to support solid state drives. From Windows 7, the standard NTFS file system provides support for the TRIM command. [198]
By default, Windows 7 and newer versions execute TRIM commands automatically if the device is detected to be a solid-state drive. However, because TRIM irreversibly resets all freed space, it may be desirable to disable support where enabling data recovery is preferred over wear leveling. [199] Windows implements TRIM for more than just file-delete operations. The TRIM operation is fully integrated with partition- and volume-level commands such as format and delete, with file-system commands relating to truncate and compression, and with the System Restore (also known as Volume Snapshot) feature. [200]
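The automatic-TRIM setting described above can be inspected with the built-in fsutil tool, where DisableDeleteNotify = 0 means TRIM commands are being issued. A sketch invoking it from Python on a Windows system; elevated privileges may be required:

```python
import subprocess

# Query whether Windows delete notifications (TRIM) are enabled.
# "DisableDeleteNotify = 0" in the output means TRIM is active.
out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
```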
Defragmentation should be disabled on solid-state drives because the location of file components on an SSD does not significantly impact its performance, while moving files to make them contiguous using the Windows Defrag routine causes unnecessary write wear on the SSD's limited write cycles. The SuperFetch feature does not materially improve performance on SSDs and causes additional overhead in the system and SSD. [201]
Windows Vista generally expects hard disk drives rather than SSDs. [202] [203] Windows Vista includes ReadyBoost to exploit characteristics of USB-connected flash devices, but for SSDs it only improves the default partition alignment to prevent read-modify-write operations that reduce the speed of SSDs. SSDs are typically split into 4 KiB sectors, while earlier systems may be based on 512-byte sectors with their default partition setups unaligned to the 4 KiB boundaries. [204] Windows Vista does not send the TRIM command to solid-state drives, but some third-party utilities such as SSD Doctor will periodically scan the drive and TRIM the appropriate entries. [205]
Windows 7 and later versions have native support for SSDs. [200] [206] The operating system detects the presence of an SSD and optimizes operation accordingly. For SSD devices, Windows 7 disables ReadyBoost and automatic defragmentation. [207] Despite the initial statement by Steven Sinofsky before the release of Windows 7, [200] defragmentation is not disabled, even though its behavior on SSDs differs. [208] One reason is the low performance of Volume Shadow Copy Service on fragmented SSDs. [208] The second reason is to avoid reaching the practical maximum number of file fragments that a volume can handle. [208]
Windows 7 also includes support for the TRIM command to reduce garbage collection for data that the operating system has already determined is no longer valid. [209] [210]
Windows 8.1 and later Windows systems also support automatic TRIM for PCI Express SSDs based on NVMe. For Windows 7, the KB2990941 update is required for this functionality, and it needs to be integrated into Windows Setup using DISM if Windows 7 is to be installed on an NVMe SSD. Windows 8/8.1 also supports the SCSI unmap command, an analog of SATA TRIM, for USB-attached SSDs or SATA-to-USB enclosures. It is also supported over the USB Attached SCSI Protocol (UASP).
While Windows 7 supported automatic TRIM for internal SATA SSDs, Windows 8.1 and Windows 10 support manual TRIM as well as automatic TRIM for SATA, NVMe and USB-attached SSDs. Disk Defragmenter in Windows 10 and 11 may execute TRIM to optimize an SSD. [211]
Solaris, as of version 10 Update 6 (released in October 2008), and later versions of OpenSolaris, Solaris Express Community Edition, Illumos, Linux with ZFS on Linux, and FreeBSD can all use SSDs as a performance booster for ZFS. A low-latency SSD can be used for the ZFS Intent Log (ZIL), where it is named the SLOG. An SSD may also be used for the level 2 Adaptive Replacement Cache (L2ARC), which caches data for reading. [212]
ZFS for FreeBSD introduced support for TRIM on September 23, 2012. [213] The Unix File System also supports the TRIM command. [214]
The following are noted standardization organizations and bodies that work to create standards for solid-state drives (and other computer storage devices). The table below also includes organizations which promote the use of solid-state drives. This is not necessarily an exhaustive list.
Organization or committee | Subcommittee of: | Purpose |
---|---|---|
INCITS | — | Coordinates technical standards activity between ANSI in the US and joint ISO/IEC committees worldwide |
T10 | INCITS | SCSI |
T11 | INCITS | FC |
T13 | INCITS | ATA |
JEDEC | — | Develops open standards and publications for the microelectronics industry |
JC-64.8 | JEDEC | Focuses on solid-state drive standards and publications |
NVMHCI | — | Provides standard software and hardware programming interfaces for nonvolatile memory subsystems |
SATA-IO | — | Provides the industry with guidance and support for implementing the SATA specification |
SFF Committee | — | Works on storage industry standards needing attention when not addressed by other standards committees |
SNIA | — | Develops and promotes standards, technologies, and educational services in the management of information |
SSSI | SNIA | Fosters the growth and success of solid state storage |