Non-standard RAID levels

Although all RAID implementations differ from the specification to some extent, some companies and open-source projects have developed non-standard RAID implementations that differ substantially from the standard. Additionally, there are non-RAID drive architectures, providing configurations of multiple hard drives not referred to by RAID acronyms.

RAID-DP

Row-diagonal parity is a scheme in which one dedicated parity disk holds horizontal "row" parity, as in RAID 4, while a second dedicated parity disk holds parity computed over diagonally permuted blocks, as in RAID 5 and 6. [1] Alternative terms for "row" and "diagonal" include "dedicated" and "distributed". [2] Invented by NetApp, it is offered as RAID-DP in their ONTAP systems. [3] The technique can be considered RAID 6 in the broad SNIA definition [4] and has the same failure characteristics as RAID 6. The performance penalty of RAID-DP is typically under 2% compared to a similar RAID 4 configuration. [5]
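The row half of the scheme is ordinary XOR parity. A minimal Python sketch (with a toy 4×4 geometry, not NetApp's actual layout) shows how a row-parity equation recovers one failed disk, and how a second, diagonal equation is formed:

```python
# Toy model of double parity: "row" parity as in RAID 4, plus a second
# XOR equation over diagonally permuted blocks. The 4x4 geometry here is
# illustrative only and is not NetApp's actual RAID-DP layout.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Four data disks, four rows (stripes) of 4-byte blocks each
data = [[bytes([d * 16 + r] * 4) for r in range(4)] for d in range(4)]

# Row parity: XOR across disks within each row, as in RAID 4
row_parity = [xor_blocks([data[d][r] for d in range(4)]) for r in range(4)]

# Diagonal parity: XOR over blocks lying on the same (disk, row) diagonal
diag_parity = [xor_blocks([data[d][(k - d) % 4] for d in range(4)]) for k in range(4)]

# Losing a single disk is repaired from the row equations alone
failed = 2
rebuilt = xor_blocks([data[d][1] for d in range(4) if d != failed] + [row_parity[1]])
assert rebuilt == data[failed][1]
```

Recovering from a double failure requires solving the row and diagonal equations together, which is where the diagonal permutation matters.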

RAID 5E, RAID 5EE, and RAID 6E

RAID 5E, RAID 5EE, and RAID 6E (with the added E standing for Enhanced) generally refer to variants of RAID 5 or 6 with an integrated hot-spare drive, where the spare drive is an active part of the block rotation scheme. This spreads I/O across all drives, including the spare, thus reducing the load on each drive, increasing performance. It does, however, prevent sharing the spare drive among multiple arrays, which is occasionally desirable. [6]

Intel Matrix RAID

Diagram of an Intel Matrix RAID setup

Intel Matrix RAID (a feature of Intel Rapid Storage Technology) is a capability, not a RAID level, present in the ICH6R and subsequent Southbridge chipsets from Intel, accessible and configurable via the RAID BIOS setup utility. Matrix RAID supports as few as two physical disks, up to as many as the controller allows. Its distinguishing feature is that it allows any assortment of RAID 0, 1, 5, or 10 volumes in the array, to which a controllable (and identical) portion of each disk is allocated. [7] [8] [9]

As such, a Matrix RAID array can improve both performance and data integrity. A practical instance would use a small RAID 0 (stripe) volume for the operating system, programs, and paging files, and a second, larger RAID 1 (mirror) volume to store critical data. Linux MD RAID is also capable of this. [7] [8] [9]
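The capacity arithmetic of such a split can be sketched as follows (the disk and slice sizes are made-up example numbers, not anything Intel prescribes):

```python
# Hypothetical Matrix RAID split of two identical disks: the same slice
# of each disk backs a RAID 0 volume, and the remainder backs a RAID 1
# volume. All sizes are illustrative assumptions.
disk_gb = 1000         # each of the two physical disks
stripe_slice_gb = 200  # portion of each disk given to the RAID 0 volume

raid0_usable = 2 * stripe_slice_gb          # striping adds capacities
raid1_usable = disk_gb - stripe_slice_gb    # mirroring keeps one copy usable

assert (raid0_usable, raid1_usable) == (400, 800)
```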

Linux MD RAID 10

The software RAID subsystem provided by the Linux kernel, called md, supports the creation of both classic (nested) RAID 1+0 arrays, and non-standard RAID arrays that use a single-level RAID layout with some additional features. [10] [11]

The standard "near" layout, in which each chunk is repeated n times in a k-way stripe array, is equivalent to the standard RAID 10 arrangement, but it does not require that n evenly divides k. For example, an n2 layout on two, three, and four drives would look like: [12] [13]

2 drives         3 drives          4 drives
--------         ----------        --------------
A1  A1           A1  A1  A2        A1  A1  A2  A2
A2  A2           A2  A3  A3        A3  A3  A4  A4
A3  A3           A4  A4  A5        A5  A5  A6  A6
A4  A4           A5  A6  A6        A7  A7  A8  A8
..  ..           ..  ..  ..        ..  ..  ..  ..

The four-drive example is identical to a standard RAID 1+0 array, while the three-drive example is a software implementation of RAID 1E. The two-drive example is equivalent to RAID 1. [13]
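The "near" placement rule is simple enough to state in a few lines of Python; this sketch merely reproduces the tables above and is not how md itself is implemented:

```python
def near_layout(k, n, rows):
    """Table of chunk labels for md's RAID 10 "near" layout:
    each chunk is written n times, filling k drives row by row."""
    table, i = [], 0
    for _ in range(rows):
        row = []
        for _ in range(k):
            row.append(f"A{i // n + 1}")
            i += 1
        table.append(row)
    return table

# Matches the first rows of the n2 diagrams above
assert near_layout(2, 2, 2) == [["A1", "A1"], ["A2", "A2"]]
assert near_layout(3, 2, 2) == [["A1", "A1", "A2"], ["A2", "A3", "A3"]]
assert near_layout(4, 2, 1) == [["A1", "A1", "A2", "A2"]]
```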

The driver also supports a "far" layout, in which all the drives are divided into f sections. All the chunks are repeated in each section but are switched in groups (for example, in pairs). For example, f2 layouts on two-, three-, and four-drive arrays would look like this: [12] [13]

2 drives             3 drives             4 drives
--------             ------------         ------------------
A1  A2               A1   A2   A3         A1   A2   A3   A4
A3  A4               A4   A5   A6         A5   A6   A7   A8
A5  A6               A7   A8   A9         A9   A10  A11  A12
..  ..               ..   ..   ..         ..   ..   ..   ..
A2  A1               A3   A1   A2         A2   A1   A4   A3
A4  A3               A6   A4   A5         A6   A5   A8   A7
A6  A5               A9   A7   A8         A10  A9   A12  A11
..  ..               ..   ..   ..         ..   ..   ..   ..

"Far" layout is designed for offering striping performance on a mirrored array; sequential reads can be striped, as in RAID 0 configurations. [14] Random reads are somewhat faster, while sequential and random writes offer about equal speed to other mirrored RAID configurations. "Far" layout performs well for systems in which reads are more frequent than writes, which is a common case. For a comparison, regular RAID 1 as provided by Linux software RAID, does not stripe reads, but can perform reads in parallel. [15]

The md driver also supports an "offset" layout, in which each stripe is repeated o times and offset by f (far) devices. For example, o2 layouts on two-, three-, and four-drive arrays are laid out as: [12] [13]

2 drives       3 drives           4 drives
--------       ----------         ---------------
A1  A2         A1  A2  A3         A1  A2  A3  A4
A2  A1         A3  A1  A2         A4  A1  A2  A3
A3  A4         A4  A5  A6         A5  A6  A7  A8
A4  A3         A6  A4  A5         A8  A5  A6  A7
A5  A6         A7  A8  A9         A9  A10 A11 A12
A6  A5         A9  A7  A8         A12 A9  A10 A11
..  ..         ..  ..  ..         ..  ..  ..  ..
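Unlike "far", the offset rotation applies stripe by stripe, which the following sketch reproduces (again only a model of the tables above, not md's implementation):

```python
def offset_layout(k, o, stripes):
    """Sketch of md's RAID 10 "offset" layout: each k-chunk stripe is
    written o times, with each repeat rotated one drive to the right."""
    rows = []
    for s in range(stripes):
        stripe = [f"A{s * k + d + 1}" for d in range(k)]
        for rep in range(o):
            rows.append([stripe[(d - rep) % k] for d in range(k)])
    return rows

# Matches the first rows of the o2 diagrams above
assert offset_layout(3, 2, 2) == [
    ["A1", "A2", "A3"], ["A3", "A1", "A2"],
    ["A4", "A5", "A6"], ["A6", "A4", "A5"],
]
assert offset_layout(4, 2, 1) == [["A1", "A2", "A3", "A4"],
                                  ["A4", "A1", "A2", "A3"]]
```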

It is also possible to combine "near" and "offset" layouts (but not "far" and "offset"). [13]

In the examples above, k is the number of drives, while n#, f#, and o# are given as parameters to mdadm's --layout option. Linux software RAID (Linux kernel's md driver) also supports creation of standard RAID 0, 1, 4, 5, and 6 configurations. [16] [17]

RAID 1E

Diagram of a RAID 1E setup

Some RAID 1 implementations treat arrays with more than two disks differently, creating a non-standard RAID level known as RAID 1E. In this layout, data striping is combined with mirroring, by mirroring each written stripe to one of the remaining disks in the array. Usable capacity of a RAID 1E array is 50% of the total capacity of all drives forming the array; if drives of different sizes are used, only portions equal to the size of the smallest member are utilized on each drive. [18] [19]
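The capacity rule translates directly into code; the drive sizes below are arbitrary examples:

```python
def raid1e_usable_gb(drive_sizes_gb):
    """Usable capacity of a RAID 1E array: every member contributes only
    up to the smallest drive's size, and half of that goes to mirror
    copies."""
    return len(drive_sizes_gb) * min(drive_sizes_gb) / 2

assert raid1e_usable_gb([500, 500, 500]) == 750
assert raid1e_usable_gb([500, 400, 300]) == 450  # limited by the 300 GB drive
```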

One of the benefits of RAID 1E over usual RAID 1 mirrored pairs is that the performance of random read operations remains above the performance of a single drive even in a degraded array. [18]

RAID-Z

The ZFS filesystem provides RAID-Z, a data/parity distribution scheme similar to RAID 5, but using dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read–modify–write sequence. RAID-Z does not require any special hardware, such as NVRAM for reliability, or write buffering for performance. [20]

Given the dynamic nature of RAID-Z's stripe width, RAID-Z reconstruction must traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this. [20]

In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering "self-healing data": when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor. [20]
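The self-healing read path can be sketched with single XOR parity and per-block checksums. SHA-256 stands in here for ZFS's configurable 256-bit checksum, and the stripe is a toy, not ZFS's on-disk format:

```python
import hashlib
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# A stripe of equal-sized data blocks plus single XOR parity (RAID-Z1-like)
blocks = [b"alpha--.", b"bravo--.", b"delta--."]
parity = reduce(xor, blocks)
checksums = [hashlib.sha256(b).digest() for b in blocks]

blocks[1] = b"XXXXX--."  # simulate silent corruption on the second disk

# A checksum mismatch identifies the bad disk; parity rebuilds its data
bad = next(i for i, b in enumerate(blocks)
           if hashlib.sha256(b).digest() != checksums[i])
healed = reduce(xor, [b for i, b in enumerate(blocks) if i != bad] + [parity])
assert bad == 1 and healed == b"bravo--."
```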

There are five different RAID-Z modes: RAID-Z0 (similar to RAID 0, offers no redundancy), RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), RAID-Z3 (a RAID 7 [note 1] configuration, allows three disks to fail), and mirror (similar to RAID 1, allows all but one of the disks to fail). [22]

Drive Extender

Windows Home Server Drive Extender is a specialized case of JBOD RAID 1 implemented at the file system level. [23]

Microsoft announced in 2011 that Drive Extender would no longer be included in Windows Home Server Version 2, Windows Home Server 2011 (codename VAIL). [24] As a result, third-party vendors have moved to fill the void left by Drive Extender; competitors include Division M, the developers of Drive Bender, and StableBit's DrivePool. [25] [26]

BeyondRAID

BeyondRAID is not a true RAID extension, but consolidates up to 12 SATA hard drives into one pool of storage. [27] It has the advantage of supporting multiple disk sizes at once, much like JBOD, while providing redundancy for all disks and allowing a hot-swap upgrade at any time. Internally it uses a mix of techniques similar to RAID 1 and 5. Depending on the fraction of data in relation to capacity, it can survive up to three drive failures,[ citation needed ] if the "array" can be restored onto the remaining good disks before another drive fails. The amount of usable storage can be approximated by summing the capacities of the disks and subtracting the capacity of the largest disk. For example, if a 500, 400, 200, and 100 GB drive were installed, the approximate usable capacity would be 500 + 400 + 200 + 100 − 500 = 700 GB. Internally the data would be distributed in two RAID 5-like arrays and two RAID 1-like sets:

 Drives
 | 100 GB |  | 200 GB |  | 400 GB |  | 500 GB |
                                      ----------
                                     |   x    |  unusable space (100 GB)
                                      ----------
                          ----------  ----------
                         |   A1   |  |   A1   |  RAID 1 set (2× 100 GB)
                          ----------  ----------
                          ----------  ----------
                         |   B1   |  |   B1   |  RAID 1 set (2× 100 GB)
                          ----------  ----------
              ----------  ----------  ----------
             |   C1   |  |   C2   |  |   Cp   |  RAID 5 array (3× 100 GB)
              ----------  ----------  ----------
  ----------  ----------  ----------  ----------
 |   D1   |  |   D2   |  |   D3   |  |   Dp   |  RAID 5 array (4× 100 GB)
  ----------  ----------  ----------  ----------
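The capacity approximation from the text, expressed as code:

```python
def beyondraid_usable_gb(drive_sizes_gb):
    """Approximate BeyondRAID usable capacity: total capacity minus the
    largest drive, whose worth of space goes to redundancy."""
    return sum(drive_sizes_gb) - max(drive_sizes_gb)

assert beyondraid_usable_gb([500, 400, 200, 100]) == 700
```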

BeyondRAID offers a RAID 6-like feature and can perform hash-based compression using 160-bit SHA-1 hashes to maximize storage efficiency. [28]

Unraid

Unraid is a proprietary Linux-based operating system optimized for media file storage. [29]
Unraid does not publish details of its storage technology, but some[ who? ] say its parity array is a rewrite of the mdadm module.

Disadvantages include closed-source code, high price[ citation needed ], slower write performance than a single disk[ citation needed ], and bottlenecks when multiple drives are written concurrently. However, Unraid supports a cache pool, which can dramatically speed up write performance. Cache pool data can be temporarily protected using Btrfs RAID 1 until Unraid moves it to the array on a schedule set within the software.[ citation needed ]

Advantages include lower power consumption than standard RAID levels and the ability to use multiple hard drives of differing sizes to their full capacity. In addition, if multiple concurrent hard drive failures exceed the redundancy, only the data stored on the failed drives is lost; with standard RAID levels that employ striping, all data on the array is lost when more drives fail than the redundancy can handle. [30]

CRYPTO softraid

In OpenBSD, CRYPTO is an encrypting discipline for the softraid subsystem. It encrypts data on a single chunk to provide for data confidentiality. CRYPTO does not provide redundancy. [31] RAID 1C provides both redundancy and encryption. [31]

DUP profile

Some filesystems, such as Btrfs [32] and ZFS/OpenZFS (with the per-dataset copies=1|2|3 property), [33] support creating multiple copies of the same data on a single drive or disk pool, protecting from individual bad sectors but not from large numbers of bad sectors or complete drive failure. This allows some of the benefits of RAID on computers that can only accept a single drive, such as laptops.

Declustered RAID

Declustered RAID allows for arbitrarily sized disk arrays while reducing the overhead to clients when recovering from disk failures. It uniformly spreads or declusters user data, redundancy information, and spare space across all the disks of a declustered array. Under traditional RAID, an entire disk storage system of, say, 100 disks would be split into multiple arrays each of, say, 10 disks. By contrast, under declustered RAID, the entire storage system is used to make one array. Every data item is written twice, as in mirroring, but logically adjacent data and copies are spread arbitrarily. When a disk fails, erased data is rebuilt using all the operational disks in the array, the bandwidth of which is greater than that of the fewer disks of a conventional RAID group. Furthermore, if an additional disk fault occurs during a rebuild, the number of impacted tracks requiring repair is markedly less than the previous failure and less than the constant rebuild overhead of a conventional array. The decrease in declustered rebuild impact and client overhead can be a factor of three to four times less than a conventional RAID. File system performance becomes less dependent upon the speed of any single rebuilding storage array. [34]
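A toy model of the rebuild-bandwidth argument (the disk counts are illustrative, not taken from IBM's implementation):

```python
def rebuild_participants(total_disks, group_size, declustered):
    """Number of surviving disks that can contribute rebuild bandwidth
    after one failure: the whole array if declustered, otherwise only
    the failed disk's conventional RAID group."""
    return (total_disks - 1) if declustered else (group_size - 1)

# 100 disks split into ten 10-disk groups vs. one declustered array
assert rebuild_participants(100, 10, declustered=False) == 9
assert rebuild_participants(100, 10, declustered=True) == 99
```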

Dynamic disk pooling (DDP), also known as D-RAID, maintains performance even when up to 2 drives fail simultaneously. [35] DDP is a high performance type of declustered RAID. [36]

Explanatory notes

  1. While RAID 7 is not a standard RAID level, it has been proposed as a catch-all term for any RAID configuration with more than double parity. [21]

References

  1. Peter Corbett; Bob English; Atul Goel; Tomislav Grcanac; Steven Kleiman; James Leong & Sunitha Sankar (2004). "Row-Diagonal Parity for Double Disk Failure Correction" (PDF). USENIX Association. Archived (PDF) from the original on 2013-11-22. Retrieved 2013-11-22.
  2. Fischer, Werner. "RAID-DP". thomas-krenn. Retrieved 26 May 2023.
  3. White, Jay; Lueth, Chris; Bell, Jonathan (March 2003). "RAID-DP: NetApp Implementation of Double-Parity RAID for Data Protection" (PDF). NetApp.com. Network Appliance. Retrieved 2014-06-07.
  4. "Dictionary R". SNIA.org. Storage Networking Industry Association. Retrieved 2007-11-24.
  5. White, Jay; Alvarez, Carlos (October 2011). "Back to Basics: RAID-DP | NetApp Community". NetApp.com. NetApp. Retrieved 2014-08-25.
  6. "Non-standard RAID levels". RAIDRecoveryLabs.com. Archived from the original on 2013-12-15. Retrieved 2013-12-15.
  7. "Intel's Matrix RAID Explored". The Tech Report. 2005-03-09. Retrieved 2014-04-02.
  8. "Setting Up RAID Using Intel Matrix Storage Technology". HP.com. Hewlett Packard. Retrieved 2014-04-02.
  9. "Intel Matrix Storage Technology". Intel.com. Intel. 2011-11-05. Retrieved 2014-04-02.
  10. "Creating Software RAID 10 Devices". SUSE. Retrieved 11 May 2016.
  11. "Nested RAID Levels". Arch Linux. Retrieved 11 May 2016.
  12. "Creating a Complex RAID 10". SUSE. Retrieved 11 May 2016.
  13. "Linux Software RAID 10 Layouts Performance: Near, Far, and Offset Benchmark Analysis". Ilsistemista.net. 2012-08-28. Archived from the original on 2023-03-24. Retrieved 2014-03-08.
  14. Jon Nelson (2008-07-10). "RAID5,6 and 10 Benchmarks on 2.6.25.5". Jamponi.net. Retrieved 2014-01-01.
  15. "Performance, Tools & General Bone-Headed Questions". TLDP.org. Retrieved 2014-01-01.
  16. "mdadm(8): manage MD devices aka Software RAID - Linux man page". Linux.Die.net. Retrieved 2014-03-08.
  17. "md(4): Multiple Device driver aka Software RAID - Linux man page". Die.net. Retrieved 2014-03-08.
  18. "Which RAID Level is Right for Me?: RAID 1E (Striped Mirroring)". Adaptec. Retrieved 2014-01-02.
  19. "LSI 6 Gb/s Serial Attached SCSI (SAS) Integrated RAID: A Product Brief" (PDF). LSI Corporation. 2009. Archived from the original (PDF) on 2011-06-28. Retrieved 2015-01-02.
  20. Bonwick, Jeff (2005-11-17). "RAID-Z". Jeff Bonwick's Blog. Oracle Blogs. Archived from the original on 2014-12-16. Retrieved 2015-02-01.
  21. Leventhal, Adam (2009-12-17). "Triple-Parity RAID and Beyond". Queue. 7 (11): 30. doi: 10.1145/1661785.1670144 .
  22. "ZFS Raidz Performance, Capacity and integrity". calomel.org. Archived from the original on 27 November 2017. Retrieved 23 June 2017.
  23. Separate from Windows' Logical Disk Manager
  24. "MS drops drive pooling from Windows Home Server". The Register.
  25. "Drive Bender Public Release Arriving This Week". We Got Served. Archived from the original on 2017-08-20. Retrieved 2014-01-15.
  26. "StableBit DrivePool 2 Year Review". Home Media Tech. December 2013.
  27. Data Robotics, Inc. implements BeyondRAID in their Drobo storage device.
  28. Detailed technical information about BeyondRAID, including how it handles adding and removing drives, is in: US 20070266037, Julian Terry; Geoffrey Barrall & Neil Clarkson, "Filesystem-Aware Block Storage System, Apparatus, and Method", assigned to DROBO Inc.
  29. "What is unRAID?". Lime-Technology.com. Lime Technology. 2013-10-17. Archived from the original on 2014-01-05. Retrieved 2014-01-15.
  30. "LimeTech – Technology". Lime-Technology.com. Lime Technology. 2013-10-17. Archived from the original on 2014-01-05. Retrieved 2014-02-09.
  31. "Manual Pages: softraid(4)". OpenBSD.org. 2022-09-06. Retrieved 2022-09-08.
  32. "Manual Pages: mkfs.btrfs(8)". btrfs-progs. 2018-01-08. Retrieved 2018-08-17.
  33. "Maintenance Commands zfs - configures ZFS file system". illumos: manual page: zfs.1m.
  34. "Declustered RAID". IBM. 14 June 2019. Retrieved 1 February 2020.
  35. IBM. "Dynamic Disk Pooling (DDP)".
  36. "High Performance Computing: NEC GxFS Storage Appliance". p. 6.