NetApp FAS

A NetApp FAS is a computer storage product by NetApp running the ONTAP operating system; the terms ONTAP, AFF, ASA, FAS are often used as synonyms. "Filer" is also used as a synonym although this is not an official name. There are three types of FAS systems: Hybrid, All-Flash, and All SAN Array:

  1. NetApp proprietary custom-built hardware appliances with HDD or SSD drives, called hybrid Fabric-Attached Storage (or simply FAS) [1]
  2. NetApp proprietary custom-built hardware appliances with only SSD drives and ONTAP optimized for low latency, called All-Flash FAS (or simply AFF)
  3. All SAN Arrays (ASA), built on top of the AFF platform, which provide only SAN-based data protocol connectivity.

ONTAP can serve storage over a network using file-based protocols such as NFS and SMB, as well as block-based protocols such as SCSI over the Fibre Channel Protocol (FCP) on a Fibre Channel network, Fibre Channel over Ethernet (FCoE), iSCSI, and the FC-NVMe transport layer. ONTAP-based systems that can serve both SAN and NAS protocols are called Unified ONTAP; AFF systems with the ASA identity are called All SAN Array.

NetApp storage systems running ONTAP implement their physical storage in large disk arrays.

While most large-storage systems are implemented with commodity computers running an operating system such as Microsoft Windows Server, VxWorks or tuned Linux, ONTAP-based hardware appliances use highly customized hardware and the proprietary Data ONTAP operating system with the WAFL file system, all originally designed by NetApp founders David Hitz and James Lau specifically for storage-serving purposes. ONTAP is NetApp's internal operating system, specially optimized for storage functions at high and low levels. It boots from FreeBSD as a stand-alone kernel-space module and uses some functions of FreeBSD (the command interpreter and driver stack, for example).

All NetApp ONTAP-based hardware appliances have battery-backed non-volatile random access memory or NVDIMM modules, referred to as NVRAM or NVDIMM respectively, which allow them to commit writes to stable storage more quickly than traditional systems with only volatile memory. Early storage systems connected to external disk enclosures via parallel SCSI, while modern models (as of 2009) use Fibre Channel and SAS (Serial Attached SCSI) transport protocols. The disk enclosures (shelves) use Fibre Channel hard disk drives, as well as parallel ATA, Serial ATA and Serial Attached SCSI drives. Starting with the AFF A800, an NVRAM PCI card is no longer used for NVLOGs; it was replaced with NVDIMM memory connected directly to the memory bus.

Implementers often organize two storage systems in a high-availability cluster with a private high-speed link, either Fibre Channel, InfiniBand, 10 Gigabit Ethernet, 40 Gigabit Ethernet or 100 Gigabit Ethernet. One can additionally group such clusters together under a single namespace when running in the "cluster mode" of the Data ONTAP 8 operating system.

Internal architecture

NetApp FAS3240-R5

Modern NetApp FAS, AFF or ASA systems consist of customized computers with Intel processors using PCI. Each FAS, AFF or ASA system has non-volatile random access memory, called NVRAM, in the form of a proprietary PCI NVRAM adapter or NVDIMM-based memory, to log all writes for performance and to play the data log forward in the event of an unplanned shutdown. One can link two storage systems together as a cluster, which NetApp (as of 2009) refers to using the less ambiguous term "Active/Active".
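
The write-log behaviour described above can be illustrated with a small conceptual sketch (plain Python, not NetApp code; the class and method names are invented for this example): a write is acknowledged as soon as it is logged to non-volatile memory, the log is flushed to disk in batches at a later consistency point, and surviving log entries are replayed after an unplanned shutdown.

```python
# Conceptual model of an NVRAM-style write log (illustrative only).
class StorageController:
    def __init__(self):
        self.nvlog = []   # stands in for battery-backed NVRAM / NVDIMM
        self.disk = {}    # stands in for the slower, stable backing store

    def write(self, block_id, data):
        # Fast path: append to the non-volatile log and acknowledge immediately;
        # the client may treat the write as committed.
        self.nvlog.append((block_id, data))
        return "ack"

    def consistency_point(self):
        # Slow path: batch the logged writes out to disk, then free the log.
        for block_id, data in self.nvlog:
            self.disk[block_id] = data
        self.nvlog.clear()

    def recover_after_crash(self):
        # After an unplanned shutdown, replay whatever is still in the log
        # so that no acknowledged write is lost.
        self.consistency_point()


ctrl = StorageController()
ctrl.write(42, b"hello")        # acknowledged from NVRAM, not yet on disk
ctrl.recover_after_crash()      # simulate reboot after power loss
assert ctrl.disk[42] == b"hello"
```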

Hardware

Each storage system model comes with a set configuration of processor, RAM, and non-volatile memory, which users cannot expand after purchase. With the exception of some of the entry-level storage controllers, the NetApp FAS, ASA, and AFF systems usually have at least one PCIe-based slot available for additional network, tape and/or disk connections. In June 2008 NetApp announced the Performance Acceleration Module (or PAM) to optimize the performance of workloads that carry out intensive random reads. This optional card goes into a PCIe slot and provides additional memory (or cache) between the disks and the storage system cache and system memory, thus improving performance.

AFF

All-Flash FAS, also known as the AFF A-series, is usually based on the same hardware as FAS, but the former is optimized for, and works only with, SSD drives on the back end, while the latter can use HDDs with SSDs as a cache: for example, AFF A700 & FAS9000, A300 & FAS8200, A200 & FAS2600, and A220 & FAS2700 use the same hardware, but AFF systems do not include Flash Cache cards. AFF systems also do not support FlexArray third-party storage array virtualization functionality. AFF is a Unified system and can provide SAN & NAS data protocol connectivity; in addition to the traditional SAN & NAS protocols of FAS systems, AFF offers the block-based NVMe/FC protocol on systems with 32 Gbit/s FC ports. AFF & FAS use the same firmware image, and nearly all functionality noticeable to the end user is the same for both storage systems. However, data is processed and handled differently inside ONTAP; AFF systems, for example, use different write allocation algorithms than FAS systems. Because AFF systems have faster underlying SSD drives, inline data deduplication in ONTAP systems is nearly unnoticeable (≈2% performance impact on low-end systems). [2]

ASA

The All SAN Array (ASA) runs ONTAP and is based on the AFF platform, so it inherits AFF features and functionality, and data is internally processed and handled the same way as in AFF systems. All other ONTAP-based hardware and software platforms can be referred to as Unified ONTAP, meaning they can provide unified access with both SAN & NAS data protocols. The ONTAP architecture in ASA systems is the same as in FAS & AFF, with no changes, and ASA systems use the same firmware image as AFF & FAS systems. ASA is the same as AFF; the only difference is in access to the storage over the network with SAN protocols: ASA provides symmetric active/active access to the block devices (LUNs or NVMe namespaces), while Unified ONTAP systems continue to use ALUA and ANA for the block protocols. The original ASA platform was first announced in 2019 and re-launched in May 2023. [3]

C-Series

In February 2023, NetApp introduced a new AFF product line code-named the C-Series. This platform uses quad-level cell (QLC) NAND flash and is aimed at competing with the QLC-based products already on the market from Pure Storage (specifically FlashArray//C). [4] The C-Series has higher latency than typical AFF systems, at around 2-4 milliseconds compared to 500 microseconds on AFF using triple-level cell (TLC) media. However, the aim of the platform is to provide a lower price point for customers that might otherwise not choose all-flash systems.

Storage

NetApp uses either SATA, Fibre Channel, SAS or SSD disk drives, which it groups into RAID (Redundant Array of Inexpensive Disks or Redundant Array of Independent Disks) groups of up to 28 drives (26 data disks plus 2 parity disks). NetApp FAS storage systems that contain only SSD drives and run the SSD-optimized ONTAP OS are called All-Flash FAS (AFF).

Disks

FAS, ASA, and AFF systems use enterprise-level HDD and SSD (i.e. NVMe SSD) drives with two ports, with each port connected to a different controller of the HA pair. HDD and SSD drives can only be bought from NetApp and installed in NetApp's disk shelves for the FAS/AFF platform. Some shelves, such as the DS4246, can be upgraded from a 6 Gbit/s shelf into a 12 Gbit/s shelf with an IOM-12 upgrade. [5] Physical HDD and SSD drives, partitions on them, and LUNs imported from third-party arrays with FlexArray functionality are all considered Disks in ONTAP. In SDS systems like ONTAP Select & ONTAP Cloud, logical block storage such as a virtual disk or RDM is also considered a Disk inside ONTAP. The general term "disk drive" should not be confused with the Disk term used in ONTAP, because with ONTAP a Disk can be an entire physical HDD or SSD drive, a LUN, or a partition on a physical HDD or SSD drive. LUNs imported from third-party arrays with FlexArray functionality in an HA pair configuration must be accessible from both nodes of the HA pair. Each disk has ownership assigned to it to show which controller owns and serves the disk. An aggregate can include only disks owned by a single node; therefore each aggregate is owned by a node, and any objects on top of it, such as FlexVol volumes, LUNs, and file shares, are served by a single controller. Each controller can have its own disks and aggregates, so both nodes can be utilized simultaneously even though they are not serving the same data.

ADP

Advanced Drive Partitioning (ADP) can be used in ONTAP-based systems depending on the platform and use case. ADP can be used only with native disk drives from NetApp disk shelves; FlexArray technology does not support ADP. ADP is also supported with third-party drives in ONTAP Select. This technique is mainly used to satisfy some architectural requirements and reduce the number of disk drives in ONTAP-based systems. There are three types of ADP: Root-Data partitioning; Root-Data-Data partitioning (RD2, also known as ADPv2); and Storage Pool. Root-Data partitioning can be used in FAS & AFF systems to create small root partitions on drives, which are used to create the system root aggregates, so that three entire disk drives are not spent for that purpose, while the bigger portion of each disk drive is used for the data aggregate. Root-Data-Data partitioning is used in AFF systems only, for the same reason as Root-Data partitioning, with the only difference being that the bigger portion of the drive left after root partitioning is divided equally into two additional partitions, each usually assigned to one of the two controllers, thereby reducing the minimum number of drives required for an AFF system and reducing the waste of expensive SSD space. Storage Pool partitioning is used in FAS systems to divide each SSD drive equally into four pieces, which can later be used for Flash Pool cache acceleration; with Storage Pool, a few SSD drives can be shared among up to 4 data aggregates, which will benefit from Flash Pool caching technology, reducing the minimum number of SSD drives required for that technology.
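
The capacity effect of ADP can be shown with a rough back-of-the-envelope calculation (illustrative Python; the drive and partition sizes are invented example values, not NetApp specifications):

```python
# Rough arithmetic for root-data-data (ADPv2) partitioning on a small AFF HA pair.
DRIVE_TB = 3.84       # assumed SSD size (example value)
ROOT_PART_TB = 0.05   # assumed small root partition per drive (example value)
N_DRIVES = 12         # drives in a shelf shared by the two controllers

data_per_drive = DRIVE_TB - ROOT_PART_TB
data_partition = data_per_drive / 2          # two equal data partitions per drive

# Each controller receives one data partition from every drive.
per_controller_raw = N_DRIVES * data_partition
print(f"raw data capacity per controller: {per_controller_raw:.2f} TB")

# Without partitioning, each controller would need whole drives for its root
# aggregate (e.g. 3 drives each for a RAID-DP root: 1 data + 2 parity).
print(f"capacity spent on root aggregates without ADP: {2 * 3 * DRIVE_TB:.2f} TB")
print(f"capacity spent on root partitions with ADPv2:  {N_DRIVES * ROOT_PART_TB:.2f} TB")
```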

NetApp RAID in ONTAP

ONTAP Storage layout: Aggregate, Plex, RAID

In NetApp ONTAP systems, RAID and WAFL are tightly integrated. There are several RAID types available within ONTAP-based systems:

  • RAID 4 with 1 dedicated parity disk allowing any 1 drive to fail in a RAID group.
  • RAID-DP with 2 dedicated parity disks allowing any 2 drives to fail simultaneously in a RAID group. [6]
  • RAID-TEC (US patent 7,640,484) with 3 dedicated parity drives, allowing any 3 drives to fail simultaneously in a RAID group. [7]

RAID-DP's double parity leads to a disk-loss resiliency similar to that of RAID 6. NetApp overcomes the write performance penalty of traditional RAID-4-style dedicated parity disks via WAFL and a novel use of its non-volatile memory (NVRAM) within each storage system. [8] Each aggregate consists of one or two plexes, and a plex consists of one or more RAID groups. A typical ONTAP-based storage system has only one plex in each aggregate; two plexes are used in local SyncMirror or MetroCluster configurations. Each RAID group usually consists of disk drives of the same type, speed, geometry and capacity, though NetApp Support may allow a user to temporarily install a drive of the same or larger size and a different type, speed or geometry in a RAID group. Ordinary data aggregates containing more than one RAID group must use the same RAID configuration across the aggregate; the same RAID group size is recommended, but NetApp allows an exception in the last RAID group, which can be configured as small as half of the RAID group size used across the aggregate. For example, such an aggregate might consist of 3 RAID groups: RG0:16+2, RG1:16+2, RG2:7+2. Within aggregates, ONTAP sets up flexible volumes (FlexVol) to store data that users can access.
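
The parity cost of the example aggregate above (RG0:16+2, RG1:16+2, RG2:7+2 with RAID-DP) can be worked out with a few lines of Python; the drive size is an assumed example value, and WAFL and aggregate metadata overhead are ignored:

```python
# Usable-versus-raw arithmetic for an aggregate of three RAID-DP groups.
DRIVE_TB = 4.0
raid_groups = [(16, 2), (16, 2), (7, 2)]   # (data drives, parity drives) per group

data_drives = sum(d for d, p in raid_groups)
parity_drives = sum(p for d, p in raid_groups)
total_drives = data_drives + parity_drives

print(f"{total_drives} drives in total, {parity_drives} of them parity")
print(f"raw capacity:    {total_drives * DRIVE_TB:.1f} TB")
print(f"data capacity:   {data_drives * DRIVE_TB:.1f} TB (before filesystem overhead)")
print(f"parity overhead: {parity_drives / total_drives:.1%}")
```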

Aggregates enabled as Flash Pool and containing both HDD and SSD drives are called hybrid aggregates. In Flash Pool hybrid aggregates, the same rules apply as to ordinary aggregates, but separately to the HDD and SSD drives; it is therefore allowed to have two different RAID types: one RAID type for all HDD drives and one RAID type for all SSD drives in a single hybrid aggregate. For example, SAS HDDs with RAID-TEC (RG0:18+3, RG1:18+3) and SSDs with RAID-DP (RG3:6+2). NetApp storage systems running ONTAP combine underlying RAID groups similarly to RAID 0. Also, in NetApp FAS systems with the FlexArray feature, third-party LUNs can be combined in a plex similarly to RAID 0. NetApp storage systems running ONTAP can be deployed in MetroCluster and SyncMirror configurations, which use a technique comparable to RAID 1, mirroring data between two plexes in an aggregate.

RAID group size (in number of drives) for data aggregates in AFF & FAS systems (blank cells repeat the value of the cell above):

Drive Type                   | RAID 4 (min / default / max) | RAID-DP (min / default / max) | RAID-TEC (min / default / max)
NVMe SSD                     | 3 / 8 / 14                   | 5 / 24 / 28                   | 7 / 25 / 29
SSD                          |                              |                               |
SAS                          |                              | default 16                    | default 24
SATA or NL-SAS < 6 TB        | max 7                        | default 14, max 20            | default 21
SATA or NL-SAS (6 TB, 8 TB)  |                              | max 14                        |
MSATA (6 TB, 8 TB)           | Not possible                 |                               |
MSATA < 6 TB                 |                              | max 20                        |
MSATA >= 10 TB               | Not possible                 |                               |
SATA or NL-SAS >= 10 TB      |                              |                               |

Flash Pool

NetApp Flash Pool is a feature of hybrid NetApp FAS systems that allows creating a hybrid aggregate with HDD drives and SSD drives in a single data aggregate. Both HDD and SSD drives form separate RAID groups. Since the SSDs are also used for write operations, they require RAID redundancy, unlike Flash Cache, but different RAID types can be used for the HDDs and the SSDs; for example, it is possible to have 20 HDDs of 8 TB in RAID-TEC and 4 SSDs of 960 GB in RAID-DP in a single aggregate. The SSD RAID is used as a cache and improves performance for read-write operations for FlexVol volumes on the aggregate to which the SSDs were added as cache. The Flash Pool cache, similarly to Flash Cache, has policies for read operations but also covers write operations, and these policies can be applied separately to each FlexVol volume located on the aggregate; the cache can therefore be disabled on some volumes while others benefit from the SSD cache. Both Flash Cache & Flash Pool can be used simultaneously to cache data from a single FlexVol. To enable an aggregate with Flash Pool technology, a minimum of 4 SSD disks is required (2 data, 1 parity, and 1 hot spare); it is also possible to use ADP technology to partition SSDs into 4 pieces (Storage Pool) and distribute those pieces between two controllers, so that each controller benefits from the SSD cache when there is only a small number of SSDs. Flash Pool is not available with FlexArray and is possible only with NetApp FAS native disk drives in NetApp's disk shelves.

FlexArray

FlexArray is NetApp FAS functionality that allows third-party storage systems and other NetApp storage systems to be virtualized over SAN protocols and used in place of NetApp's disk shelves. With FlexArray functionality, RAID protection must be provided by the third-party storage array; thus NetApp's RAID 4, RAID-DP and RAID-TEC are not used in such configurations. One or more LUNs from third-party arrays can be added to a single aggregate similarly to RAID 0. FlexArray is a licensed feature.

NetApp Storage Encryption

NetApp Storage Encryption (NSE) uses specialized, purpose-built disks with a low-level hardware-based full disk encryption (FDE/SED) feature and also supports FIPS-certified self-encrypting drives. It is compatible with nearly all NetApp ONTAP features and protocols but is not offered with MetroCluster. The NSE feature has nearly zero overall performance impact on the storage system. NSE, similarly to NetApp Volume Encryption (NVE) in storage systems running ONTAP, can store the encryption key locally in the Onboard Key Manager or on dedicated key manager systems using the KMIP protocol, such as IBM Security Key Lifecycle Manager and SafeNet KeySecure. NSE is data-at-rest encryption, which means it protects only against physical disk theft and does not give an additional level of data security protection in a normally operating, running system. NetApp has passed the NIST Cryptographic Module Validation Program for its NetApp CryptoMod (TPM) with ONTAP 9.2. [9]

MetroCluster

SyncMirror replication using plexes

MetroCluster (MC) is free functionality for FAS and AFF systems that provides metro high availability with synchronous replication between two sites; this configuration requires additional equipment. It is available in both modes: 7-Mode (the old OS) and Cluster-Mode (or cDOT, the newer version of the ONTAP OS). MetroCluster in Cluster-Mode is known as MCC. MetroCluster uses RAID SyncMirror (RSM) and the plex technique: on one site a number of disks form one or more RAID groups aggregated in a plex, while the second site has the same number of disks with the same type and RAID configuration, along with the Configuration Replication Service (CRS) and NVLog replication. One plex synchronously replicates to the other in conjunction with non-volatile memory. Two plexes form an aggregate where data is stored, and in case of disaster at one site, the second site provides read-write access to the data. MetroCluster supports FlexArray technology. MetroCluster configurations are possible only with mid-range and high-end models, which provide the ability to install the additional network cards required for MC to function.

MCC

MetroCluster local and DR pair memory replication in NetApp FAS/AFF systems configured as MCC

With MetroCluster it is possible to have one or more storage nodes per site to form a cluster, or Clustered MetroCluster (MCC). The remote and local HA partner nodes must be the same model. MCC consists of two clusters, each located at one of the two sites; there may be only two sites. In an MCC configuration, one remote and one local storage node form a Metro HA or Disaster Recovery Pair (DR Pair) across the two sites, while two local nodes (if there is a partner) form a local HA pair; thus each node synchronously replicates data in non-volatile memory to two nodes: one remote and one local (if there is one). It is possible to use only one storage node at each site (two single-node clusters) configured as MCC. An 8-node MCC consists of two clusters of 4 nodes each (2 HA pairs); each storage node has only one remote partner and only one local HA partner, and in such a configuration each site's cluster can consist of two different storage node models. For small distances, MetroCluster requires at least one FC-VI or newer iWARP card per node. FAS and AFF systems with ONTAP software versions 9.2 and older utilize FC-VI cards and, for long distances, require 4 dedicated Fibre Channel switches (2 at each site) and 2 FC-SAS bridges per disk shelf stack (thus a minimum of 4 in total for 2 sites), plus a minimum of 2 dark fiber ISL links with optional DWDMs for long distances. Data volumes, LUNs and LIFs can migrate online across storage nodes in the cluster only within the single site where the data originated: it is not possible to migrate individual volumes, LUNs or LIFs across sites using cluster capabilities, unless the MetroCluster switchover operation is used, which disables an entire half of the cluster at one site and, transparently to clients and applications, switches access to all of the data to the other site.

MCC-IP

NetApp MetroCluster over IP with ADPv2 configuration

Starting with ONTAP 9.3, MetroCluster over IP (MCC-IP) was introduced, removing the need for the dedicated back-end Fibre Channel switches, FC-SAS bridges and dedicated dark fiber ISLs that were previously required for a MetroCluster configuration. Initially, only A700 & FAS9000 systems were supported with MCC-IP. MCC-IP is available only in 4-node configurations: a 2-node highly available system at each site, with two sites in total. With ONTAP 9.4, MCC-IP supports the A800 system and Advanced Drive Partitioning in the form of Root-Data-Data (RD2) partitioning, also known as ADPv2. ADPv2 is supported only on all-flash systems. MCC-IP configurations support a single disk shelf where SSD drives are partitioned with ADPv2. MetroCluster over IP requires Ethernet cluster switches with installed ISLs and utilizes iWARP cards in each storage controller for synchronous replication. Starting with ONTAP 9.5, MCC-IP supports distances of up to 700 km and starts to support the SVM-DR feature, AFF A300, and FAS8200 systems.

Operating System

NetApp storage systems use a proprietary OS called ONTAP (previously Data ONTAP). The main purpose of an operating system in a storage system is to serve data to clients in a non-disruptive manner with the data protocols those clients require, and to provide additional value through features like high availability, disaster recovery and data backup. The ONTAP OS provides enterprise-level data management features like FlexClone, SnapMirror, SnapLock, MetroCluster etc.; most of them are snapshot-based WAFL file system capabilities.

WAFL

WAFL, a robust versioning filesystem in NetApp's proprietary OS ONTAP, provides snapshots, which allow end-users to see earlier versions of files in the file system. Snapshots appear in a hidden directory: ~snapshot for Windows (SMB) or .snapshot for Unix (NFS). Up to 1024 snapshots can be made of any traditional or flexible volume. Snapshots are read-only, although ONTAP provides the additional ability to make writable "virtual clones", based on the "WAFL snapshots" technique, called "FlexClones".

ONTAP implements snapshots by tracking changes to disk-blocks between snapshot operations. It can set up snapshots in seconds because it only needs to take a copy of the root inode in the filesystem. This differs from the snapshots provided by some other storage vendors in which every block of storage has to be copied, which can take many hours.
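
A minimal copy-on-write sketch (illustrative Python, not WAFL itself; all names are invented for the example) shows why copying the root inode is sufficient: data blocks are never overwritten in place, so a frozen copy of the old root keeps pointing at the old blocks while the live root moves on to newly written ones.

```python
# Toy write-anywhere filesystem with root-pointer snapshots.
class Filesystem:
    def __init__(self):
        self.blocks = {}      # block_id -> data; blocks are never overwritten
        self.root = {}        # "root inode": file name -> block_id
        self.snapshots = {}   # snapshot name -> frozen copy of the root
        self._next_block = 0

    def _allocate(self, data):
        self._next_block += 1
        self.blocks[self._next_block] = data   # new data goes to a new block
        return self._next_block

    def write(self, name, data):
        self.root[name] = self._allocate(data)

    def snapshot(self, snap_name):
        # The entire snapshot is just a copy of the root's pointers.
        self.snapshots[snap_name] = dict(self.root)

    def read(self, name, snap_name=None):
        root = self.snapshots[snap_name] if snap_name else self.root
        return self.blocks[root[name]]


fs = Filesystem()
fs.write("a.txt", b"v1")
fs.snapshot("hourly.0")                  # near-instant: only pointers are copied
fs.write("a.txt", b"v2")                 # new block; the old block stays in place
assert fs.read("a.txt") == b"v2"
assert fs.read("a.txt", "hourly.0") == b"v1"
```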

7MTT

Each NetApp FAS system running Data ONTAP 8 could be switched between two modes, 7-Mode and Cluster-Mode. In reality, each mode was a separate OS with its own version of WAFL; both 7-Mode and Cluster-Mode were shipped in a single firmware image for a FAS system until 8.3, when 7-Mode was deprecated. SnapLock migration from 7-Mode to ONTAP 9 is now supported with the Transition Tool. It is possible to switch between modes on a FAS system, but all the data on the disks must be destroyed first, since the WAFL versions are not compatible; a server-based application called the 7-Mode Transition Tool (7MTT) was introduced to migrate data from old 7-Mode FAS systems to new Cluster-Mode systems.

In addition to 7MTT, there are two other paths to migrate data, based on the protocol type.

Previous limitations

Before the release of ONTAP 8, individual aggregate sizes were limited to a maximum of 2 TB for FAS250 models and 16 TB for all other models.

The limitation on aggregate size, coupled with the increasing density of disk drives, served to limit the performance of the overall system. NetApp, like most storage vendors, increases overall system performance by parallelizing disk writes across many different spindles (disk drives). Large-capacity drives therefore limit the number of spindles that can be added to a single aggregate, and thus limit the aggregate's performance.
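
The trade-off can be made concrete with a quick calculation (illustrative Python; the 16 TB figure is the pre-ONTAP-8 aggregate limit mentioned above, and the drive sizes are arbitrary examples):

```python
# How the old 16 TB aggregate limit caps the spindle count as drives grow.
AGGREGATE_LIMIT_TB = 16

for drive_tb in (0.5, 1.0, 2.0):
    max_spindles = int(AGGREGATE_LIMIT_TB // drive_tb)
    print(f"{drive_tb:>4} TB drives -> at most {max_spindles} spindles per aggregate")
```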

Each aggregate also incurs a storage capacity overhead of approximately 7-11%, depending on the disk type. On systems with many aggregates, this can result in lost storage capacity.

However, the overhead comes about due to additional block-checksumming on the disk level as well as usual file system overhead, similar to the overhead in file systems like NTFS or EXT3. Block checksumming helps to ensure that data errors at the disk drive level do not result in data loss.

Data ONTAP 8.0 uses a new 64-bit aggregate format, which increases the size limit of FlexVol volumes to approximately 100 TB (depending on the storage platform) and also increases the size limit of aggregates to more than 100 TB on newer models (depending on the storage platform), thus restoring the ability to configure large spindle counts to increase performance and storage efficiency. [10]

Performance

AI performance testing (image distortion disabled):

Resnet-50 and Resnet-152, 4 / 8 / 16 / 32 GPU:
  NetApp A700 Nvidia: 1131, 2048, 4870
  NetApp A800 Nvidia: 6000, 11200, 22500

AlexNet, 4 / 8 / 16 / 32 GPU:
  NetApp A700 Nvidia: 4243, 4929
  NetApp A800 Nvidia:

Model history

This list may omit some models. Information taken from spec.org, netapp.com and storageperformance.org

Model | Status | Released | CPU | Main system memory | Nonvolatile memory | Raw capacity | Benchmark | Result
FASServer 400 | Discontinued | 1993-01 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | | ?
FASServer 450 | Discontinued | 1994-01 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | | ?
FASServer 1300 | Discontinued | 1994-01 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | | ?
FASServer 1400 | Discontinued | 1994-01 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | | ?
FASServer | Discontinued | 1995-01 | 50 MHz Intel i486 | 256 MB | 4 MB | ? GB | | 640
F330 | Discontinued | 1995-09 | 90 MHz Intel Pentium | 256 MB | 8 MB | 117 GB | | 1310
F220 | Discontinued | 1996-02 | 75 MHz Intel Pentium | 256 MB | 8 MB | ? GB | | 754
F540 | Discontinued | 1996-06 | 275 MHz DEC Alpha 21064A | 256 MB | 8 MB | ? GB | | 2230
F210 | Discontinued | 1997-05 | 75 MHz Intel Pentium | 256 MB | 8 MB | ? GB | | 1113
F230 | Discontinued | 1997-05 | 90 MHz Intel Pentium | 256 MB | 8 MB | ? GB | | 1610
F520 | Discontinued | 1997-05 | 275 MHz DEC Alpha 21064A | 256 MB | 8 MB | ? GB | | 2361
F630 | Discontinued | 1997-06 | 500 MHz DEC Alpha 21164A | 512 MB | 32 MB | 464 GB | | 4328
F720 | Discontinued | 1998-08 | 400 MHz DEC Alpha 21164A | 256 MB | 8 MB | 464 GB | | 2691
F740 | Discontinued | 1998-08 | 400 MHz DEC Alpha 21164A | 512 MB | 32 MB | 928 GB | | 5095
F760 | Discontinued | 1998-08 | 600 MHz DEC Alpha 21164A | 1 GB | 32 MB | 1.39 TB | | 7750
F85 | Discontinued | 2001-02 | | 256 MB | 64 MB | 648 GB | |
F87 | Discontinued | 2001-12 | 1.13 GHz Intel P3 | 256 MB | 64 MB | 576 GB | |
F810 | Discontinued | 2001-12 | 733 MHz Intel P3 Coppermine | 512 MB | 128 MB | 1.5 TB | | 4967
F820 | Discontinued | 2000-12 | 733 MHz Intel P3 Coppermine | 1 GB | 128 MB | 3 TB | | 8350
F825 | Discontinued | 2002-08 | 733 MHz Intel P3 Coppermine | 1 GB | 128 MB | 3 TB | | 8062
F840 | Discontinued | 2000 - Aug/Dec? | 733 MHz Intel P3 Coppermine | 3 GB | 128 MB | 6 TB | | 11873
F880 | Discontinued | 2001-07 | Dual 733 MHz Intel P3 Coppermine | 3 GB | 128 MB | 9 TB | | 17531
FAS920 | Discontinued | 2004-05 | 2.0 GHz Intel P4 Xeon | 2 GB | 256 MB | 7 TB | | 13460
FAS940 | Discontinued | 2002-08 | 1.8 GHz Intel P4 Xeon | 3 GB | 256 MB | 14 TB | | 17419
FAS960 | Discontinued | 2002-08 | Dual 2.2 GHz Intel P4 Xeon | 6 GB | 256 MB | 28 TB | | 25135
FAS980 | Discontinued | 2004-01 | Dual 2.8 GHz Intel P4 Xeon MP 2 MB L3 | 8 GB | 512 MB | 50 TB | | 36036
FAS250 | EOA 11/08 | 2004-01 | 600 MHz Broadcom BCM1250 dual-core MIPS | 512 MB | 64 MB | 4 TB | |
FAS270 | EOA 11/08 | 2004-01 | 650 MHz Broadcom BCM1250 dual-core MIPS | 1 GB | 128 MB | 16 TB | | 13620*
FAS2020 | EOA 8/12 | 2007-06 | 2.2 GHz Mobile Celeron | 1 GB | 128 MB | 68 TB | |
FAS2040 | EOA 8/12 | 2009-09 | 1.66 GHz Intel Xeon | 4 GB | 512 MB | 136 TB | |
FAS2050 | EOA 5/11 | 2007-06 | 2.2 GHz Mobile Celeron | 2 GB | 256 MB | 104 TB | | 20027*
FAS2220 | EOA 3/15 | 2012-06 | 1.73 GHz Dual Core Intel Atom C3528 | 6 GB | 768 MB | 180 TB | |
FAS2240 | EOA 3/15 | 2011-11 | 1.73 GHz Dual Core Intel Atom C3528 | 6 GB | 768 MB | 432 TB | | 38000
FAS2520 | EOA 12/17 | 2014-06 | 1.73 GHz Dual Core Intel Atom C3528 | 36 GB | 4 GB | 840 TB | |
FAS2552 | EOA 12/17 | 2014-06 | 1.73 GHz Dual Core Intel Atom C3528 | 36 GB | 4 GB | 1243 TB | |
FAS2554 | EOA 12/17 | 2014-06 | 1.73 GHz Dual Core Intel Atom C3528 | 36 GB | 4 GB | 1440 TB | |
FAS2620 | | 2016-11 | 1 x 6-core Intel Xeon D-1528 @ 1.90 GHz | 64 GB (per HA) | 8 GB | 1440 TB | |
FAS2650 | | 2016-11 | 1 x 6-core Intel Xeon D-1528 @ 1.90 GHz | 64 GB (per HA) | 8 GB | 1243 TB | |
FAS2720 | | 2018-05 | 1 x 12-core 1.50 GHz Xeon D-1557 | 64 GB (per HA) | 8 GB | | |
FAS2750 | | 2018-05 | 1 x 12-core 1.50 GHz Xeon D-1557 | 64 GB (per HA) | 8 GB | | |
FAS3020 | EOA 4/09 | 2005-05 | 2.8 GHz Intel Xeon | 2 GB | 512 MB | 84 TB | | 34089*
FAS3040 | EOA 4/09 | 2007-02 | Dual 2.4 GHz AMD Opteron 250 | 4 GB | 512 MB | 336 TB | | 60038*
FAS3050 | Discontinued | 2005-05 | Dual 2.8 GHz Intel Xeon | 4 GB | 512 MB | 168 TB | | 47927*
FAS3070 | EOA 4/09 | 2006-11 | Dual 1.8 GHz AMD dual-core Opteron | 8 GB | 512 MB | 504 TB | | 85615*
FAS3140 | EOA 2/12 | 2008-06 | Single 2.4 GHz AMD Opteron Dual Core 2216 | 4 GB | 512 MB | 420 TB | SFS2008 | 40109*
FAS3160 | EOA 2/12 | | Dual 2.6 GHz AMD Opteron Dual Core 2218 | 8 GB | 2 GB | 672 TB | SFS2008 | 60409*
FAS3170 | EOA 2/12 | 2008-06 | Dual 2.6 GHz AMD Opteron Dual Core 2218 | 16 GB | 2 GB | 840 TB | SFS97_R1 | 137306*
FAS3210 | EOA 11/13 | 2010-11 | Single 2.3 GHz Intel Xeon (E5220) | 8 GB | 2 GB | 480 TB | SFS2008 | 64292
FAS3220 | EOA 12/14 | 2012-11 | Single 2.3 GHz Intel Xeon Quad (L5410) | 12 GB | 3.2 GB | 1.44 PB | | ?
FAS3240 | EOA 11/13 | 2010-11 | Dual 2.33 GHz Intel Xeon Quad (L5410) | 16 GB | 2 GB | 1.20 PB | | ?
FAS3250 | EOA 12/14 | 2012-11 | Dual 2.33 GHz Intel Xeon Quad (L5410) | 40 GB | 4 GB | 2.16 PB | SFS2008 | 100922
FAS3270 | EOA 11/13 | 2010-11 | Dual 3.0 GHz Intel Xeon (E5240) | 40 GB | 4 GB | 1.92 PB | SFS2008 | 101183
FAS6030 | EOA 6/09 | 2006-03 | Dual 2.6 GHz AMD Opteron | 32 GB | 512 MB | 840 TB | SFS97_R1 | 100295*
FAS6040 | EOA 3/12 | 2007-12 | 2.6 GHz AMD dual-core Opteron | 16 GB | 512 MB | 840 TB | |
FAS6070 | EOA 6/09 | 2006-03 | Quad 2.6 GHz AMD Opteron | 64 GB | 2 GB | 1.008 PB | | 136048*
FAS6080 | EOA 3/12 | 2007-12 | 2 x 2.6 GHz AMD dual-core Opteron 280 | 64 GB | 4 GB | 1.176 PB | SFS2008 | 120011*
FAS6210 | EOA 11/13 | 2010-11 | 2 x 2.27 GHz Intel Xeon E5520 | 48 GB | 8 GB | 2.40 PB | |
FAS6220 | EOA 3/15 | 2013-02 | 2 x 64-bit 4-core Intel Xeon E5520 | 96 GB | 8 GB | 4.80 PB | |
FAS6240 | EOA 11/13 | 2010-11 | 2 x 2.53 GHz Intel Xeon E5540 | 96 GB | 8 GB | 2.88 PB | SFS2008 | 190675
FAS6250 | EOA 3/15 | 2013-02 | 2 x 64-bit 4-core | 144 GB | 8 GB | 5.76 PB | |
FAS6280 | EOA 11/13 | 2010-11 | 2 x 2.93 GHz Intel Xeon X5670 | 192 GB | 8 GB | 2.88 PB | |
FAS6290 | EOA 3/15 | 2013-02 | 2 x 2.93 GHz Intel Xeon X5670 | 192 GB | 8 GB | 5.76 PB | |
FAS8020 | EOA 12/17 | 2014-03 | 1 x Intel Xeon E5-2620 @ 2.00 GHz | 48 GB | 8 GB | 1.92 PB | SFS2008 | 110281
FAS8040 | EOA 12/17 | 2014-03 | 1 x 64-bit 8-core 2.10 GHz E5-2658 | 64 GB | 16 GB | 2.88 PB | |
FAS8060 | EOA 12/17 | 2014-03 | 2 x 64-bit 8-core 2.10 GHz E5-2658 | 128 GB | 16 GB | 4.80 PB | |
FAS8080EX | EOA 12/17 | 2014-06 | 2 x 64-bit 10-core 2.80 GHz E5-2680 v2 | 256 GB | 32 GB | 8.64 PB | SPC-1 | 685,281.71 IOPS*
FAS8200 | | 2016-11 | 1 x 16-core 1.70 GHz D-1587 | 128 GB | 16 GB | 4.80 PB | SPEC SFS2014_swbuild | 4130 Mbit/s / 260,020 IOPS @ 2.7 ms (ORT = 1.04 ms)
FAS9000 | | 2016-11 | 2 x 18-core 2.30 GHz E5-2697 v4 | 512 GB | 64 GB | 14.4 PB | |
AFF8040 | EOA 10/17 | 2014-03 | 1 x 64-bit 8-core 2.10 GHz E5-2658 | 64 GB | 16 GB | | |
AFF8060 | EOA 11/16 | 2014-03 | 2 x 64-bit 8-core 2.10 GHz E5-2658 | 128 GB | 16 GB | | |
AFF8080 | EOA 10/17 | 2014-06 | 2 x 64-bit 10-core 2.80 GHz E5-2680 v2 | 256 GB | 32 GB | | |
AFF A200 | | 2017 | 1 x 6-core Intel Xeon D-1528 @ 1.90 GHz | 64 GB | 16 GB | | |
AFF A220 | | 2018-05 | 1 x 12-core 1.50 GHz Xeon D-1557 | 64 GB | 16 GB | | |
AFF A300 | | 2016 | 1 x 16-core Intel Xeon D-1587 @ 1.70 GHz | 128 GB | 16 GB | | |
AFF A400 | | 2019 | 2 x 10-core Intel Xeon Silver 4210 2.2 GHz | 144 GB | | | |
AFF A700 | | 2016 | 2 x 18-core 2.30 GHz E5-2697 v4 | 512 GB | 64 GB | | |
AFF A700s | | 2017 | 2 x 18-core 2.30 GHz E5-2697 v4 | 512 GB | 32 GB | | SPC-1 | 2,400,059 IOPS @ 0.69 ms
AFF A800 | | 2018-05 | 2 x 24-core 2.10 GHz 8160 Skylake | 640 GB | 32 GB | | SPC-1 v3.6; SPEC SFS2014_swbuild | 2,401,171 IOPS @ 0.59 ms with FC protocol; 2200 builds @ 0.73 ms with 14,227 MB/s on 4-node cluster & FlexGroup; 4200 builds @ 0.78 ms with 27,165 MB/s on 8-node cluster & FlexGroup; 6200 builds @ 2.829 ms with 40,117 MB/s on 12-node AFF A800 with FlexGroup

EOA = End of Availability

SPECsfs results marked with "*" are clustered results. SPECsfs versions performed include SPECsfs93, SPECsfs97, SPECsfs97_R1 and SPECsfs2008. Results of different benchmark versions are not comparable.

References

  1. Nabrzyski, Jarek; Schopf, Jennifer M.; Węglarz, Jan (2004). Grid Resource Management: State of the Art and Future Trends. Springer. p. 342. ISBN 978-1-4020-7575-9. Retrieved 11 June 2012.
  2. Brian Beeler (31 January 2018). "NetApp AFF A200 VMmark 3 Results Published". Storage Review. Archived from the original on 2018-06-02. Retrieved 1 June 2018.
  3. Evans, Chris (2023-05-16). "NetApp Relaunches the All-SAN Array". Architecting IT. Retrieved 2023-06-23.
  4. Evans, Chris (2023-02-07). "NetApp Announces C-Series Capacity-Focused ONTAP All-Flash System". Architecting IT. Retrieved 2023-03-09.
  5. "Comparing NetApp Disk Shelf Models: Which One Is Right for You? - Pre Rack IT". 2023-10-15. Retrieved 2024-03-27.
  6. Jay White; Chris Lueth; Jonathan Bell (1 March 2013). "TR-3298. RAID-DP: NetApp Implementation of Double-Parity RAID for Data Protection" (PDF). NetApp. Archived from the original (PDF) on 2018-01-29. Retrieved 29 January 2018.
  7. Peter Corbett; Atul Goel. "RAID Triple Parity" (PDF). NetApp. Archived from the original (PDF) on 2015-09-27. Retrieved 29 January 2018.
  8. Jay White; Carlos Alvarez (11 October 2013). "Back to Basics: RAID-DP". NetApp. Archived from the original on 2017-06-19. Retrieved 24 January 2018.
  9. "Cryptographic Module Validation Program". Computer Security Resource Center (CSRC). NIST. 4 December 2017. Archived from the original on 2018-12-14. Retrieved 14 December 2018.
  10. Boppana, Uday (March 2010). "A Thorough Introduction to 64-Bit Aggregates" (PDF). NetApp. TR-3786.