Synchronous dynamic random-access memory

SDRAM memory module

Synchronous dynamic random-access memory (synchronous dynamic RAM or SDRAM) is any DRAM where the operation of its external pin interface is coordinated by an externally supplied clock signal.

DRAM integrated circuits (ICs) produced from the early 1970s to early 1990s used an asynchronous interface, in which input control signals have a direct effect on internal functions only delayed by the trip across its semiconductor pathways. SDRAM has a synchronous interface, whereby changes on control inputs are recognised after a rising edge of its clock input. In SDRAM families standardized by JEDEC, the clock signal controls the stepping of an internal finite-state machine that responds to incoming commands. These commands can be pipelined to improve performance, with previously started operations completing while new commands are received. The memory is divided into several equally sized but independent sections called banks, allowing the device to operate on a memory access command in each bank simultaneously and speed up access in an interleaved fashion. This allows SDRAMs to achieve greater concurrency and higher data transfer rates than asynchronous DRAMs could.

Pipelining means that the chip can accept a new command before it has finished processing the previous one. For a pipelined write, the write command can be immediately followed by another command without waiting for the data to be written into the memory array. For a pipelined read, the requested data appears a fixed number of clock cycles (latency) after the read command, during which additional commands can be sent.

History

Eight Hyundai SDRAM ICs on a PC100 DIMM package

The earliest DRAMs were often synchronized with the CPU clock (clocked) and were used with early microprocessors. In the mid-1970s, DRAMs moved to the asynchronous design, but in the 1990s returned to synchronous operation. [1] [2]

The first commercial SDRAM was the Samsung KM48SL2000 memory chip, which had a capacity of 16 Mbit. [3] It was manufactured by Samsung Electronics using a CMOS (complementary metal–oxide–semiconductor) fabrication process in 1992, [4] and mass-produced in 1993. [3] By 2000, SDRAM had replaced virtually all other types of DRAM in modern computers, because of its greater performance.

SDRAM latency is not inherently lower (that is, its access times are not inherently faster) than that of asynchronous DRAM. Indeed, early SDRAM was somewhat slower than contemporaneous burst EDO DRAM due to the additional logic. The benefits of SDRAM's internal buffering come from its ability to interleave operations to multiple banks of memory, thereby increasing effective bandwidth.

Today, virtually all SDRAM is manufactured in compliance with standards established by JEDEC, an electronics industry association that adopts open standards to facilitate interoperability of electronic components. JEDEC formally adopted its first SDRAM standard in 1993 and subsequently adopted other SDRAM standards, including those for DDR, DDR2 and DDR3 SDRAM.

Double data rate SDRAM, known as DDR SDRAM, was first demonstrated by Samsung in 1997. [5] Samsung released the first commercial DDR SDRAM chip (64 Mbit [6] ) in June 1998, [7] [8] [9] followed soon after by Hyundai Electronics (now SK Hynix) the same year. [10]

SDRAM is also available in registered varieties, for systems that require greater scalability such as servers and workstations.

Today, the world's largest manufacturers of SDRAM include Samsung Electronics, SK Hynix, Micron Technology, and Nanya Technology.

Timing

There are several limits on DRAM performance. Most noted is the read cycle time, the time between successive read operations to an open row. This time decreased from 10 ns for 100 MHz SDRAM (1 MHz = 10⁶ Hz) to 5 ns for DDR-400, but has remained relatively unchanged through the DDR2-800 and DDR3-1600 generations. However, by operating the interface circuitry at increasingly higher multiples of the fundamental read rate, the achievable bandwidth has increased rapidly.

Another limit is the CAS latency, the time between supplying a column address and receiving the corresponding data. Again, this has remained relatively constant at 10–15 ns through the last few generations of DDR SDRAM.

In operation, CAS latency is a specific number of clock cycles programmed into the SDRAM's mode register and expected by the DRAM controller. Any value may be programmed, but the SDRAM will not operate correctly if it is too low. At higher clock rates, the useful CAS latency in clock cycles naturally increases. 10–15 ns is 2–3 cycles (CL2–3) of the 200 MHz clock of DDR-400 SDRAM, CL4–6 for DDR2-800, and CL8–12 for DDR3-1600. Slower clocks naturally allow fewer cycles of CAS latency.
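
The conversion between an analog latency and a programmed cycle count is simple arithmetic; the following minimal sketch (Python, illustrative only) applies it to the clock rates quoted above:

```python
# Minimal sketch: convert a CAS latency in nanoseconds to the
# programmed CAS latency in clock cycles for a given command clock.
import math

def cas_cycles(latency_ns: float, clock_mhz: float) -> int:
    """Smallest whole number of clock cycles covering latency_ns."""
    cycle_ns = 1000.0 / clock_mhz          # clock period in ns
    return math.ceil(latency_ns / cycle_ns)

# DDR-400 runs its command clock at 200 MHz (period 5 ns):
print(cas_cycles(15, 200))    # -> 3  (CL3)
print(cas_cycles(10, 200))    # -> 2  (CL2)
# DDR3-1600 runs its command clock at 800 MHz (period 1.25 ns):
print(cas_cycles(12.5, 800))  # -> 10 (within the CL8-12 range above)
```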

SDRAM modules have their own timing specifications, which may be slower than those of the chips on the module. When 100 MHz SDRAM chips first appeared, some manufacturers sold "100 MHz" modules that could not reliably operate at that clock rate. In response, Intel published the PC100 standard, which outlines requirements and guidelines for producing a memory module that can operate reliably at 100 MHz. This standard was widely influential, and the term "PC100" quickly became a common identifier for 100 MHz SDRAM modules. Modules are now commonly designated with "PC"-prefixed numbers (PC66, PC100 or PC133), although the actual meaning of the numbers has changed.

Control signals

All commands are timed relative to the rising edge of a clock signal. In addition to the clock, there are six control signals, mostly active low, which are sampled on the rising edge of the clock:

Command signals

CS: Chip select. When this signal is high, the chip ignores all other inputs, and the command is "command inhibit".
RAS, CAS and WE: Despite their names (row address strobe, column address strobe and write enable, inherited from asynchronous DRAM), these do not act as strobes in SDRAM; together they encode the command, as tabulated below.
CKE: Clock enable. When low, the chip ignores the clock and freezes operation, as described under low power modes below.
DQM: Data mask. When high, this signal suppresses data I/O; there is one DQM line per 8 bits of data width.

Bank selection (BAn)

SDRAM devices are internally divided into either two, four or eight independent internal data banks. One to three bank address inputs (BA0, BA1 and BA2) are used to select which bank a command is directed toward.

Addressing (A10/An)

Many commands also use an address presented on the address input pins. Commands that do not use an address, or that present only a column address, also use A10 to select variants.

Commands

The SDR SDRAM commands are defined as follows:

| CS | RAS | CAS | WE | BAn | A10 | An | Command |
|----|-----|-----|----|-----|-----|----|---------|
| H | x | x | x | x | x | x | Command inhibit (no operation) |
| L | H | H | H | x | x | x | No operation |
| L | H | H | L | x | x | x | Burst terminate: stop a burst read or burst write in progress |
| L | H | L | H | bank | L | column | Read: read a burst of data from the currently active row |
| L | H | L | H | bank | H | column | Read with auto precharge: as above, and precharge (close row) when done |
| L | H | L | L | bank | L | column | Write: write a burst of data to the currently active row |
| L | H | L | L | bank | H | column | Write with auto precharge: as above, and precharge (close row) when done |
| L | L | H | H | bank | row | row | Active (activate): open a row for read and write commands |
| L | L | H | L | bank | L | x | Precharge: deactivate (close) the current row of the selected bank |
| L | L | H | L | x | H | x | Precharge all: deactivate (close) the current row of all banks |
| L | L | L | H | x | x | x | Auto refresh: refresh one row of each bank, using an internal counter. All banks must be precharged. |
| L | L | L | L | 0 0 | 0 | mode | Load mode register: A0 through A9 are loaded to configure the DRAM chip. The most significant settings are CAS latency (2 or 3 cycles) and burst length (1, 2, 4 or 8 cycles). |

All SDRAM generations (SDR and DDRx) use essentially the same commands. Later generations mainly add address and bank-select bits to support larger devices, enlarge the mode registers, and remove or repurpose a few commands; DDR2, for example, deletes the burst terminate command.

Construction and operation

SDRAM memory module, zoomed

As an example, a '512 MB' SDRAM DIMM might be made of eight or nine SDRAM chips, each containing 512 Mbit of storage and each contributing 8 bits to the DIMM's 64- or 72-bit width. A typical 512 Mbit SDRAM chip internally contains four independent 16 MB memory banks. Each bank is an array of 8,192 rows of 16,384 bits each (2,048 8-bit columns). A bank is either idle, active, or changing from one to the other. [6]

The active command activates an idle bank. It presents a two-bit bank address (BA0–BA1) and a 13-bit row address (A0–A12), and causes a read of that row into the bank's array of all 16,384 column sense amplifiers. This is also known as "opening" the row. This operation has the side effect of refreshing the dynamic (capacitive) memory storage cells of that row.

Once the row has been activated or "opened", read and write commands to that row are possible. Activation requires a minimum amount of time, called the row-to-column delay, or tRCD, before reads or writes to it may occur. This time, rounded up to the next multiple of the clock period, specifies the minimum number of wait cycles between an active command and a read or write command. During these wait cycles, additional commands may be sent to other banks, since each bank operates completely independently.

Both read and write commands require a column address. Because each chip accesses eight bits of data at a time, there are 2,048 possible column addresses, thus requiring only 11 address lines (A0–A9, A11).
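
As an illustration, a controller might split a flat byte address into bank, row, and column fields for the example chip described above. The packing order below (bank, then row, then column) is an assumption for illustration only; real controllers choose mappings that maximise bank interleaving:

```python
# Illustrative address split for the example 512 Mbit x8 chip above:
# 4 banks x 8,192 rows x 2,048 column bytes = 64 MB per chip.
# The (bank | row | column) packing is assumed for illustration.

COL_BITS, ROW_BITS, BANK_BITS = 11, 13, 2

def split_address(byte_addr: int):
    col  = byte_addr & ((1 << COL_BITS) - 1)
    row  = (byte_addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    bank = (byte_addr >> (COL_BITS + ROW_BITS)) & ((1 << BANK_BITS) - 1)
    return bank, row, col

bank, row, col = split_address(0x2ABCDEF)   # any address below 64 MB
print(bank, row, col)
```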

When a read command is issued, the SDRAM will produce the corresponding output data on the DQ lines in time for the rising edge of the clock a few clock cycles later, depending on the configured CAS latency. Subsequent words of the burst will be produced in time for subsequent rising clock edges.

A write command is accompanied by the data to be written driven on to the DQ lines during the same rising clock edge. It is the duty of the memory controller to ensure that the SDRAM is not driving read data on to the DQ lines at the same time that it needs to drive write data on to those lines. This can be done by waiting until a read burst has finished, by terminating a read burst, or by using the DQM control line.

When the memory controller needs to access a different row, it must first return that bank's sense amplifiers to an idle state, ready to sense the next row. This is known as a "precharge" operation, or "closing" the row. A precharge may be commanded explicitly, or it may be performed automatically at the conclusion of a read or write operation. Again, there is a minimum time, the row precharge delay, tRP, which must elapse before that row is fully "closed" and so the bank is idle in order to receive another activate command on that bank.

Although refreshing a row is an automatic side effect of activating it, there is a minimum time for this to happen, which requires a minimum row access time tRAS delay between an active command opening a row, and the corresponding precharge command closing it. This limit is usually dwarfed by desired read and write commands to the row, so its value has little effect on typical performance.

Command interactions

The no operation command is always permitted, while the load mode register command requires that all banks be idle, and a delay afterward for the changes to take effect. The auto refresh command also requires that all banks be idle, and takes a refresh cycle time tRFC to return the chip to the idle state. (This time is usually equal to tRCD+tRP.) The only other command that is permitted on an idle bank is the active command. This takes, as mentioned above, tRCD before the row is fully open and can accept read and write commands.

When a bank is open, there are four commands permitted: read, write, burst terminate, and precharge. Read and write commands begin bursts, which can be interrupted by following commands.

Interrupting a read burst

A read, burst terminate, or precharge command may be issued at any time after a read command, and will interrupt the read burst after the configured CAS latency. So if a read command is issued on cycle 0, another read command is issued on cycle 2, and the CAS latency is 3, then the first read command will begin bursting data out during cycles 3 and 4, then the results from the second read command will appear beginning with cycle 5.

If the command issued on cycle 2 were burst terminate, or a precharge of the active bank, then no output would be generated during cycle 5.
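
The cycle arithmetic of the read-interrupting-read case can be sketched as follows (illustrative only; it models timing, not data):

```python
# Sketch of the interrupted-read example above: each read bursts BL
# words starting CAS-latency cycles after it is issued, and a later
# read cuts the earlier burst off at the same offset.

CL, BL = 3, 4          # CAS latency 3, burst length 4 (as configured)
commands = [(0, "read"), (2, "read")]    # (issue cycle, command)

output = {}            # cycle -> whose data is driven on the DQ lines
for issued, cmd in commands:
    for beat in range(BL):
        output[issued + CL + beat] = f"data from read @ cycle {issued}"
# Later commands overwrite the tail of the earlier burst:
for cycle in sorted(output):
    print(cycle, output[cycle])
# cycles 3-4 carry the first read's data; cycles 5-8 the second's
```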

Although the interrupting read may be to any active bank, a precharge command will only interrupt the read burst if it is to the same bank or all banks; a precharge command to a different bank will not interrupt a read burst.

Interrupting a read burst by a write command is possible, but more difficult. It can be done if the DQM signal is used to suppress output from the SDRAM so that the memory controller may drive data over the DQ lines to the SDRAM in time for the write operation. Because the effects of DQM on read data are delayed by two cycles, but the effects of DQM on write data are immediate, DQM must be raised (to mask the read data) beginning at least two cycles before the write command, but must be lowered for the cycle of the write command (assuming the write command is intended to have an effect).

Doing this in only two clock cycles requires careful coordination between the time the SDRAM takes to turn off its output on a clock edge and the time the data must be supplied as input to the SDRAM for the write on the following clock edge. If the clock frequency is too high to allow sufficient time, three cycles may be required.

If the read command includes auto-precharge, the precharge begins the same cycle as the interrupting command.

Burst ordering

A modern microprocessor with a cache will generally access memory in units of cache lines. To transfer a 64-byte cache line requires eight consecutive accesses to a 64-bit DIMM, which can all be triggered by a single read or write command by configuring the SDRAM chips, using the mode register, to perform eight-word bursts. A cache line fetch is typically triggered by a read from a particular address, and SDRAM allows the "critical word" of the cache line to be transferred first. ("Word" here refers to the width of the SDRAM chip or DIMM, which is 64 bits for a typical DIMM.) SDRAM chips support two possible conventions for the ordering of the remaining words in the cache line.

Bursts always access an aligned block of BL consecutive words beginning on a multiple of BL. So, for example, a four-word burst access to any column address from four to seven will return words four to seven. The ordering, however, depends on the requested address, and the configured burst type option: sequential or interleaved. Typically, a memory controller will require one or the other. When the burst length is one or two, the burst type does not matter. For a burst length of one, the requested word is the only word accessed. For a burst length of two, the requested word is accessed first, and the other word in the aligned block is accessed second. This is the following word if an even address was specified, and the previous word if an odd address was specified.

For the sequential burst mode, later words are accessed in increasing address order, wrapping back to the start of the block when the end is reached. So, for example, for a burst length of four, and a requested column address of five, the words would be accessed in the order 5-6-7-4. If the burst length were eight, the access order would be 5-6-7-0-1-2-3-4. This is done by adding a counter to the column address, and ignoring carries past the burst length. The interleaved burst mode computes the address using an exclusive or operation between the counter and the address. Using the same starting address of five, a four-word burst would return words in the order 5-4-7-6. An eight-word burst would be 5-4-7-6-1-0-3-2. [11] Although more confusing to humans, this can be easier to implement in hardware, and is preferred by Intel for its microprocessors.
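
Both orderings are easy to compute; the following minimal sketch reproduces the examples above:

```python
# Sketch of the two JEDEC burst orderings described above. Both
# access the same aligned block; only the word order differs.

def burst_order(column: int, burst_len: int, interleaved: bool):
    base   = column & ~(burst_len - 1)   # start of the aligned block
    offset = column - base               # requested word within it
    if interleaved:
        return [base + (offset ^ i) for i in range(burst_len)]
    return [base + ((offset + i) % burst_len) for i in range(burst_len)]

print(burst_order(5, 4, False))  # [5, 6, 7, 4]            sequential
print(burst_order(5, 8, False))  # [5, 6, 7, 0, 1, 2, 3, 4]
print(burst_order(5, 4, True))   # [5, 4, 7, 6]            interleaved
print(burst_order(5, 8, True))   # [5, 4, 7, 6, 1, 0, 3, 2]
```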

If the requested column address is at the start of a block, both burst modes (sequential and interleaved) return data in the same sequential order 0-1-2-3-4-5-6-7. The difference only matters when fetching a cache line from memory in critical-word-first order.

Mode register

Single data rate SDRAM has a single 10-bit programmable mode register. Later double-data-rate SDRAM standards add additional mode registers, addressed using the bank address pins. For SDR SDRAM, the bank address pins and address lines A10 and above are ignored, but should be zero during a mode register write.

The bits are M9 through M0, presented on address lines A9 through A0 during a load mode register cycle.

Later (double data rate) SDRAM standards use more mode register bits, and provide additional mode registers called "extended mode registers". The register number is encoded on the bank address pins during the load mode register command. For example, DDR2 SDRAM has a 13-bit mode register, a 13-bit extended mode register No. 1 (EMR1), and a 5-bit extended mode register No. 2 (EMR2).
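
As an illustration, the following sketch packs a burst length, burst type, and CAS latency into an SDR mode-register value. The bit layout used (M0–M2 burst length, M3 burst type, M4–M6 CAS latency) is the common JEDEC assignment and is an assumption here, since the text above does not spell it out:

```python
# Sketch of SDR mode-register encoding; the bit layout is assumed
# (common JEDEC assignment) - consult the device datasheet.

BURST_LEN = {1: 0b000, 2: 0b001, 4: 0b010, 8: 0b011}

def mode_register(burst_len=8, interleaved=False, cas_latency=3):
    value  = BURST_LEN[burst_len]              # M0-M2: burst length
    value |= (1 if interleaved else 0) << 3    # M3: 0 seq, 1 interleaved
    value |= cas_latency << 4                  # M4-M6: CAS latency
    return value                               # drive onto A0-A9, BAn = 0

print(bin(mode_register(8, False, 3)))         # 0b110011
```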

Auto refresh

It is possible to refresh a RAM chip by opening and closing (activating and precharging) each row in each bank. However, to simplify the memory controller, SDRAM chips support an "auto refresh" command, which performs these operations to one row in each bank simultaneously. The SDRAM also maintains an internal counter, which iterates over all possible rows. The memory controller must simply issue a sufficient number of auto refresh commands (one per row, 8192 in the example we have been using) every refresh interval (tREF = 64 ms is a common value). All banks must be idle (closed, precharged) when this command is issued.
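
The arithmetic a controller uses to schedule these commands is straightforward; a minimal sketch for the 8,192-row example:

```python
# Back-of-the-envelope refresh scheduling for the running example:
# 8,192 rows must each receive an auto refresh command every 64 ms.

ROWS, TREF_MS = 8192, 64.0
interval_us = TREF_MS * 1000 / ROWS
print(f"one auto refresh every {interval_us:.3f} us on average")   # 7.812 us
print(f"{ROWS / (TREF_MS / 1000):,.0f} refresh commands per second")  # 128,000
```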

Low power modes

As mentioned, the clock enable (CKE) input can be used to effectively stop the clock to an SDRAM. The CKE input is sampled each rising edge of the clock, and if it is low, the following rising edge of the clock is ignored for all purposes other than checking CKE. As long as CKE is low, it is permissible to change the clock rate, or even stop the clock entirely.

If CKE is lowered while the SDRAM is performing operations, it simply "freezes" in place until CKE is raised again.

If the SDRAM is idle (all banks precharged, no commands in progress) when CKE is lowered, the SDRAM automatically enters power-down mode, consuming minimal power until CKE is raised again. This must not last longer than the maximum refresh interval tREF, or memory contents may be lost. It is legal to stop the clock entirely during this time for additional power savings.

Finally, if CKE is lowered at the same time as an auto-refresh command is sent to the SDRAM, the SDRAM enters self-refresh mode. This is like power down, but the SDRAM uses an on-chip timer to generate internal refresh cycles as necessary. The clock may be stopped during this time. While self-refresh mode consumes slightly more power than power-down mode, it allows the memory controller to be disabled entirely, which commonly more than makes up the difference.

SDRAM designed for battery-powered devices offers some additional power-saving options. One is temperature-dependent refresh; an on-chip temperature sensor reduces the refresh rate at lower temperatures, rather than always running it at the worst-case rate. Another is selective refresh, which limits self-refresh to a portion of the DRAM array. The fraction which is refreshed is configured using an extended mode register. The third, implemented in Mobile DDR (LPDDR) and LPDDR2, is "deep power down" mode, which invalidates the memory contents and requires a full reinitialization to exit. This is activated by sending a "burst terminate" command while lowering CKE.

DDR SDRAM prefetch architecture

DDR SDRAM employs prefetch architecture to allow quick and easy access to multiple data words located on a common physical row in the memory.

The prefetch architecture takes advantage of the specific characteristics of memory accesses to DRAM. Typical DRAM memory operations involve three phases: bitline precharge, row access, column access. Row access is the heart of a read operation, as it involves the careful sensing of the tiny signals in DRAM memory cells; it is the slowest phase of memory operation. However, once a row is read, subsequent column accesses to that same row can be very quick, as the sense amplifiers also act as latches. For reference, a row of a 1 Gbit [6] DDR3 device is 2,048 bits wide, so internally 2,048 bits are read into 2,048 separate sense amplifiers during the row access phase. Row accesses might take 50 ns, depending on the speed of the DRAM, whereas column accesses off an open row are less than 10 ns.

Traditional DRAM architectures have long supported fast column access to bits on an open row. For an 8-bit-wide memory chip with a 2,048 bit wide row, accesses to any of the 256 datawords (2048/8) on the row can be very quick, provided no intervening accesses to other rows occur.

The drawback of the older fast column access method was that a new column address had to be sent for each additional dataword on the row. The address bus had to operate at the same frequency as the data bus. Prefetch architecture simplifies this process by allowing a single address request to result in multiple data words.

In a prefetch buffer architecture, when a memory access occurs to a row, the buffer grabs a set of adjacent data words on the row and reads them out ("bursts" them) in rapid-fire sequence on the IO pins, without the need for individual column address requests. This assumes the CPU wants adjacent datawords in memory, which in practice is very often the case. For instance, in DDR1, two adjacent data words will be read from each chip in the same clock cycle and placed in the prefetch buffer. Each word is then transmitted on consecutive rising and falling edges of the clock cycle. Similarly, in DDR2, with its 4n prefetch buffer, four consecutive data words are read in each internal cycle and placed in the buffer, while an external clock running twice as fast as the internal clock transfers one word on each of its consecutive rising and falling edges. [12]

The prefetch buffer depth can also be thought of as the ratio between the core memory frequency and the IO frequency. In an 8n prefetch architecture (such as DDR3), the IOs will operate 8 times faster than the memory core (each memory access results in a burst of 8 datawords on the IOs). Thus, a 200 MHz memory core is combined with IOs that each operate eight times faster (1600 megabits per second). If the memory has 16 IOs, the total read bandwidth would be 200 MHz x 8 datawords/access x 16 IOs = 25.6 gigabits per second (Gbit/s) or 3.2 gigabytes per second (GB/s). Modules with multiple DRAM chips can provide correspondingly higher bandwidth.
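
The following sketch just restates that arithmetic; the part parameters (a 200 MHz, 8n-prefetch, x16 device) come from the example above:

```python
# The bandwidth arithmetic from the paragraph above, spelled out.

core_mhz = 200      # internal memory-core clock
prefetch = 8        # 8n architecture: 8 words per core access
io_pins  = 16       # x16 device

per_pin_mbps = core_mhz * prefetch            # 1600 Mbit/s per IO pin
total_gbps   = per_pin_mbps * io_pins / 1000  # 25.6 Gbit/s
print(per_pin_mbps, "Mbit/s per pin")
print(total_gbps, "Gbit/s =", total_gbps / 8, "GB/s")   # 25.6 -> 3.2
```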

Each generation of SDRAM has a different prefetch buffer size: DDR1 prefetches two words per access (2n), DDR2 four words (4n), and DDR3 and DDR4 eight words (8n).

Generations

SDRAM feature map

| Type | Feature changes |
|------|-----------------|
| SDRAM | Vcc = 3.3 V; signal: LVTTL |
| DDR1 | Access is ≥2 words; double clocked; Vcc = 2.5 V |
| DDR2 | Access is ≥4 words; "burst terminate" removed; 4 units used in parallel; 1.25–5 ns per cycle; internal operations are at 1/2 the clock rate; signal: SSTL_18 (1.8 V) [13] |
| DDR3 | Access is ≥8 words; signal: SSTL_15 (1.5 V) [13]; much longer CAS latencies |
| DDR4 | Vcc ≤ 1.2 V; point-to-point (single module per channel) |

SDR

The 64 MB of sound memory on the Sound Blaster X-Fi Fatality Pro sound card is built from two Micron 48LC32M8A2 SDRAM chips. They run at 133 MHz (7.5 ns clock period) and have 8-bit wide data buses.

Originally simply known as SDRAM, single data rate SDRAM can accept one command and transfer one word of data per clock cycle. Chips are made with a variety of data bus sizes (most commonly 4, 8 or 16 bits), but chips are generally assembled into 168-pin DIMMs that read or write 64 (non-ECC) or 72 (ECC) bits at a time.

Use of the data bus is intricate and thus requires a complex DRAM controller circuit. This is because data written to the DRAM must be presented in the same cycle as the write command, but reads produce output 2 or 3 cycles after the read command. The DRAM controller must ensure that the data bus is never required for a read and a write at the same time.

Typical SDR SDRAM clock rates are 66, 100, and 133 MHz (periods of 15, 10, and 7.5 ns), respectively denoted PC66, PC100, and PC133. Clock rates up to 200 MHz were available. It operates at a voltage of 3.3 V.

This type of SDRAM is slower than the DDR variants, because only one word of data is transmitted per clock cycle (single data rate). But this type is also faster than its predecessors, extended data out DRAM (EDO-RAM) and fast page mode DRAM (FPM-RAM), which typically took two or three clocks to transfer one word of data.

PC66

PC66 refers to an internal removable computer memory standard defined by the JEDEC. PC66 is Synchronous DRAM operating at a clock frequency of 66.66 MHz, on a 64-bit bus, at a voltage of 3.3 V. PC66 is available in 168-pin DIMM and 144-pin SO-DIMM form factors. The theoretical bandwidth is 533 MB/s. (1 MB/s = one million bytes per second.)

This standard was used by Intel Pentium and AMD K6-based PCs. It also features in the Beige Power Mac G3, early iBooks and PowerBook G3s. It is also used in many early Intel Celeron systems with a 66 MHz FSB. It was superseded by the PC100 and PC133 standards.

PC100

DIMM: 168 pins and two notches

PC100 is a standard for internal removable computer random-access memory, defined by the JEDEC. PC100 refers to Synchronous DRAM operating at a clock frequency of 100 MHz, on a 64-bit-wide bus, at a voltage of 3.3 V. PC100 is available in 168-pin DIMM and 144-pin SO-DIMM form factors. PC100 is backward compatible with PC66 and was superseded by the PC133 standard.

A module built out of 100 MHz SDRAM chips is not necessarily capable of operating at 100 MHz. The PC100 standard specifies the capabilities of the memory module as a whole. PC100 is used in many older computers; PCs around the late 1990s were the most common computers with PC100 memory.

PC133

PC133 is a computer memory standard defined by the JEDEC. PC133 refers to SDR SDRAM operating at a clock frequency of 133 MHz, on a 64-bit-wide bus, at a voltage of 3.3 V. PC133 is available in 168-pin DIMM and 144-pin SO-DIMM form factors. PC133 is the fastest and final SDR SDRAM standard ever approved by the JEDEC, and delivers a bandwidth of 1.066 GB per second (133.33 MHz × 64 bits ÷ 8 bits per byte = 1.066 GB/s). (1 GB/s = one billion bytes per second.) PC133 is backward compatible with PC100 and PC66.

DDR

While the access latency of DRAM is fundamentally limited by the DRAM array, DRAM has very high potential bandwidth because each internal read is actually a row of many thousands of bits. To make more of this bandwidth available to users, a double data rate interface was developed. This uses the same commands, accepted once per cycle, but reads or writes two words of data per clock cycle. The DDR interface accomplishes this by reading and writing data on both the rising and falling edges of the clock signal. In addition, some minor changes to the SDR interface timing were made in hindsight, and the supply voltage was reduced from 3.3 to 2.5 V. As a result, DDR SDRAM is not backwards compatible with SDR SDRAM.

DDR SDRAM (sometimes called DDR1 for greater clarity) doubles the minimum read or write unit; every access refers to at least two consecutive words.

Typical DDR SDRAM clock rates are 133, 166 and 200 MHz (7.5, 6, and 5 ns/cycle), generally described as DDR-266, DDR-333 and DDR-400 (3.75, 3, and 2.5 ns per beat). Corresponding 184-pin DIMMs are known as PC-2100, PC-2700 and PC-3200. Performance up to DDR-550 (PC-4400) is available.
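
As a sketch, the module designations of both families can be derived from the clock rate (assuming the 64-bit bus of the modules described above): SDR modules are named by clock frequency, DDR modules by peak megabytes per second.

```python
# Hedged sketch of how module names relate to clock rate for the
# SDR and DDR families above (bus width 64 bits = 8 bytes).

def sdr_module(mhz: float):
    """SDR modules are named by clock rate; bandwidth is clock x 8 B."""
    return f"PC{mhz:.0f}", f"{mhz * 8:.0f} MB/s"

def ddr_module(mhz: float):
    """DDR modules are named by peak MB/s: 2 transfers/clock x 8 B."""
    transfers = mhz * 2                  # data on both clock edges
    return f"DDR-{transfers:.0f}", f"PC-{transfers * 8:.0f}"

print(sdr_module(133.33))   # ('PC133', '1067 MB/s'), i.e. ~1.066 GB/s
print(ddr_module(200))      # ('DDR-400', 'PC-3200')
```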

DDR2

DDR2 SDRAM is very similar to DDR SDRAM, but doubles the minimum read or write unit again, to four consecutive words. The bus protocol was also simplified to allow higher performance operation. (In particular, the "burst terminate" command is deleted.) This allows the bus rate of the SDRAM to be doubled without increasing the clock rate of internal RAM operations; instead, internal operations are performed in units four times as wide as SDRAM. Also, an extra bank address pin (BA2) was added to allow eight banks on large RAM chips.

Typical DDR2 SDRAM clock rates are 200, 266, 333 or 400 MHz (periods of 5, 3.75, 3 and 2.5 ns), generally described as DDR2-400, DDR2-533, DDR2-667 and DDR2-800 (periods of 2.5, 1.875, 1.5 and 1.25 ns). Corresponding 240-pin DIMMs are known as PC2-3200 through PC2-6400. DDR2 SDRAM is now available at a clock rate of 533 MHz generally described as DDR2-1066 and the corresponding DIMMs are known as PC2-8500 (also named PC2-8600 depending on the manufacturer). Performance up to DDR2-1250 (PC2-10000) is available.

Note that because internal operations are at 1/2 the clock rate, DDR2-400 memory (internal clock rate 100 MHz) has somewhat higher latency than DDR-400 (internal clock rate 200 MHz).

DDR3

DDR3 continues the trend, doubling the minimum read or write unit to eight consecutive words. This allows another doubling of bandwidth and external bus rate without having to change the clock rate of internal operations, just the width. To maintain 800–1600 M transfers/s (both edges of a 400–800 MHz clock), the internal RAM array has to perform 100–200 M fetches per second.

Again, with every doubling, the downside is the increased latency. As with all DDR SDRAM generations, commands are still restricted to one clock edge and command latencies are given in terms of clock cycles, which are half the speed of the usually quoted transfer rate (a CAS latency of 8 with DDR3-800 is 8/(400 MHz) = 20 ns, exactly the same latency as CAS2 on PC100 SDR SDRAM).

DDR3 memory chips are being made commercially, [15] and computer systems using them were available from the second half of 2007, [16] with significant usage from 2008 onwards. [17] Initial clock rates were 400 and 533 MHz, which are described as DDR3-800 and DDR3-1066 (PC3-6400 and PC3-8500 modules), but 667 and 800 MHz, described as DDR3-1333 and DDR3-1600 (PC3-10600 and PC3-12800 modules) are now common. [18] Performance up to DDR3-2800 (PC3 22400 modules) are available. [19]

DDR4

DDR4 SDRAM is the successor to DDR3 SDRAM. It was revealed at the Intel Developer Forum in San Francisco in 2008, and was due to be released to market during 2011. The timing varied considerably during its development: it was originally expected to be released in 2012, [20] and later (during 2010) expected to be released in 2015, [21] before samples were announced in early 2011 and manufacturers began to announce that commercial production and release to market was anticipated in 2012. DDR4 reached mass market adoption around 2015, which is comparable with the approximately five years taken for DDR3 to achieve mass market transition over DDR2.

The DDR4 chips run at 1.2  V or less, [22] [23] compared to the 1.5 V of DDR3 chips, and have in excess of 2 billion data transfers per second. They were expected to be introduced at frequency rates of 2133 MHz, estimated to rise to a potential 4266 MHz [24] and lowered voltage of 1.05 V [25] by 2013.

DDR4 did not double the internal prefetch width again, but uses the same 8n prefetch as DDR3. [26] Thus, it will be necessary to interleave reads from several banks to keep the data bus busy.

In February 2009, Samsung validated 40 nm DRAM chips, considered a "significant step" towards DDR4 development [27] since, as of 2009, current DRAM chips were only beginning to migrate to a 50 nm process. [28] In January 2011, Samsung announced the completion and release for testing of a 30 nm 2048 MB [6] DDR4 DRAM module. It has a maximum bandwidth of 2.13  Gbit/s at 1.2 V, uses pseudo open drain technology and draws 40% less power than an equivalent DDR3 module. [29] [30]

DDR5

In March 2017, JEDEC announced a DDR5 standard is under development, [31] but provided no details except for the goals of doubling the bandwidth of DDR4, reducing power consumption, and publishing the standard in 2018. The standard was released on 14 July 2020. [32]

Failed successors

In addition to DDR, there were several other proposed memory technologies to succeed SDR SDRAM.

Rambus DRAM (RDRAM)

RDRAM was a proprietary technology that competed against DDR. Its relatively high price and disappointing performance (resulting from high latencies and a narrow 16-bit data channel versus DDR's 64-bit channel) caused it to lose the race to succeed SDR SDRAM.

Synchronous-link DRAM (SLDRAM)

SLDRAM boasted higher performance and competed against RDRAM. It was developed during the late 1990s by the SLDRAM Consortium, which consisted of about 20 major DRAM and computer industry manufacturers. (The SLDRAM Consortium became incorporated as SLDRAM Inc. and then changed its name to Advanced Memory International, Inc.) SLDRAM was an open standard and did not require licensing fees. The specifications called for a 64-bit bus running at a 200, 300 or 400 MHz clock frequency. This is achieved by all signals being on the same line, thereby avoiding the synchronization time of multiple lines. Like DDR SDRAM, SLDRAM uses a double-pumped bus, giving it an effective speed of 400, [33] 600, [34] or 800 MT/s. (1 MT/s = one million transfers per second.)

SLDRAM used an 11-bit command bus (10 command bits CA9:0 plus one start-of-command FLAG line) to transmit 40-bit command packets on 4 consecutive edges of a differential command clock (CCLK/CCLK#). Unlike SDRAM, there were no per-chip select signals; each chip was assigned an ID when reset, and the command contained the ID of the chip that should process it. Data was transferred in 4- or 8-word bursts across an 18-bit (per chip) data bus, using one of two differential data clocks (DCLK0/DCLK0# and DCLK1/DCLK1#). Unlike standard SDRAM, the clock was generated by the data source (the SLDRAM chip in the case of a read operation) and transmitted in the same direction as the data, greatly reducing data skew. To avoid the need for a pause when the source of the DCLK changes, each command specified which DCLK pair it would use. [35]

The basic read/write command consisted of (beginning with CA9 of the first word):

SLDRAM read, write or row op request packet (each row below is one edge of the command clock; cells spanning several CA lines are written as ranges):

| FLAG | CA9 … CA0 |
|------|-----------|
| 1 | ID8 … ID0 (device ID), CMD5 |
| 0 | Command code … CMD0, bank, row (start) |
| 0 | Row (continued), 0 |
| 0 | 0, 0, 0, column |

Individual devices had 8-bit IDs. The 9th bit of the ID sent in commands was used to address multiple devices. Any aligned power-of-2 sized group could be addressed. If the transmitted msbit was set, all least-significant bits up to and including the least-significant 0 bit of the transmitted address were ignored for "is this addressed to me?" purposes. (If the ID8 bit is actually considered less significant than ID0, the unicast address matching becomes a special case of this pattern.)
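
A sketch of this matching rule in Python (the function name and framing are illustrative, not from the specification):

```python
# Sketch of the SLDRAM multicast ID match described above. A 9-bit
# transmitted address selects one chip, or an aligned power-of-two
# group of chips when the msbit is set.

def id_matches(device_id: int, sent: int) -> bool:
    """device_id: 8-bit chip ID; sent: 9-bit transmitted address."""
    if not (sent & 0x100):                 # msbit clear: exact match
        return device_id == (sent & 0xFF)
    low = sent & 0xFF
    ignore = 0
    while ignore < 8 and (low >> ignore) & 1:
        ignore += 1                        # count trailing 1 bits...
    ignore += 1                            # ...plus the lowest 0 bit
    mask = (0xFF << ignore) & 0xFF         # compare only bits above
    return (device_id & mask) == (low & mask)

# 0b1_00001011 addresses the aligned group of 8 chips with IDs 8-15:
print([i for i in range(32) if id_matches(i, 0b100001011)])
```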

A read/write command had the msbit (CMD5) clear.

A notable omission from the specification was per-byte write enables; it was designed for systems with caches and ECC memory, which always write in multiples of a cache line.

Additional commands (with CMD5 set) opened and closed rows without a data transfer, performed refresh operations, read or wrote configuration registers, and performed other maintenance operations. Most of these commands supported an additional 4-bit sub-ID (sent as 5 bits, using the same multiple-destination encoding as the primary ID) which could be used to distinguish devices that were assigned the same primary ID because they were connected in parallel and always read/written at the same time.

There were a number of 8-bit control registers and 32-bit status registers to control various device timing parameters.

Virtual channel memory (VCM) SDRAM

VCM was a proprietary type of SDRAM that was designed by NEC, but released as an open standard with no licensing fees. It is pin-compatible with standard SDRAM, but the commands are different. The technology was a potential competitor of RDRAM because VCM was not nearly as expensive as RDRAM was. A Virtual Channel Memory (VCM) module is mechanically and electrically compatible with standard SDRAM, so support for both depends only on the capabilities of the memory controller. In the late 1990s, a number of PC northbridge chipsets (such as the popular VIA KX133 and KT133) included VCSDRAM (Virtual Channel SDRAM) support.

VCM inserts an SRAM cache of 16 "channel" buffers, each 1/4 row "segment" in size, between DRAM banks' sense amplifier rows and the data I/O pins. "Prefetch" and "restore" commands, unique to VCSDRAM, copy data between the DRAM's sense amplifier row and the channel buffers, while the equivalent of SDRAM's read and write commands specify a channel number to access. Reads and writes may thus be performed independent of the currently active state of the DRAM array, with the equivalent of four full DRAM rows being "open" for access at a time. This is an improvement over the two open rows possible in a standard two-bank SDRAM. (There is actually a 17th "dummy channel" used for some operations.)

To read from VCSDRAM, after the active command, a "prefetch" command is required to copy data from the sense amplifier array to the channel buffer. This command specifies a bank, two bits of column address (to select the segment of the row), and four bits of channel number. Once this is performed, the DRAM array may be precharged while read commands to the channel buffer continue. To write, first the data is written to a channel buffer (typically previously initialized using a prefetch command), then a restore command, with the same parameters as the prefetch command, copies a segment of data from the channel to the sense amplifier array.

Unlike a normal SDRAM write, which must be performed to an active (open) row, the VCSDRAM bank must be precharged (closed) when the restore command is issued. An active command immediately after the restore command specifies the DRAM row and completes the write to the DRAM array. There is, in addition, a 17th "dummy channel" which allows writes to the currently open row. It may not be read from, but may be prefetched to, written to, and restored to the sense amplifier array. [36] [37]
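
To make the ordering concrete, here is a hedged sketch of the two sequences as a hypothetical controller might issue them. The command names follow the text, but the ctrl.cmd API is invented for illustration:

```python
# Illustrative VCSDRAM command sequences; the controller API is
# hypothetical, only the command ordering follows the text above.

def vcm_read(ctrl, bank, row, segment, channel, column):
    ctrl.cmd("ACTIVE", bank=bank, row=row)             # open the row
    ctrl.cmd("PREFETCH", bank=bank, segment=segment,   # row segment ->
             channel=channel)                          #  channel buffer
    ctrl.cmd("PRECHARGE", bank=bank)                   # array may close
    return ctrl.cmd("READ", channel=channel,           # channel is still
                    column=column)                     #  readable

def vcm_write(ctrl, bank, row, segment, channel, column, data):
    ctrl.cmd("WRITE", channel=channel, column=column, data=data)
    ctrl.cmd("RESTORE", bank=bank, segment=segment,    # bank must be
             channel=channel)                          #  precharged here
    ctrl.cmd("ACTIVE", bank=bank, row=row)             # names the row,
                                                       # completing the write

class TraceController:
    """Hypothetical stand-in that just records the command stream."""
    def cmd(self, name, **kw):
        print(name, kw)

vcm_read(TraceController(), bank=0, row=42, segment=1, channel=3, column=17)
```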

Although normally a segment is restored to the same memory address as it was prefetched from, the channel buffers may also be used for very efficient copying or clearing of large, aligned memory blocks. (The use of quarter-row segments is driven by the fact that DRAM cells are narrower than SRAM cells. The SRAM bits are designed to be four DRAM bits wide, and are conveniently connected to one of the four DRAM bits they straddle.) Additional commands prefetch a pair of segments to a pair of channels, and an optional command combines prefetch, read, and precharge to reduce the overhead of random reads.

The above are the JEDEC-standardized commands. Earlier chips did not support the dummy channel or pair prefetch, and use a different encoding for precharge.

A 13-bit address bus is suitable for a device up to 128 Mbit. [6] It has two banks, each containing 8,192 rows and 8,192 columns. Thus, row addresses are 13 bits, segment addresses are two bits, and eight column address bits are required to select one byte from the 2,048 bits (256 bytes) in a segment.

Synchronous Graphics RAM (SGRAM)

Synchronous graphics RAM (SGRAM) is a specialized form of SDRAM for graphics adaptors. It is designed for graphics-related tasks such as texture memory and framebuffers, found on video cards. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies.

The earliest known SGRAM chips are 8 Mbit [6] devices dating back to 1994: the Hitachi HM5283206, introduced in November 1994, [38] and the NEC μPD481850, introduced in December 1994. [39] The earliest known commercial device to use SGRAM is Sony's PlayStation (PS) video game console, starting with the Japanese SCPH-5000 model released in December 1995, using the NEC μPD481850 chip. [40] [41]

Graphics double data rate SDRAM (GDDR SDRAM)

Graphics double data rate SDRAM (GDDR SDRAM) is a type of specialized DDR SDRAM designed to be used as the main memory of graphics processing units (GPUs). GDDR SDRAM is distinct from commodity types of DDR SDRAM such as DDR3, although they share some core technologies. Their primary characteristics are higher clock frequencies for both the DRAM core and I/O interface, which provides greater memory bandwidth for GPUs. As of 2023, there are eight successive generations of GDDR: GDDR2, GDDR3, GDDR4, GDDR5, GDDR5X, GDDR6, GDDR6X and GDDR6W.

GDDR was initially known as DDR SGRAM. It was commercially introduced as a 16  Mbit [6] memory chip by Samsung Electronics in 1998. [8]

High Bandwidth Memory (HBM)

High Bandwidth Memory (HBM) is a high-performance RAM interface for 3D-stacked SDRAM from Samsung, AMD and SK Hynix. It is designed to be used in conjunction with high-performance graphics accelerators and network devices. [42] The first HBM memory chip was produced by SK Hynix in 2013. [43]

Timeline

SDRAM

Synchronous dynamic random-access memory (SDRAM)

| Date of introduction | Chip name | Capacity (bits) [6] | SDRAM type | Manufacturer(s) | Process | MOSFET | Area | Ref |
|---|---|---|---|---|---|---|---|---|
| 1992 | KM48SL2000 | 16 Mbit | SDR | Samsung | ? | CMOS | ? | [4] [3] |
| 1996 | MSM5718C50 | 18 Mbit | RDRAM | Oki | ? | CMOS | 325 mm² | [44] |
| 1996 | N64 RDRAM | 36 Mbit | RDRAM | NEC | ? | CMOS | ? | [45] |
| 1996 | ? | 1024 Mbit | SDR | Mitsubishi | 150 nm | CMOS | ? | [46] |
| 1997 | ? | 1024 Mbit | SDR | Hyundai | ? | SOI | ? | [10] |
| 1998 | MD5764802 | 64 Mbit | RDRAM | Oki | ? | CMOS | 325 mm² | [44] |
| March 1998 | Direct RDRAM | 72 Mbit | RDRAM | Rambus | ? | CMOS | ? | [47] |
| June 1998 | ? | 64 Mbit | DDR | Samsung | ? | CMOS | ? | [8] [7] [9] |
| 1998 | ? | 64 Mbit | DDR | Hyundai | ? | CMOS | ? | [10] |
| 1998 | ? | 128 Mbit | SDR | Samsung | ? | CMOS | ? | [48] [7] |
| 1999 | ? | 128 Mbit | DDR | Samsung | ? | CMOS | ? | [7] |
| 1999 | ? | 1024 Mbit | DDR | Samsung | 140 nm | CMOS | ? | [46] |
| 2000 | GS eDRAM | 32 Mbit | eDRAM | Sony, Toshiba | 180 nm | CMOS | 279 mm² | [49] |
| 2001 | ? | 288 Mbit | RDRAM | Hynix | ? | CMOS | ? | [50] |
| 2001 | ? | ? | DDR2 | Samsung | 100 nm | CMOS | ? | [9] [46] |
| 2002 | ? | 256 Mbit | SDR | Hynix | ? | CMOS | ? | [50] |
| 2003 | EE+GS eDRAM | 32 Mbit | eDRAM | Sony, Toshiba | 90 nm | CMOS | 86 mm² | [49] |
| 2003 | ? | 72 Mbit | DDR3 | Samsung | 90 nm | CMOS | ? | [51] |
| 2003 | ? | 512 Mbit | DDR2 | Hynix | ? | CMOS | ? | [50] |
| 2003 | ? | 512 Mbit | DDR2 | Elpida | 110 nm | CMOS | ? | [52] |
| 2003 | ? | 1024 Mbit | DDR2 | Hynix | ? | CMOS | ? | [50] |
| 2004 | ? | 2048 Mbit | DDR2 | Samsung | 80 nm | CMOS | ? | [53] |
| 2005 | EE+GS eDRAM | 32 Mbit | eDRAM | Sony, Toshiba | 65 nm | CMOS | 86 mm² | [54] |
| 2005 | Xenos eDRAM | 80 Mbit | eDRAM | NEC | 90 nm | CMOS | ? | [55] |
| 2005 | ? | 512 Mbit | DDR3 | Samsung | 80 nm | CMOS | ? | [9] [56] |
| 2006 | ? | 1024 Mbit | DDR2 | Hynix | 60 nm | CMOS | ? | [50] |
| 2008 | ? | ? | LPDDR2 | Hynix | ? | ? | ? | |
| April 2008 | ? | 8192 Mbit | DDR3 | Samsung | 50 nm | CMOS | ? | [57] |
| 2008 | ? | 16384 Mbit | DDR3 | Samsung | 50 nm | CMOS | ? | |
| 2009 | ? | ? | DDR3 | Hynix | 44 nm | CMOS | ? | [50] |
| 2009 | ? | 2048 Mbit | DDR3 | Hynix | 40 nm | ? | ? | |
| 2011 | ? | 16384 Mbit | DDR3 | Hynix | 40 nm | CMOS | ? | [43] |
| 2011 | ? | 2048 Mbit | DDR4 | Hynix | 30 nm | CMOS | ? | [43] |
| 2013 | ? | ? | LPDDR4 | Samsung | 20 nm | CMOS | ? | [43] |
| 2014 | ? | 8192 Mbit | LPDDR4 | Samsung | 20 nm | CMOS | ? | [58] |
| 2015 | ? | 12 Gbit | LPDDR4 | Samsung | 20 nm | CMOS | ? | [48] |
| 2018 | ? | 8192 Mbit | LPDDR5 | Samsung | 10 nm | FinFET | ? | [59] |
| 2018 | ? | 128 Gbit | DDR4 | Samsung | 10 nm | FinFET | ? | [60] |

SGRAM and HBM

Synchronous graphics random-access memory (SGRAM) and High Bandwidth Memory (HBM)

| Date of introduction | Chip name | Capacity (bits) [6] | SDRAM type | Manufacturer(s) | Process | MOSFET | Area | Ref |
|---|---|---|---|---|---|---|---|---|
| November 1994 | HM5283206 | 8 Mbit | SGRAM (SDR) | Hitachi | 350 nm | CMOS | 58 mm² | [38] [61] |
| December 1994 | μPD481850 | 8 Mbit | SGRAM (SDR) | NEC | ? | CMOS | 280 mm² | [39] [41] |
| 1997 | μPD4811650 | 16 Mbit | SGRAM (SDR) | NEC | 350 nm | CMOS | 280 mm² | [62] [63] |
| September 1998 | ? | 16 Mbit | SGRAM (GDDR) | Samsung | ? | CMOS | ? | [8] |
| 1999 | KM4132G112 | 32 Mbit | SGRAM (SDR) | Samsung | ? | CMOS | ? | [64] |
| 2002 | ? | 128 Mbit | SGRAM (GDDR2) | Samsung | ? | CMOS | ? | [65] |
| 2003 | ? | 256 Mbit | SGRAM (GDDR2) | Samsung | ? | CMOS | ? | [65] |
| 2003 | ? | 256 Mbit | SGRAM (GDDR3) | Samsung | ? | CMOS | ? | [65] |
| March 2005 | K4D553238F | 256 Mbit | SGRAM (GDDR) | Samsung | ? | CMOS | 77 mm² | [66] |
| October 2005 | ? | 256 Mbit | SGRAM (GDDR4) | Samsung | ? | CMOS | ? | [67] |
| 2005 | ? | 512 Mbit | SGRAM (GDDR4) | Hynix | ? | CMOS | ? | [50] |
| 2007 | ? | 1024 Mbit | SGRAM (GDDR5) | Hynix | 60 nm | ? | ? | |
| 2009 | ? | 2048 Mbit | SGRAM (GDDR5) | Hynix | 40 nm | ? | ? | |
| 2010 | K4W1G1646G | 1024 Mbit | SGRAM (GDDR3) | Samsung | ? | CMOS | 100 mm² | [68] |
| 2012 | ? | 4096 Mbit | SGRAM (GDDR3) | SK Hynix | ? | CMOS | ? | [43] |
| 2013 | ? | ? | HBM | SK Hynix | ? | ? | ? | |
| March 2016 | MT58K256M32JA | 8 Gbit | SGRAM (GDDR5X) | Micron | 20 nm | CMOS | 140 mm² | [69] |
| June 2016 | ? | 32 Gbit | HBM2 | Samsung | 20 nm | CMOS | ? | [70] [71] |
| 2017 | ? | 64 Gbit | HBM2 | Samsung | 20 nm | CMOS | ? | [70] |
| January 2018 | K4ZAF325BM | 16 Gbit | SGRAM (GDDR6) | Samsung | 10 nm | FinFET | 225 mm² | [72] [73] [74] |


References

  1. P. Darche (2020). Microprocessor: Prolegomenes - Calculation and Storage Functions - Calculation Models and Computer. p. 59. ISBN 9781786305633.
  2. B. Jacob; S. W. Ng; D. T. Wang (2008). Memory Systems: Cache, DRAM, Disk. Morgan Kaufmann. p. 324. ISBN 9780080553849.
  3. "Electronic Design". Electronic Design. 41 (15–21). Hayden Publishing Company. 1993. The first commercial synchronous DRAM, the Samsung 16-Mbit KM48SL2000, employs a single-bank architecture that lets system designers easily transition from asynchronous to synchronous systems.
  4. "KM48SL2000-7 Datasheet". Samsung. August 1992. Retrieved 19 June 2019.
  5. "Samsung 30 nm Green PC3-12800 Low Profile 1.35 V DDR3 Review". TechPowerUp. March 8, 2012. Retrieved 25 June 2019.
  6. Here, K, M, G, or T refer to the binary prefixes based on powers of 1024.
  7. "Samsung Electronics Develops First 128Mb SDRAM with DDR/SDR Manufacturing Option". Samsung Electronics. 10 February 1999. Retrieved 23 June 2019.
  8. "Samsung Electronics Comes Out with Super-Fast 16M DDR SGRAMs". Samsung Electronics. 17 September 1998. Retrieved 23 June 2019.
  9. "Samsung Demonstrates World's First DDR 3 Memory Prototype". Phys.org. 17 February 2005. Retrieved 23 June 2019.
  10. "History: 1990s". az5miao. Retrieved 4 April 2022.
  11. "Nanya 256 Mb DDR SDRAM Datasheet" (PDF). intel.com. April 2003. Retrieved 2015-08-02.
  12. Micron, General DDR SDRAM Functionality, Technical Note, TN-46-05.
  13. Graham, Allan (2007-01-12). "The outlook for DRAMs in consumer electronics". EDN. AspenCore Media. Retrieved 2021-04-13.
  14. "SDRAM Part Catalog". micron.com, 070928.
  15. "What is DDR memory?".
  16. Thomas Soderstrom (June 5, 2007). "Pipe Dreams: Six P35-DDR3 Motherboards Compared". Tom's Hardware.
  17. "AMD to Adopt DDR3 in Three Years". 28 November 2005.
  18. Wesly Fink (July 20, 2007). "Super Talent & TEAM: DDR3-1600 Is Here!". Anandtech.
  19. Jennifer Johnson (24 April 2012). "G.SKILL Announces DDR3 Memory Kit For Ivy Bridge".
  20. DDR4 PDF, page 23.
  21. "DDR4 not expected until 2015". semiaccurate.com. 16 August 2010.
  22. "IDF: "DDR3 won't catch up with DDR2 during 2009"". Alphr.
  23. "heise online - IT-News, Nachrichten und Hintergründe" [heise online - IT news, stories and background]. heise online.
  24. "Next-Generation DDR4 Memory to Reach 4.266GHz - Report". Xbitlabs.com. August 16, 2010. Archived from the original on December 19, 2010. Retrieved 2011-01-03.
  25. "IDF: DDR4 memory targeted for 2012" (in German). hardware-infos.com. Archived from the original on 2009-07-13. Retrieved 2009-06-16.
  26. "JEDEC Announces Key Attributes of Upcoming DDR4 Standard" (Press release). JEDEC. 2011-08-22. Retrieved 2011-01-06.
  27. Gruener, Wolfgang (February 4, 2009). "Samsung hints to DDR4 with first validated 40 nm DRAM". tgdaily.com. Archived from the original on May 24, 2009. Retrieved 2009-06-16.
  28. Jansen, Ng (January 20, 2009). "DDR3 Will be Cheaper, Faster in 2009". dailytech.com. Archived from the original on June 22, 2009. Retrieved 2009-06-17.
  29. "Samsung Develops Industry's First DDR4 DRAM, Using 30nm Class Technology". Samsung. 2011-01-04. Retrieved 2011-03-13.
  30. "Samsung develops DDR4 memory, up to 40% more efficient". TechSpot.
  31. "JEDEC DDR5 & NVDIMM-P Standards Under Development" (Press release). JEDEC. 30 March 2017.
  32. Smith, Ryan (2020-07-14). "DDR5 Memory Specification Released: Setting the Stage for DDR5-6400 And Beyond". AnandTech. Retrieved 2020-07-15.
  33. Dean Kent (1998-10-24). RAM Guide: SLDRAM. Tom's Hardware. Retrieved 2011-01-01.
  34. Hyundai Electronics (1997-12-20). HYSL8M18D600A 600 Mb/s/pin 8M x 18 SLDRAM (PDF) (data sheet). Archived from the original (PDF) on 2012-04-26. Retrieved 2011-12-27.
  35. SLDRAM Inc. (1998-07-09). SLD4M18DR400 400 Mb/s/pin 4M x 18 SLDRAM (PDF) (data sheet). pp. 32–33. Archived from the original (PDF) on 2012-04-26. Retrieved 2011-12-27.
  36. Siemens Semiconductor Group. HYB39V64x0yT 64MBit Virtual Channel SDRAM (PDF). Archived (PDF) from the original on 2018-11-12.
  37. NEC (1999). 128M-BIT VirtualChannel SDRAM preliminary datasheet (PDF). Archived (PDF) from the original on 2013-12-03. Retrieved 2012-07-17.
  38. HM5283206 Datasheet. Hitachi. 11 November 1994. Retrieved 10 July 2019.
  39. μPD481850 Datasheet. NEC. 6 December 1994. Retrieved 10 July 2019.
  40. "PU-18". PSXDEV. Retrieved 10 July 2019.
  41. NEC Application Specific Memory. NEC. Fall 1995. p. 359. Retrieved 21 June 2019.
  42. ISSCC 2014 Trends. Archived 2015-02-06 at the Wayback Machine. Page 118, "High-Bandwidth DRAM".
  43. "History: 2010s". az5miao. Retrieved 4 April 2022.
  44. "MSM5718C50/MD5764802" (PDF). Oki Semiconductor. February 1999. Archived (PDF) from the original on 2019-06-21. Retrieved 21 June 2019.
  45. "Ultra 64 Tech Specs". Next Generation. No. 14. Imagine Media. February 1996. p. 40.
  46. "Memory". STOL (Semiconductor Technology Online). Retrieved 25 June 2019.
  47. "Direct RDRAM" (PDF). Rambus. 12 March 1998. Archived (PDF) from the original on 2019-06-21. Retrieved 21 June 2019.
  48. "History". Samsung Electronics. Retrieved 19 June 2019.
  49. "EMOTION ENGINE AND GRAPHICS SYNTHESIZER USED IN THE CORE OF PLAYSTATION BECOME ONE CHIP" (PDF). Sony. April 21, 2003. Archived (PDF) from the original on 2017-02-27. Retrieved 26 June 2019.
  50. "History: 2000s". az5miao. Retrieved 4 April 2022.
  51. "Samsung Develops the Industry's Fastest DDR3 SRAM for High Performance EDP and Network Applications". Samsung Semiconductor. 29 January 2003. Retrieved 25 June 2019.
  52. "Elpida ships 2GB DDR2 modules". The Inquirer. 4 November 2003. Archived from the original on July 10, 2019. Retrieved 25 June 2019.
  53. "Samsung Shows Industry's First 2-Gigabit DDR2 SDRAM". Samsung Semiconductor. 20 September 2004. Retrieved 25 June 2019.
  54. "ソニー、65nm対応の半導体設備を導入。3年間で2,000億円の投資" [Sony introduces 65 nm semiconductor fabrication equipment; ¥200 billion investment over three years]. pc.watch.impress.co.jp. Archived from the original on 2016-08-13.
  55. ATI engineers, by way of Beyond 3D's Dave Baumann.
  56. "Our Proud Heritage from 2000 to 2009". Samsung Semiconductor. Retrieved 25 June 2019.
  57. "Samsung 50nm 2GB DDR3 chips are industry's smallest". SlashGear. 29 September 2008. Retrieved 25 June 2019.
  58. "Our Proud Heritage from 2010 to Now". Samsung Semiconductor. Retrieved 25 June 2019.
  59. "Samsung Electronics Announces Industry's First 8Gb LPDDR5 DRAM for 5G and AI-powered Mobile Applications". Samsung. July 17, 2018. Retrieved 8 July 2019.
  60. "Samsung Unleashes a Roomy DDR4 256GB RAM". Tom's Hardware. 6 September 2018. Archived from the original on June 21, 2019. Retrieved 4 April 2022.
  61. "Hitachi HM5283206FP10 8Mbit SGRAM" (PDF). Smithsonian Institution. Archived (PDF) from the original on 2003-07-16. Retrieved 10 July 2019.
  62. UPD4811650 Datasheet. NEC. December 1997. Retrieved 10 July 2019.
  63. Takeuchi, Kei (1998). "16M-BIT SYNCHRONOUS GRAPHICS RAM: μPD4811650". NEC Device Technology International (48). Retrieved 10 July 2019.
  64. "Samsung Announces the World's First 222 MHz 32Mbit SGRAM for 3D Graphics and Networking Applications". Samsung Semiconductor. 12 July 1999. Retrieved 10 July 2019.
  65. "Samsung Electronics Announces JEDEC-Compliant 256Mb GDDR2 for 3D Graphics". Samsung Electronics. 28 August 2003. Retrieved 26 June 2019.
  66. "K4D553238F Datasheet". Samsung Electronics. March 2005. Retrieved 10 July 2019.
  67. "Samsung Electronics Develops Industry's First Ultra-Fast GDDR4 Graphics DRAM". Samsung Semiconductor. October 26, 2005. Retrieved 8 July 2019.
  68. "K4W1G1646G-BC08 Datasheet" (PDF). Samsung Electronics. November 2010. Archived (PDF) from the original on 2022-01-24. Retrieved 10 July 2019.
  69. Shilov, Anton (March 29, 2016). "Micron Begins to Sample GDDR5X Memory, Unveils Specs of Chips". AnandTech. Retrieved 16 July 2019.
  70. Shilov, Anton (July 19, 2017). "Samsung Increases Production Volumes of 8 GB HBM2 Chips Due to Growing Demand". AnandTech. Retrieved 29 June 2019.
  71. "HBM". Samsung Semiconductor. Retrieved 16 July 2019.
  72. "Samsung Electronics Starts Producing Industry's First 16-Gigabit GDDR6 for Advanced Graphics Systems". Samsung. January 18, 2018. Retrieved 15 July 2019.
  73. Killian, Zak (18 January 2018). "Samsung fires up its foundries for mass production of GDDR6 memory". Tech Report. Retrieved 18 January 2018.
  74. "Samsung Begins Producing The Fastest GDDR6 Memory In The World". Wccftech. 18 January 2018. Retrieved 16 July 2019.