Storage relies on DRAM

Pallab Chatterjee -January 05, 2012

As capacity and performance increase for computer storage in both enterprise and network-endpoint products, systems are increasingly reliant on DRAM (Figure 1). DRAM has helped these products meet performance requirements, work within their physical limits, and keep power consumption in check. Hard disks and most rotating media, such as optical disk drives, have a mechanical speed limitation that keeps data coming off the read head at rates on the order of hundreds of megabits per second. However, most of the interfaces in use today, such as SATA, USB, and Thunderbolt, operate at multiple gigabits per second.
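The scale of that mismatch is easy to put in numbers. The sketch below uses assumed round figures for a sustained platter read rate and the SATA 6-Gbps line rate; neither number comes from the article.

```python
# Back-of-the-envelope comparison of sustained media rate vs. interface
# line rate. Both figures are illustrative assumptions, not drive specs.

media_rate_mbps = 150 * 8   # ~150 MB/s off the read head = 1200 Mbit/s
sata3_line_mbps = 6000      # SATA 6-Gbps line rate

ratio = sata3_line_mbps / media_rate_mbps
print(f"SATA III line rate is ~{ratio:.0f}x the sustained media rate")
```

Even before protocol overhead, the interface can move data several times faster than the mechanics can supply it, which is the gap the drive's DRAM buffer fills.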

Typical state-of-the-art disk drives operate at 7200 rpm and have access times of less than 10 msec. To bridge the gap between these access times and the 3- to 6-Gbps SATA interface rates, system engineers have been adding more cache memory to the drives themselves. The standard for years was 16 Mbytes, and now most systems feature 32 or 64 Mbytes. The larger cache allows the use of inexpensive controllers that can be directly integrated into the chip set and the CPU. These integrated controllers generally support either a SATA format or a PCIe interface. For enterprise systems, stand-alone controllers still support drives using SAS (serial-attached SCSI) or SOP (SCSI over PCIe), which is gaining popularity as an interface.
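A rough calculation shows why a 64-Mbyte cache is useful: a cached burst drains over the interface several times faster than the platters can refill it. The payload rate and media rate below are assumed values, not figures from the article.

```python
# Time to drain a 64-Mbyte cache over SATA vs. time to refill it from
# the platters. Rates are assumed round numbers, not drive specs.

cache_bytes = 64 * 2**20         # 64-Mbyte drive cache
sata_payload_Bps = 600e6         # 6 Gbps less 8b/10b overhead ~ 600 MB/s
media_Bps = 150e6                # assumed sustained platter rate

t_drain = cache_bytes / sata_payload_Bps   # cache to host over SATA
t_refill = cache_bytes / media_Bps         # platters to cache
print(f"drain: {t_drain*1000:.0f} ms, refill: {t_refill*1000:.0f} ms")
```

The cache therefore lets the host see short bursts at full interface speed while the mechanics catch up in the background.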

These systems are using low-power memories. Designers of higher-capacity drive memories are also seeking power reduction and are moving from memories requiring 2.5V to those requiring 1.8V. This shift helps to reduce power while retaining the same access times. The DRAM is also a key component for performing ECC (error-correction code) on the data both entering and exiting the system bus. Unlike solid-state disks, which have a lower bit-error rate, rotating media, even those in consumer-targeted drives, have a data-integrity requirement on the order of one part in 10^14.

Self-encrypted drives are also fueling the need for increased DRAM in hard drives. These self-encrypted devices need a working area to encrypt and decrypt the message strings. The data strings can be as long as 2048 bits, and the processing of these strings can take many megabytes of DRAM. Self-encrypting drives typically have 64 Mbytes or more of cache memory to facilitate this operation and to maintain the multiple-gigabit-per-second throughput.
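As a sizing sketch, the arithmetic below shows how many 2048-bit strings a 64-Mbyte working area can hold at once. The article does not describe the internal buffer layout, so this is purely illustrative.

```python
# How many 2048-bit data strings fit in a 64-Mbyte working area.
# Illustrative arithmetic only; no buffer layout is specified.

block_bytes = 2048 // 8          # one 2048-bit string = 256 bytes
cache_bytes = 64 * 2**20         # 64-Mbyte cache
blocks_in_cache = cache_bytes // block_bytes
print(f"{blocks_in_cache} strings of 2048 bits fit in 64 Mbytes")
```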

Solid-state disks and hybrid drives also rely heavily on DRAM. Although these drives are faster than rotating media by a factor of 100, their read- and write-cycle times are still measured in microseconds. The host data connections still expect nanosecond- to microsecond-level access, so designers use DRAM to bridge these times. The discrepancy between nanosecond and microsecond performance is much smaller than the multiple-millisecond access gap of rotating media, so the DRAM for solid-state and hybrid drives is smaller, and designers can typically integrate it into the controllers.
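The latency tiers involved can be compared directly. All three latencies below are assumed typical values for the era, not figures from the article.

```python
# Orders-of-magnitude latency gap the DRAM buffer bridges.
# All three latencies are assumed typical values, not article figures.

dram_ns = 50                 # DRAM random access, tens of ns
nand_ns = 50_000             # NAND flash page read, tens of us
hdd_ns = 10_000_000          # rotating-media access, ~10 ms

print(f"NAND is ~{nand_ns // dram_ns}x slower than DRAM")
print(f"HDD is ~{hdd_ns // nand_ns}x slower than NAND")
```

The NAND-to-DRAM gap is roughly three orders of magnitude, versus five or more for a hard disk, which is why a much smaller DRAM buffer suffices for solid-state drives.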

The Thunderbolt interface operates at 10 Gbps, far outpacing the sustained data rates of both rotating media and solid-state drives. Storage for Thunderbolt-attached devices is thus buffered primarily in DRAM, in several-gigabyte blocks. Professional systems will be implemented with ECC DRAM and will support dual read and write paths ahead of the actual storage media.
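A quick calculation shows what a several-gigabyte DRAM block buys at that rate; the 4-Gbyte buffer size below is an assumed example, not a figure from the article.

```python
# How long a several-gigabyte DRAM buffer absorbs a full-rate 10-Gbps
# Thunderbolt burst. Buffer size is an assumed example value.

buffer_bytes = 4e9            # assumed 4-GB DRAM buffer
thunderbolt_bps = 10e9        # 10-Gbps line rate

seconds = buffer_bytes * 8 / thunderbolt_bps
print(f"a 4-GB buffer absorbs {seconds:.1f} s of full-rate traffic")
```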

Pallab Chatterjee has been an independent design consultant since 1985.
