Design with QDR-IV for high-performance networking systems, part 1

October 26, 2016

Streaming video, cloud services, and mobile data have fueled the continuing growth of global network traffic. To support this growth, networking systems must provide very fast line rates and process millions of packets every second. In a networking system, packets can arrive in random order, and each packet requires several memory transactions to process. The flow of packets demands hundreds of millions of memory transactions every second to look up routes from a forwarding table or to update statistics.

Packet rates translate directly into random memory access rates. Today’s networking equipment therefore requires memories with very high random transaction rate (RTR) performance and bandwidth to keep pace with ever-increasing network traffic. Specifically, RTR measures the number of fully random memory transactions (reads or writes) that can be performed with the memory; in other words, it is the rate at which random addresses can be issued (the random address rate). This metric is independent of the number of bits accessed per transaction. RTR is measured in millions of transactions per second (MT/s).
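As a rough illustration of the arithmetic behind these requirements, the sketch below estimates the RTR a line card needs. The line rate, packet size, and accesses-per-packet values are assumptions chosen for illustration, not figures from this article.

```python
# Rough estimate of the random transaction rate (RTR) a line card needs.
# Line rate, packet size, and accesses per packet are illustrative assumptions.

def required_rtr_mtps(line_rate_gbps, packet_bytes, accesses_per_packet):
    """Return the memory RTR needed, in millions of transactions per second."""
    # Each Ethernet packet also occupies 20 bytes of wire overhead
    # (preamble, start-of-frame delimiter, and inter-frame gap).
    wire_bits = (packet_bytes + 20) * 8
    packets_per_sec = line_rate_gbps * 1e9 / wire_bits
    return packets_per_sec * accesses_per_packet / 1e6

# Worst case: 100 Gbps of minimum-size (64-byte) packets, with an assumed
# 4 memory accesses per packet (e.g., lookup plus statistics read/update).
print(f"{required_rtr_mtps(100, 64, 4):.0f} MT/s")   # ~595 MT/s
```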

Today’s high-performance DRAMs offer a low RTR relative to the random-access rates that high-performance networking systems must handle. QDR-IV SRAM was designed to provide best-in-class RTR performance for demanding network functions. Figure 1 quantifies the QDR-IV advantage in RTR performance versus other types of memory. Even compared to the highest-performing alternatives, QDR-IV delivers at least 2× the RTR, making it an ideal choice for high-performance networking systems that must update statistics, track flow state, schedule packets, and perform table lookups.


Figure 1 This performance comparison shows the advantage QDR-IV has in RTR performance versus other types of memory.

Variants of QDR-IV: XP and HP

QDR-IV comes in two flavors: HP (High Performance) and XP (Xtreme Performance). HP operates at a lower frequency and does not use a banking scheme. XP targets the highest-performance applications and, when operated with a banking scheme, runs at a higher frequency than HP.

QDR-IV operates with read latency and write latency values that are determined by the speed of operation. Table 1 defines the operational modes and frequencies supported for each option.


Table 1 Here is a comparison of XP and HP operational modes.
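Since Table 1 pairs each variant with its supported frequencies, a first design step is simply checking which variant covers a target clock rate. The sketch below assumes the commonly published frequency ceilings of 667 MHz for HP and 1066 MHz for XP; confirm the actual modes, frequencies, and latencies against Table 1 and the datasheet.

```python
# Minimal variant-selection sketch. The frequency ceilings are the commonly
# published maximums for QDR-IV HP and XP; verify against Table 1.

QDR4_MAX_FREQ_MHZ = {"HP": 667, "XP": 1066}  # assumed ceilings

def pick_variants(target_freq_mhz):
    """Return the QDR-IV variants that can run at the target clock rate."""
    return [v for v, fmax in QDR4_MAX_FREQ_MHZ.items()
            if target_freq_mhz <= fmax]

print(pick_variants(600))    # ['HP', 'XP']
print(pick_variants(1000))   # ['XP'] -- requires the banking scheme
```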

QDR-IV SRAM incorporates two ports, designated Port A and Port B. Because accesses to the two ports are independent, the random transaction rate is maximized for any combination of read/write accesses to the memory array. Both ports are addressed through a common address bus (A) running at double data rate (i.e., on both edges of the clock): addresses for Port A are latched on the rising edge of the input clock (CK), and addresses for Port B are latched on the falling edge of CK (the rising edge of CK#). The control signals (LDA#, LDB#, RWA#, and RWB#) run at single data rate (SDR) and determine whether a read or a write operation is performed on each port. Both data ports (DQA and DQB) have double data rate (DDR) interfaces. The device uses a 2-word burst architecture and is available in ×18 and ×36 data bus widths.
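A back-of-the-envelope consequence of this architecture: with one address latched per port per clock cycle, peak RTR is twice the clock frequency, and each transaction bursts two words on a DDR data bus. The sketch below works through the numbers for an assumed ×36 device at an assumed 1066 MHz operating point.

```python
# Peak transaction rate and raw data bandwidth implied by the dual-port,
# DDR-address architecture: one Port A address per rising CK edge, one
# Port B address per falling edge, 2-word bursts on a DDR data bus.

def peak_rtr_mtps(freq_mhz, ports=2):
    # One address per port per clock cycle via the DDR address bus.
    return freq_mhz * ports

def peak_bandwidth_gbps(freq_mhz, width_bits=36, burst=2, ports=2):
    # Each transaction transfers `burst` words of `width_bits` bits.
    return freq_mhz * 1e6 * burst * width_bits * ports / 1e9

# Illustrative numbers for a x36 device at an assumed 1066 MHz operating point:
print(peak_rtr_mtps(1066))                      # 2132 MT/s
print(f"{peak_bandwidth_gbps(1066):.0f} Gbps")  # ~153 Gbps
```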

QDR-IV XP SRAM devices also have a bank-switching option. The banking scheme section describes the use of bank switching to enable operations at higher frequencies to achieve superior RTR.
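To give a flavor of what bank switching imposes on the memory controller, here is a hypothetical conflict check. It assumes eight banks selected by the low-order address bits and a rule that consecutive accesses on the same port must target different banks; the actual bank count, bank-address mapping, and conflict rule must come from the datasheet.

```python
# Sketch of a bank-conflict check for QDR-IV XP bank switching. Bank count,
# bank-address bits, and the conflict rule are assumptions for illustration;
# take the real restrictions from the datasheet.

NUM_BANKS = 8  # assumed

def bank_of(addr):
    return addr % NUM_BANKS  # assumed: bank = low-order address bits

def has_conflict(port_addrs):
    """port_addrs: addresses issued on one port, one per consecutive cycle."""
    return any(bank_of(a) == bank_of(b)
               for a, b in zip(port_addrs, port_addrs[1:]))

print(has_conflict([0x100, 0x101, 0x102]))  # False: banks 0, 1, 2
print(has_conflict([0x100, 0x108]))         # True: both map to bank 0
```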

Clock signal description

The CK/CK# clocks are associated with the address and control pins: An-A0, AINV, LDA#, LDB#, RWA#, and RWB#. The CK/CK# clocks are center-aligned with respect to the address and control signals.

DKA/DKA# and DKB/DKB# are incoming clocks associated with the input write data. These clocks are center-aligned with respect to the input write data.

Based on the QDR-IV SRAM device’s data bus width configuration, Table 2 shows the relationship of input clocks with respect to the input write data. To ensure proper timing between command and data cycles, and to enable proper data bus turnaround, the DKA/DKA# and DKB/DKB# clocks must meet the CK‑to‑DKx skew (tCKDK), which is specified in the respective datasheet.


Table 2 This table shows the relationship of input clocks and write data.
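A board- or controller-level check of this constraint can be as simple as verifying that each DK edge falls within the tCKDK window around the corresponding CK edge. The sketch below is illustrative only; the edge times and skew limits are placeholders, not datasheet values.

```python
# Minimal tCKDK sanity check. Edge times and limits are placeholders; the
# real tCKDK window comes from the datasheet for the chosen speed bin.

def ck_to_dk_skew_ok(t_ck_ns, t_dk_ns, t_ckdk_min_ns, t_ckdk_max_ns):
    """Check that a DK edge lands within the allowed window around CK."""
    skew = t_dk_ns - t_ck_ns
    return t_ckdk_min_ns <= skew <= t_ckdk_max_ns

# Hypothetical edges and limits for illustration only:
print(ck_to_dk_skew_ok(t_ck_ns=0.000, t_dk_ns=0.150,
                       t_ckdk_min_ns=-0.225, t_ckdk_max_ns=0.225))  # True
```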

QKA/QKA# and QKB/QKB# are outgoing clocks associated with the read data. These clocks are edge-aligned with respect to the read output data. QK/QK#, the data output clock, is generated from an internal phase-locked loop (PLL). It is synchronized to the CK/CK# clock and meets the CK-to-QKx skew (tCKQK) specified in the respective datasheet.

Based on the QDR-IV SRAM device’s data bus width configuration, Table 3 shows the relationship of output clocks with respect to the read data.


Table 3 This table shows the relationship of output clocks and read data.
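Because QK is edge-aligned with the read data, the capturing logic is usually budgeted against the device’s read latency. A first-order step is converting that latency from clock cycles to nanoseconds, as the sketch below does; the 8-cycle latency and 1066 MHz clock are placeholder assumptions, not values from Table 1.

```python
# Convert a read latency in clock cycles into nanoseconds, e.g. to budget
# controller-side capture logic. Latency and frequency are placeholders;
# the actual values per variant and speed bin come from Table 1.

def read_latency_ns(freq_mhz, read_latency_cycles):
    t_ck_ns = 1e3 / freq_mhz          # clock period in ns
    return read_latency_cycles * t_ck_ns

# Illustration: an assumed read latency of 8 cycles at 1066 MHz.
print(f"{read_latency_ns(1066, 8):.2f} ns")  # ~7.50 ns
```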


