Design Con 2015

Share with PCI Express

Krishna Mallampati, PLX Technology -March 26, 2013

As kids we were taught that sharing is good. The semiconductor industry seems to have forgotten the spirit of that lesson, but one technology that reminds us of what our parents taught us is PCI Express (PCIe). Multiple vendors have tried to use this ubiquitous interconnect technology to enable the sharing of I/O endpoints, thereby lowering system costs, power requirements, maintenance, and upgrade needs. PCIe-based sharing of I/O endpoints is expected to make a significant difference in the multi-billion-dollar datacenter market.

Shared I/O
Traditional systems currently being deployed in volume must support several interconnect technologies. As Figure 1 shows, InfiniBand, Fibre Channel, and Ethernet are a few examples of these interconnects.


Figure 1: Example of a traditional I/O system in use today

This architecture has several limitations, including:
  • Existence of multiple I/O interconnect technologies
  • Low utilization rates of I/O endpoints
  • High power and cost of the system due to the need for multiple I/O endpoints
  • I/O is fixed at design and build time, with no flexibility to change later
  • Management software must handle multiple I/O protocols, adding overhead

The architecture's fundamental disadvantage is that multiple I/O interconnect technologies are in use, which increases latency, cost, board space, and power. The architecture would at least be partially justified if all the endpoints were utilized 100%; more often than not, however, they are under-utilized, and customers pay the entire overhead for limited use of the endpoints. The added latency arises because the PCIe interface native to the processors in these systems must be converted to multiple protocols. Designers can reduce system latency by using the PCIe that is native to the processors and converging all endpoints over PCIe.
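The under-utilization argument can be made concrete with a back-of-envelope calculation. All of the numbers below (per-server bandwidth, per-adapter bandwidth, average utilization) are illustrative assumptions, not figures from the article:

```python
import math

# Back-of-envelope comparison of dedicated vs. pooled (shared) I/O endpoints.
# All numbers are illustrative assumptions for the sake of the example.

def endpoints_needed(servers, peak_bw_per_server, bw_per_endpoint, utilization):
    """Endpoints required when capacity can be pooled at a given average utilization."""
    total_bw = servers * peak_bw_per_server * utilization
    return math.ceil(total_bw / bw_per_endpoint)

servers = 16
peak_bw = 10            # Gb/s each server may burst to (assumed)
endpoint_bw = 10        # Gb/s per adapter (assumed)
avg_utilization = 0.25  # typical average utilization (assumed)

dedicated = servers  # one adapter per server, sized for peak
shared = endpoints_needed(servers, peak_bw, endpoint_bw, avg_utilization)

print(f"dedicated adapters: {dedicated}")  # 16
print(f"shared adapters:    {shared}")     # 4
```

With these assumed numbers, pooling endpoints behind a PCIe fabric cuts the adapter count (and the associated cost and power) by 4x while still covering the aggregate load.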

Clearly, sharing I/O endpoints (see Figure 2) is the solution to these limitations. This concept appeals to system makers because it lowers cost and power, improves performance and utilization, and simplifies design. With so many advantages, it is no surprise that many companies have tried to achieve this; the PCI-SIG, in fact, published the Multi-Root I/O Virtualization (MR-IOV) specification to achieve this goal. However, due to a combination of technical and business factors, MR-IOV has never really taken off, even though the specification was released more than five years ago.


Figure 2: Example of a traditional I/O system with shared I/O
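While MR-IOV itself saw little adoption, the PCI-SIG's related Single-Root I/O Virtualization (SR-IOV) specification did ship in real endpoints, and on Linux a device's virtual-function capability can be discovered through sysfs. The sketch below is a minimal, defensive example: the `sriov_totalvfs` attribute under `/sys/bus/pci/devices` is the standard Linux sysfs interface, but the helper name and the empty-result fallback are my own assumptions:

```python
import os

# Minimal sketch: enumerate PCIe devices that advertise SR-IOV capability
# via the standard Linux sysfs layout. The helper name and fallback
# behavior are assumptions; adapt for your platform.

def sriov_capable_devices(sysfs_root="/sys/bus/pci/devices"):
    """Return (device_address, total_vfs) for each SR-IOV-capable device."""
    results = []
    if not os.path.isdir(sysfs_root):  # e.g. a non-Linux host: report nothing
        return results
    for dev in sorted(os.listdir(sysfs_root)):
        vf_file = os.path.join(sysfs_root, dev, "sriov_totalvfs")
        if os.path.isfile(vf_file):
            with open(vf_file) as f:
                results.append((dev, int(f.read().strip())))
    return results

if __name__ == "__main__":
    for addr, vfs in sriov_capable_devices():
        print(f"{addr}: up to {vfs} virtual functions")
```

On a host with an SR-IOV-capable NIC this prints the PCI address and the maximum virtual-function count per device; on anything else it simply prints nothing.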
