Sharing I/O with PCI Express
Traditional systems deployed in volume today must support several interconnect technologies. As Figure 1 shows, InfiniBand, Fibre Channel, and Ethernet are a few examples of these interconnects.
Figure 1: Example of a traditional I/O system in use today
This architecture has several limitations, including:
- Existence of multiple I/O interconnect technologies
- Low utilization rates of I/O endpoints
- High power and cost of the system due to the need for multiple I/O endpoints
- I/O is fixed at architecture and build time, with no flexibility to change later
- Management software must handle multiple I/O protocols, adding overhead
This architecture is fundamentally disadvantaged by its reliance on multiple I/O interconnect technologies, which increases latency, cost, board space, and power. The overhead would at least be partially justified if every endpoint were fully utilized; more often than not, however, the endpoints are under-utilized, so customers pay the full cost for limited use. The added latency arises because the PCIe interface native to the processors in these systems must be converted to multiple protocols. Designers can reduce system latency by using the PCIe already native to the processors and converging all endpoints onto PCIe.
Clearly, sharing I/O endpoints (see Figure 2) addresses these limitations. The concept appeals to system makers because it lowers cost and power, improves performance and utilization, and simplifies design. With so many advantages, it is no surprise that many companies have tried to achieve this; the PCI-SIG, in fact, published the Multi-Root I/O Virtualization (MR-IOV) specification toward this goal. However, due to a combination of technical and business factors, MR-IOV has not taken off as a specification, even though more than five years have passed since its release.
Figure 2: Example of a traditional I/O system with shared I/O