Next-gen intelligent application adapters for 100% network programmability

Satish Ganesan, Tilera Corporation - October 16, 2013

Network Infrastructure Challenges

Charts and graphs of network demand always trend up and to the right, as traffic volume continues to increase exponentially with no pause in this growth expected. While increased data traffic by itself puts pressure on networking gear and its associated software, the processing overhead that accompanies the traffic adds to these pressure points. The complexity stems from the fact that processing must now be managed across all OSI Layers (2-7), while traditional infrastructure was designed for Layers 2-4. Pushing sophisticated L2-7 programmability into networking gear, however, slows down performance.

Additionally, the migration of networking and security workloads to the cloud is accelerating, and data centers that were designed to handle enterprise applications and databases face a daunting task. Servers are now expected to process 10-100 Gbps of Ethernet traffic across millions of flows while performing security, routing, load balancing, and other workloads, tasks for which a standard server was never optimized. Server virtualization, a rapidly growing industry trend that increases the number of virtual machines (VMs) in each physical server, further exacerbates the performance crunch by competing for server resources that are already busy handling packet flows.


The Emergence of Heterogeneous Computing

A few industry trends, specifically Network Functions Virtualization (NFV) and Software-Defined Networking (SDN), are driving the fusion of two programmable frameworks: the network and the server. The convergence of compute and flow processing places new requirements on networking gear and drives a shift toward the use of standard servers for performing networking functions.


Traditional servers were not designed to handle network-related packet processing efficiently, especially across Layers 2-7. Such equipment was built for compute-intensive functionality, optimizing for high single-thread performance and processor memory-bus interactions. The interface to the network was relegated to the Network Interface Card (NIC), a hard-coded, inflexible chipset separate from the server processors.


In a way, heterogeneous computing was already the established paradigm for bridging the worlds of networking and server compute, with separate chipsets handling networking and server functionality. But given the number of cycles the server now spends on actual packet processing in the modern network, the intersection between the programmable network and programmable compute has become increasingly critical.



Traditionally, the network interface card served a simple function: bridging packets from an Ethernet interface into PCIe transactions to and from host memory with minimal overhead. Clearly, this is no longer sufficient to handle the complexities discussed thus far.
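For context, a conventional NIC performs this bridging through DMA descriptor rings shared between the driver and the hardware. The C sketch below shows the generic shape of such a receive ring and a reap routine; the field names, sizes, and single "done" bit are illustrative, not any vendor's actual layout, and real drivers would also need memory barriers around the status checks.

/* Illustrative receive descriptor ring of the kind a traditional NIC
 * uses to bridge frames into host memory: the driver posts buffer
 * addresses, the NIC DMAs each received frame and sets a done bit.
 * Generic sketch only - not any vendor's actual descriptor layout. */
#include <stdint.h>

#define RX_RING_SIZE 256

struct rx_desc {
    uint64_t buf_addr;   /* host physical address the NIC DMAs into */
    uint16_t length;     /* bytes written by the NIC */
    uint16_t status;     /* bit 0 = descriptor done (illustrative) */
};

struct rx_ring {
    struct rx_desc desc[RX_RING_SIZE];
    uint32_t tail;       /* next descriptor the driver will reap */
};

/* Reap one completed descriptor, if any; a real driver would also
 * issue memory barriers and repost the buffer to the hardware. */
static int rx_ring_reap(struct rx_ring *r, uint64_t *addr, uint16_t *len)
{
    struct rx_desc *d = &r->desc[r->tail];
    if (!(d->status & 1))
        return 0;                        /* nothing completed yet */
    *addr = d->buf_addr;
    *len = d->length;
    d->status = 0;                       /* recycle for the NIC */
    r->tail = (r->tail + 1) % RX_RING_SIZE;
    return 1;
}

int main(void)
{
    static struct rx_ring ring;          /* zeroed: nothing done yet */
    uint64_t addr;
    uint16_t len;
    return rx_ring_reap(&ring, &addr, &len);   /* 0: ring is empty */
}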

It’s no coincidence that a number of industry players, including Tilera, Mellanox, and Intel, are focusing their energy on this critical component.


From NIC to Intelligent Application Adapter

Given the importance of the processing block that connects a dynamic, high-throughput network to a set of VMs running high-performance networking applications, the next-generation NIC must evolve and possess several key characteristics. It must:


  • Be fully L2-7 programmable
  • Conform to a standard programming environment, such as C/Linux (see the sketch after this list)
  • Be open source friendly
  • Be low power
  • Perform critical packet processing functions before delivering packets to the host processor
  • Allow for offload of applications that are tightly coupled with packet flow processing
  • Scale with I/O capacity to enhance offloading and packet processing capabilities
  • Allow for efficient transfer of flows to the host via the PCIe interface
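To make the C/Linux programmability point concrete, here is a minimal sketch of a raw-frame receive loop using the standard Linux AF_PACKET socket API. An intelligent adapter would run comparable per-packet logic on its own cores before frames ever cross PCIe; the EtherType check below is a deliberately trivial stand-in for L2-7 classification, and raw sockets require root or CAP_NET_RAW.

/* Minimal raw-frame receive loop on Linux (AF_PACKET sockets).
 * Illustrative only: a programmable adapter would run similar
 * per-packet logic on its own cores, ahead of the host CPU. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/if_ether.h>

int main(void)
{
    /* ETH_P_ALL: see every protocol on every interface (needs root). */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    unsigned char frame[ETH_FRAME_LEN];
    for (;;) {
        ssize_t n = recv(fd, frame, sizeof frame, 0);
        if (n < (ssize_t)sizeof(struct ethhdr))
            continue;                    /* runt or error; skip */
        struct ethhdr *eth = (struct ethhdr *)frame;
        /* Trivial L2 classification: report IPv4 frames. */
        if (ntohs(eth->h_proto) == ETH_P_IP)
            printf("IPv4 frame, %zd bytes\n", n);
    }
    close(fd);   /* not reached; loop runs until interrupted */
    return 0;
}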

As it relates to delivering flows to a VM-enabled server, the adapter card should possess a few additional abilities:
  • Packets and flows need to be classified at L2-7 so that each VM can be associated with a different flow, content, or user. This classification can be as simple as an IP address look-up or as complex as application ID indexing (a minimal classification sketch follows this list)
  • Certain networking and security workloads that are closely coupled to flow and packet processing tax the server and are better handled close to the network I/O, before traffic reaches the VMs. Examples include Transmission Control Protocol (TCP)/IP stack processing, Secure Sockets Layer (SSL) termination, and Deep Packet Inspection (DPI), with an evolution toward even more advanced offloads such as Hadoop acceleration and Open vSwitch (OVS) offload
  • Packet flows need to be delivered to the VMs through the PCIe interface using Single Root I/O Virtualization (SR-IOV), a PCIe capability that presents each VM with its own virtual function for seamless connectivity
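As a concrete illustration of the classification bullet above, the sketch below hashes a canonical 5-tuple flow key to select a per-VM receive queue, which keeps every packet of a flow on the same queue. The struct layout, the FNV-1a-style hash, and the queue count are assumptions for illustration; a real adapter would combine such a look-up with deeper L4-7 inspection such as application ID indexing.

/* Illustrative 5-tuple flow classifier: hash the flow key to one of
 * NUM_VM_QUEUES receive queues so each VM sees a stable set of flows.
 * Layout, hash choice, and queue count are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define NUM_VM_QUEUES 8          /* e.g., one queue per VM */

struct flow_key {
    uint32_t src_ip, dst_ip;     /* IPv4 addresses */
    uint16_t src_port, dst_port; /* L4 ports */
    uint8_t proto;               /* 6 = TCP, 17 = UDP */
};

/* FNV-1a-style mix over the key fields: simple and stable, so a
 * given flow always lands on the same queue while distinct flows
 * spread evenly across queues. */
static uint32_t flow_hash(const struct flow_key *k)
{
    uint32_t h = 2166136261u;
#define MIX(v) (h = (h ^ (uint32_t)(v)) * 16777619u)
    MIX(k->src_ip);
    MIX(k->dst_ip);
    MIX(k->src_port);
    MIX(k->dst_port);
    MIX(k->proto);
#undef MIX
    return h;
}

int main(void)
{
    struct flow_key k = { 0x0a000001, 0x0a000002, 1234, 80, 6 };
    printf("flow -> VM queue %u\n", flow_hash(&k) % NUM_VM_QUEUES);
    return 0;
}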


Effectively, the server adapter needs to steer flows, content, applications, or users to the host processor or the corresponding VM for processing, in order to minimize the network processing load on server compute resources and maximize efficiency as measured by throughput and latency. In addition, standard C/Linux software programmability is essential to keep pace with evolving networking and data center needs.
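On the host side, one concrete delivery mechanism is already standard: recent Linux kernels (3.8 and later) expose SR-IOV virtual functions through sysfs. The sketch below requests four VFs on a hypothetical eth0; the interface name and VF count are illustrative, and whether the request succeeds depends on the adapter and its driver.

/* Request SR-IOV virtual functions by writing the desired count to
 * /sys/class/net/<iface>/device/sriov_numvfs (Linux 3.8+, driver
 * permitting). "eth0" and the count of 4 are illustrative values. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");                 /* no such device, or no root */
        return 1;
    }
    /* Each VF appears to a guest as its own PCIe function, so flows
     * can be steered into VMs without a software-switch hop. */
    if (fprintf(f, "4\n") < 0) {
        perror("fprintf");
        fclose(f);
        return 1;
    }
    fclose(f);
    puts("requested 4 virtual functions on eth0");
    return 0;
}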

In other words, the NIC needs to evolve into a next-generation intelligent application adapter to complete the trifecta: programmable compute in the server, a programmable network, and a programmable intelligent adapter.

The innovation is just beginning.
