
The role of IP in the new generation of data center SoCs

November 20, 2013

Global Internet traffic is forecast to grow threefold from 2012 to 2017, according to Cisco's 2013 Visual Networking Index (VNI) [1]. This growth, driven mainly by residential, business, and mobile users, is spurring significant technology innovation in modern data centers.

The move toward cloud computing, which uses thin-client devices to reliably deliver services for applications such as Pandora, Twitter, Facebook, and Google, is creating new service business models that are expected to drive cloud IP traffic up 35 percent annually through 2017 [2]. These business models are enabled by a roughly $100 billion cloud services ecosystem that includes software-as-a-service (SaaS), platform-as-a-service (PaaS), infrastructure-as-a-service (IaaS), and others.

With the addition of the latest Internet of Things applications, such as smart appliances, industrial automation, connected cars, and consumer wearables, the number of networked devices is expected to reach 19 billion by 2017, according to the Cisco VNI forecasts. These industry trends put significant pressure on large cloud data centers to improve efficiency and reduce complexity, space, cost, and power. To address these requirements, cloud and mega data center operators are re-architecting data center networking and compute in two ways: simplifying the data center network through software defined networking (SDN), and lowering power through the use of micro servers. These technologies are forcing architects to reconsider the IP design criteria for the next generation of data center SoCs.

A recent survey of large companies in North America found an average data center power usage effectiveness (PUE) of 2.9. Since PUE is the ratio of total facility power to IT equipment power, this means the facility draws 2.9 watts in total for every watt consumed by the servers themselves, with the balance going to cooling and power distribution. It is easy to see why data center operators have a strong incentive to reduce server power: at a fixed PUE, every watt saved at the server is multiplied across the facility's operating expenses.
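As a back-of-the-envelope illustration, the overhead implied by a PUE figure falls directly out of the definition; the short Python sketch below, using an assumed (hypothetical) IT load, shows the arithmetic:

# Back-of-the-envelope PUE arithmetic. The load figure is assumed for
# illustration only. PUE = total facility power / IT equipment power,
# so non-IT overhead = (PUE - 1) * IT power.

def facility_overhead(it_power_kw: float, pue: float) -> float:
    """Power spent on cooling and power distribution for a given IT load."""
    return (pue - 1.0) * it_power_kw

it_load_kw = 500.0   # hypothetical IT load of a server cluster
pue = 2.9            # average PUE reported by the survey above

print(f"{facility_overhead(it_load_kw, pue):.0f} kW of overhead "
      f"per {it_load_kw:.0f} kW of IT load")
# -> 950 kW of overhead per 500 kW of IT load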

Another challenge mega data center operators contend with is the cost and complexity of managing their networks, especially provisioning racks and clusters and scaling network capacity. Operators need a quick, efficient process to provision network bandwidth based on their business needs; consider how network demand fluctuates during large media events such as the Super Bowl.

Previously, data center operators needed more than a week to install the new line cards and switches required to increase bandwidth within a data center cluster. Today, operators use on-demand provisioning, analogous to server virtualization, to allocate virtual machines automatically in a matter of minutes. Automating and simplifying the management of the data center network is a major industry trend and a primary benefit of SDN architectures. A software defined network decouples the control and data planes so that network intelligence and state are logically centralized in SDN controllers, while the underlying network infrastructure is abstracted from the applications that manage the network (Figure 1). New protocols such as OpenFlow, which structures communication between the SDN control and data planes, are being standardized by the Open Networking Foundation (ONF) to simplify the management of traffic across multiple vendors' network devices within mega data centers.
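To make the control/data plane split concrete, here is a minimal Python sketch of how a centralized controller might program OpenFlow-style match-action rules into switches. The class names and fields are illustrative assumptions, not the actual OpenFlow wire protocol, which is a binary protocol with far richer match fields and actions:

# Simplified, illustrative model of SDN's control/data plane split.
# All classes and fields here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict        # e.g. {"ip_dst": "10.0.0.5", "tcp_dst": 80}
    actions: list      # e.g. ["output:2"]
    priority: int = 0

@dataclass
class Switch:
    name: str
    flow_table: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        # Data plane: stores and applies rules pushed from the controller;
        # no routing intelligence lives in the switch itself.
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: -r.priority)

class Controller:
    """Control plane: holds the network-wide view and programs every switch."""

    def __init__(self, switches: list):
        self.switches = switches

    def provision_path(self, match: dict, out_ports: list) -> None:
        # One rule per switch along the path: minutes of software
        # configuration instead of a week of installing line cards.
        for sw, port in zip(self.switches, out_ports):
            sw.install(FlowRule(match=match,
                                actions=[f"output:{port}"],
                                priority=10))

# Usage: steer web traffic for one virtual machine across two switches.
s1, s2 = Switch("tor-1"), Switch("agg-1")
Controller([s1, s2]).provision_path({"ip_dst": "10.0.0.5", "tcp_dst": 80}, [2, 7])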

ONF members include major carrier and data center operators such as Facebook, Google, Microsoft, Verizon, and Amazon, as well as system suppliers to those operators such as Cisco, Dell, Fujitsu, and HP. Major semiconductor ASSP suppliers, including Broadcom, Freescale, LSI, Marvell, TI, Netronome, and others, are also participating in the ONF and are creating a new class of SoCs for data center applications.


Figure 1: SDN simplifies data center network management.

The transition to SDN can be seen in a number of new ASSPs, in development or recently introduced, that support OpenFlow: multiport Ethernet switch ASSPs with OpenFlow-optimized switch fabrics, and communication processors that run OpenFlow software stacks and include autonomous OpenFlow accelerators. These hybrid communication processors, which perform data plane classification, packet processing, traffic management, and security processing, are being enhanced to support SDN and OpenFlow.
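As a rough software model of the data-plane classification these devices accelerate in hardware, the sketch below performs a priority-ordered match-action lookup on packet header fields and punts to the controller on a table miss; all field names and table entries are hypothetical:

# Illustrative model of a match-action packet classifier -- the kind of
# lookup a hybrid communication processor offloads to OpenFlow accelerators.

def classify(packet: dict, flow_table: list) -> str:
    """Return the action of the highest-priority rule whose match fields
    are all satisfied by the packet; fall back to the controller."""
    for priority, match, action in sorted(flow_table, key=lambda e: -e[0]):
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send_to_controller"   # table miss: punt to the SDN control plane

table = [
    (20, {"tcp_dst": 80},         "output:web_port"),
    (10, {"ip_dst": "10.0.0.99"}, "drop"),
]

print(classify({"ip_dst": "10.0.0.5", "tcp_dst": 80}, table))   # output:web_port
print(classify({"ip_dst": "10.0.0.99", "tcp_dst": 22}, table))  # drop
print(classify({"ip_dst": "10.0.0.7", "tcp_dst": 22}, table))   # send_to_controller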

Typical IP requirements for advanced communication processors include high-data-rate memory interfaces such as DDR4, along with a wide mix of I/O interfaces such as PCI Express 3.0 and 10G to 40G Ethernet ports to connect the chip to the network fabric. The introduction of these new communication processors marks an exciting architectural transition in the data center network.

On the compute side, these architectural trends are producing micro servers built around a new class of processor SoCs designed to reduce power dissipation. The micro server architecture is composed of multiple workload-focused server nodes in a shared chassis, which reduces data center power, cost, and space. Micro servers are designed for modest data center workloads such as web serving, offline analytics, web content delivery, and memcache.

Obviously, any approach that reduces power in compute servers benefits mega data centers, and the semiconductor industry is responding by creating lower-power host processors for micro servers. Reducing power for micro server host processors starts with selecting a low-power processor core, such as Intel's recently released second-generation 64-bit Atom. That core is used in Intel's C2000 processor family, which is well suited to the needs of lightweight scale-out workloads such as dedicated hosting and static web serving. A number of leading semiconductor suppliers have instead chosen ARM's 64-bit v8 processor core for their next-generation low-power micro server SoCs.

Further power reduction can be achieved at the system level by integrating functions that have traditionally been implemented as individual chips on the motherboard into the processor ASSP. Integrating processor cores, interface protocols, and a high-performance memory subsystem into a heterogeneous architecture with various protocol accelerators provides the full functionality of a server on a single SoC.


Figure 2: Reducing data center power with micro servers.

