
Testing toward secure networks

May 01, 2008


BOXBOROUGH, MA—When you make an online transaction or use a bank’s automated teller machine, you trust that the financial institution’s networks will protect your valuable information. Data centers of major financial institutions and other businesses use security platforms from Crossbeam Systems to protect their networks and your data. Financial firms such as CheckFree (now part of Fiserv) and Scottrade run security applications in their data centers on Crossbeam hardware.

Under the direction of Chet Gapinski, VP of engineering, engineers at Crossbeam test application processor modules (APMs), network processor modules (NPMs), control processor modules (CPMs), backplanes, and their communications links. Their goal: verify that the company’s systems run firewall, antivirus, and other applications while minimizing network delays and data errors. Gapinski’s team includes Mark Kline, director of hardware engineering; Matt Hamling, director of software quality assurance (SQA); Colin Ross, manager of SQA automation; and Raj Jain, SQA performance test engineer.



Figure 1.  A network processor module (NPM) provides communications interfaces to customer networks through Ethernet and internally on a backplane.


Engineers under Kline’s direction test modules and backplanes, starting with basic hardware tests on the bench using in-house diagnostics. They test each component and module to verify that the communications channels send and receive signals properly and reliably.

Figure 1 shows a block diagram for an NPM board, the module that connects the Crossbeam security platforms to a customer’s systems. In addition to an Ethernet switch, each NPM contains a field-programmable gate array (FPGA) that implements a proprietary SerDes switch fabric used for communicating with other modules over the backplane. Figure 2 shows a block diagram for APM and CPM boards. (See “The Crossbeam system” for a description of Crossbeam’s products.)



Figure 2.  Application processor modules (APMs) and control processor modules (CPMs) connect to NPMs through a switch fabric embedded in an FPGA.


On the bench, engineers perform Ethernet loopback tests using predefined patterns built into the diagnostics. They also verify communications using a built-in pseudorandom bit sequence (PRBS), checking data and clock signals with oscilloscopes from Tektronix and logic analyzers from Agilent Technologies. The proprietary communications link can pass data at distances up to 24 in. over an FR4 copper backplane.
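The following sketch (not Crossbeam’s diagnostic code) shows how such a pseudorandom bit sequence can be produced with a linear-feedback shift register; PRBS-7 with the polynomial x^7 + x^6 + 1 is assumed here, and the loopback check is simply a bit-for-bit comparison of what was sent against what came back.

# Minimal PRBS-7 generator sketch (polynomial x^7 + x^6 + 1) in Python.
def prbs7_bits(seed=0x7F, count=127):
    state = seed & 0x7F                              # 7-bit LFSR state, must be nonzero
    for _ in range(count):
        newbit = ((state >> 6) ^ (state >> 5)) & 1   # taps at bits 7 and 6
        state = ((state << 1) | newbit) & 0x7F
        yield newbit

# A loopback test compares transmitted and received bits; an error-free link matches exactly.
sent = list(prbs7_bits())
received = list(sent)                                # stand-in for bits read back from the link
errors = sum(s != r for s, r in zip(sent, received))
print(f"bit errors: {errors} of {len(sent)}")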

Kline’s staff also runs a battery of power tests. DC-to-DC converters on the modules receive 48 V from the chassis and convert it to 1.8 V, 2.5 V, and 3.3 V. All modules must meet specifications for these voltages, ±5%, at specified current ratings.
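As a quick illustration of the pass/fail limits implied by that specification (the lines below only work out the ±5% arithmetic; they are not part of Crossbeam’s test procedure), the acceptable window for each rail can be computed directly:

# Pass bands for the 1.8-V, 2.5-V, and 3.3-V rails at a ±5% tolerance.
for v_nom in (1.8, 2.5, 3.3):
    lo, hi = v_nom * 0.95, v_nom * 1.05
    print(f"{v_nom:.1f} V rail: {lo:.3f} V to {hi:.3f} V")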



Director of hardware engineering Mark Kline tests processor modules for functionality and basic communications. Photo by Mark Wilson.

Because a processor module contains a large number of ICs in ball-grid-array packages, the engineers also perform JTAG (boundary scan) testing. JTAG testing, however, has limitations. “We get between 60% and 80% JTAG coverage,” said Kline, “because Intel-based APMs don’t usually support JTAG.” Kline added, “We also use in-circuit test for these boards.”

Crossbeam’s boards have more than 70 clock signals that must be tested for frequency, amplitude, and jitter. For most signals, engineers use a 2-GHz oscilloscope, but for slower signals, they use oscilloscopes with a bandwidth of 1 GHz or 500 MHz. A memory bus on the APM and CPM board runs at 667 MHz and thus requires a 1-GHz oscilloscope. When engineers need higher bandwidths, such as for 1-Gbps and 10-Gbps data streams, they rent a high-bandwidth oscilloscope.

Bench testing goes beyond basic electrical tests. Environmental testing, performed during hardware verification, gives engineers confidence in a module’s reliability. Although Crossbeam products typically reside in temperature-controlled data centers, engineers must test modules at temperatures from 0°C to 80°C. “We design to comply with Network Equipment Building Systems [NEBS],” added Gapinski, “because some customers require carrier-class reliability.”

Vibration testing also verifies reliability and gives engineers a chance to weed out weaknesses early. Kline explained, “We test our board until it breaks. Then, we analyze the failures and make design changes.”

In addition to temperature and vibration testing, Crossbeam modules go through electromagnetic compliance (EMC) testing. The company uses local test labs Curtis-Straus and Intertek for environmental and EMC testing. “We’ve learned how to design for EMC,” said Kline. “By following design rules, we’re highly confident that our products will pass EMC tests the first time.”

Test automation

The company’s hardware engineers use loopback tests to verify communications at layer 1, the physical layer. Physical-layer tests verify that an Ethernet or proprietary SerDes link can reliably send and receive bits. But these tests simply send unstructured bits. Crossbeam engineers design automated tests to find the hardware’s maximum throughput.

Testing then proceeds up the protocol stack to layer 2 (data link) and layer 3 (network). At these layers, a system needs software, so Crossbeam engineers must install system software and retest the system.



Colin Ross, manager of SQA automation, leads a team of engineers that automate system testing by writing test scripts. Photo by Mark Wilson.

At this point, Colin Ross and a team of seven engineers design and administer automated tests. “We run these tests on every new software and hardware build,” said Ross. He calls the first round of tests a “smoke test.” During a smoke test, which can run for up to 18 hours, engineers collect throughput data and observe packets with an Ethernet tester from Ixia. The results give engineers confidence in the system’s overall functionality.

To run the automated tests, Ross and his team have written more than 40,000 lines of Tcl code in 250 scripts that execute more than 600 test cases. They have created an application programming interface (API) that lets them issue a single command for each test. The code turns one of five “golden” systems into a traffic generator that tests new hardware and firmware revisions.
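Crossbeam’s Tcl harness itself isn’t published in the article, but the one-command-per-test idea can be sketched as follows; the registry, function names, and the placeholder throughput check are illustrative assumptions, not the company’s API.

# Hypothetical one-command-per-test harness sketch (Python here; the real harness is Tcl).
import time

TEST_CASES = {}                                      # registry: test name -> callable

def test_case(name):
    """Register a test so it can later be run with a single run_test() call."""
    def wrap(fn):
        TEST_CASES[name] = fn
        return fn
    return wrap

def run_test(name, **params):
    """Run one registered test and return a small result record for reporting."""
    start = time.time()
    passed = TEST_CASES[name](**params)
    return {"test": name, "passed": passed, "duration_s": round(time.time() - start, 3)}

@test_case("smoke_throughput")
def smoke_throughput(target_mfps=1.0, measured_mfps=1.2):
    # Placeholder: a real test would drive the traffic generator and read its counters.
    return measured_mfps >= target_mfps

print(run_test("smoke_throughput", target_mfps=1.0))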

The smoke test includes regression testing whenever the company issues a new version of software. Regression tests verify that a new software version will interoperate with modules and systems that run earlier versions of the software. Before they implemented automated tests, the engineers could spend several weeks evaluating hardware and software design changes. Now, they perform the tests in just hours.

Testing takes place on any of five test beds that contain the golden systems. The traffic generator sends traffic to a system under test, which can run in either a 7-slot or a 14-slot chassis connected through an Ethernet switch (Figure 3). An automation test harness communicates with the chassis and the traffic generators. Engineers use a Web interface to select tests and enter test parameters. The test bed then runs the tests and generates reports.



Figure 3.  A traffic generator runs Ethernet traffic through modules in either of two chassis.


“The Web reporting system logs error messages,” said Hamling. “With this system, we can usually isolate the cause of an error within a few hours. Prior to using this system, we might take days to find the cause of an error.”

One of the tests, which runs for four hours, aims to uncover software errors and redundancy problems. Because the modules are hot swappable, the test checks redundancy to ensure that a backup module will take over should a primary module fail. Other tests exercise traffic handling, connectivity, and ping response.
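A failover check of that kind might look like the sketch below; the pull_primary and traffic_ok hooks are hypothetical stand-ins for the harness calls that would remove a module and query the traffic generator’s loss counters.

# Illustrative redundancy check (not Crossbeam's script): remove the primary module,
# then confirm that traffic keeps flowing through the backup within a time budget.
import time

def failover_check(pull_primary, traffic_ok, timeout_s=30.0, poll_s=1.0):
    pull_primary()                                   # simulate pulling the hot-swappable module
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if traffic_ok():                             # e.g., zero frame loss over the last poll
            return True                              # backup took over before the deadline
        time.sleep(poll_s)
    return False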

Currently, the five test beds operate independently from each other, but that should soon change. Ross is evaluating an Apcon Layer 1 switch that will connect the test beds. The switch will allow Ross to share resources among the five beds, and he’s excited about the possibilities this will create. He looks forward to developing new test cases and to simulating cable breaks, which he describes as “an important piece to automate.”

Coming to life

After Ross’ team certifies that a new or revised module has passed layers 1 through 3 tests, they hand it off to SQA performance engineer Raj Jain, who loads the module with a full operating system—a hardened version of Linux. At this point, the module becomes aware that it is part of a larger network.

With an operating system, a module can run security applications such as firewall and antivirus software. Whenever a new product or “first release candidate” of software needs testing, Jain will spend from eight to 12 weeks in the lab running performance tests. From the test results, he produces the performance numbers that the marketing department will publish.



SQA performance test engineer Raj Jain spends several weeks testing each new software release at protocol layers 4 through 7. Photo by Mark Wilson.

Jain and others run test cases at protocol layers 4 through 7 (Transport, Session, Presentation, and Application). At these upper layers, Jain uses a network tester from Spirent Communications. “We can use either the Ixia or the Spirent tester on all protocol layers, but we prefer to use each one where it best suits our needs,” he said. “We also like to use the same test equipment that our partners and customers use.”

Throughput is perhaps the most important test. “Improvement in throughput improves overall performance,” noted Hamling. When testing throughput, Crossbeam engineers send traffic in 64-byte packets, the smallest possible with Ethernet. “A 64-byte packet has the least amount of data compared to the header,” added Ross. “That creates the most interrupts, which places the most stress on a processor.” On average, a module handles 1.2 million Ethernet frames/s with a firewall running. (Crossbeam doesn’t supply application software such as firewalls to customers, but it does test and certify security software from its partners, the software publishers.)
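For context (this arithmetic is background, not a figure from the article), the theoretical maximum for 64-byte frames on a single Gigabit Ethernet link follows from the 20 bytes of preamble and interframe gap that accompany every frame on the wire:

# Worst-case 64-byte-frame rate on one Gigabit Ethernet link.
line_rate_bps = 1_000_000_000
bytes_on_wire = 64 + 8 + 12                          # frame + preamble/SFD + interframe gap
max_fps = line_rate_bps / (bytes_on_wire * 8)        # about 1.49 million frames/s
print(f"maximum 64-byte frame rate: {max_fps:,.0f} frames/s")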

In a firewall test, Jain sends real traffic through a system. That traffic includes HTTP pages, domain name server (DNS) calls, and e-mail messages. Jain’s testing revolves around monitoring performance as a firewall’s complexity builds. He measures latency as he applies an increasing number of rules and policies to a firewall. “Customers typically ask for performance measurements with one, 100, or 1000 rules,” he said.
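A rule-count sweep of that kind can be scripted along these lines; measure_latency_us is a hypothetical stand-in for the call that drives the tester and reads back latency, and the example numbers are made up purely to show the shape of the result.

# Illustrative sweep over firewall rule counts; the measurement call is a stand-in.
def rule_sweep(measure_latency_us, rule_counts=(1, 100, 1000)):
    return {n: measure_latency_us(rules=n) for n in rule_counts}

print(rule_sweep(lambda rules: 20.0 + 0.01 * rules))   # {1: 20.01, 100: 21.0, 1000: 30.0}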

Latency occurs when a network element delays the transfer of data. To perform a latency test, Jain will exercise the system with user datagram protocol (UDP) traffic (Ref. 1). He’s looking for latency of less than 50 µs. A latency test typically lasts for 120 s. Then, Jain will exercise the system to make and break 200,000 transmission control protocol (TCP) connections per second. He also tests to find the maximum number of TCP connections that the system can sustain while the chassis maintains 40-Gbps throughput over its SerDes backplane.
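Crossbeam measures latency with hardware testers, which timestamp frames far more precisely than host software can; the sketch below only illustrates the idea of sending UDP probes and recording round-trip times, with a placeholder address and echo port.

# Rough software-only sketch of UDP round-trip timing (the real measurement uses a
# hardware tester; the address and port below are placeholders).
import socket, time

def udp_round_trips_us(host="192.0.2.1", port=7, count=100, payload=b"x" * 64):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    samples = []
    for _ in range(count):
        t0 = time.perf_counter()
        sock.sendto(payload, (host, port))
        sock.recvfrom(2048)                          # wait for the echoed datagram
        samples.append((time.perf_counter() - t0) * 1e6)
    return samples                                   # round-trip times in microseconds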



Figure 4.  A network traffic generator/analyzer tests modules by sending and receiving data at all protocol layers.


The NPM in a chassis performs load balancing across two or more APMs so that no single APM handles an undue share of the processing (Figure 4). The APMs might all run the same application, or they might run different applications. Jain will test with up to 10 blends of applications. Regardless of the number of applications, the test-system network topology remains the same.

In one test, Jain will add APMs, all running the same application software, until a 14-slot chassis has eight of them. As Jain adds APMs, system throughput should scale linearly. This kind of test lets him verify that Crossbeam’s custom drivers and FPGA code function properly.
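A scaling check of that kind reduces to comparing per-APM throughput as modules are added; the throughput figures and the 5% tolerance below are illustrative assumptions, not Crossbeam data.

# Illustrative linear-scaling check: per-APM throughput should stay roughly constant.
measured_mfps = {1: 1.2, 2: 2.4, 4: 4.7, 8: 9.3}     # APM count -> Mframes/s (example values)
per_apm = {n: rate / n for n, rate in measured_mfps.items()}
baseline = per_apm[1]
scales_linearly = all(abs(p - baseline) / baseline <= 0.05 for p in per_apm.values())
print("throughput scales linearly within 5%:", scales_linearly)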

Load balancing, however, makes consistent testing difficult, because it dynamically changes the load on each module. “We need testing that’s consistent with our partners,” noted Jain. “We need to keep our throughput numbers within 2% each time to maintain consistency. Anything outside of this is considered a failure.”
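That acceptance criterion amounts to a simple relative-difference check, sketched here:

# The ±2% run-to-run consistency check described above.
def within_two_percent(baseline_mfps, new_mfps, tolerance=0.02):
    return abs(new_mfps - baseline_mfps) / baseline_mfps <= tolerance

print(within_two_percent(1.20, 1.21))                # True: about 0.8% from the baseline
print(within_two_percent(1.20, 1.23))                # False: 2.5% from the baseline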

Crossbeam engineers also strive for test consistency with customers and partners through the use of standard test methods. For example, they follow RFC 2544 (Ref. 2) for system-performance testing and RFC 3511 (Ref. 3) for firewall-performance testing. They share test data with the partners who provide the security applications to Crossbeam customers, which keeps everyone’s results comparable. The extensive network testing that goes into Crossbeam’s products gives engineers confidence that the modules will protect host networks.
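RFC 2544 defines throughput as the highest offered load at which the device forwards every frame without loss, and testers typically find it with a binary search over load. The sketch below illustrates that search; the trial function and the search resolution are assumptions, not Crossbeam’s procedure.

# Sketch of the binary search commonly used to find RFC 2544 throughput.
def rfc2544_throughput(run_trial, lo_pct=0.0, hi_pct=100.0, resolution_pct=0.5):
    """run_trial(load_pct) returns True if a trial at that load finishes with zero frame loss."""
    best = lo_pct
    while hi_pct - lo_pct > resolution_pct:
        mid = (lo_pct + hi_pct) / 2.0
        if run_trial(mid):
            best, lo_pct = mid, mid                  # no loss: try a higher load
        else:
            hi_pct = mid                             # loss seen: back off
    return best

# Example with a stand-in device that starts dropping frames above 87% of line rate.
print(round(rfc2544_throughput(lambda load: load <= 87.0), 2))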


References
  1. Ross, Keith W., and James F. Kurose, “Connectionless Transport: UDP,” gaia.cs.umass.edu/kurose/transport/UDP.html.

  2. RFC 2544, “Benchmarking Methodology for Network Interconnect Devices,” Internet Engineering Task Force, 1999. www.rfc-archive.org/getrfc.php?rfc=2544.

  3. RFC 3511, “Benchmarking Methodology for Firewall Performance,” Internet Engineering Task Force, 2003. www.rfc-archive.org/getrfc.php?rfc=3511.

For further reading

“Firewall Testing Methodology using Spirent Solutions,” white paper, Spirent Communications, 2003. www.spirentcom.com/documents/1095.pdf.
“RFC 2544 Testing of Ethernet Services in Telecom Networks,” white paper, Agilent Technologies, 2004. www.agilent.com/about/vpk/07mar2005ofc/wp_RFC2544_WP5989-1927.pdf.
