Power and wireless options extend Ethernet's reach
By David Marsh, Contributing Technical Editor - November 11, 2004
Part one of this two-part series examined wired Ethernet's migration from 10-Mbps to 10-Gbps technologies (EDN, October 14, 2004, pg 63). Part two concludes the series with two developments that extend Ethernet's reach: power delivery over the network cabling and wireless access.
Continuous development over more than 20 years enables wired Ethernet to achieve the almost inconceivable result of increasing its speed by three orders of magnitude while continuing to use broadly similar copper media (Reference 1). But as if this feat weren't enough, recent developments furnish two complementary standards that enable entirely new applications. The first is POE (power over Ethernet), which the IEEE in July 2003 published as the 802.3af standard; the second encompasses the various wireless-access technologies that appear within the 802.11, 15, 16, and 20 series of standards.
PowerDsine conceived POE in 1998 and quickly recruited 3Com, Intel, Mitel, National Semiconductor, and Nortel Networks as the technology's initial promoters. POE's prime motivation stems from the desire to standardize connections to portable and remote devices while dispensing with the need for ac line power. The obvious analogy is the phone system, in which the handset takes its power from the incoming signal lines. In its current 802.3af guise, POE delivers about 15W of 48V-dc power over the same twisted-pair-cabling infrastructure that most Ethernet connections employ and is usable in networks of 10 Mbps to 1 Gbps. This amount of power is sufficient to feed a multiplicity of network-orientated devices, from access terminals to wireless links. One early success comes from Cisco Systems, whose VOIP (voice-over-Internet Protocol) 79xx series has been a hot seller and enables new opportunities, such as desktop data delivery to users without PCs.
Crucially, the predominance of twisted-pair cabling throughout enterprises allows POE to deliver dc power to locations where it would be expensive or impractical to route ac line wiring. POE thus enables a host of applications, from digital cameras to security systems to smart sensors. Although the initial market targeted business applications, Igal Rotem, chief executive officer at PowerDsine, notes that POE is fast becoming a must-have feature in relatively low-cost switches and hubs from vendors such as D-Link and Netgear: 'A small premium that's becoming ever more affordable offers tremendous improvements in convenience and ease of use.' And, according to Ian Moulding, marketing manager for power management at Philips Semiconductors, POE's ability to supply a range of diverse devices makes ac wall adapters redundant in many of their traditional applications. Conversely, the opportunities for dc/dc converter vendors and the semiconductor companies are huge (see sidebar 'POE stimulates semis sales bonanza').
POE is a deceptively simple technology. The simplest arrangement uses the spare pairs in a standard 10/100BaseT network (Figure 1a). Alternatively, modifying the traditional signal-coupling transformer to a centre-tapped design allows power delivery by biasing each signal line, permitting operation within 1000BaseT networks that use all four twisted pairs for data exchanges (Figure 1b). To enable use in legacy installations, a midspan hub adds POE capabilities to existing hubs and switches by injecting power into the twisted-pair cabling; an uninterruptible power supply can safeguard the system in case of ac-line-power outages (Figure 2). Notice that the specification does not allow concurrent power delivery over all four twisted pairs, so devices that can deliver power over alternative routes must enable only one route per port at any time.
To prevent the PSE (power-sourcing equipment) from damaging equipment that's not POE-compatible, a discovery process runs at power-on, as well as every time the user plugs a device into a POE port. Similarly, the PSE disconnects ports within a few hundred milliseconds of a user's removing a device. According to PowerDsine's Rotem, the discovery process was one of POE's major challenges to the specification's designers. The approach that they eventually adopted adds a resistor of nominally 25 kΩ in parallel with a 100-nF capacitor across the power-supply inputs of any POE-compatible powered device. In essence, the process applies a current-limited test signal to the cable and examines the return voltage to find the 25-kΩ signature impedance (Reference 1). The power source applies the full 48V only if both the resistance and the capacitance tests pass and then applies a 350-mA maximum current limit to guard against faulty cabling or end-user equipment.
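The discovery logic lends itself to a short model. In the Python sketch below, the PSE applies two current-limited probe voltages and infers the signature resistance from the slope of the voltage/current response. The probe voltages, the acceptance window, and all function names are our illustrative assumptions, not text from the 802.3af standard.

```python
def detect_signature(measure):
    """measure(voltage) -> current (amps) drawn by the cable and device.

    Returns True if the slope resistance falls in an acceptance window
    around the nominal 25-kOhm signature value (window is approximate)."""
    v1, v2 = 2.8, 10.0                # two probe voltages (illustrative)
    i1, i2 = measure(v1), measure(v2)
    if i2 == i1:                      # open circuit or measurement fault
        return False
    r_sig = (v2 - v1) / (i2 - i1)     # slope resistance cancels diode offsets
    return 19_000 <= r_sig <= 26_500  # assumed acceptance window

# A compliant powered device presents roughly 25 kOhm:
good = detect_signature(lambda v: v / 25_000)
# A legacy NIC with a low-impedance termination must be rejected:
bad = detect_signature(lambda v: v / 150)
```

Measuring the slope between two probe points, rather than a single voltage/current ratio, cancels any constant series drop, such as an input-bridge diode, in the powered device.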
Each powered device includes at least one dc/dc converter that transforms the input voltage to levels that suit its construction and also provides a 1500V isolation barrier. To stay alive, the device under power must draw a minimum current of 10 mA, thereby allowing detection logic within the power source to remove power to a port when the user disconnects the equipment. Optionally, a powered device can signal its maximum consumption, thereby enabling active power management. The specification currently defines a 15.4W default, with three additional classes that span 4 to 15.4W and a fifth class reserved for future use. The possibility also exists for the power source to communicate with its clients using a protocol, such as SNMP (simple network-management protocol), for control tasks such as powering down devices outside working hours. Note that the specification allows designers considerably greater application flexibility than this synopsis suggests; as for most other 802-series standards, you can download the full specification for free from http://standards.ieee.org/getieee802/.
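The optional classification step can likewise be sketched. In this hedged Python model, the powered device draws a class-signature current that the power source maps to one of the power classes the text describes; the current bands shown are approximations for illustration only.

```python
# Maximum power budget (watts) per class; class 4 is reserved.
POE_CLASSES = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4, 4: None}

def classify(i_class_ma):
    """Map a measured classification current (mA) to a class number."""
    # Approximate nominal current bands (mA) for classes 0 through 4.
    bands = [(0, 4, 0), (9, 12, 1), (17, 20, 2), (26, 30, 3), (36, 44, 4)]
    for lo, hi, cls in bands:
        if lo <= i_class_ma <= hi:
            return cls
    return 0  # unrecognized signature: fall back to the default class

budget_w = POE_CLASSES[classify(10)]  # a class 1 device earns a 4W budget
```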
For the future, PowerDsine's Rotem reports that discussions are already under way to double POE's output power level to 30W. This amount of power, he says, will enable yet more applications by powering devices such as cameras and videophones that use electric motors for zooming, as well as small laptops and storage devices. It's then likely to be essential to implement power-management strategies even in smaller hubs, as the manufacturer's desire to minimize power-supply size and cost increasingly conflicts with the overall power budget. In the meantime, the cost per port will continue to tumble as the technology's acceptance matures.
FCC kick-starts wireless Ethernet
The runaway success of mobile and cordless telephony over the past few years raises the bar for user expectations: Users now consider wireless operation routine and expect affordable and reliable products, such as the Wi-Fi-enabled laptop PCs that accompany today's business travellers. Roaming is now possible within the confines of an environment such as a hospital, where immediate access to patient records from arbitrary locations around the campus may prove crucial, and at so-called wireless hot spots, which adorn contemporary airport lounges. Home and small-business users, too, love the convenience of wireless networks, which free them from the need to route twisted-pair wiring around their premises. This reason alone helps explain why domestic and SOHO (small-office/home-office) networking accounts for most of today's wireless-Ethernet-equipment sales.
In a move that was rare for a regulatory body, the FCC (Federal Communications Commission) largely brought wireless Ethernet into existence. In 1985, the US telecommunications regulator released three areas of spectrum from within the ISM (industrial, scientific, and medical) bands for unlicensed operation. The frequencies that the agency made available centre on 900 MHz, 2.4 GHz, and 5.8 GHz—then popularly known as the 'garbage bands,' because they were reserved for applications such as microwave ovens. Today, 2.4 and 5.8 GHz are unlicensed virtually everywhere, and the 900-MHz band accommodates much of Europe's GSM (global-system-for-mobile-communications) telephony. One precondition for deregulation required that users avoid interfering with existing equipment, effectively demanding spread-spectrum technology for communications use. The next milestone came in 1988, when NCR considered using wireless links to network its cash registers. Engineers Bruce Tuch of Bell Labs and Victor Hayes of NCR subsequently began the IEEE's 802.11-standardization effort. But it took until 1997 for the mandatory minimum 75% of committee members to reach agreement. This fledgling specification proposed a 1- and 2-Mbps, half-duplex system using direct-sequence or frequency-hopping spread-spectrum transmissions.
In December 1999, the IEEE ratified two RF variants of the standard: 802.11a, which operates in the 5.8-GHz band, and 802.11b, which operates at 2.4 to 2.483 GHz. These links achieve 54 and 11 Mbps, respectively. Because the lower frequency option is technically less challenging, most equipment makers adopted it, but the specification's complexity led to severe interoperability issues. As a result, 3Com, Aironet (now part of Cisco), Intersil (802.11b's prime developer), Lucent Technologies (now its Agere Systems spin-off), Nokia, and Symbol Technologies formed WECA (Wireless Ethernet Compatibility Alliance). WECA, which rebranded itself as the Wi-Fi Alliance, aims to ensure true interworking between all vendors' products. Its assurance comes from the Wi-Fi mark that compatible equipment bears. Today, virtually all equipment is dual-compatible with the newer 802.11g specification, which takes advantage of 802.11a's OFDM (orthogonal frequency-division multiplexing) to reach 54 Mbps in the 2.4-GHz band. Some equipment is also triple-compatible with 802.11a, which can avoid noise problems in crowded enterprise environments by working at 5.8 GHz.
Managing transitory relationships
At a superficial level, 802.11 functions as a wireless replacement for Ethernet's traditional physical and link layers. Architecturally, the fundamental building block in an 802.11 network is the BSS (basic-service set), which comprises the area in which compliant devices can communicate. The simplest BSS comprises two Wi-Fi-enabled devices that—subject to application-software compatibility—can communicate peer-to-peer whenever they are within range of one another. This transient capability leads to the terms 'ad hoc network' and 'IBSS' (independent basic-service set). Infrastructure networks employ an access point that operates as both a base station and a gateway into other networks, such as a wired Ethernet or a broadband data-communications link. Therefore, you can extend a wired Ethernet and provide roaming capability for as many as 127 more devices simply by adding a wireless-access point. Depending on the nature of local propagation and the equipment that you select, wireless range can extend over several hundred meters.
The 802.11 specifications support nine services, only three of which involve data transport; the remainder are management services that track mobile stations and enable appropriate frame delivery (Table 1). Of these services, all 802.11 devices must implement the core station services that comprise authentication, deauthentication, and delivery; privacy is optional. The remaining distribution services connect access points into the wired infrastructure and manage associations between connected mobiles. To allow handovers between access points, mobiles continuously monitor signal strength and quality from access points within an ESS (extended-service set). This construct comprises all the access points that the network administrator assigns to service an area, effectively concatenating a number of BSS areas. Within this single ESS, the protocols provide a seamless handover. A mobile can also hand over to another ESS, but this transition relies on the mobile's reassociation function and is not seamless. That is, corruption is likely for any data transport running immediately before this event. Interestingly, the presence of an access point doesn't preclude independent communications between devices within the access point's coverage area.
Naturally, the wireless medium necessitates significant changes from the wired Ethernet model. An overview of 802.11's main differences helps highlight some of the features that wireless operation demands. At the network level—and like wired Ethernet—802.11 defines only half of the data-link layer's functions; the other part appears in the logical-link layer that 802.2 defines. On the radio side, data rates greater than 2 Mbps mandate DSSS (direct-sequence spread-spectrum) technology, in which a redundant bit pattern, or 'chip code,' modulates each bit in the data stream to expand the transmission signal, easing data recovery at the receiver. (References 2 and 3 list two new titles that provide excellent coverage of radio interfaces for non-RF specialists, including implementation considerations.) The DSSS physical-layer frame format comprises a 16-byte preamble followed by a 2-byte start-of-frame delimiter. There follows a single byte that describes the system's transmission rate, a service byte that assures 802.11 compatibility, a 2-byte length field that specifies the length of the data frame, and a 2-byte CRC (cyclical-redundancy-check) field. Finally, a variable-length data field encapsulates the data-link-layer frame and its payload.
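Assuming the field layout just described—preamble, start-of-frame delimiter, rate, service, length, and CRC—a minimal Python sketch of splitting a raw DSSS physical-layer frame into its parts might look like this. The field names, the big-endian byte order, and the convention that the rate byte counts in 100-kbps units are our assumptions for illustration:

```python
import struct

def parse_plcp(frame: bytes):
    """Split a raw DSSS physical-layer frame into its header fields."""
    sync, sfd, signal, service, length, crc = struct.unpack_from(
        ">16sHBBHH", frame)           # 16+2+1+1+2+2 = 24 header bytes
    return {
        "sfd": sfd,
        "rate_kbps": signal * 100,    # assumed: rate byte in 100-kbps units
        "service": service,
        "length": length,
        "crc": crc,
        "payload": frame[24:],        # data-link-layer frame follows
    }

# Build a synthetic frame: 16-byte sync, then the header fields, then data.
hdr = b"\xaa" * 16 + struct.pack(">HBBHH", 0xF3A0, 0x0A, 0x00, 100, 0x1234)
plcp = parse_plcp(hdr + b"\x00" * 100)
```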
Access to the radio channel uses 802.11's DCF (distributed-coordination function), which employs CSMA/CA (carrier-sense multiple-access/collision avoidance) arbitration rather than wired Ethernet's standard CSMA/CD (collision-detection) strategy. This change obviates the need for the expensive full-duplex RF hardware that collision detection requires. Stations first listen for a quiet channel before attempting to transmit; if the channel is busy, a back-off algorithm delays the station's next transmission attempt. A virtual carrier-sensing mechanism complements this process by setting the NAV (network-allocation-vector) value, a decrement-to-zero counter that uses the duration field within most 802.11 frames to reserve the radio channel for a predetermined time. In this way, a station that gains access to the channel can ensure that its data exchange completes without interruption—a so-called atomic exchange.
Collisions can still occur, but receivers must acknowledge successful reception by returning an ACK (acknowledge) signal—without which the transmitter retries a number of times before giving up and signalling an error. If these processes prove inadequate, optional RTS (request-to-send) and CTS (clear-to-send) mechanisms are available. A station that sends RTS silences possible contenders within its vicinity for a time that depends on the NAV value that it transmits. The recipient responds with a CTS signal that similarly clears and reserves the channel within its locale. At the application-software level, you may encounter an 'RTS-threshold' control that applies the RTS/CTS sequence for frames longer than a threshold value—hence improving the reliability of large data-packet exchanges.
Interframe spaces and a variable contention window also play key parts in providing channel access. For example, stations normally wait until the end of the DIFS (distributed-coordination-function interframe space) that follows a channel-busy period before trying to transmit (Figure 3a). In typical operation and following a successful transmission sequence, stations can compete for access immediately after the DIFS period. Stations with high-priority control information, such as RTS/CTS and ACK signals, can attempt transmission after the SIFS (short interframe space); hence, other stations defer until this traffic completes (Figure 3b). If the channel is busy when the station tries to transmit, the station picks a random time slot within the contention window. It then waits until the end of the DIFS period and then counts down to its time slot, when it tries again. If this transmission attempt also fails, the station picks successively larger back-off periods up to the contention window's maximum size. The contention window remains at this maximum until a transmission succeeds or its retry counter overflows and the transmission attempt aborts. Then, the contention window reverts to its minimum value. These steps ensure that the MAC (medium-access-control) system remains stable, even under heavy loading by multiple competing stations.
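The contention-window behaviour these steps describe is a binary-exponential back-off. A simplified Python model, assuming the CWmin of 31 and CWmax of 1023 slots used by the 802.11b physical layer, shows how repeated failures widen the window and how a success resets it:

```python
import random

CW_MIN, CW_MAX = 31, 1023   # slot counts assumed from the 802.11b PHY

def next_backoff(cw, success):
    """Return (slot, new_cw): a random back-off slot and the updated window."""
    if success:
        cw = CW_MIN                   # reset after a successful exchange
    else:
        cw = min(2 * cw + 1, CW_MAX)  # double the window, capped at CWmax
    slot = random.randrange(cw + 1)   # pick a slot in [0, cw]
    return slot, cw

cw = CW_MIN
for attempt in range(8):              # repeated failures walk cw up to CWmax
    _, cw = next_backoff(cw, success=False)
```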
Flexible structure underpins services
Three data-link-layer frame types—data, control, and management—underpin 802.11's capabilities. A generic 802.11 MAC data frame starts off with a 16-bit frame-control field, followed by a duration field and three address fields (Figure 4). The first two bits of the frame-control field report the 802.11 protocol version that's in use. The next two bits are the type field, which identifies the three frame types. The next four subtype bits identify the frame's function within the management structure, assisting data delivery and providing MAC-level reliability functions. The frame-control field's second byte comprises individual bit values, such as the ToDS (distribution system) and FromDS distribution bits that specify the frame's routing within the distribution system. Other functions include a more-fragments bit that identifies fragmented data packets; a retry bit that helps eliminate duplicate frames; a power-management bit that enables power-saving modes in mobiles; a more-data bit that access points set to inform a power-saving station that it has data to collect; a WEP (wired-equivalency-privacy) bit that specifies WEP on and off; and an order bit that, when set, reports that frames and fragments are transmitted strictly in sequence.
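The bit layout of the frame-control field can be captured in a few lines of Python. This sketch assumes the field arrives as a 16-bit integer with the version bits in the least-significant positions and the flag bits in the order the text lists them; the decoder name and dictionary keys are ours:

```python
def decode_frame_control(fc: int):
    """Decode a 16-bit 802.11 frame-control value into its fields."""
    flags = fc >> 8                     # second byte: individual flag bits
    return {
        "version":   fc & 0x3,          # bits 0-1: protocol version
        "type":      (fc >> 2) & 0x3,   # bits 2-3: data/control/management
        "subtype":   (fc >> 4) & 0xF,   # bits 4-7: frame function
        "to_ds":     bool(flags & 0x01),
        "from_ds":   bool(flags & 0x02),
        "more_frag": bool(flags & 0x04),
        "retry":     bool(flags & 0x08),
        "pwr_mgmt":  bool(flags & 0x10),
        "more_data": bool(flags & 0x20),
        "wep":       bool(flags & 0x40),
        "order":     bool(flags & 0x80),
    }

# A data frame (type 2), subtype 0, with the ToDS bit set:
fc_info = decode_frame_control(0x0108)
```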
Following the frame-control field, a 2-byte duration/identification field normally carries the NAV value that represents the number of microseconds that stations can expect the current transmission to occupy. There then follow three address fields, a 16-bit sequence-control field, and an optional fourth address field. Like wired-Ethernet stations, wireless stations use 48-bit MAC addresses. In an ad hoc network, the BSSID (basic-service-set identifier) is a randomly generated number that also sets the universal/local bit, thus avoiding contention with traditional IEEE-assigned Ethernet MAC addresses; infrastructure networks use the MAC address of their access point's wireless interface for their BSSID value.
The ToDS and FromDS bits dictate the number and order of the address fields to suit the networking environment (Table 2). The first value is the receiver's address, which is often the data's destination. Alternatively, the destination address may lie within the wired infrastructure, as distinct from the receiver's address, which specifies the access point that routes the frame to its destination. The second address field points to the transmitter, allowing the receiver to acknowledge successful reception. The value in the third field allows the distribution system to filter transmissions, and the optional fourth address enables bridging applications. For example, if the topology is an ad hoc network, there is no distribution system—just a receiver (destination address) and a transmitter (source address). Here, the third address is the BSSID, which devices participating on this network use to filter their data from other wireless networks within the vicinity.
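A hedged Python sketch of the address-field interpretation just described—our summary of the four ToDS/FromDS cases, with the bridging case included—might read:

```python
def address_roles(to_ds: bool, from_ds: bool):
    """Return the meaning of address fields 1-4 for a data frame."""
    if not to_ds and not from_ds:   # ad hoc (IBSS): no distribution system
        return ("destination", "source", "BSSID", None)
    if to_ds and not from_ds:       # station toward the access point
        return ("BSSID", "source", "destination", None)
    if not to_ds and from_ds:       # access point toward the station
        return ("destination", "BSSID", "source", None)
    # Both bits set: wireless bridge between distribution systems.
    return ("receiver", "transmitter", "destination", "source")

roles = address_roles(to_ds=False, from_ds=False)  # the ad hoc case
```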
The body of the data frame follows, accommodating as many as 2304 bytes of user data, plus an optional 8 bytes of WEP overhead. The protocol layer can optionally fragment data transmissions to accommodate arbitrary-length exchanges. It can also optionally preserve strict data ordering, but such a step requires more overhead. To help reassemble out-of-order data, the sequence control field that follows the third address field comprises a 4-bit fragment number and a 12-bit sequence number that the transmitter inserts. It allows the receiver to track fragmented packets and discard duplicate frames. The frame ends with a 4-byte frame-check-sequence field that carries a CRC value. Note that a control frame uses the same frame-control field format as the data frame and performs similar administrative functions. However, a station sending control frames omits the data-payload area and minimizes the addressing overhead. For example, an RTS frame comprises the frame-control and duration fields, the receiver's address, and the transmitter's address, and it terminates with the frame-check-sequence.
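The duplicate-elimination role of the sequence-control field and the retry bit can be modelled briefly. In this illustrative Python sketch—class and function names are ours—the receiver caches the last (sequence, fragment) pair per transmitter and drops a retried frame that repeats it:

```python
def split_seq_ctl(sc: int):
    """Return (fragment_number, sequence_number) from the 16-bit field."""
    return sc & 0xF, (sc >> 4) & 0xFFF   # 4-bit fragment, 12-bit sequence

class DuplicateFilter:
    def __init__(self):
        self.last = {}                   # transmitter address -> (seq, frag)

    def accept(self, transmitter, sc, retry):
        """Return False for a retransmission the receiver has already seen."""
        frag, seq = split_seq_ctl(sc)
        if retry and self.last.get(transmitter) == (seq, frag):
            return False                 # duplicate of an already-seen frame
        self.last[transmitter] = (seq, frag)
        return True

f = DuplicateFilter()
first = f.accept("aa:bb", 0x0120, retry=False)  # sequence 18, fragment 0
dup = f.accept("aa:bb", 0x0120, retry=True)     # retransmission: dropped
```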
The transient nature of wireless communications means that stations must regularly broadcast a beacon signal to announce the network's presence. In an ad hoc network, the responsibility for sending beacons distributes among stations; in an infrastructure topology, the access point is the beacon source. Similarly, mobile stations periodically scan the airwaves for a network connection using a probe request. Other stations that receive such a request then determine whether the mobile has compatible parameters and can join in. The station that last transmitted a beacon responds to probe requests with a probe-response frame. Also, access points need to acknowledge one another to support handovers between roaming stations. Management frames carry the information that enables these functions and a host of others, such as association and authentication.
A generic management frame has much the same format as a data frame but substitutes information elements and a number of fixed fields for user data in the frame-body area. The system supervises connections by combining a station's association and authentication states; a station occupies one of three hierarchical connection states at any one time—initial, with no authentication or association; authenticated but not yet associated; and authenticated and associated (Figure 5). Frame types divide into three classes whose transmission depends on the connection state. Class 1 frames include the basic functions that establish and control communications, such as the beacon, authentication/deauthentication, and the ACK and RTS/CTS sequences. Class 2 frames exclusively manage association and disassociation tasks. Class 3 frames permit the now-connected station to use the distribution services and include the deauthentication frame to terminate the session. Other Class 3 frames include any data frame, together with the power-save-poll frame that permits power-saving mobiles to buffer data at access points.
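The three connection states and frame classes map naturally onto a small state machine. The following Python sketch is a simplified model of the transitions the text describes, not an exhaustive rendering of the 802.11 rules:

```python
def allowed_classes(state):
    """Frame classes a station may exchange in each connection state."""
    return {"initial": {1},
            "authenticated": {1, 2},
            "associated": {1, 2, 3}}[state]

def transition(state, event):
    """Advance the connection state; unknown events leave it unchanged."""
    table = {
        ("initial", "authenticate"): "authenticated",
        ("authenticated", "associate"): "associated",
        ("associated", "disassociate"): "authenticated",
        ("authenticated", "deauthenticate"): "initial",
        ("associated", "deauthenticate"): "initial",
    }
    return table.get((state, event), state)

s = transition("initial", "authenticate")
s = transition(s, "associate")   # the station is now fully connected
```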
WiMax targets MANs
Good though 802.11 is, it wasn't designed to accommodate MAN (metropolitan-area-network) or wireless-Internet-service-provider use, and it's far too complex for the sort of personal-area-networking market that Bluetooth targets. To address the latter, the IEEE developed the 802.15.4 RF-interface standard, which targets applications with low data rates and limited range requirements. This standard is often called ZigBee, although ZigBee is in truth a still-evolving specification that builds on 802.15.4. Its sponsor, the ZigBee Alliance, appears to be targeting low-power remote-control and monitoring applications. Reference 3 describes ZigBee's features and limitations. Also worth watching is ZigBee's 802.15.3 relation, the so-called WiMedia specification for high-rate personal networking, which targets applications such as home-entertainment systems.
At the metropolitan level, 802.11's widely available and cheap hardware has made it irresistible for some applications that exceed its design brief. As a result, the 802.16 or WiMax specification addresses 802.11's limitations for service-provider applications to enable broadband wireless-access systems. (Like Wi-Fi, the WiMax tag follows the WiMax Forum's brief to ensure interoperability for 802.16-compatible equipment.) Because the 802.16 MAC explicitly supports point-to-multipoint wireless access that must interface with the telecommunications infrastructure, profiles exist to support Ethernet/IP and ATM (asynchronous-transfer-mode) environments. The system also supports ATM-compatible QOS (quality-of-service) models—an area that 802.11 is currently trying to improve via ongoing work to the 802.11e specification.
Because QOS overhead and throughput inevitably compete for finite bandwidth, 802.16 includes numerous strategies to balance these needs, such as the ability for stations to dynamically request additional bandwidth. The frame structure permits the system to adaptively assign burst profiles to uplinks and downlinks, depending on link conditions, providing a real-time trade-off between channel capacity and transmission robustness. Variable-length protocol and service-data units allow the protocol to assemble multiple units into a single burst, saving physical-layer transmission overhead. Again, fragmentation permits arbitrary-length transmissions across frame boundaries, but 802.16 includes the ability to manage QOS between competing composite transmissions. A self-correcting bandwidth-request/grant mechanism dispenses with the delays that the traditional acknowledgment sequence causes, and also improves the QOS metric.
On the radio side, the original 802.16 specifications describe operation within the 10- to 66-GHz band. Because frequencies greater than about 11 GHz demand a line-of-sight path, this option suits point-to-point links to about 50 km. Line-of-sight operation virtually obviates multipath effects to allow channels as wide as 28 MHz that furnish a maximum 268-Mbps capacity. FDD (frequency-division-duplex) and TDD (time-division-duplex) physical-layer options are available that transmit using QAM (quadrature-amplitude-modulation) techniques. The FDD option permits both half- and full-duplex terminals. But a typical line-of-sight radio system requires that there are no obstructions, such as trees or buildings, within a roughly elliptical window around the direct transmission path. Obstructions that lie within about 60% of the envelope of this 'Fresnel zone' can severely degrade signal strength. Clearly, this consideration impacts deployment potential, especially for arbitrary locations within metropolitan areas. To ease this situation, 802.16a supports the conventional propagation model within a 2- to 11-GHz band. Here, OFDM (orthogonal frequency-division multiplexing) overcomes variable reception delays, intersymbol interference, and multipath reflections to ease reception within reflective environments to allow a typical cell radius of about 8 km. Several physical-layer profiles are available, but the WiMax Forum is focusing on the 256-point FFT (fast-Fourier transform) OFDM mode as its prime interoperability target.
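The 60% clearance guideline can be quantified with the standard first-Fresnel-zone formula, r = sqrt(lambda x d1 x d2 / (d1 + d2)), where d1 and d2 are the distances from each antenna to the potential obstruction. A quick Python check for the midpoint of a 10-km, 5.8-GHz link:

```python
import math

def fresnel_radius_m(freq_hz, d1_m, d2_m):
    """First-Fresnel-zone radius (metres) at distances d1, d2 from the ends."""
    wavelength = 3.0e8 / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

# Midpoint of a 10-km, 5.8-GHz link: roughly an 11-m radius, so the text's
# guideline asks for about 7 m of clearance around the direct path.
r = fresnel_radius_m(5.8e9, 5_000, 5_000)
clearance = 0.6 * r   # keep obstructions outside ~60% of the zone
```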
For the future, modifications to 802.16 in the shape of 802.16e may extend the technology's reach to passenger-transport systems using the 2- to 6-GHz licensed bands. Alternatively, work on the competing 802.20 standard may dominate. According to the latest Revision 13 requirements specification, 802.20 is a 'specification of physical and medium-access-control layers of an air interface for interoperable mobile broadband wireless access systems, operating in licensed bands below 3.5 GHz, optimized for IP-data transport, with peak data rates per user in excess of 1 Mbps. It supports various vehicular mobility classes up to 250 km/hour in a MAN environment and targets spectral efficiencies, sustained user data rates, and numbers of active users that are all significantly higher than achieved by existing mobile systems.' Although the outcome of this battle is uncertain, delivery systems are sure to evolve that extend network access far beyond the limitations of Ethernet's original wired model.
You can reach Contributing Editor David Marsh at email@example.com.