Spread Spectrum—A safe haven for wireless consumer applications—Part I

Rahul Garg, Prakhar Goyal, Cypress Semiconductor - November 29, 2012

The wireless communication industry started to blossom in 1915, when the first wireless voice transmission was successfully accomplished. Commercial radio broadcasts (1920), police car dispatch radios (1921), and the first around-the-world phone call (1935) soon followed. The commercial world's growing familiarity with wireless technology gave birth to a 'radio boom' all across the world. In the initial stages of this boom, there were few restrictions on the usage of frequency bands (channels), eventually resulting in unmanageable radio traffic. The channels became noisy, which in turn started to degrade the quality of communication.

It was the gravity of these issues that led to the introduction of 'licensed bands' in order to regulate traffic on the RF spectrum. But even with these legislative measures, further technological enhancements were required to suppress interference and accommodate more users per channel. Moreover, licensing could not be implemented for every band, because reuse of a frequency band is also important for short-range applications. For instance, if a channel is being used for communication only inside a building, its use in a physically distinct location should not be prohibited. Such a restriction would result in underutilization of the spectrum, because these systems would never interfere with each other. However, since any number of users could use a license-free band, the technological enhancements required to mitigate interference were even more critical there. Spread spectrum techniques were one of those enhancements. Even though the concept of spread spectrum was first introduced in the early 1940s, it didn't find much popularity until the 1980s, when the military started using it for its additional advantages such as data security and intrinsic immunity against signal jammers.

Today, a whole gamut of consumer applications including Wi-Fi networks, Human Interface Devices (HID), RFID tags, wireless headsets, home automation systems, small-scale sensor networks, etc. use the license-free 2.4GHz ISM band. Since these systems are usually highly collocated, spread spectrum techniques are used to suppress interference and to increase the number of users that can be simultaneously accommodated in a channel.

What is Spread Spectrum?

By definition, spread spectrum is a means of transmission in which the signal occupies a bandwidth in excess of the minimum bandwidth necessary to send the information. Using spread spectrum techniques, the information contained in a narrow band of frequencies (fm) is translated (or spread) to a wider band (fs) before transmission (Fig. 1). At first glance, it might appear that this translation would significantly increase the total power required for transmission. It does not, because the duration of the transmission remains the same; only the frequencies over which we transmit at any particular instant change, so the same total power is simply distributed over a wider band.
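The point above can be checked with simple arithmetic. The sketch below uses hypothetical power and bandwidth values (not from the article) to show that spreading leaves the total transmit power unchanged and only lowers the power per hertz, i.e. the power spectral density.

```python
# A quick numerical check (hypothetical values): spreading keeps the
# total transmit power constant; only the power per hertz drops.

def avg_psd(total_power_w: float, bandwidth_hz: float) -> float:
    """Average power spectral density in W/Hz, assuming uniform spreading."""
    return total_power_w / bandwidth_hz

P = 1e-3     # 1 mW total transmit power
fm = 1e6     # narrowband signal occupies 1 MHz
fs = 80e6    # spread signal occupies 80 MHz (spreading factor of 80)

# The density falls by the spreading factor, but total power is unchanged:
assert abs(avg_psd(P, fm) / avg_psd(P, fs) - 80.0) < 1e-9
assert abs(avg_psd(P, fm) * fm - P) < 1e-12
assert abs(avg_psd(P, fs) * fs - P) < 1e-12
```

This lower spectral density is also why a spread signal can sit near, or even below, the channel's noise floor without losing its total energy.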

The spreading of the spectrum is achieved by two methods - Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS) - which will be discussed in greater detail.
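Before the detailed discussion, the core idea of each method can be sketched in a few lines. The parameters below (seed, channel count, PN sequence) are hypothetical and not tied to any specific protocol: in FHSS, transmitter and receiver derive the same pseudo-random channel-hop sequence from a shared seed; in DSSS, each data bit is XORed with a faster pseudo-noise (PN) chip sequence and recovered by correlating against the same sequence.

```python
# Minimal FHSS and DSSS sketches (hypothetical parameters).

import random

def hop_sequence(seed: int, num_channels: int, num_hops: int) -> list[int]:
    """FHSS: pseudo-random channel indices shared by transmitter and receiver."""
    rng = random.Random(seed)
    return [rng.randrange(num_channels) for _ in range(num_hops)]

PN = [1, 0, 1, 1, 0, 0, 1]  # example 7-chip PN sequence

def dsss_spread(bit: int, pn: list[int]) -> list[int]:
    """DSSS: one data bit becomes len(pn) chips."""
    return [bit ^ chip for chip in pn]

def dsss_despread(chips: list[int], pn: list[int]) -> int:
    """XOR against the same PN sequence; a majority vote recovers the bit."""
    votes = [c ^ p for c, p in zip(chips, pn)]
    return 1 if sum(votes) * 2 > len(votes) else 0

# Both ends with the same seed agree on every hop:
assert hop_sequence(7, 79, 5) == hop_sequence(7, 79, 5)
# A spread-then-despread round trip recovers each bit:
assert dsss_despread(dsss_spread(1, PN), PN) == 1
assert dsss_despread(dsss_spread(0, PN), PN) == 0
```

The majority vote in the despreader hints at why DSSS resists narrowband interference: a jammer that corrupts a few chips still loses the vote to the uncorrupted ones.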


It is important to note that many wireless communication protocols, such as Bluetooth, use spread spectrum techniques at the physical layer. Their upper layers, such as the network layer and application layer, might be entirely different from each other. The discussion here is restricted to the physical layer aspects only.

Why Spread the Spectrum?
The spreading of the spectrum might appear to be a "waste of bandwidth", but it is essentially required in order to increase the capacity of the channel (i.e., to accommodate a higher number of users). This relationship between channel capacity and channel bandwidth can be well understood through the Shannon-Hartley channel-capacity theorem (Eq. 1):

C = B × log₂(1 + S/N)    (Eq. 1)

where:
C = channel capacity (bits per second), which limits the number of users that can access the channel simultaneously
B = bandwidth of the channel (Hz)
S/N = signal-to-noise ratio (SNR)

Rewriting equation 1 in terms of the natural logarithm gives C/B = 1.443 × ln(1 + S/N) (Eq. 2). For small values of S/N, ln(1 + S/N) ≈ S/N, so the channel capacity-to-bandwidth ratio becomes directly proportional to the signal-to-noise ratio of the system. The overall relationship, however, is logarithmic rather than linear.
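A short numerical sketch makes the trade-off concrete. The bandwidth and capacity figures below are illustrative, not from the article: for a fixed target capacity, inverting Eq. 1 shows that widening the channel sharply reduces the SNR the receiver must achieve, and for small SNR the ratio C/B is indeed nearly 1.443 × (S/N).

```python
# Shannon-Hartley trade-off sketch (illustrative values).

import math

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Channel capacity in bits per second (Eq. 1), with linear SNR."""
    return bandwidth_hz * math.log2(1.0 + snr)

def required_snr(capacity_bps: float, bandwidth_hz: float) -> float:
    """Invert Eq. 1: the linear SNR needed to reach a given capacity."""
    return 2.0 ** (capacity_bps / bandwidth_hz) - 1.0

# Reaching 1 Mbit/s in a 1 MHz channel needs SNR = 1 (0 dB) ...
snr_narrow = required_snr(1e6, 1e6)
# ... but a 10 MHz channel needs only SNR of roughly 0.072 (about -11.4 dB),
# i.e. a signal well below the noise floor can still carry the data.
snr_wide = required_snr(1e6, 10e6)
assert snr_wide < snr_narrow / 10

# For small SNR, C/B approaches 1.443 * (S/N): the near-linear regime of Eq. 2.
ratio = shannon_capacity(1e6, 0.01) / 1e6
assert math.isclose(ratio, 1.443 * 0.01, rel_tol=0.01)
```

This is the quantitative case for spreading: trading bandwidth for SNR lets a system tolerate far more noise and interference per hertz while sustaining the same data rate.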
