SmartEverything and the rise of the microphone array

February 04, 2017

Over ten years ago, a major smartphone manufacturer developed a demo of a smartphone with a ten-microphone array. It could pick out a single person's voice in a crowd, an amazing feature with obvious market potential. But the company predicted that 90% of such devices would fail in the field within six months. The compounded fragility of ten microphones killed the concept and was a brutal reminder that microphones are fundamentally mechanical devices.

It was a setback to an idea that now seems inevitable. As electronics become smarter and more pervasive, the screen-based user interfaces of the last decade have not kept up. The impending swarm of “SmartEverything” devices – smartphones, speakers, TVs, wearables/hearables, light bulbs, kitchen appliances, connected/autonomous cars, robots, drones, virtual/augmented reality and entire buildings – demands a form of interaction less obtrusive and more intuitive than screens of any size. Voice interfaces are the obvious candidate, and microphone arrays are the critical component.


But how do we avoid the pitfalls of that early demo? How do we make these arrays last? The first step is to understand the problems that are intrinsic to the architecture of capacitive MEMS microphones. Only by moving to piezoelectric MEMS can we truly eliminate such problems.

 

Arrays and the human ear


Like most mammals, we have two ears. Their shape and position allow us to locate the origin of sounds in our surroundings. This is so natural that we spin around when we hear an unexpected sound to find its source. These stereo-acoustic abilities constantly aid and protect us, and they are a testament to the power of directional hearing.


Advanced MEMS microphones improve on nature. We can build very large microphone arrays with sophisticated processing algorithms to pinpoint the origin of sounds, home in on a specific source (such as one person's voice) or pointedly ignore unwanted sounds (such as the roar of a ventilator duct). These microphone arrays give us a much richer set of acoustic experiences, a greater understanding of our surroundings and more control over our environment.


How does this work? Because sound travels at a finite speed, a wavefront passing over an array of microphones reaches each microphone at a slightly different time. We can exploit these time differences to triangulate the origin of the sound. If a dog barks on my left, my left ear hears the bark sooner and more loudly than my right ear, and my brain naturally decodes these cues to locate the dog.




Figure 1 Directional hearing with two ears (left) and two microphones (right)
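
To make the time-difference principle concrete, here is a minimal sketch (not taken from the article) of how a two-microphone device could estimate a source's bearing: cross-correlate the two channels to find the delay between them, then convert that delay into an angle. The sample rate, microphone spacing and test signal are illustrative assumptions.

```python
# Minimal sketch of two-microphone direction finding from the time
# difference of arrival (TDOA). Sample rate, mic spacing and the test
# signal are assumptions for illustration only.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
MIC_SPACING = 0.10      # assumed distance between the two microphones, metres
SAMPLE_RATE = 16_000    # assumed sample rate, Hz

def estimate_bearing(left: np.ndarray, right: np.ndarray) -> float:
    """Bearing in degrees from broadside; positive means the source is
    nearer the right microphone, negative means nearer the left."""
    # The peak of the cross-correlation gives the delay (in samples)
    # of the left channel relative to the right channel.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    delay = lag / SAMPLE_RATE  # seconds by which the left mic lags the right
    # Far-field model: delay = spacing * sin(bearing) / speed_of_sound
    sin_bearing = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_bearing)))

if __name__ == "__main__":
    # Simulate a broadband sound that reaches the left mic 3 samples
    # earlier than the right mic, so the bearing should come out negative.
    rng = np.random.default_rng(0)
    source = rng.standard_normal(1024)
    left, right = source, np.roll(source, 3)  # right channel lags by 3 samples
    print(f"Estimated bearing: {estimate_bearing(left, right):.1f} degrees")
```

A production device would estimate the delay with sub-sample precision (for example with generalized cross-correlation methods such as GCC-PHAT) and smooth the estimate over time, but the underlying geometry is the same.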

 

We can scale this principle to much larger arrays, from the seven microphones in the Amazon Echo to the 300+ microphones in Squarehead Technology's AudioScope. The AudioScope is an ultra-sensitive, disc-shaped array which, when mounted 30 feet above a packed basketball stadium, can pick out the sound of the assistant coach popping his bubblegum in the crowd.

 



Figure 2 Squarehead Technology's AudioScope array (left) and controls (right). Image Credit: Squarehead Technology
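
Scaling from two microphones to seven or several hundred relies on the same geometry. The sketch below, built on assumed parameters (a linear array and 16 kHz sampling) rather than anything from the products named above, shows delay-and-sum beamforming: delay each channel so that sound from a chosen bearing adds coherently, while sound from every other direction smears out and cancels. Larger arrays simply give a sharper acoustic spotlight.

```python
# Hedged sketch of delay-and-sum beamforming, the simplest way a larger
# array "listens" in one direction. Array geometry and sample rate are
# illustrative assumptions, not details of any product named above.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 16_000    # Hz

def delay_and_sum(channels: np.ndarray, mic_x: np.ndarray, bearing_deg: float) -> np.ndarray:
    """Steer a linear array toward bearing_deg (degrees from broadside).

    channels: (num_mics, num_samples) recordings, one row per microphone.
    mic_x:    (num_mics,) microphone positions along the array axis, metres.
    """
    steered = np.zeros(channels.shape[1])
    for signal, x in zip(channels, mic_x):
        # A plane wave from the chosen bearing reaches the mic at position x
        # earlier by x*sin(bearing)/c, so delay that channel to line it up.
        delay = int(np.round(x * np.sin(np.radians(bearing_deg))
                             / SPEED_OF_SOUND * SAMPLE_RATE))
        # np.roll wraps around the ends; tolerable here because the delays
        # are only a few samples compared with the recording length.
        steered += np.roll(signal, delay)
    return steered / len(channels)

if __name__ == "__main__":
    # Toy demo: a broadband source at +30 degrees recorded by 8 mics
    # spaced 5 cm apart (all values are assumptions for illustration).
    rng = np.random.default_rng(1)
    source = rng.standard_normal(4096)
    mic_x = (np.arange(8) - 3.5) * 0.05
    arrival = np.round(mic_x * np.sin(np.radians(30.0))
                       / SPEED_OF_SOUND * SAMPLE_RATE).astype(int)
    channels = np.stack([np.roll(source, -d) for d in arrival])
    on_target = delay_and_sum(channels, mic_x, 30.0)
    off_target = delay_and_sum(channels, mic_x, -60.0)
    print(f"signal power toward the source: {np.mean(on_target**2):.2f}, "
          f"away from it: {np.mean(off_target**2):.2f}")
```

Practical beamformers add per-channel weighting, adaptive null steering and sub-sample interpolation, but delay-and-sum is the core idea behind steering an array at a talker and away from the noise.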

The use of microphone arrays goes far beyond improving our listening. Every major technology company is now deeply invested in the field of computational linguistics: teaching our connected devices to understand natural human speech. But to understand speech the way we can, those devices must also hear as clearly as we do. They must emulate the directional, long-range hearing that comes to us instinctively.


Imagine an ‘Arrayphone’ – a future device studded with microphones, powerful and portable, and carried by everyone. How might you use it daily?


  1. You are getting dressed to leave your house, and the Arrayphone is 10 feet away. Your hands are occupied, but you have questions to ask and things to do. How cold is it? Will it rain today? Whom will you meet? As you leave, are the lights off and the door locked?
  2. You are talking to friends in a noisy room, and you cannot hear them over the crowd. You plug your headphones into the Arrayphone and ask it to reduce the background noise. It finds and clarifies your companions’ voices, blocking out the outside world.
  3. Your car is making a strange sound, and you don’t feel safe driving it. You open the hood and turn on your Arrayphone. It tells you which part of the car is making the sound and suggests how to fix it.



Figure 3 Amazon Echo (left), ClearOne Beamforming Microphone Array (center), GFaI Acoustic Camera (right). Image Credit: Amazon, ClearOne, GFaI

 

None of these scenarios is science fiction. Figure 3 shows real products that already cater to each need. They may be raw, unwieldy or expensive, but the promise is real and the technology is improving. Microphone arrays will play a crucial role in computerizing our world.

