Facial-recognition sensors adapt to track emotions, mood, and stress

March 03, 2016

Image and video sensors have thrived in the golden age of smartphones, making steady advances in areas such as fast auto-focus, low-light sensitivity, and back-illuminated pixel arrays. And now the powerful combination of image sensors and vision processors is opening up new possibilities in automotive safety, biometrics, and medical imaging.

Today’s CMOS image sensors pair highly adaptive pixel designs with intelligent vision processors, allowing them to intelligently sense -- rather than merely capture -- imaging data. Take biometric applications like facial recognition. Omron launched its Human Vision Component (HVC) module back in 2013 and has advanced it since then through better components as well as refinements to its underlying OKAO face-recognition algorithm. The module can now track individual facial points to interpret micro-expressions and eye movements, as well as recognize human emotions, moods, and even intentions (Figure 1).

Figure 1 Omron's HVC face-recognition module features its OKAO image recognition algorithm that can detect a range of face features and expressions accurately at high speed.

The module, which measures 60×40 mm and captures images at 640×480-pixel resolution, can be embedded into almost any device and offers UART and USB interfaces for delivering facial data.
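To give a feel for what a host device does with such a module, here is a minimal sketch of parsing a face-detection record on the host side. The packet layout, field sizes, and expression codes below are illustrative assumptions, not Omron's actual HVC wire protocol.

```python
import struct

# Hypothetical facial-data record -- NOT Omron's actual HVC format.
# Assumed layout: face center x, y (uint16), face size (uint16),
# expression code (uint8), confidence 0-100 (uint8), little-endian.
RECORD_FMT = "<HHHBB"
EXPRESSIONS = {0: "neutral", 1: "happy", 2: "surprised", 3: "angry", 4: "sad"}

def parse_face_record(payload: bytes) -> dict:
    """Unpack one hypothetical face-detection record into a dict."""
    x, y, size, expr, conf = struct.unpack(RECORD_FMT, payload)
    return {
        "center": (x, y),
        "size": size,
        "expression": EXPRESSIONS.get(expr, "unknown"),
        "confidence": conf,
    }

# Example: a face centered at (320, 240), 96 px wide, "happy" at 87%.
sample = struct.pack(RECORD_FMT, 320, 240, 96, 1, 87)
print(parse_face_record(sample))
```

In a real design, records like this would arrive over the module's UART or USB link; the parsing step itself is the same idea regardless of the actual field layout.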

In applications such as advanced driver assistance systems (ADAS), an inward-looking camera can help prevent accidents by detecting signs of fatigue or distraction on a driver's face. Law enforcement officers can take cues from facial expressions, illumination patterns, and blink and gaze estimation to predict intentions. In business, companies could use facial-recognition data to optimize sales calls and negotiations.
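The drowsiness-detection idea can be sketched in a few lines: flag the driver when eye openness stays low for a sustained run of frames. The threshold and frame count here are illustrative assumptions, not any vendor's algorithm.

```python
def is_drowsy(eye_openness, threshold=0.3, min_closed_frames=15):
    """Flag drowsiness when per-frame eye openness (0.0 = closed,
    1.0 = open) stays below `threshold` for at least
    `min_closed_frames` consecutive frames (roughly 0.5 s at 30 fps).
    Illustrative heuristic only."""
    run = 0
    for openness in eye_openness:
        run = run + 1 if openness < threshold else 0
        if run >= min_closed_frames:
            return True
    return False

# A normal blink (a few closed frames) should not trigger the alert,
# while a sustained eye closure should.
blink = [0.9] * 20 + [0.1] * 4 + [0.9] * 20
microsleep = [0.9] * 10 + [0.1] * 20
print(is_drowsy(blink), is_drowsy(microsleep))  # False True
```

Production driver-monitoring systems combine many such cues (gaze direction, head pose, blink rate), but the core pattern of thresholding a time series of per-frame face metrics is the same.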

However, the most transformative use of emotion and mood sensing through facial recognition is in store for medical practitioners. Imaging solutions that incorporate computational photography and deep-learning algorithms can help doctors employ facial recognition to better assess and treat conditions such as anger-management issues, autism, and depression.

Smartphones and a new breed of wearable devices—badges, headsets, smart glasses, smart watches, etc.—can now sense emotions, moods, and stress levels using programmed expressions such as happy, sad, angry, and surprised. The wearable gadgets typically capture, process, and analyze the imaging data with the help of a vision processor, and then send the results to a companion device like a smartphone.

Image sensor: smaller and smarter

The breadth and depth of facial-recognition applications is driving design innovation on both the image-sensor and vision-processor fronts. For instance, while image sensors are reaching the limits of pixel-size reduction, they are also moving beyond conventional Bayer filter patterns. As a result, they can incorporate clear pixels and thus operate effectively under low-light conditions. In other words, image sensors are becoming smarter.

Figure 2 OmniVision's OV2744 1.4-micron PureCel image sensor enables biometric capabilities by providing both a high-quality infrared as well as 1080p high-definition RGB output at 60 fps.

Also, sensor designers like OmniVision are pushing single-camera solutions that handle both regular RGB imaging and infrared (IR) capture, rather than dedicating a separate camera to each. The OV2744 image sensor—based on the company’s RGB-Ir technology—delivers both an IR output and high-definition 1080p RGB video at 60 frames per second, and also features high dynamic range (HDR) and reduced color aliasing.
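Conceptually, an RGB-Ir sensor replaces some of the color filter sites in the mosaic with IR-sensitive pixels, so one exposure can feed both an RGB pipeline and an IR pipeline. The sketch below assumes a simple 2×2 tile with one site given over to IR; actual sensor layouts, including the OV2744's, may differ.

```python
import numpy as np

# Assumed 2x2 RGB-Ir mosaic tile (real layouts vary):
#   R  G
#   IR B
# Splitting the raw mosaic yields subsampled per-channel planes, so a
# single camera serves both imaging (RGB) and biometrics (IR).
def split_rgb_ir(raw: np.ndarray):
    """Separate an RGB-Ir mosaic into per-channel subsampled planes."""
    r  = raw[0::2, 0::2]
    g  = raw[0::2, 1::2]
    ir = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g, b, ir

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)  # toy 4x4 mosaic
r, g, b, ir = split_rgb_ir(raw)
```

A real pipeline would then demosaic the RGB planes to full resolution and hand the IR plane to the recognition engine; the strided-slicing step shown here is the channel-separation idea at its simplest.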

Computer vision symbiosis

Meanwhile, low-power vision processors allow developers to detect, capture, and recognize a face—using features such as eye-socket contour and chin and nose shape—without sending the imaging data to the cloud. A new generation of image and vision processors from chipmakers like Qualcomm and TI enables faster image capture, autofocus, and video frame rates.

Moreover, vision processors are getting faster using single instruction multiple data (SIMD) architectures for computational photography and video-acceleration engines, as well as deep learning and other machine-vision algorithms.
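To make the SIMD idea concrete, here is a minimal sketch in NumPy (not any vendor's pipeline): a single vectorized expression applies a contrast stretch to every pixel of a frame at once, the same data-parallel pattern that a vision processor's SIMD units execute in hardware lanes.

```python
import numpy as np

def contrast_stretch(frame: np.ndarray) -> np.ndarray:
    """Rescale an 8-bit grayscale frame to the full 0-255 range.
    One vectorized expression touches every pixel 'lane' at once --
    the data-parallel pattern SIMD hardware accelerates."""
    lo, hi = frame.min(), frame.max()
    if hi == lo:
        return np.zeros_like(frame)
    return ((frame.astype(np.float32) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

frame = np.array([[10, 60], [110, 210]], dtype=np.uint8)
print(contrast_stretch(frame))
```

The contrast stretch itself is a stand-in; the point is that per-pixel operations like this, when expressed without explicit loops, map directly onto SIMD execution.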

It’s also worth noting that image sensors using non-Bayer color filter arrays now come integrated with custom vision processors on a stacked chip. The image-sensor market leader, Sony, first got a stacked camera subsystem designed into a tablet in 2013. Now OmniVision and ON Semiconductor have followed suit with design wins for die stacks that combine image sensors with a vision processor.
