PEAF: Learnable Power Efficient Analog Acoustic Features for Audio Recognition

7 Oct 2021  ·  Boris Bergsma, Minhao Yang, Milos Cernak

At the end of Moore's law, new computing paradigms are required to prolong the battery life of wearable and IoT smart audio devices. Theoretical analysis and physical validation have shown that analog signal processing (ASP) can be more power-efficient than its digital counterpart for low-to-medium signal-to-noise-ratio applications. In addition, ASP allows a direct interface with an analog microphone, without a power-hungry analog-to-digital converter. Here, we present power-efficient analog acoustic features (PEAF), validated with fabricated CMOS chips, for audio recognition. Linear, non-linear, and learnable PEAF variants are evaluated on two speech processing tasks required by many battery-operated devices: wake word detection (WWD) and keyword spotting (KWS). Compared to digital acoustic features, PEAF achieve higher power efficiency with competitive classification accuracy. A novel theoretical framework based on information theory is established to analyze the information flow through each stage of the feature extraction pipeline. The analysis identifies the information bottleneck and helps improve KWS accuracy by up to 7%. This work may pave the way to building more power-efficient smart audio devices with best-in-class inference performance.
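The paper's information-theoretic analysis measures how much task-relevant information survives each stage of the feature pipeline; by the data-processing inequality, mutual information with the labels can only decrease stage by stage, and the stage with the sharpest drop is the bottleneck. The abstract does not give the paper's estimator, so the following is only a minimal sketch of the general idea, using a discrete plug-in mutual-information estimate on a toy pipeline with hypothetical "stages" (the quantization steps below are illustrative stand-ins, not the actual PEAF circuit stages):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of discrete mutual information I(X;Y) in bits."""
    n = len(xs)
    cx, cy, cxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in cxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), expressed via counts
        mi += (c / n) * math.log2(c * n / (cx[x] * cy[y]))
    return mi

# Toy "pipeline": each stage coarsens the representation, so
# I(stage_k; label) is non-increasing (data-processing inequality).
labels = [i % 4 for i in range(1000)]                      # 4 hypothetical keyword classes
stage1 = [l * 3 + (i % 3) for i, l in enumerate(labels)]   # fine-grained feature codes
stage2 = [v // 3 for v in stage1]                          # moderate quantization
stage3 = [v // 2 for v in stage2]                          # aggressive bit-width reduction

for name, feats in [("stage1", stage1), ("stage2", stage2), ("stage3", stage3)]:
    print(name, round(mutual_information(feats, labels), 3))
```

Here the drop occurs between stage2 and stage3, flagging the final quantization as the bottleneck; in the paper, locating such a drop is what guides the accuracy improvement of up to 7%.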
