Multi-layer Feature Fusion Convolution Network for Audio-visual Speech Enhancement

15 Jan 2021 · Xinmeng Xu, Jianjun Hao

Speech enhancement can potentially benefit from visual information about the target speaker, such as lip movements and facial expressions, because the visual aspect of speech is essentially unaffected by the acoustic environment. In this paper, we address the problem of enhancing a corrupted speech signal from video by using audio-visual (AV) neural processing. Most recent AV speech enhancement approaches process the acoustic and visual features separately and fuse them via a simple concatenation operation. Although this strategy is convenient and easy to implement, it has two major drawbacks: 1) evidence from speech perception suggests that AV integration in humans occurs at a very early stage, whereas previous models process the two modalities separately in the early stages and combine them only later, making the system less robust; and 2) a simple concatenation offers no control over how information from the acoustic and visual modalities is treated. To overcome these drawbacks, we propose a multi-layer feature fusion convolution network (MFFCN), which processes the acoustic and visual modalities separately, preserving the features of each modality, while fusing the two modalities' features layer by layer in the encoding phase, in line with human AV speech perception. In addition, to balance the two modalities, we design channel and spectral attention mechanisms that provide additional flexibility in handling different types of information, expanding the representational ability of the convolutional neural network. Experimental results show that the proposed MFFCN outperforms state-of-the-art models.
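
To make the layer-wise fusion and attention design concrete, below is a minimal PyTorch sketch of one encoder fusion stage: the audio and visual streams are convolved separately, the visual features are broadcast into the audio time-frequency representation, and the fused features pass through channel and spectral attention gates. All module names, tensor shapes, and hyperparameters here are illustrative assumptions based on the abstract, not the authors' implementation.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        # Squeeze-and-excitation-style gating over feature channels.
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):                      # x: (B, C, T, F)
            return x * self.gate(x)

    class SpectralAttention(nn.Module):
        # Gating over the frequency axis, pooled across channels and time.
        def __init__(self, freq_bins):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Linear(freq_bins, freq_bins),
                nn.Sigmoid(),
            )

        def forward(self, x):                      # x: (B, C, T, F)
            w = self.gate(x.mean(dim=(1, 2)))      # (B, F) frequency weights
            return x * w[:, None, None, :]

    class FusionBlock(nn.Module):
        # One encoder stage: separate audio/visual convolutions, then
        # attention-gated fusion of the visual stream into the audio stream.
        def __init__(self, a_ch, v_ch, freq_bins):
            super().__init__()
            self.audio_conv = nn.Conv2d(a_ch, a_ch, 3, padding=1)
            self.visual_conv = nn.Conv1d(v_ch, a_ch, 3, padding=1)
            self.ch_att = ChannelAttention(a_ch)
            self.sp_att = SpectralAttention(freq_bins)

        def forward(self, a, v):                   # a: (B, Ca, T, F), v: (B, Cv, T)
            a = torch.relu(self.audio_conv(a))
            v = torch.relu(self.visual_conv(v))    # (B, Ca, T)
            fused = a + v[..., None]               # broadcast visual over frequency
            fused = self.sp_att(self.ch_att(fused))
            return fused, v                        # both streams feed the next stage

    # Example: fuse a 16-channel spectrogram stream with a 64-dim lip-embedding stream.
    block = FusionBlock(a_ch=16, v_ch=64, freq_bins=257)
    a = torch.randn(1, 16, 100, 257)
    v = torch.randn(1, 64, 100)
    fused, v_out = block(a, v)

Stacking several such blocks gives the layer-by-layer fusion the abstract describes: each stage keeps a separate visual stream (preserving modality-specific features) while repeatedly injecting it into the audio encoder, rather than concatenating the two modalities once at a late stage.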
