Search Results for author: Maximilian Strake

Found 5 papers, 0 papers with code

EffCRN: An Efficient Convolutional Recurrent Network for High-Performance Speech Enhancement

no code implementations • 5 Jun 2023 • Marvin Sach, Jan Franzen, Bruno Defraene, Kristoff Fluyt, Maximilian Strake, Wouter Tirry, Tim Fingscheidt

By applying a number of topological changes at once, we propose both an efficient FCRN (FCRN15) and a new family of efficient convolutional recurrent neural networks (EffCRN23, EffCRN23lite).

Speech Enhancement

Does a PESQNet (Loss) Require a Clean Reference Input? The Original PESQ Does, But ACR Listening Tests Don't

no code implementations • 4 May 2022 • Ziyi Xu, Maximilian Strake, Tim Fingscheidt

Detailed analyses show that the DNS trained with the MF-intrusive PESQNet outperforms the Interspeech 2021 DNS Challenge baseline and the same DNS trained with an MSE loss by 0.23 and 0.12 PESQ points, respectively.
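The comparison above contrasts training a DNS with a plain MSE loss against training with a PESQNet-based loss. A rough sketch of the latter idea is shown below; note that `pesqnet` here is a hypothetical toy stand-in for the trained non-intrusive quality estimator (not the authors' network), and the loss weighting is an assumption for illustration only:

```python
import numpy as np

def pesqnet(enhanced):
    """Hypothetical stand-in for a trained, frozen non-intrusive PESQNet:
    maps an enhanced signal to a PESQ-like score in [1.0, 4.5].
    This toy proxy simply rewards low signal variance."""
    return float(np.clip(4.5 - np.std(enhanced), 1.0, 4.5))

def dns_training_loss(enhanced, clean, alpha=0.5, pesq_max=4.5):
    """Combined training loss for a DNS model (illustrative):
    MSE against the clean reference plus a term that rewards a high
    predicted PESQ score from the frozen quality estimator."""
    mse = np.mean((enhanced - clean) ** 2)
    quality_penalty = pesq_max - pesqnet(enhanced)  # smaller is better
    return mse + alpha * quality_penalty
```

In an actual training loop the quality estimator would be a differentiable network held frozen while its predicted score backpropagates into the DNS; the scalar functions above only illustrate how the two loss terms combine.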

Y$^2$-Net FCRN for Acoustic Echo and Noise Suppression

no code implementations • 31 Mar 2021 • Ernst Seidel, Jan Franzen, Maximilian Strake, Tim Fingscheidt

The proposed models achieved remarkable performance for the separate tasks of AEC and residual echo suppression (RES).

Acoustic Echo Cancellation

Deep Noise Suppression With Non-Intrusive PESQNet Supervision Enabling the Use of Real Training Data

no code implementations • 31 Mar 2021 • Ziyi Xu, Maximilian Strake, Tim Fingscheidt

During training, most speech enhancement neural networks are optimized in a fully supervised fashion, with losses that require the noisy speech to be synthesized from clean speech and additive noise.
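The fully supervised setup described above can be sketched as follows. The signals, the mixing SNR, and the "model output" are placeholders for illustration, not the paper's actual data or DNS model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_at_snr(clean, noise, snr_db):
    """Scale the noise so the clean-to-noise power ratio matches
    snr_db (in dB), then add it to the clean signal."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

clean = rng.standard_normal(16000)   # 1 s of placeholder "speech" at 16 kHz
noise = rng.standard_normal(16000)   # placeholder noise
noisy = mix_at_snr(clean, noise, snr_db=5.0)

# Fully supervised loss: the enhancement model's output is compared
# against the known clean signal, e.g. with an MSE loss.
enhanced = noisy * 0.8               # dummy stand-in for a DNS model's output
mse_loss = np.mean((enhanced - clean) ** 2)
```

Because this supervision needs the clean reference, it restricts training to synthetic mixtures; the non-intrusive PESQNet supervision in the title is aimed at lifting that restriction so real noisy recordings can be used.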

Denoising • Speech Enhancement
