no code implementations • 22 Jul 2022 • Aditya Arie Nugraha, Kouhei Sekiguchi, Mathieu Fontaine, Yoshiaki Bando, Kazuyoshi Yoshii
Our DNN-free system leverages the posteriors of the latest source spectrograms given by block-online FastMNMF to derive the current source covariance matrices for frame-online beamforming.
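The abstract describes deriving source covariance matrices for frame-online beamforming. A minimal sketch of that final step, assuming MVDR beamforming from target and noise spatial covariance matrices estimated upstream (the names, shapes, and the MVDR formulation are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def mvdr_weights(R_s, R_n, ref=0):
    """Reference-channel MVDR: w = (R_n^{-1} R_s) e_ref / tr(R_n^{-1} R_s).

    R_s, R_n: (M, M) target/noise spatial covariance matrices at one bin
    (assumed estimated upstream, e.g. from FastMNMF posteriors).
    """
    num = np.linalg.solve(R_n, R_s)          # R_n^{-1} R_s
    return num[:, ref] / np.trace(num)       # column for reference mic

rng = np.random.default_rng(0)
M = 4
# Toy Hermitian positive-definite covariance matrices for one TF bin.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_s = A @ A.conj().T
R_n = B @ B.conj().T + np.eye(M)             # keep noise cov well-conditioned

w = mvdr_weights(R_s, R_n)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # one mixture frame
y = w.conj() @ x                             # beamformed output sample
print(w.shape, np.iscomplexobj(y))
```

In a frame-online system, `R_s`/`R_n` would be updated recursively per frame while this weight computation stays per-bin.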
1 code implementation • 15 Jul 2022 • Kouhei Sekiguchi, Aditya Arie Nugraha, Yicheng Du, Yoshiaki Bando, Mathieu Fontaine, Kazuyoshi Yoshii
This paper describes the practical response- and performance-aware development of online speech enhancement for an augmented reality (AR) headset that helps a user understand conversations in real noisy, echoic environments (e.g., cocktail parties).
no code implementations • 15 Jul 2022 • Yicheng Du, Aditya Arie Nugraha, Kouhei Sekiguchi, Yoshiaki Bando, Mathieu Fontaine, Kazuyoshi Yoshii
This paper describes noisy speech recognition for an augmented reality headset that helps verbal communication within real multiparty conversational environments.
Ranked #1 on Speech Enhancement on EasyCom (SDR metric)
Automatic Speech Recognition (ASR) +4
no code implementations • 11 May 2022 • Mathieu Fontaine, Kouhei Sekiguchi, Aditya Nugraha, Yoshiaki Bando, Kazuyoshi Yoshii
This paper describes heavy-tailed extensions of a state-of-the-art versatile blind source separation method called fast multichannel nonnegative matrix factorization (FastMNMF) from a unified point of view.
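The heavy-tailed idea can be illustrated in isolation: a Student's t complex coefficient is a Gaussian scale mixture, i.e., a Gaussian whose variance is rescaled by an inverse-gamma draw. This sketch shows the general construction only, not the specific FastMNMF extensions of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 2.5, 100_000                          # degrees of freedom, samples

# Inverse-gamma scale factors with unit mean precision (tau = 1/gamma).
tau = 1.0 / rng.gamma(nu / 2, 2.0 / nu, n)

# Circular complex Gaussian samples, total variance 1.
g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

x = np.sqrt(tau) * g                          # heavy-tailed (complex t) samples

# The t samples put far more mass in the tails than the Gaussian ones.
print(np.mean(np.abs(x) > 3), np.mean(np.abs(g) > 3))
```

Smaller `nu` gives heavier tails; as `nu → ∞` the model degenerates to the Gaussian case, which is how such extensions contain the original Gaussian FastMNMF as a limit.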
1 code implementation • IEEE/ACM Transactions on Audio, Speech, and Language Processing 2019 • Kouhei Sekiguchi, Yoshiaki Bando, Aditya Arie Nugraha, Kazuyoshi Yoshii, Tatsuya Kawahara
To solve this problem, we replace a low-rank speech model with a deep generative speech model, i.e., we formulate a probabilistic model of noisy speech by integrating a deep speech model, a low-rank noise model, and a full-rank or rank-1 model of the spatial characteristics of speech and noise.
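The integration above can be sketched as a single-channel Wiener gain that combines a deep generative speech variance with a low-rank (NMF) noise variance. The `decoder` below is a stand-in placeholder for a trained VAE decoder, and all shapes are illustrative assumptions:

```python
import numpy as np

def decoder(z, F=64):
    """Placeholder for a VAE decoder mapping a latent z to a nonnegative
    speech power spectrum (the real system uses a trained DNN)."""
    return np.exp(np.outer(np.linspace(0.0, 1.0, F), z)).sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
F, T, K = 64, 50, 4

# Deep speech model: variance from the (stand-in) decoder, tiled over frames.
speech_var = decoder(rng.standard_normal(3), F) * np.ones((F, T))

# Low-rank noise model: NMF variance W @ H with K bases.
W, H = rng.random((F, K)), rng.random((K, T))
noise_var = W @ H

# Wiener gain per TF bin, combining both variance models.
gain = speech_var / (speech_var + noise_var)
print(gain.shape)
```

The full-rank/rank-1 spatial models in the paper generalize this scalar gain to a multichannel filter per frequency bin.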
no code implementations • 8 Mar 2019 • Aditya Arie Nugraha, Kouhei Sekiguchi, Kazuyoshi Yoshii
To improve the consistency of the phase values in the time-frequency domain, we also apply the von Mises distribution to the phase derivatives, i.e., the group delay and the instantaneous frequency.
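The two phase derivatives mentioned above are simple finite differences of the STFT phase: group delay along frequency and instantaneous frequency along time. A minimal sketch of computing them and scoring them under a von Mises density (variable names and the toy random phase are our own, not the paper's code):

```python
import numpy as np

def wrap(phi):
    """Wrap angles to (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

def vonmises_loglik(theta, mu, kappa):
    """Von Mises log-density; np.i0 is the modified Bessel function I_0."""
    return kappa * np.cos(theta - mu) - np.log(2 * np.pi * np.i0(kappa))

rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, size=(64, 100))  # toy STFT phase (F, T)

gd = wrap(np.diff(phase, axis=0))       # group delay: diff along frequency
inst_f = wrap(np.diff(phase, axis=1))   # instantaneous frequency: along time

ll = vonmises_loglik(gd, mu=0.0, kappa=1.0)  # per-bin log-likelihoods
print(gd.shape, inst_f.shape, ll.shape)
```

Larger concentration `kappa` corresponds to a stronger consistency prior on the phase derivatives.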
2 code implementations • European Association for Signal Processing (EUSIPCO) 2019 • Kouhei Sekiguchi, Aditya Arie Nugraha, Yoshiaki Bando, Kazuyoshi Yoshii
A popular approach to multichannel source separation is to integrate a spatial model with a source model for estimating the spatial covariance matrices (SCMs) and power spectral densities (PSDs) of each sound source in the time-frequency domain.
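Under this local Gaussian model, each observed mixture vector at a TF bin has covariance Σ_f = Σ_n λ_{nft} R_{nf} (PSD-weighted sum of SCMs), and a source is recovered with a multichannel Wiener filter. A minimal per-bin sketch, with shapes and names as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 2                              # microphones, sources

psd = rng.random(N) + 0.1                # PSDs lambda_n at one TF bin
A = rng.standard_normal((N, M, M)) + 1j * rng.standard_normal((N, M, M))
scm = A @ A.conj().transpose(0, 2, 1)    # full-rank Hermitian SCMs R_n

# Mixture covariance: sum_n lambda_n R_n.
R_mix = np.einsum('n,nij->ij', psd, scm)

x = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # observed mixture

# Multichannel Wiener estimate of source 0: W_0 = lambda_0 R_0 R_mix^{-1}.
W0 = psd[0] * scm[0] @ np.linalg.inv(R_mix)
s0_hat = W0 @ x
print(s0_hat.shape)
```

Methods such as FastMNMF differ mainly in how the SCMs are parameterized (e.g., jointly diagonalizable) so that this inversion becomes cheap across all frequency bins.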