Search Results for author: Satoshi Takabe

Found 11 papers, 3 papers with code

Accelerating Convergence of Stein Variational Gradient Descent via Deep Unfolding

no code implementations • 23 Feb 2024 • Yuya Kawamura, Satoshi Takabe

Stein variational gradient descent (SVGD) is a prominent particle-based variational inference method for sampling from a target distribution.

Bayesian Inference, Variational Inference
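For readers unfamiliar with SVGD, below is a minimal NumPy sketch of one SVGD update with an RBF kernel, illustrating the particle-based update the abstract refers to; the fixed bandwidth and the toy Gaussian target are illustrative assumptions, not the paper's setup (practical SVGD usually sets the bandwidth by the median heuristic).

```python
import numpy as np

def svgd_step(x, grad_log_p, step=0.1, h=1.0):
    """One SVGD update: move particles x (shape (n, d)) along the
    kernelized Stein direction toward the target distribution."""
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]              # pairwise x_i - x_j, (n, n, d)
    k = np.exp(-np.sum(diffs**2, axis=-1) / (2 * h))   # RBF kernel matrix, (n, n)
    drive = k @ grad_log_p(x)                          # pulls particles to high density
    repulse = (diffs * k[:, :, None]).sum(axis=1) / h  # keeps particles spread apart
    return x + step * (drive + repulse) / n

# Toy usage: sample a standard 2-D Gaussian, whose score is -x.
rng = np.random.default_rng(0)
particles = rng.normal(size=(50, 2)) * 3.0
for _ in range(200):
    particles = svgd_step(particles, lambda x: -x)
```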

Convergence Acceleration of Markov Chain Monte Carlo-based Gradient Descent by Deep Unfolding

no code implementations • 21 Feb 2024 • Ryo Hagiwara, Satoshi Takabe

This study proposes a trainable sampling-based solver for combinatorial optimization problems (COPs) using a deep-learning technique called deep unfolding.

Combinatorial Optimization

Deep Unfolded Simulated Bifurcation for Massive MIMO Signal Detection

1 code implementation • 28 Jun 2023 • Satoshi Takabe

Recently, various MIMO signal detectors based on deep learning techniques and quantum(-inspired) algorithms have been proposed to improve detection performance over conventional detectors.

Hubbard-Stratonovich Detector for Simple Trainable MIMO Signal Detection

no code implementations • 9 Feb 2023 • Satoshi Takabe, Takashi Abe

Although deep unfolding (DU) has fewer trainable parameters than conventional deep neural networks, the computational complexity of training and execution has been problematic because DU-based MIMO detectors usually rely on matrix inversion to improve their detection performance.

Convergence Acceleration via Chebyshev Step: Plausible Interpretation of Deep-Unfolded Gradient Descent

1 code implementation • 26 Oct 2020 • Satoshi Takabe, Tadashi Wadayama

In the second half of the study, Chebyshev-periodical successive over-relaxation (Chebyshev-PSOR) is proposed for accelerating linear/nonlinear fixed-point iterations.
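As background, the classical idea behind the Chebyshev step can be sketched in a few lines of NumPy: choose gradient-descent step sizes as reciprocals of the Chebyshev nodes over the Hessian's eigenvalue range and apply them periodically. This is only a hedged illustration of the step-size schedule on a toy quadratic; Chebyshev-PSOR itself extends the idea to general linear/nonlinear fixed-point iterations.

```python
import numpy as np

def chebyshev_steps(lam_min, lam_max, T):
    """Step sizes 1/c_k, where c_k are the T Chebyshev nodes on
    [lam_min, lam_max]; applied periodically with period T."""
    k = np.arange(T)
    nodes = ((lam_max + lam_min) / 2
             + (lam_max - lam_min) / 2 * np.cos((2 * k + 1) * np.pi / (2 * T)))
    return 1.0 / nodes

# Toy fixed-point iteration: gradient descent on f(x) = 0.5*x^T A x - b^T x.
rng = np.random.default_rng(1)
M = rng.normal(size=(20, 20))
A = M @ M.T + np.eye(20)              # positive-definite Hessian
b = rng.normal(size=20)
eigs = np.linalg.eigvalsh(A)
gammas = chebyshev_steps(eigs[0], eigs[-1], T=8)

x = np.zeros(20)
for t in range(64):
    x -= gammas[t % 8] * (A @ x - b)  # period-8 Chebyshev step schedule
print(np.linalg.norm(A @ x - b))      # residual after 64 iterations
```

Over each period, the error is multiplied by a scaled Chebyshev polynomial of A, which minimizes the worst-case contraction over the whole eigenvalue interval, whereas a single fixed step size can only be tuned to one mode.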

Deep Unfolded Multicast Beamforming

no code implementations • 20 Apr 2020 • Satoshi Takabe, Tadashi Wadayama

Multicast beamforming is a promising technique for multicast communication.

Theoretical Interpretation of Learned Step Size in Deep-Unfolded Gradient Descent

no code implementations • 15 Jan 2020 • Satoshi Takabe, Tadashi Wadayama

In this paper, we provide a theoretical interpretation of the learned step size of deep-unfolded gradient descent (DUGD).
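To make "learned step size" concrete, here is a minimal PyTorch sketch of deep-unfolded gradient descent on a least-squares toy problem; the class name, dimensions, and training loop are illustrative assumptions, not the paper's experimental setup.

```python
import torch

class DUGD(torch.nn.Module):
    """Deep-unfolded gradient descent: T unrolled iterations of GD on
    0.5*||y - A x||^2, with one trainable step size per iteration."""
    def __init__(self, T):
        super().__init__()
        self.gamma = torch.nn.Parameter(0.01 * torch.ones(T))

    def forward(self, A, y):
        x = torch.zeros(A.shape[1])
        for t in range(len(self.gamma)):
            x = x - self.gamma[t] * (A.T @ (A @ x - y))  # one unfolded layer
        return x

# Learn the step sizes end-to-end by backpropagating through the unrolling.
torch.manual_seed(0)
A = torch.randn(30, 20)
model = DUGD(T=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    x_true = torch.randn(20)
    loss = ((model(A, A @ x_true) - x_true) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Roughly, the paper's observation is that the learned step sizes form a non-constant, zig-zag schedule rather than a single tuned constant, which the Chebyshev-step entry above interprets theoretically.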

Trainable Projected Gradient Detector for Sparsely Spread Code Division Multiple Access

no code implementations • 23 Oct 2019 • Satoshi Takabe, Yuki Yamauchi, Tadashi Wadayama

In this paper, we propose a novel trainable multiuser detector called the sparse trainable projected gradient (STPG) detector, which is based on the notion of deep unfolding.

Complex Trainable ISTA for Linear and Nonlinear Inverse Problems

no code implementations • 16 Apr 2019 • Satoshi Takabe, Tadashi Wadayama, Yonina C. Eldar

Complex-field signal recovery problems from noisy linear/nonlinear measurements appear in many areas of signal processing and wireless communications.
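The paper's complex trainable ISTA builds on the plain ISTA recursion with complex soft thresholding. Below is a hedged NumPy sketch of that untrained baseline for complex sparse recovery; the trainable parts (e.g., per-iteration step and shrinkage parameters) are omitted.

```python
import numpy as np

def complex_soft_threshold(z, tau):
    """Shrink the magnitude of each complex entry by tau, keeping its phase."""
    mag = np.abs(z)
    return np.maximum(1 - tau / np.maximum(mag, 1e-12), 0.0) * z

def ista(A, y, lam=0.05, n_iter=200):
    """Plain ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1 over complex x."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        x = complex_soft_threshold(x + A.conj().T @ (y - A @ x) / L, lam / L)
    return x

# Toy usage: recover a 10-sparse complex vector from 80 noiseless measurements.
rng = np.random.default_rng(2)
A = (rng.normal(size=(80, 120)) + 1j * rng.normal(size=(80, 120))) / np.sqrt(160)
x_true = np.zeros(120, dtype=complex)
idx = rng.choice(120, size=10, replace=False)
x_true[idx] = rng.normal(size=10) + 1j * rng.normal(size=10)
x_hat = ista(A, A @ x_true)
```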

Trainable Projected Gradient Detector for Massive Overloaded MIMO Channels: Data-driven Tuning Approach

1 code implementation • 25 Dec 2018 • Satoshi Takabe, Masayuki Imanishi, Tadashi Wadayama, Ryo Hayakawa, Kazunori Hayashi

This paper presents a deep learning-aided iterative detection algorithm for massive overloaded multiple-input multiple-output (MIMO) systems where the number of transmit antennas $n$ is larger than that of receive antennas $m$.
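The projected-gradient detectors in this and the related entries iterate a gradient step on the residual followed by a soft projection onto the symbol alphabet. A minimal NumPy sketch for BPSK follows; in the papers the per-iteration step sizes (and the projection softness) are the trainable, data-tuned parameters, whereas the fixed values here are illustrative.

```python
import numpy as np

def pg_detect(H, y, gamma=0.1, alpha=2.0, n_iter=50):
    """Projected-gradient detection of BPSK symbols from y = Hx + w:
    a gradient step on ||y - H s||^2, then a soft projection onto [-1, 1]."""
    s = np.zeros(H.shape[1])
    for _ in range(n_iter):
        r = s + gamma * H.T @ (y - H @ s)  # gradient step on the residual
        s = np.tanh(alpha * r)             # soft projection toward {-1, +1}
    return np.sign(s)

# Toy overloaded setting: n = 64 transmit antennas, m = 48 receive antennas.
rng = np.random.default_rng(3)
m, n = 48, 64
H = rng.normal(size=(m, n)) / np.sqrt(m)
x = rng.choice([-1.0, 1.0], size=n)
y = H @ x + 0.05 * rng.normal(size=m)
print(np.mean(pg_detect(H, y) != x))       # symbol error rate on this instance
```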

Deep Learning-Aided Projected Gradient Detector for Massive Overloaded MIMO Channels

no code implementations • 28 Jun 2018 • Satoshi Takabe, Masayuki Imanishi, Tadashi Wadayama, Kazunori Hayashi

This paper presents a deep learning-aided iterative detection algorithm for massive overloaded MIMO systems.
