no code implementations • 7 Apr 2023 • Mana Masuda, Yusuke Sekikawa, Hideo Saito
To enable computation of the temporal gradient of the scene, we make NeRF's camera pose a function of time.
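The abstract above does not give implementation details; as a minimal sketch of the idea, a time-parameterized pose immediately makes a temporal gradient available, e.g. by finite differences. The function names and the toy constant-velocity pose below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def camera_pose(t):
    """Hypothetical time-parameterized camera pose: translation only,
    moving along x at constant speed (stand-in for a learned function)."""
    return np.array([0.5 * t, 0.0, 0.0])

def temporal_gradient(pose_fn, t, eps=1e-4):
    """Central finite-difference temporal gradient of the pose -- the
    quantity a time-dependent parameterization makes computable."""
    return (pose_fn(t + eps) - pose_fn(t - eps)) / (2 * eps)
```

For the toy pose above, `temporal_gradient(camera_pose, 1.0)` recovers the constant velocity `[0.5, 0, 0]`.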
1 code implementation • 7 Apr 2023 • Mana Masuda, Ryo Hachiuma, Ryo Fujii, Hideo Saito, Yusuke Sekikawa
We propose a deep variational autoencoder-based unsupervised anomaly detection network adapted to the 3D point cloud and an anomaly score specifically for 3D point clouds.
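The paper's network and anomaly score are more involved than the snippet states; as a hedged sketch of the underlying reconstruction-error idea, one can score a cloud by how poorly a (here hypothetical) autoencoder reconstructs it, using the symmetric Chamfer distance as a stand-in metric. All names below are illustrative.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def anomaly_score(observed, reconstructed):
    """Score a cloud by its reconstruction error: a cloud the model
    cannot reconstruct well (large distance) is flagged as anomalous."""
    return chamfer_distance(observed, reconstructed)
```

A perfectly reconstructed cloud scores 0, and the score grows with reconstruction error, which is the property an anomaly threshold is set against.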
1 code implementation • CVPR 2023 • Jun Nagata, Yusuke Sekikawa
An existing method using local plane fitting of events could utilize the sparsity to realize incremental updates for low-latency estimation; however, its output is merely the normal component of the full optical flow.
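The plane-fitting baseline the snippet refers to can be sketched as follows: fit a local plane t = a·x + b·y + c to nearby events, and read the normal-flow component off the slope of that time surface. This is a minimal illustration of the prior method's limitation, not the paper's proposed estimator; the function name is an assumption.

```python
import numpy as np

def normal_flow_from_events(events):
    """Fit a local plane t = a*x + b*y + c to events (x, y, t) by least
    squares and derive the normal component of optical flow: the flow
    points along the spatial gradient of the time surface, with speed
    equal to the inverse of that gradient's magnitude."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, t, rcond=None)
    g = np.array([a, b])                 # gradient of the time surface
    return g / np.dot(g, g)              # normal flow = g / |g|^2
```

For an edge sweeping along x at 2 px per unit time (t = x/2), the fit recovers the flow (2, 0); any motion component parallel to the edge is invisible to this estimate, which is exactly the shortcoming the paper addresses.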
1 code implementation • 25 Mar 2022 • Pablo Cervantes, Yusuke Sekikawa, Ikuro Sato, Koichi Shinoda
We confirm that our method with a Transformer decoder outperforms all relevant methods on HumanAct12, NTU-RGBD, and UESTC datasets in terms of realism and diversity of generated motions.
no code implementations • 6 Nov 2021 • Mana Masuda, Yusuke Sekikawa, Ryo Fujii, Hideo Saito
Our framework uses a pre-trained event-generation MLP, named the implicit event generator (IEG), and performs motion tracking by updating its state (position and velocity) based on the difference between the observed events and the events generated from the current state estimate.
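The update loop described above can be sketched as iterative state refinement against a generator: compare generated events with observed ones and descend on the mismatch. The linear toy `ieg` below is a stand-in for the pre-trained MLP, and the numerical-gradient step is an illustrative assumption, not the paper's optimizer.

```python
import numpy as np

def ieg(state):
    """Stand-in for the pre-trained implicit event generator (IEG):
    maps a state (position, velocity) to a predicted event measurement.
    The real IEG is a learned MLP; a linear toy model is used here."""
    pos, vel = state
    return np.array([pos + 0.1 * vel])

def track_step(state, observed, lr=0.5, eps=1e-5):
    """One tracking update: nudge the state to shrink the gap between
    the observed event and the event generated from the state."""
    def loss(s):
        return float(np.sum((ieg(s) - observed) ** 2))
    grad = np.zeros_like(state)
    for i in range(len(state)):
        d = np.zeros_like(state)
        d[i] = eps
        grad[i] = (loss(state + d) - loss(state - d)) / (2 * eps)
    return state - lr * grad
```

Iterating `track_step` drives the generated event toward the observation, which is the tracking-by-synthesis pattern the abstract describes.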
no code implementations • 13 Nov 2020 • Yusuke Sekikawa, Teppei Suzuki
Aiming at drastic speedup for point-feature embeddings at test time, we propose a new framework that uses a pair of multi-layer perceptrons (MLP) and a lookup table (LUT) to transform point-coordinate inputs into high-dimensional features.
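The MLP-to-LUT idea can be sketched in a few lines: evaluate the trained embedding MLP once over a dense coordinate grid, store the results in a table, and at test time replace the forward pass with a quantize-and-index. The random-projection `mlp_embed` and nearest-neighbor lookup below are simplifying assumptions (the actual framework is more refined, e.g. in how it interpolates), not the authors' implementation.

```python
import numpy as np

def mlp_embed(pts):
    """Stand-in for the trained embedding MLP: maps (N,3) coordinates
    to (N,8) features via a fixed random projection + ReLU."""
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 8))
    return np.maximum(pts @ W, 0.0)

def build_lut(resolution=16, lo=-1.0, hi=1.0):
    """Bake the MLP into a dense lookup table over a coordinate grid
    (done once, offline)."""
    axis = np.linspace(lo, hi, resolution)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    return mlp_embed(grid).reshape(resolution, resolution, resolution, -1)

def lut_embed(pts, lut, lo=-1.0, hi=1.0):
    """Test-time embedding: quantize each coordinate and index the
    table -- no MLP forward pass is needed."""
    res = lut.shape[0]
    idx = np.round((pts - lo) / (hi - lo) * (res - 1))
    idx = np.clip(idx, 0, res - 1).astype(int)
    return lut[idx[:, 0], idx[:, 1], idx[:, 2]]
```

At grid points the lookup reproduces the MLP exactly; between them it trades a small quantization error for a large test-time speedup, which is the framework's central bargain.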
no code implementations • 31 Jul 2020 • Teppei Suzuki, Keisuke Ozawa, Yusuke Sekikawa
PointNet, a widely used point-wise embedding method known to be a universal approximator for continuous set functions, can process one million points per second.
no code implementations • 3 Feb 2020 • Akiyoshi Kurobe, Yusuke Sekikawa, Kohta Ishikawa, Hideo Saito
For comparison, we also developed a novel deep learning approach (DirectNet) that directly regresses the pose between point clouds.
no code implementations • 23 Nov 2019 • Yusuke Sekikawa, Teppei Suzuki
Aiming at a drastic speedup for point-data embeddings at test time, we propose a new framework that uses a pair of multi-layer perceptrons (MLP) and a lookup table (LUT) to transform point-coordinate inputs into high-dimensional features.
no code implementations • CVPR 2019 • Yusuke Sekikawa, Kosuke Hara, Hideo Saito
Event cameras are bio-inspired vision sensors that mimic retinas to asynchronously report per-pixel intensity changes rather than outputting an actual intensity image at regular intervals.
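The sensing principle described above can be illustrated with a toy simulator: an event is emitted at a pixel whenever its log intensity changes by more than a contrast threshold, rather than reporting full frames at fixed intervals. This is a common textbook idealization of an event camera, not part of the paper; the function name and threshold are assumptions.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Toy event-camera model: scan a sequence of intensity frames and
    emit (x, y, t, polarity) wherever the log intensity change since
    the last event at that pixel exceeds the contrast threshold."""
    events = []
    log_ref = np.log(frames[0] + 1e-6)   # per-pixel reference level
    for t, frame in enumerate(frames[1:], start=1):
        log_cur = np.log(frame + 1e-6)
        diff = log_cur - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((x, y, t, int(np.sign(diff[y, x]))))
            log_ref[y, x] = log_cur[y, x]  # reset reference after firing
    return events
```

Static pixels produce no output at all, which is the sparsity that event-based methods such as the one above exploit.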