1 code implementation • 8 Jan 2024 • Ryu Tadokoro, Ryosuke Yamada, Kodai Nakashima, Ryo Nakamura, Hirokatsu Kataoka
From experimental results, we conclude that effective pre-training can be achieved using only primitive geometric objects.
1 code implementation • CVPR Workshop 2023 • Ryu Tadokoro, Ryosuke Yamada, Hirokatsu Kataoka
Inspired by this approach, we propose the Auto-generated Volumetric Shapes Database (AVS-DB) for data-scarce 3D medical image segmentation tasks.
no code implementations • CVPR 2022 • Hirokatsu Kataoka, Ryo Hayamizu, Ryosuke Yamada, Kodai Nakashima, Sora Takashima, Xinyu Zhang, Edgar Josafat Martinez-Noriega, Nakamasa Inoue, Rio Yokota
In the present work, we show that the performance of formula-driven supervised learning (FDSL) can match or even exceed that of ImageNet-21k without the use of real images, human supervision, or self-supervision during the pre-training of Vision Transformers (ViTs).
1 code implementation • CVPR 2022 • Ryosuke Yamada, Hirokatsu Kataoka, Naoya Chiba, Yukiyasu Domae, Tetsuya Ogata
Moreover, the PC-FractalDB pre-trained model is especially effective when training with limited data.
Ranked #18 on 3D Object Detection on SUN-RGBD val (using extra training data)
2 code implementations • 21 Jan 2021 • Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, Yutaka Satoh
Is it possible to use convolutional neural networks pre-trained without any natural images to assist natural image understanding?