no code implementations • 22 Feb 2024 • Shaojie Zhang, Yinghui Wang, Jiaxing Ma, Wei Li, Jinlong Yang, Tao Yan, Yukai Wang, Liangyi Huang, Mingfeng Wang, Ibragim R. Atadjanov
In Visual SLAM, achieving accurate feature matching consumes a significant amount of time, severely impacting the real-time performance of the system.
no code implementations • 21 Feb 2024 • Shaojie Zhang, Yinghui Wang, Jiaxing Ma, Wei Li, Jinlong Yang, Tao Yan, Yukai Wang, Liangyi Huang, Mingfeng Wang, Ibragim R. Atadjanov
Feature matching is a fundamental and crucial process in visual SLAM, and its precision has long been a challenging issue.
no code implementations • 18 Feb 2024 • Shaojie Zhang, Yinghui Wang, Bin Nan, Wei Li, Jinlong Yang, Tao Yan, Yukai Wang, Liangyi Huang, Mingfeng Wang, Ibragim R. Atadjanov
To address the increased triangulation uncertainty caused by selecting views with small camera baselines in Structure from Motion (SfM) view selection, this paper proposes a robust, error-resistant view selection method.
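The geometric intuition behind this entry can be illustrated numerically: when two camera centers are close together relative to a point's depth, the viewing rays are nearly parallel and triangulation becomes ill-conditioned. The following sketch (an illustration of the underlying geometry, not the paper's selection method; the function name and thresholds are assumptions) computes the triangulation angle subtended at a 3D point by two camera centers:

```python
import numpy as np

def triangulation_angle(c1, c2, point):
    """Angle (degrees) subtended at a 3D point by two camera centers.

    A small angle (small baseline relative to depth) means the two
    viewing rays are nearly parallel, so depth error grows rapidly.
    """
    r1 = c1 - point
    r2 = c2 - point
    cos_a = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Cameras 0.1 units apart viewing a point 10 units away: tiny angle.
narrow = triangulation_angle(np.array([0.0, 0.0, 0.0]),
                             np.array([0.1, 0.0, 0.0]),
                             np.array([0.05, 0.0, 10.0]))

# Cameras 5 units apart viewing the same point: much larger angle.
wide = triangulation_angle(np.array([0.0, 0.0, 0.0]),
                           np.array([5.0, 0.0, 0.0]),
                           np.array([2.5, 0.0, 10.0]))
```

A view-selection heuristic can reject pairs whose triangulation angle falls below some threshold (a few degrees is a common rule of thumb).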
no code implementations • 15 Feb 2024 • Shaojie Zhang, Yinghui Wang, Bin Nan, Wei Li, Jinlong Yang, Tao Yan, Yukai Wang, Liangyi Huang, Mingfeng Wang, Ibragim R. Atadjanov
To address the problem that feature descriptors fail to represent grayscale feature information when images undergo large affine transformations, which causes feature-matching accuracy to decline rapidly, this paper proposes a region feature descriptor that simulates affine transformations via classification.
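Simulating affine transformations to gain viewpoint invariance is a well-known idea (ASIFT-style view sampling). The sketch below only generates the sampled 2x2 affine matrices, as a simplified illustration of that general strategy, not the classification-based descriptor this paper proposes; the function name and sampling grid are assumptions:

```python
import numpy as np

def simulated_affine_matrices(tilts=(1.0, np.sqrt(2.0), 2.0), n_rot=4):
    """ASIFT-style sampling of affine view simulations.

    Each matrix A = R(theta) @ diag(1, 1/t) approximates viewing the
    image plane under tilt t with in-plane rotation theta. Descriptors
    computed on each warped image can then be matched across the
    simulated views.
    """
    mats = []
    for t in tilts:
        for theta in np.linspace(0.0, np.pi, n_rot, endpoint=False):
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, -s], [s, c]])
            mats.append(R @ np.diag([1.0, 1.0 / t]))
    return mats
```

Each matrix compresses the image along one axis by the tilt factor (its determinant is 1/t), mimicking foreshortening under an oblique view.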
no code implementations • 11 Feb 2024 • Shaojie Zhang, Yinghui Wang, Peixuan Liu, Wei Li, Jinlong Yang, Tao Yan, Yukai Wang, Liangyi Huang, Mingfeng Wang, Ibragim R. Atadjanov
Images captured by Wireless Capsule Endoscopy (WCE) frequently exhibit specular reflections, and removing these highlights while preserving the color and texture of the affected regions remains a challenge.
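A common first step in highlight removal is detecting candidate specular pixels, which are typically bright and weakly saturated. The sketch below shows a crude baseline detector, not the paper's method (which must additionally restore color and texture in the masked regions); the function name and thresholds are assumptions:

```python
import numpy as np

def specular_mask(rgb, intensity_thresh=0.85, saturation_thresh=0.15):
    """Crude specular-highlight mask: bright, low-saturation pixels.

    rgb: float array in [0, 1], shape (H, W, 3).
    Returns a boolean mask of candidate highlight pixels.
    """
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    # HSV-style saturation: (max - min) / max, with 0 for black pixels.
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)
    return (mx > intensity_thresh) & (saturation < saturation_thresh)
```

Saturated red mucosa stays below the intensity threshold or above the saturation threshold, while near-white glare pixels are flagged; the hard part, inpainting the mask without losing texture, is what the paper addresses.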
no code implementations • 12 Sep 2023 • Yao Feng, Weiyang Liu, Timo Bolkart, Jinlong Yang, Marc Pollefeys, Michael J. Black
Towards this end, both explicit and implicit 3D representations have been heavily studied for holistic modeling and capture of the whole human (e.g., body, clothing, face, and hair), but neither representation is an optimal choice in terms of representation efficacy, since different parts of the human avatar have different modeling desiderata.
no code implementations • CVPR 2024 • Soubhik Sanyal, Partha Ghosh, Jinlong Yang, Michael J. Black, Justus Thies, Timo Bolkart
We use intermediate activations of the learned geometry model to condition our texture generator.
2 code implementations • CVPR 2023 • Michael J. Black, Priyanka Patel, Joachim Tesch, Jinlong Yang
BEDLAM is useful for a variety of tasks and all images, ground truth bodies, 3D clothing, support code, and more are available for research purposes.
no code implementations • ICCV 2023 • Zijian Dong, Xu Chen, Jinlong Yang, Michael J. Black, Otmar Hilliges, Andreas Geiger
The key to progress is hence to learn generative models of 3D avatars from abundant unstructured 2D image collections.
1 code implementation • CVPR 2023 • Yuliang Xiu, Jinlong Yang, Xu Cao, Dimitrios Tzionas, Michael J. Black
To increase robustness for these cases, existing work uses an explicit parametric body model to constrain surface reconstruction, but this limits the recovery of free-form surfaces such as loose clothing that deviates from the body.
Ranked #7 on 3D Human Reconstruction on CustomHumans
1 code implementation • 4 Oct 2022 • Yao Feng, Jinlong Yang, Marc Pollefeys, Michael J. Black, Timo Bolkart
Building on this insight, we propose SCARF (Segmented Clothed Avatar Radiance Field), a hybrid model combining a mesh-based body with a neural radiance field.
no code implementations • Journal of Petroleum Science and Engineering 2022 • Chunhua Lu, Hanqiao Jiang, Jinlong Yang, Zhiqiang Wang, Miao Zhang, Junjian Li *
The results reveal that the DNN exhibits the best production prediction accuracy compared to RF and SVM.
no code implementations • 14 Sep 2022 • Qianli Ma, Jinlong Yang, Michael J. Black, Siyu Tang
Specifically, we extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape" that can capture the rough surface geometry of clothing like skirts.
no code implementations • CVPR 2022 • Xu Chen, Tianjian Jiang, Jie Song, Jinlong Yang, Michael J. Black, Andreas Geiger, Otmar Hilliges
Furthermore, we show that our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
2 code implementations • CVPR 2022 • Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, Michael J. Black
First, ICON infers detailed clothed-human normals (front/back) conditioned on the SMPL(-X) normals.
Ranked #1 on 3D Human Reconstruction on CAPE
no code implementations • ICCV 2021 • Qianli Ma, Jinlong Yang, Siyu Tang, Michael J. Black
The geometry feature can be optimized to fit a previously unseen scan of a person in clothing, enabling the scan to be reposed realistically.
1 code implementation • CVPR 2021 • Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
We demonstrate the efficacy of our surface representation by learning models of complex clothing from point clouds.
2 code implementations • CVPR 2021 • Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black
We present SCANimate, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
4 code implementations • 10 Aug 2020 • Korrawe Karunratanakul, Jinlong Yang, Yan Zhang, Michael Black, Krikamol Muandet, Siyu Tang
Specifically, our generative model is able to synthesize high-quality human grasps, given only a 3D object point cloud.
1 code implementation • CVPR 2020 • Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, Michael J. Black
To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.
no code implementations • ECCV 2018 • Jinlong Yang, Jean-Sebastien Franco, Franck Hetroy-Wheeler, Stefanie Wuhrer
Recent capture technologies and methods make it possible not only to retrieve 3D model sequences of moving people in clothing, but also to separate and extract the underlying body geometry, the motion component, and the clothing as a geometric layer.