no code implementations • 3 Apr 2024 • Fred Hohman, Chaoqun Wang, Jinmook Lee, Jochen Görtler, Dominik Moritz, Jeffrey P Bigham, Zhile Ren, Cecile Foret, Qi Shan, Xiaoyi Zhang
On-device machine learning (ML) moves computation from the cloud to personal devices, protecting user privacy and enabling intelligent user experiences.
no code implementations • 7 Feb 2024 • Chaoqun Wang, Yiran Qin, Zijian Kang, Ningning Ma, Ruimao Zhang
First, a depth estimation (DE) scheme leverages relative depth information to achieve effective feature lifting from 2D to 3D space.
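The depth-based 2D-to-3D feature lifting mentioned above can be illustrated generically. The sketch below uses an LSS-style outer product between per-pixel features and a predicted depth-bin distribution; the shapes and the softmax-over-bins formulation are illustrative assumptions, not necessarily this paper's DE scheme:

```python
import numpy as np

# Hypothetical shapes: C feature channels, D depth bins, an H x W image grid.
C, D, H, W = 8, 4, 5, 6
rng = np.random.default_rng(0)

feat_2d = rng.normal(size=(C, H, W))       # 2D image features
depth_logits = rng.normal(size=(D, H, W))  # per-pixel depth scores

# Softmax over depth bins: a categorical depth distribution per pixel.
depth_prob = np.exp(depth_logits - depth_logits.max(axis=0, keepdims=True))
depth_prob /= depth_prob.sum(axis=0, keepdims=True)

# Outer product spreads each pixel's feature along the depth axis,
# weighted by how likely each depth bin is.
feat_3d = feat_2d[:, None, :, :] * depth_prob[None, :, :, :]  # (C, D, H, W)

print(feat_3d.shape)  # (8, 4, 5, 6)
```

Because the depth distribution sums to one per pixel, summing the lifted volume over the depth axis recovers the original 2D features, so no feature mass is created or lost by the lifting step.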
1 code implementation • 16 Dec 2023 • Yijun Li, Cheuk Hang Leung, Xiangqian Sun, Chaoqun Wang, Yiyan Huang, Xing Yan, Qi Wu, Dongdong Wang, Zhixiang Huang
Consumer credit services offered by e-commerce platforms provide customers with convenient loan access during shopping and have the potential to stimulate sales.
no code implementations • 27 Oct 2023 • Chaowei Liu, Jichun Li, Yihua Teng, Chaoqun Wang, Nuo Xu, Jihao Wu, Dandan Tu
Thus, we propose DocStormer, a novel algorithm designed to restore multi-degraded color documents to their potentially pristine PDF form.
1 code implementation • ICCV 2023 • Yiran Qin, Chaoqun Wang, Zijian Kang, Ningning Ma, Zhen Li, Ruimao Zhang
In this paper, we propose a novel training strategy called SupFusion, which provides an auxiliary feature level supervision for effective LiDAR-Camera fusion and significantly boosts detection performance.
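A feature-level auxiliary supervision of the kind SupFusion describes can be sketched minimally as a regression loss that pulls fused detector features toward higher-quality target features. The MSE formulation below is a generic stand-in, not SupFusion's exact objective:

```python
import numpy as np

def auxiliary_feature_loss(fused_feat, target_feat):
    """Mean-squared error between fused student features and
    higher-quality target features: a generic feature-level
    supervision signal added on top of the detection loss."""
    return float(np.mean((fused_feat - target_feat) ** 2))

rng = np.random.default_rng(1)
student = rng.normal(size=(16, 32))           # fused LiDAR-camera features
teacher = student + 0.1 * rng.normal(size=(16, 32))  # auxiliary target

print(auxiliary_feature_loss(student, student))  # 0.0
print(auxiliary_feature_loss(student, teacher) > 0)  # True
```

In practice such a term is weighted and summed with the main detection loss, so the fusion module is trained to both detect well and match the auxiliary target.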
no code implementations • 26 Aug 2023 • Chaoqun Wang, Yijun Li, Xiangqian Sun, Qi Wu, Dongdong Wang, Zhixiang Huang
The tensorized LSTM assigns each variable a unique hidden state, the states together forming a matrix $\mathbf{H}_t$, whereas the standard LSTM models all the variables with a single shared hidden state $\mathbf{h}_t$.
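The shape difference between the two hidden-state layouts can be made concrete. In this sketch the sizes are arbitrary illustrative choices:

```python
import numpy as np

n_vars, d = 3, 4  # number of input variables, hidden units per state

# Standard LSTM: a single hidden state vector shared by all variables.
h_t = np.zeros(d)            # shape (4,)

# Tensorized LSTM: one hidden state per variable, stacked into a matrix.
H_t = np.zeros((n_vars, d))  # shape (3, 4)

# A per-variable update touches only that variable's row of H_t,
# whereas any update to h_t is entangled across all variables.
H_t[1] += 1.0
print(h_t.shape, H_t.shape)  # (4,) (3, 4)
```

Keeping a separate row per variable is what lets the tensorized variant track variable-specific dynamics that a single shared vector would have to compress together.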
no code implementations • 15 Apr 2023 • Xin Kang, Chaoqun Wang, Xuejin Chen
We design a region-based feature enhancement (RFE) module, which consists of a Semantic-Spatial Region Extraction stage and a Region Dependency Modeling stage.
no code implementations • CVPR 2023 • Jie Yang, Chaoqun Wang, Zhen Li, Junle Wang, Ruimao Zhang
This paper presents Scalable Semantic Transfer (SST), a novel training paradigm that explores how to leverage the mutual benefits of data from different label domains (i.e., various levels of label granularity) to train a powerful human parsing network.
1 code implementation • CVPR 2023 • Kangcheng Liu, Xinhu Zheng, Chaoqun Wang, Kai Tang, Ming Liu, Baoquan Chen
The second is that we prevent over-discrimination between 3D segments/objects and encourage grouped foreground-to-background distinctions at the segment level via adaptive feature learning in a Siamese correspondence network, which effectively learns feature correlations within and across point-cloud views.
no code implementations • 23 Nov 2022 • Binxin Yang, Xuejin Chen, Chaoqun Wang, Chi Zhang, Zihan Chen, Xiaoyan Sun
With a semantic feature matching loss for effective semantic supervision, our sketch embedding precisely conveys the semantics in the input sketches to the synthesized images.
no code implementations • 21 Jun 2022 • Jie Yang, Ye Zhu, Chaoqun Wang, Zhen Li, Ruimao Zhang
Integrating multi-modal data to promote medical image analysis has recently gained great attention.
no code implementations • NeurIPS 2021 • Chaoqun Wang, Shaobo Min, Xuejin Chen, Xiaoyan Sun, Houqiang Li
This enables DPPN to produce visual representations with accurate attribute localization ability, which benefits the semantic-visual alignment and representation transferability.
1 code implementation • PRCV 2021 • Shiyu Hou, Chaoqun Wang, Weize Quan, Jingen Jiang, Dong-Ming Yan
The core goal is to improve the accuracy of text detection and recognition by removing the highlight from text images.
no code implementations • 5 Apr 2021 • Chaoqun Wang, Xuejin Chen, Shaobo Min, Xiaoyan Sun, Houqiang Li
First, DCEN leverages task labels to cluster representations of the same semantic category by cross-modal contrastive learning and exploring semantic-visual complementarity.
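Label-guided contrastive clustering of the kind described above can be sketched with a generic supervised contrastive loss, which pulls together normalized features that share a class label and pushes apart the rest. This is an illustration of the general technique, not DCEN's exact cross-modal objective:

```python
import numpy as np

def supervised_contrastive_loss(feats, labels, tau=0.1):
    """Generic supervised contrastive loss: features with the same
    label act as positives, all other samples as the contrast set."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        for j in positives:
            loss += -(sim[i, j] - log_denom)  # -log p(positive | anchor)
            count += 1
    return loss / count

rng = np.random.default_rng(2)
feats = rng.normal(size=(6, 8))
labels = np.array([0, 0, 1, 1, 2, 2])
print(supervised_contrastive_loss(feats, labels) > 0)  # True
```

Minimizing this loss drives same-category representations toward one cluster, which is the clustering effect the snippet attributes to task-label supervision.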
1 code implementation • 19 Nov 2020 • Xuewei Bian, Chaoqun Wang, Weize Quan, Juntao Ye, Xiaopeng Zhang, Dong-Ming Yan
Specifically, we decouple the text removal problem into text stroke detection and stroke removal.
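The decoupled two-stage pipeline (stroke detection, then stroke removal) can be sketched on a toy grayscale image. Both stages below are deliberately simplistic stand-ins (a threshold and a background-mean fill) for the paper's learned networks:

```python
import numpy as np

def detect_strokes(img, thresh=0.5):
    """Stage 1 (stand-in): predict a binary text-stroke mask.
    A simple intensity threshold substitutes for the learned
    stroke-detection network."""
    return (img < thresh).astype(np.uint8)

def remove_strokes(img, mask):
    """Stage 2 (stand-in): replace stroke pixels with a background
    estimate (here, the mean of non-stroke pixels) in place of the
    learned stroke-removal network."""
    out = img.copy()
    out[mask == 1] = img[mask == 0].mean()
    return out

# Toy image: bright background (1.0) with two dark "stroke" pixels (0.0).
img = np.ones((4, 4))
img[1, 1:3] = 0.0
mask = detect_strokes(img)
clean = remove_strokes(img, mask)
print(clean.min())  # 1.0 -- strokes replaced by the background value
```

The benefit of the decoupling is that the removal stage only edits pixels the detection stage flags, leaving the rest of the image untouched.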
no code implementations • CVPR 2020 • Chaoqun Wang, Chunyan Xu, Zhen Cui, Ling Zhou, Tong Zhang, Xiaoya Zhang, Jian Yang
Motivated by our observation that, in RGB-T data, pattern correlations frequently recur across modalities as well as along sequence frames, we propose a cross-modal pattern-propagation (CMPP) tracking framework that diffuses instance patterns across RGB-T data in both the spatial and temporal domains.
Ranked #24 on RGB-T Tracking on RGBT234
1 code implementation • CVPR 2020 • Shaobo Min, Hantao Yao, Hongtao Xie, Chaoqun Wang, Zheng-Jun Zha, Yongdong Zhang
Recent methods focus on learning a unified semantic-aligned visual representation to transfer knowledge between two domains, while ignoring the effect of semantic-free visual representation in alleviating the biased recognition problem.
3 code implementations • 23 Mar 2019 • Tingguang Li, Danny Ho, Chenming Li, Delong Zhu, Chaoqun Wang, Max Q. -H. Meng
As one of the most promising areas of robotics, mobile robots have drawn much attention in recent years.
Robotics