no code implementations • 8 May 2024 • Yaqi Wu, Zhihao Fan, Xiaofeng Chu, Jimmy S. Ren, Xiaoming Li, Zongsheng Yue, Chongyi Li, Shangcheng Zhou, Ruicheng Feng, Yuekun Dai, Peiqing Yang, Chen Change Loy, Senyan Xu, Zhijing Sun, Jiaying Zhu, Yurui Zhu, Xueyang Fu, Zheng-Jun Zha, Jun Cao, Cheng Li, Shu Chen, Liang Ma, Shiyang Zhou, Haijin Zeng, Kai Feng, Yongyong Chen, Jingyong Su, Xianyu Guan, Hongyuan Yu, Cheng Wan, Jiamin Lin, Binnan Han, Yajun Zou, Zhuoyuan Wu, Yuan Huang, Yongsheng Yu, Daoan Zhang, Jizhe Li, Xuanwu Yin, Kunlong Zuo, Yunfan Lu, Yijie Xu, Wenzong Ma, Weiyu Guo, Hui Xiong, Wei Yu, Bingchun Luo, Sabari Nathan, Priya Kansal
The increasing demand for computational photography and imaging on mobile platforms has led to the widespread development and integration of advanced image sensors with novel algorithms in camera systems.
no code implementations • 19 Nov 2023 • Zhenghao Pan, Haijin Zeng, JieZhang Cao, Kai Zhang, Yongyong Chen
Specifically, we employ, for the first time, a pre-trained diffusion model, trained on a substantial corpus of RGB images, as the generative denoiser within the Plug-and-Play framework.
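The idea of plugging a learned denoiser into an iterative restoration loop can be sketched generically. The following is a minimal half-quadratic-splitting sketch, not the paper's implementation: `forward_op`, `adjoint_op`, `denoiser`, and the step size are all placeholder assumptions, and the denoiser slot is where a pre-trained diffusion model would go.

```python
import numpy as np

def pnp_hqs(y, forward_op, adjoint_op, denoiser, rho=1.0, n_iters=10):
    """Plug-and-Play restoration via half-quadratic splitting (generic sketch).

    Alternates a data-fidelity gradient step with a denoising step; in the
    setting described above, `denoiser` would be a pre-trained diffusion model.
    """
    x = adjoint_op(y)          # initialize from the adjoint of the measurement
    z = x.copy()
    for _ in range(n_iters):
        # Data-fidelity step on ||A(x) - y||^2 + rho * ||x - z||^2
        grad = adjoint_op(forward_op(x) - y) + rho * (x - z)
        x = x - 0.1 * grad
        # Prior step: plug in the (diffusion) denoiser
        z = denoiser(x)
    return x
```

With an identity forward operator and an identity denoiser the loop is a fixed point at the observation, which makes the data/prior split easy to verify in isolation.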
no code implementations • 23 Mar 2023 • Haijin Zeng, Kai Feng, Shaoguang Huang, JieZhang Cao, Yongyong Chen, Hongyan Zhang, Hiep Luong, Wilfried Philips
The advantage of Maformer is that it can leverage the MSFA information and non-local dependencies present in the data.
no code implementations • 4 Oct 2022 • Honghu Pan, Yongyong Chen, Yunqi He, Xin Li, Zhenyu He
To this end, we propose Flow2Flow, a unified framework that could jointly achieve training sample expansion and cross-modality image generation for V2I person ReID.
no code implementations • 23 Sep 2022 • Honghu Pan, Yongyong Chen, Zhenyu He
To downsample the graph, we propose a multi-head full attention graph pooling (MHFAPool) layer, which integrates the advantages of existing node clustering and node selection pooling methods.
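Full-attention pooling of node features can be illustrated with a small sketch. This is not the paper's MHFAPool layer, only the underlying pattern: each of `H` hypothetical scoring heads softmax-weights all nodes and aggregates them into one pooled embedding.

```python
import numpy as np

def attention_graph_pool(X, W_heads):
    """Multi-head full-attention pooling over node features (illustrative sketch).

    X: (N, d) node features; W_heads: (H, d), one scoring vector per head.
    Each head attends to every node (full attention) and returns a pooled
    vector, so the output is (H, d): one coarsened node per head.
    """
    scores = W_heads @ X.T                         # (H, N) attention logits
    scores = scores - scores.max(axis=1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over nodes
    return attn @ X                                # (H, d) pooled embeddings
```

Because each pooled row is a convex combination of node features, the output stays inside the per-dimension range of the input, which loosely mirrors how clustering-style pooling summarizes, rather than discards, node information.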
no code implementations • 23 Sep 2022 • Honghu Pan, Qiao Liu, Yongyong Chen, Yunqi He, Yuan Zheng, Feng Zheng, Zhenyu He
Finally, we propose a dual-attention method consisting of node-attention and time-attention to obtain the temporal graph representation from the node embeddings, where the self-attention mechanism is employed to learn the importance of each node and each frame.
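The two-stage readout described above can be sketched as follows. This is an illustrative reconstruction under assumed shapes, not the paper's exact layer: `w_node` and `w_time` are hypothetical scoring parameters standing in for the learned node-attention and time-attention modules.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_attention(H, w_node, w_time):
    """Dual-attention readout sketch for temporal graph representations.

    H: (T, N, d) node embeddings over T frames. Node-attention first pools
    the N nodes within each frame; time-attention then pools the T frame
    representations into a single (d,) sequence-level vector.
    """
    node_scores = H @ w_node                            # (T, N) node importance
    a_node = softmax(node_scores, axis=1)
    frame_repr = (a_node[..., None] * H).sum(axis=1)    # (T, d) per-frame vectors
    time_scores = frame_repr @ w_time                   # (T,) frame importance
    a_time = softmax(time_scores, axis=0)
    return (a_time[:, None] * frame_repr).sum(axis=0)   # (d,) final representation
```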
no code implementations • 23 Sep 2022 • Honghu Pan, Yongyong Chen, Tingyang Xu, Yunqi He, Zhenyu He
Extensive experiments on two large gait recognition datasets, i.e., CASIA-B and OUMVLP-Pose, demonstrate that our method outperforms the baseline model and existing pose-based methods by a large margin.
no code implementations • 27 Apr 2022 • Haijin Zeng, Shaoguang Huang, Yongyong Chen, Hiep Luong, Wilfried Philips
Based on this fact, we propose a novel TV regularization to simultaneously characterize the sparsity and low-rank priors of the gradient map (LRSTV).
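A combined sparsity-plus-low-rank penalty on the gradient map can be written, under assumed notation, roughly as follows. This is a sketch of the general form such regularizers take, not necessarily the paper's exact LRSTV formulation:

```latex
\min_{\mathcal{X}} \; \frac{1}{2}\,\lVert \mathcal{Y} - \mathcal{X} \rVert_F^2
  \;+\; \lambda_1 \lVert \nabla \mathcal{X} \rVert_1
  \;+\; \lambda_2 \lVert \nabla \mathcal{X} \rVert_*
```

Here $\nabla \mathcal{X}$ denotes the gradient map: the $\ell_1$ term is the classical TV sparsity prior, while the nuclear-norm term $\lVert \cdot \rVert_*$ encodes its low-rank prior; $\lambda_1, \lambda_2$ balance the two.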
no code implementations • 22 Apr 2022 • Chong Peng, Yiqun Zhang, Yongyong Chen, Zhao Kang, Chenglizhao Chen, Qiang Cheng
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
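As background for the parts-based representations mentioned above, the classical NMF baseline can be sketched with the standard Lee-Seung multiplicative updates (this is the textbook algorithm, not the variant proposed in the paper):

```python
import numpy as np

def nmf(V, r, n_iters=200, eps=1e-10, seed=0):
    """Basic NMF via multiplicative updates: V ~ W @ H with W, H >= 0,
    minimizing the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iters):
        # Updates multiply by nonnegative ratios, so W, H stay nonnegative
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The nonnegativity constraint is what yields parts-based factors: each column of `V` is reconstructed as an additive (never subtractive) combination of the columns of `W`.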
no code implementations • 8 Jan 2022 • Chong Peng, Yang Liu, Yongyong Chen, Xinxin Wu, Andrew Cheng, Zhao Kang, Chenglizhao Chen, Qiang Cheng
In this paper, we propose a novel nonconvex approach to robust principal component analysis for HSI denoising, which focuses on simultaneously developing more accurate approximations to both rank and column-wise sparsity for the low-rank and sparse components, respectively.
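For context, the standard convex RPCA baseline that such nonconvex approaches tighten decomposes the (matricized) HSI $X$ into a low-rank part $L$ and a column-wise sparse part $S$:

```latex
\min_{L,\,S} \; \lVert L \rVert_* + \lambda \lVert S \rVert_{2,1}
  \quad \text{s.t.} \quad X = L + S
```

The nuclear norm $\lVert L \rVert_*$ is the convex surrogate for $\operatorname{rank}(L)$ and the $\ell_{2,1}$ norm is the surrogate for column-wise sparsity; the approach described above replaces both with tighter nonconvex approximations (the exact surrogates used are specified in the paper, not here).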
no code implementations • 25 May 2021 • Yang Liu, Qian Zhang, Yongyong Chen, Qiang Cheng, Chong Peng
Removing heavy and mixed types of noise from hyperspectral images (HSIs) is a challenging task.