2 code implementations • ECCV 2020 • Longrong Yang, Fanman Meng, Hongliang Li, Qingbo Wu, Qishang Cheng
Specifically, in instance segmentation, noisy class labels play different roles in the foreground-background sub-task and the foreground-instance sub-task.
3 code implementations • 6 Apr 2024 • Mingxin Huang, Hongliang Li, Yuliang Liu, Xiang Bai, Lianwen Jin
Subsequently, we introduce a Bridge that connects the locked detector and recognizer through a zero-initialized neural network.
no code implementations • 1 Apr 2024 • Jian Jiao, Yu Dai, Hefei Mei, Heqian Qiu, Chuanyang Gong, Shiyuan Tang, Xinpeng Hao, Hongliang Li
We therefore propose SNRO, which slightly shifts the features of new classes so as to remember old classes.
no code implementations • 27 Feb 2024 • Huiyu Xiong, Lanxiao Wang, Heqian Qiu, Taijin Zhao, Benliu Qiu, Hongliang Li
Further, to better constrain the knowledge characteristics of old and new tasks at the feature level, we design the Two-stage Knowledge Distillation (TsKD), which learns the new task well while balancing it against the old ones.
no code implementations • 15 Jan 2024 • Mingxin Huang, Dezhi Peng, Hongliang Li, Zhenghao Peng, Chongyu Liu, Dahua Lin, Yuliang Liu, Xiang Bai, Lianwen Jin
In this paper, we propose a new end-to-end scene text spotting framework termed SwinTextSpotter v2, which seeks to find a better synergy between text detection and recognition.
no code implementations • 27 Dec 2023 • Hefei Mei, Taijin Zhao, Shiyuan Tang, Heqian Qiu, Lanxiao Wang, Minjian Zhang, Fanman Meng, Hongliang Li
By transferring the knowledge of the IFC from base training to fine-tuning, it generates plentiful novel samples to calibrate the novel-class distribution.
no code implementations • 27 Nov 2023 • Lei Wang, Qingbo Wu, Desen Yuan, King Ngi Ngan, Hongliang Li, Fanman Meng, Linfeng Xu
Learning based image quality assessment (IQA) models have obtained impressive performance with the help of reliable subjective quality labels, where mean opinion score (MOS) is the most popular choice.
no code implementations • 10 Oct 2023 • Zhaofeng Shi, Qingbo Wu, Fanman Meng, Linfeng Xu, Hongliang Li
Firstly, a Cross-modal Cognitive Consensus Inference Module (C3IM) is developed to extract a unified-modal label by integrating audio/visual classification confidence and similarities of modality-agnostic label embeddings.
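The C3IM described above fuses two signals: per-modality classification confidences and similarities between modality-agnostic label embeddings. A minimal sketch of that idea, assuming a hypothetical `unified_label` function and simple cosine-similarity propagation (the paper's exact fusion rule is not given in this excerpt):

```python
import numpy as np

def unified_label(audio_conf, visual_conf, label_emb):
    """Pick a unified-modal label by fusing per-class confidences with
    similarities of modality-agnostic label embeddings (an assumption)."""
    # Cosine similarity between every pair of class label embeddings.
    norm = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
    sim = norm @ norm.T
    # Propagate each modality's confidence to semantically close classes,
    # then take the class on which both modalities agree most strongly.
    consensus = (sim @ audio_conf) * (sim @ visual_conf)
    return int(np.argmax(consensus))
```

With orthogonal label embeddings this reduces to picking the class whose audio and visual confidences jointly dominate.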
no code implementations • 18 Sep 2023 • Hongliang Li, Herschel C. Pangborn, Ilya Kovalenko
To improve the scheduling and control of batch manufacturing processes, we propose a system-level energy-efficient Digital Twin framework that considers Time-of-Use (TOU) energy pricing for runtime decision-making.
1 code implementation • 26 Jan 2023 • Linfeng Xu, Qingbo Wu, Lili Pan, Fanman Meng, Hongliang Li, Chiyuan He, Hanxin Wang, Shaoxu Cheng, Yu Dai
However, the lack of related datasets hinders the development of multi-modal deep learning for egocentric activity recognition.
no code implementations • CVPR 2023 • Benliu Qiu, Hongliang Li, Haitao Wen, Heqian Qiu, Lanxiao Wang, Fanman Meng, Qingbo Wu, Lili Pan
We place continual learning into a causal framework, based on which we find the task-induced bias is reduced naturally by two underlying mechanisms in task and domain incremental learning.
no code implementations • CVPR 2023 • Chao Shang, Hongliang Li, Fanman Meng, Qingbo Wu, Heqian Qiu, Lanxiao Wang
Most existing methods are based on convolutional networks and prevent forgetting through knowledge distillation, which (1) requires adding extra convolutional layers to predict new classes, and (2) fails to distinguish regions corresponding to old versus new classes during distillation, roughly distilling all the features and thus limiting the learning of new classes.
no code implementations • ICCV 2023 • Haoyang Cheng, Haitao Wen, Xiaoliang Zhang, Heqian Qiu, Lanxiao Wang, Hongliang Li
In order to address catastrophic forgetting without overfitting on the rehearsal samples, we propose Augmentation Stability Rehearsal (ASR) in this paper, which selects the most representative and discriminative samples by estimating the augmentation stability for rehearsal.
1 code implementation • 15 Sep 2022 • Rui Ma, Qingbo Wu, King Ngi Ngan, Hongliang Li, Fanman Meng, Linfeng Xu
More specifically, we develop a dynamic parameter isolation strategy to sequentially update the task-specific parameter subsets, which are non-overlapped with each other.
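The key property of the parameter-isolation strategy above is that the task-specific parameter subsets are pairwise disjoint. A minimal sketch of sequentially carving non-overlapping masks out of a shared parameter vector, assuming a hypothetical `allocate_task_mask` helper and a placeholder first-fit selection rule:

```python
import numpy as np

def allocate_task_mask(free_mask, fraction):
    """Reserve a fraction of the still-free parameters for a new task.
    Masks of different tasks are disjoint: once a slot is taken it is
    removed from the free pool and never updated by later tasks."""
    free_idx = np.flatnonzero(free_mask)
    take = max(1, int(len(free_idx) * fraction))
    chosen = free_idx[:take]          # placeholder selection rule
    task_mask = np.zeros_like(free_mask)
    task_mask[chosen] = 1
    free_mask[chosen] = 0             # shrink the free pool in place
    return task_mask

# Sequentially allocate non-overlapping subsets for three tasks.
free = np.ones(10, dtype=int)
masks = [allocate_task_mask(free, 0.3) for _ in range(3)]
```

In practice the selection rule would score parameters by task relevance rather than taking the first free slots.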
no code implementations • 19 Jul 2022 • Yifan Wang, Lin Zhang, Ran Song, Hongliang Li, Paul L. Rosin, Wei zhang
Specifically, we introduce a knowability-based labeling scheme which can be divided into two steps: 1) Knowability-guided detection of known and unknown samples based on the intrinsic structure of the neighborhoods of samples, where we leverage the first singular vectors of the affinity matrices to obtain the knowability of every target sample.
no code implementations • 16 Jun 2022 • Heqian Qiu, Hongliang Li, Taijin Zhao, Lanxiao Wang, Qingbo Wu, Fanman Meng
Unfortunately, no effort has been made to explore crowd understanding in the multi-modal domain that bridges natural language and computer vision.
no code implementations • 29 Sep 2021 • Xiao Jing, Zhenwei Zhu, Hongliang Li, Xin Pei, Yoshua Bengio, Tong Che, Hongyong Song
One of the greatest challenges of reinforcement learning is efficient exploration, especially when training signals are sparse or deceptive.
1 code implementation • 5 Apr 2021 • Haoran Wei, Qingbo Wu, Hui Li, King Ngi Ngan, Hongliang Li, Fanman Meng, Linfeng Xu
In this paper, we propose a Non-Homogeneous Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph reasoning.
no code implementations • 28 Mar 2021 • Qishang Cheng, Hongliang Li, Qingbo Wu, King Ngi Ngan
Then, we feed the SARs of the whole batch to a normalization function to get the weights for each sample.
no code implementations • 11 Mar 2021 • Jian Xiong, Hao Gao, Miaohui Wang, Hongliang Li, King Ngi Ngan, Weisi Lin
In video-based dynamic point cloud compression (V-PCC), 3D point clouds are projected onto 2D images for compressing with the existing video codecs.
1 code implementation • ICCV 2021 • Heqian Qiu, Hongliang Li, Qingbo Wu, Jianhua Cui, Zichen Song, Lanxiao Wang, Minjian Zhang
In this paper, we propose a novel anchor-free object detection network, called CrossDet, which uses a set of growing cross lines along horizontal and vertical axes as object representations.
no code implementations • 14 Oct 2020 • Hongliang Li, Manish Bhatt, Zhen Qu, Shiming Zhang, Martin C. Hartel, Ali Khademhosseini, Guy Cloutier
It is known that changes in the mechanical properties of tissues are associated with the onset and progression of certain diseases.
no code implementations • 3 Aug 2020 • Hongliang Li, Tal Mezheritsky, Liset Vazquez Romaguera, Samuel Kadoury
Moreover, it is found that the speckle reduction using our deep learning model contributes to improving the 3D registration performance.
1 code implementation • International Conference on Computer Vision Workshops 2019 • Dawei Du, Pengfei Zhu, Longyin Wen, Xiao Bian, Haibin Lin, QinGhua Hu, Tao Peng, Jiayu Zheng, Xinyao Wang, Yue Zhang, Liefeng Bo, Hailin Shi, Rui Zhu, Aashish Kumar, Aijin Li, Almaz Zinollayev, Anuar Askergaliyev, Arne Schumann, Binjie Mao, Byeongwon Lee, Chang Liu, Changrui Chen, Chunhong Pan, Chunlei Huo, Da Yu, Dechun Cong, Dening Zeng, Dheeraj Reddy Pailla, Di Li, Dong Wang, Donghyeon Cho, Dongyu Zhang, Furui Bai, George Jose, Guangyu Gao, Guizhong Liu, Haitao Xiong, Hao Qi, Haoran Wang, Heqian Qiu, Hongliang Li, Huchuan Lu, Ildoo Kim, Jaekyum Kim, Jane Shen, Jihoon Lee, Jing Ge, Jingjing Xu, Jingkai Zhou, Jonas Meier, Jun Won Choi, Junhao Hu, Junyi Zhang, Junying Huang, Kaiqi Huang, Keyang Wang, Lars Sommer, Lei Jin, Lei Zhang
Results of 33 object detection algorithms are presented.
no code implementations • 14 Oct 2019 • Yuwei Yang, Fanman Meng, Hongliang Li, Qingbo Wu, Xiaolong Xu, Shuai Chen
The result by the matrix transformation can be regarded as an attention map with high-level semantic cues, based on which a transformation module can be built simply. The proposed transformation module is a general module that can be used to replace the transformation module in the existing few-shot segmentation frameworks.
Ranked #79 on Few-Shot Semantic Segmentation on PASCAL-5i (5-Shot)
no code implementations • 26 Sep 2019 • Qingbo Wu, Lei Wang, King N. Ngan, Hongliang Li, Fanman Meng, Linfeng Xu
Then, a subjective study is conducted on our DQA database, which collects the subject-rated scores of all de-rained images.
no code implementations • 21 Sep 2019 • Kaixu Huang, Fanman Meng, Hongliang Li, Shuai Chen, Qingbo Wu, King N. Ngan
Moreover, a new orthogonal module and a two-branch based CAM generation method are proposed to generate class regions that are orthogonal and complementary.
no code implementations • 19 Sep 2019 • Yuwei Yang, Fanman Meng, Hongliang Li, King N. Ngan, Qingbo Wu
This paper studies few-shot segmentation, the task of predicting foreground masks of unseen classes from only a few annotations, aided by a set of existing rich annotations.
no code implementations • 23 Jan 2019 • Fanman Meng, Kaixu Huang, Hongliang Li, Qingbo Wu
Existing methods generate the class activation map (CAM) from a fixed set of classes (i.e., using all the classes), so the discriminative cues between class pairs are not considered.
no code implementations • 10 Jan 2019 • Lei Ma, Hongliang Li, Qingbo Wu, Fanman Meng, King Ngi Ngan
Finally, we propose a hierarchy neighborhood discriminative hashing loss to unify the single-label and multi-label image retrieval problems within a one-stream deep neural network architecture.
no code implementations • ECCV 2018 • Hengcan Shi, Hongliang Li, Fanman Meng, Qingbo Wu
On the other hand, the relationships among different image regions are not considered either, even though they are crucial for eliminating undesired foreground objects according to a specific query.
no code implementations • 15 May 2017 • Qingbo Wu, Hongliang Li, Fanman Meng, King N. Ngan
By modifying the perception threshold, we can illustrate the sorting accuracy with a more sophisticated SA-ST curve, rather than a single rank correlation coefficient.
no code implementations • CVPR 2016 • Kede Ma, Qingbo Wu, Zhou Wang, Zhengfang Duanmu, Hongwei Yong, Hongliang Li, Lei Zhang
We first build a database that contains 4,744 source natural images, together with 94,880 distorted images created from them.