no code implementations • 8 May 2024 • Prannay Kaul, Zhizhong Li, Hao Yang, Yonatan Dukler, Ashwin Swaminathan, C. J. Taylor, Stefano Soatto
By evaluating a large selection of recent LVLMs using public datasets, we show that improvements in existing metrics do not lead to a reduction in Type I hallucinations, and that established benchmarks for measuring Type I hallucinations are incomplete.
1 code implementation • ICCV 2023 • Haoqi Wang, Zhizhong Li, Wayne Zhang
We generalize the class vectors found in neural networks to linear subspaces (i.e., points in the Grassmann manifold) and show that the Grassmann Class Representation (GCR) enables the simultaneous improvement in accuracy and feature transferability.
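The core idea can be sketched with a toy scoring rule: instead of a dot product with a single class vector, the logit for a class becomes the norm of the feature's projection onto that class's subspace. The 3-D example below is illustrative only; the variable names and dimensions are not from the paper's code.

```python
import numpy as np

def subspace_score(x, basis):
    # basis: (d, k) matrix with orthonormal columns spanning the class subspace.
    # The class logit is the norm of the projection of the feature x onto it.
    return np.linalg.norm(basis.T @ x)

x = np.array([1.0, 2.0, 0.5])                   # a feature vector
class_vector = np.array([[1.0], [0.0], [0.0]])  # k=1: an ordinary class vector
class_plane = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [0.0, 0.0]])            # k=2: a 2-D class subspace

score_1d = subspace_score(x, class_vector)  # |x[0]| = 1.0
score_2d = subspace_score(x, class_plane)   # ||(1, 2)|| = sqrt(5)
```

With k = 1 this reduces to the usual class-vector score (up to sign), so the subspace view strictly generalizes it.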
no code implementations • 20 Sep 2022 • Ke Bai, Aonan Zhang, Zhizhong Li, Ricardo Henao, Chong Wang, Lawrence Carin
In recommendation systems, items are likely to be exposed to various users and we would like to learn about the familiarity of a new user with an existing item.
1 code implementation • CVPR 2022 • Tz-Ying Wu, Gurumurthy Swaminathan, Zhizhong Li, Avinash Ravichandran, Nuno Vasconcelos, Rahul Bhotika, Stefano Soatto
We hypothesize that a strong base model can provide a good representation for novel classes and incremental learning can be done with small adaptations.
2 code implementations • CVPR 2022 • Haoqi Wang, Zhizhong Li, Litong Feng, Wayne Zhang
Most of the existing Out-Of-Distribution (OOD) detection algorithms depend on a single input source: the feature, the logit, or the softmax probability.
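As an illustration of one such single-source score, here is a minimal sketch of the maximum-softmax-probability baseline, which uses only the softmax output (the paper's point is that combining sources works better; the toy logits below are made up):

```python
import numpy as np

def msp_score(logits):
    # OOD score from a single source: the softmax probability.
    # A low maximum probability suggests the input is out-of-distribution.
    z = logits - logits.max(axis=-1, keepdims=True)  # for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

peaked = msp_score(np.array([8.0, 0.1, 0.2]))  # confident, in-distribution-like
flat = msp_score(np.array([1.0, 1.1, 0.9]))    # near-uniform, OOD-like
```

Thresholding this score gives a detector; a score built from the feature norm or the raw logits would use the other two sources the sentence mentions.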
no code implementations • 29 Sep 2021 • Zhizhong Li, Avinash Ravichandran, Charless Fowlkes, Marzia Polito, Rahul Bhotika, Stefano Soatto
Indeed, we observe experimentally that standard distillation of task-specific teachers, or using these teacher representations directly, reduces downstream transferability compared to a task-agnostic generalist model.
1 code implementation • 14 Aug 2021 • Zhanghui Kuang, Hongbin Sun, Zhizhong Li, Xiaoyu Yue, Tsui Hin Lin, Jianyong Chen, Huaqiang Wei, Yiqin Zhu, Tong Gao, Wenwei Zhang, Kai Chen, Wayne Zhang, Dahua Lin
We present MMOCR, an open-source toolbox which provides a comprehensive pipeline for text detection and recognition, as well as their downstream tasks such as named entity recognition and key information extraction.
no code implementations • 16 Jul 2021 • Zhizhong Li, Avinash Ravichandran, Charless Fowlkes, Marzia Polito, Rahul Bhotika, Stefano Soatto
Traditionally, distillation has been used to train a student model to emulate the input/output functionality of a teacher.
1 code implementation • 21 Oct 2020 • Derek Hoiem, Tanmay Gupta, Zhizhong Li, Michal M. Shlapentokh-Rothman
Learning curves model a classifier's test error as a function of the number of training samples.
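A minimal sketch of this idea, assuming a simple power-law form err ≈ a·n^(-b), which is linear in log-log space (the error measurements below are hypothetical; the paper studies its own parametric forms):

```python
import numpy as np

# Hypothetical measurements: test error at several training-set sizes.
n = np.array([100, 200, 400, 800, 1600])
err = np.array([0.30, 0.24, 0.19, 0.15, 0.12])

# Fit err ~ a * n**slope by linear regression in log-log space.
# np.polyfit returns coefficients highest-degree first: (slope, intercept).
slope, intercept = np.polyfit(np.log(n), np.log(err), 1)

# Extrapolate the curve to a larger training set.
predicted_3200 = np.exp(intercept) * 3200 ** slope
```

The fitted slope is negative (error falls as data grows), and the extrapolated error at n = 3200 lands below the last measured point.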
no code implementations • 2 Feb 2020 • Xingxing Zou, Zhizhong Li, Ke Bai, Dahua Lin, Waikeung Wong
In this paper, we build an outfit evaluation system which provides feedback consisting of a judgment with a convincing explanation.
2 code implementations • CVPR 2020 • Hongxu Yin, Pavlo Molchanov, Zhizhong Li, Jose M. Alvarez, Arun Mallya, Derek Hoiem, Niraj K. Jha, Jan Kautz
We introduce DeepInversion, a new method for synthesizing images from the image distribution used to train a deep neural network.
1 code implementation • NeurIPS 2019 • Hao Sun, Zhizhong Li, Xiaotong Liu, Dahua Lin, Bolei Zhou
This approach learns from Hindsight Inverse Dynamics based on Hindsight Experience Replay, enabling learning in a self-imitated manner so that it can be trained with supervised learning.
no code implementations • ICCV 2019 • Sijie Yan, Zhizhong Li, Yuanjun Xiong, Huahan Yan
It captures the temporal structure at multiple scales through the GP prior and the temporal convolutions, and establishes the spatial connection between the latent vectors and the skeleton graphs via a novel graph refining scheme.
Ranked #2 on Human action generation on NTU RGB+D
no code implementations • 25 Sep 2019 • Lanxin Lei, Zhizhong Li, Xiaoyang Li, Cong Qiu, Dahua Lin
The estimation of advantage is crucial for a number of reinforcement learning algorithms, as it directly influences the choices of future paths.
no code implementations • 15 Sep 2019 • Lanxin Lei, Zhizhong Li, Dahua Lin
The estimation of advantage is crucial for a number of reinforcement learning algorithms, as it directly influences the choices of future paths.
no code implementations • 16 Aug 2019 • Zhizhong Li, Linjie Luo, Sergey Tulyakov, Qieyun Dai, Derek Hoiem
Our key idea to improve domain adaptation is to introduce a separate anchor task (such as facial landmarks) whose annotations can be obtained at no cost or are already available on both synthetic and real datasets.
no code implementations • 18 Jun 2019 • Xinglong Zhang, Wei Jiang, Shuyou Yu, Xin Xu, Zhizhong Li
So far, many control algorithms have been developed for singularly perturbed systems.
no code implementations • ICLR 2019 • Zhizhong Li, Derek Hoiem
We compare a number of methods from related fields such as calibration and epistemic uncertainty modeling, as well as two proposed methods that reduce overconfident errors on samples from an unknown novel distribution without drastically increasing evaluation time: (1) G-distillation, which trains an ensemble of classifiers and then distills it into a single model using both labeled and unlabeled examples, and (2) NCR, which reduces prediction confidence based on a novelty detection score.
1 code implementation • CVPR 2020 • Zhizhong Li, Derek Hoiem
In this paper, we compare and evaluate several methods to improve confidence estimates for unfamiliar and familiar samples.
no code implementations • 8 Jan 2018 • Yu Cheng, Angus Wong, Kevin Hung, Zhizhong Li, Weitong Li, Jun Zhang
That is, the odor datasets grow dynamically, with both the number of training samples and the number of classes increasing over time.
1 code implementation • 25 Oct 2017 • Chuhang Zou, Ruiqi Guo, Zhizhong Li, Derek Hoiem
In this paper, we aim to interpret indoor scenes from one RGBD image.
no code implementations • 7 Sep 2017 • Zhizhong Li, Dahua Lin
Specialized classifiers, namely those dedicated to a subset of classes, are often adopted in real-world recognition systems.
3 code implementations • CVPR 2017 • Xingcheng Zhang, Zhizhong Li, Chen Change Loy, Dahua Lin
A number of studies have shown that increasing the depth or width of convolutional networks is a rewarding approach to improve the performance of image recognition.
10 code implementations • 29 Jun 2016 • Zhizhong Li, Derek Hoiem
We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities.
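The core mechanism is a distillation term: on new-task data, the network's old-task outputs are kept close to responses recorded before training began. A minimal numpy sketch of such a combined loss, assuming recorded old-task logits are available (the temperature, weight, and toy logits are illustrative, not the paper's settings):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def lwf_loss(new_logits, new_label, old_logits, recorded_old_logits,
             T=2.0, lam=1.0):
    """Cross-entropy on the new task plus a distillation term that keeps
    old-task outputs close to the responses recorded before training."""
    ce = -np.log(softmax(new_logits)[new_label])
    p_old = softmax(recorded_old_logits, T)  # soft targets (frozen responses)
    p_cur = softmax(old_logits, T)           # current old-task predictions
    distill = -(p_old * np.log(p_cur)).sum() # soft cross-entropy
    return ce + lam * distill

# The distillation term is smallest when current old-task outputs match the
# recorded ones; drifting away from them increases the loss.
stable = lwf_loss([2.0, 0.0, 0.0], 0, [3.0, 1.0, 0.0], [3.0, 1.0, 0.0])
drifted = lwf_loss([2.0, 0.0, 0.0], 0, [0.0, 1.0, 3.0], [3.0, 1.0, 0.0])
```

Only new-task data is needed to evaluate both terms, which is what lets the method preserve original capabilities without revisiting old-task data.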
Ranked #4 on Domain 11-5 on Cityscapes
no code implementations • CVPR 2015 • Zhizhong Li, Deli Zhao, Zhouchen Lin, Edward Y. Chang
In the line search step, R3MC approximates the minimum point on the search curve by minimizing along the line tangent to the curve.