
Zero-shot Object Detection Through Vision-Language Embedding Alignment

Recent approaches have shown that training deep neural networks directly on large-scale image-text pair collections enables zero-shot transfer to various recognition tasks. A central issue is how this can be generalized to object detection, which involves the non-semantic task of localization as well as the semantic task of classification. To solve this problem, we introduce a vision-language embedding alignment method that transfers the generalization capabilities of a pretrained model such as CLIP to an object detector like YOLOv5. We formulate a loss function that aligns the image and text embeddings from the pretrained CLIP model with the modified semantic prediction head of the detector. With this method, we are able to train an object detector that achieves state-of-the-art performance on the COCO, ILSVRC, and Visual Genome zero-shot detection benchmarks. During inference, our model can be adapted to detect any number of object classes without additional training. We also find that standard object detection scaling transfers well to our method, yielding consistent improvements across various scales of YOLOv5 models as well as the YOLOv3 model. Lastly, we develop a self-labeling method that provides a significant score improvement without needing extra images or labels.
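To make the alignment idea concrete, below is a minimal PyTorch sketch of how a detector's per-box semantic embeddings could be trained to live in CLIP's embedding space: a cross-entropy term over cosine similarities to class-name text embeddings plus a term pulling the box embeddings toward CLIP image embeddings of the matched crops. The function names, temperature, and exact mix of loss terms are illustrative assumptions, not the paper's verbatim recipe.

```python
import torch
import torch.nn.functional as F


def alignment_loss(pred_embed, clip_image_embed, text_embed, target_cls,
                   tau=0.01, w_img=1.0):
    """Sketch of a vision-language alignment loss (hypothetical weighting).

    pred_embed:       (N, D) per-box embeddings from the detector's semantic head
    clip_image_embed: (N, D) CLIP image embeddings of the matched ground-truth crops
    text_embed:       (C, D) CLIP text embeddings of the training class names
    target_cls:       (N,)   ground-truth class indices
    """
    pred = F.normalize(pred_embed, dim=-1)
    txt = F.normalize(text_embed, dim=-1)
    img = F.normalize(clip_image_embed, dim=-1)

    # Text alignment: classify each box by cosine similarity to class-name embeddings.
    loss_text = F.cross_entropy(pred @ txt.t() / tau, target_cls)

    # Image alignment: pull box embeddings toward CLIP image embeddings of the crops.
    loss_image = F.l1_loss(pred, img)

    return loss_text + w_img * loss_image


def classify_boxes(pred_embed, text_embed):
    """Zero-shot inference: match boxes against text embeddings of any class names."""
    sims = F.normalize(pred_embed, dim=-1) @ F.normalize(text_embed, dim=-1).t()
    return sims.argmax(dim=-1)


if __name__ == "__main__":
    N, C, D = 8, 5, 512  # boxes, classes, embedding dim (512 for CLIP ViT-B/32)
    loss = alignment_loss(torch.randn(N, D), torch.randn(N, D),
                          torch.randn(C, D), torch.randint(0, C, (N,)))
    print(loss.item())
```

Because classification reduces to nearest-text-embedding matching, swapping in text embeddings for a new set of class names at inference time changes what the detector recognizes without any retraining, which is the zero-shot property described above.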
