Unified Perception: Efficient Depth-Aware Video Panoptic Segmentation with Minimal Annotation Costs

3 Mar 2023 · Kurt Stolle, Gijs Dubbelman

Depth-aware video panoptic segmentation is a promising approach to camera-based scene understanding. However, current state-of-the-art methods require costly video annotations and use a complex training pipeline compared to their image-based equivalents. In this paper, we present a new approach titled Unified Perception that achieves state-of-the-art performance without requiring video-based training. Our method employs a simple two-stage cascaded tracking algorithm that (re)uses object embeddings computed in an image-based network. Experimental results on the Cityscapes-DVPS dataset demonstrate that our method achieves an overall DVPQ of 57.1, surpassing state-of-the-art methods. Furthermore, we show that our tracking strategies are effective for long-term object association on KITTI-STEP, achieving an STQ of 59.1, which exceeds the performance of state-of-the-art methods that employ the same backbone network. Code is available at: https://tue-mps.github.io/unipercept
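The abstract does not detail the cascade, so the following is a minimal sketch of what two-stage cascaded tracking over reused object embeddings can look like, assuming embeddings are compared by cosine similarity and associated with Hungarian matching. All names here (`CascadedTracker`, `recent_window`, `threshold`) are hypothetical and illustrative, not the authors' actual API.

```python
# Hypothetical sketch: stage 1 matches detections to recently seen tracks,
# stage 2 tries the leftovers against older, dormant tracks, and anything
# still unmatched spawns a new track ID.
import numpy as np
from scipy.optimize import linear_sum_assignment


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a (N, D) and b (M, D)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T


class CascadedTracker:
    def __init__(self, recent_window: int = 2, threshold: float = 0.5):
        self.recent_window = recent_window  # frames a track counts as "recent"
        self.threshold = threshold          # minimum similarity to accept a match
        self.tracks = {}                    # track id -> (embedding, last frame seen)
        self.next_id = 0

    def _match(self, det_idx, pool, embeddings, frame, ids):
        """Hungarian matching between unmatched detections and a track pool."""
        if not det_idx or not pool:
            return det_idx
        sim = cosine_similarity(
            embeddings[det_idx],
            np.stack([self.tracks[tid][0] for tid in pool]),
        )
        rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
        for r, c in zip(rows, cols):
            if sim[r, c] >= self.threshold:
                ids[det_idx[r]] = pool[c]
                self.tracks[pool[c]] = (embeddings[det_idx[r]], frame)
        return [d for d in det_idx if ids[d] == -1]

    def update(self, embeddings: np.ndarray, frame: int) -> list:
        """Assign a track ID to each detection embedding of the current frame."""
        ids = [-1] * len(embeddings)
        unmatched = list(range(len(embeddings)))
        # Stage 1: recently seen tracks.
        recent = [t for t, (_, last) in self.tracks.items()
                  if frame - last <= self.recent_window]
        unmatched = self._match(unmatched, recent, embeddings, frame, ids)
        # Stage 2: dormant tracks, enabling re-identification after occlusion.
        dormant = [t for t, (_, last) in self.tracks.items()
                   if frame - last > self.recent_window]
        unmatched = self._match(unmatched, dormant, embeddings, frame, ids)
        # Remaining detections start new tracks.
        for d in unmatched:
            ids[d] = self.next_id
            self.tracks[self.next_id] = (embeddings[d], frame)
            self.next_id += 1
        return ids
```

Because the embeddings already exist as a by-product of the image-based network, a cascade like this adds tracking at essentially no extra training cost, e.g. `CascadedTracker().update(frame_embeddings, frame=t)` per frame.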

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Depth-aware Video Panoptic Segmentation | Cityscapes-DVPS | Unified Perception | DVPQ | 57.1 | #1 |
| Video Panoptic Segmentation | KITTI-STEP | Unified Perception | STQ | 59.1 | #6 |
| Video Panoptic Segmentation | KITTI-STEP | Unified Perception | AQ | 56.4 | #6 |
| Video Panoptic Segmentation | KITTI-STEP | Unified Perception | SQ | 61.9 | #6 |
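
For reference, STQ (Segmentation and Tracking Quality) is defined in the STEP benchmark as the geometric mean of Association Quality (AQ) and Segmentation Quality (SQ), which the reported values satisfy:

$$\mathrm{STQ} = \sqrt{\mathrm{AQ} \times \mathrm{SQ}} = \sqrt{56.4 \times 61.9} \approx 59.1$$

DVPQ (Depth-aware Video Panoptic Quality) extends video panoptic quality with a per-pixel depth check: a pixel prediction only counts as correct if its estimated depth lies within a relative-error threshold of the ground-truth depth.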
