SwinTrack: A Simple and Strong Baseline for Transformer Tracking

2 Dec 2021 · Liting Lin, Heng Fan, Zhipeng Zhang, Yong Xu, Haibin Ling

Recently, the Transformer has been extensively explored in tracking and has shown state-of-the-art (SOTA) performance. However, existing efforts mainly focus on fusing and enhancing features generated by convolutional neural networks (CNNs); the potential of the Transformer for representation learning remains under-explored. In this paper, we aim to further unleash the power of the Transformer by proposing a simple yet efficient fully-attentional tracker, dubbed SwinTrack, within the classic Siamese framework. In particular, both representation learning and feature fusion in SwinTrack leverage the Transformer architecture, enabling richer feature interactions for tracking than pure-CNN or hybrid CNN-Transformer frameworks. In addition, to further enhance robustness, we present a novel motion token that embeds the historical target trajectory, improving tracking by providing temporal context. The motion token is lightweight, adding negligible computation, yet brings clear gains. In thorough experiments, SwinTrack outperforms existing approaches on multiple benchmarks. In particular, on the challenging LaSOT benchmark, SwinTrack sets a new record with a 0.713 SUC score, and it achieves SOTA results on other benchmarks as well. We expect SwinTrack to serve as a solid baseline for Transformer tracking and to facilitate future research. Our code and results are released at https://github.com/LitingLin/SwinTrack.
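The core design is a Siamese pipeline in which a shared Transformer encoder produces features for both the template and the search region, attention fuses the two token sets, and a single extra token carries the target's motion history. Below is a minimal PyTorch sketch of that idea; the module names, dimensions, concatenation-based fusion, and box head are illustrative assumptions, not SwinTrack's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class SiameseTransformerTracker(nn.Module):
    """Toy fully-attentional Siamese tracker with a motion token.

    A shared (Siamese) Transformer encoder embeds the template and the
    search region; fusion is self-attention over the concatenated tokens
    plus one motion token that summarizes past target boxes. All names
    and sizes here are illustrative, not SwinTrack's real code.
    """

    def __init__(self, dim=256, depth=4, heads=8, traj_len=16):
        super().__init__()
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), depth)
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), depth)
        # Embed a flattened history of (x, y, w, h) boxes into one token.
        self.motion_embed = nn.Linear(traj_len * 4, dim)
        self.head = nn.Linear(dim, 4)  # toy box-regression head

    def forward(self, template_tokens, search_tokens, trajectory):
        # template_tokens: (B, Nt, dim); search_tokens: (B, Ns, dim)
        # trajectory: (B, traj_len, 4) past target boxes
        z = self.backbone(template_tokens)  # shared weights for both branches
        x = self.backbone(search_tokens)
        motion = self.motion_embed(trajectory.flatten(1)).unsqueeze(1)  # (B, 1, dim)
        fused = self.fusion(torch.cat([z, x, motion], dim=1))
        # Regress a box from the search-region tokens only.
        search_part = fused[:, z.size(1):z.size(1) + x.size(1)]
        return self.head(search_part.mean(dim=1))

# Toy usage: batch of 2, 64 template tokens, 256 search tokens, 16 past boxes.
model = SiameseTransformerTracker()
box = model(torch.randn(2, 64, 256), torch.randn(2, 256, 256), torch.randn(2, 16, 4))
print(box.shape)  # torch.Size([2, 4])
```

Folding the trajectory into a single token keeps the temporal context nearly free: it grows the fused sequence by one element, which is consistent with the paper's claim of negligible extra computation.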

Results (Visual Object Tracking)

Dataset      Model            Metric                 Value (%)  Global Rank
GOT-10k      SwinTrack-B      Average Overlap (AO)   69.4       #18
GOT-10k      SwinTrack-B      Success Rate @ 0.5     78.0       #15
GOT-10k      SwinTrack-B      Success Rate @ 0.75    64.3       #13
LaSOT        SwinTrack-B-384  AUC                    70.2       #17
LaSOT        SwinTrack-B-384  Normalized Precision   78.4       #16
LaSOT        SwinTrack-B-384  Precision              75.3       #15
TrackingNet  SwinTrack-B-384  Precision              83.2       #9
TrackingNet  SwinTrack-B-384  Normalized Precision   88.2       #13
TrackingNet  SwinTrack-B-384  Accuracy               84.0       #10
