Exemplar-free Online Continual Learning

11 Feb 2022 · Jiangpeng He, Fengqing Zhu

Targeting real-world scenarios, online continual learning aims to learn new tasks from sequentially available data under the condition that each data point is observed only once by the learner. Although recent works have made remarkable progress by storing part of the learned task data as exemplars for knowledge replay, their performance depends heavily on the number of stored exemplars, while storage consumption is a significant constraint in continual learning. In addition, storing exemplars may not always be feasible for certain applications due to privacy concerns. In this work, we propose a novel exemplar-free method that leverages a nearest-class-mean (NCM) classifier, where each class mean is estimated during the training phase over all data seen so far via an online mean update criterion. We focus on the image classification task and conduct extensive experiments on benchmark datasets including CIFAR-100 and Food-1k. The results demonstrate that our method, without using any exemplars, outperforms state-of-the-art exemplar-based approaches by large margins under the standard protocol (20 exemplars per class) and achieves competitive performance even against a larger exemplar size (100 exemplars per class).
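The key mechanism described in the abstract is the online mean update: since each sample is seen only once, class prototypes must be maintained as running averages rather than recomputed from stored data. Below is a minimal sketch of that idea in Python; the `OnlineNCMClassifier` name and the NumPy-based implementation are illustrative assumptions, not the authors' code, and a learned feature extractor is assumed to run upstream of it.

```python
import numpy as np

class OnlineNCMClassifier:
    """Nearest-class-mean (NCM) classifier with single-pass mean updates.

    A minimal sketch: class means are maintained as running averages, so
    each feature vector is consumed exactly once, matching the online
    continual learning setting. Feature extraction (e.g. a CNN backbone)
    is assumed to happen elsewhere; this class is illustrative only.
    """

    def __init__(self, feature_dim: int):
        self.means = {}    # class id -> running mean feature vector
        self.counts = {}   # class id -> number of samples seen so far
        self.feature_dim = feature_dim

    def update(self, feature: np.ndarray, label: int) -> None:
        """Fold one feature vector into the running mean of its class."""
        if label not in self.means:
            self.means[label] = np.zeros(self.feature_dim)
            self.counts[label] = 0
        self.counts[label] += 1
        # Incremental mean: mu_n = mu_{n-1} + (x_n - mu_{n-1}) / n
        self.means[label] += (feature - self.means[label]) / self.counts[label]

    def predict(self, feature: np.ndarray) -> int:
        """Assign the class whose running mean is nearest in Euclidean distance."""
        classes = list(self.means.keys())
        dists = [np.linalg.norm(feature - self.means[c]) for c in classes]
        return classes[int(np.argmin(dists))]


# Illustrative usage, with random vectors standing in for extracted features.
clf = OnlineNCMClassifier(feature_dim=512)
for feature, label in [(np.random.randn(512), 0), (np.random.randn(512), 1)]:
    clf.update(feature, label)
print(clf.predict(np.random.randn(512)))
```

Because prediction reduces to a nearest-neighbor search over per-class means, no exemplars ever need to be stored, which is what makes the approach exemplar-free.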


Datasets

CIFAR-100, Food-1k

Methods

Nearest-Class-Mean (NCM) classifier