no code implementations • 14 Nov 2023 • Hao Quan, Xinjia Li, Dayu Hu, Tianhang Nan, Xiaoyu Cui
The approach enhances the versatility of prototype representations and improves the performance of prototype networks in few-shot pathological image classification tasks.
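A minimal sketch of the prototype-network idea the abstract builds on: class prototypes are mean embeddings of a few support samples, and queries are classified by their nearest prototype. The function names and toy data below are illustrative, not from the paper.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_classes):
    # Class prototype = mean embedding of that class's support samples.
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify(query_embeddings, protos):
    # Assign each query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(
        query_embeddings[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 2-shot episode with 3-dim embeddings.
support = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                    [1.0, 1.0, 1.0], [0.8, 1.0, 1.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_classes=2)
query = np.array([[0.1, 0.1, 0.0], [0.9, 0.9, 1.0]])
print(classify(query, protos))  # → [0 1]
```

In a real few-shot pipeline the embeddings would come from a learned backbone; the paper's contribution concerns how those prototype representations are formed, which this sketch does not cover.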
no code implementations • 2 May 2022 • Xinjia Li, BoYu Chen, Wenlian Lu
FedDKD introduces a decentralized knowledge distillation (DKD) module that distills the knowledge of the local models into the global model by approximating the average of the neural network maps, measured with a divergence-based loss, rather than merely averaging parameters as in prior work.
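The contrast the abstract draws (averaging network outputs under a divergence loss versus averaging parameters) can be sketched as follows. This is a simplified illustration under assumed conditions, not the paper's algorithm: clients hold linear softmax classifiers, and the server distills their averaged output map into a global model by minimizing a KL divergence on shared inputs.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical setup: K clients hold linear classifiers W_k. Instead of
# averaging the W_k (FedAvg-style), distill the *map average*
# (1/K) * sum_k f_k(x) into a global model via a KL-divergence loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                   # shared distillation inputs
local_W = [rng.normal(size=(5, 3)) for _ in range(4)]
target = np.mean([softmax(X @ W) for W in local_W], axis=0)

def mean_kl(W_g):
    # Average KL(target || global prediction) over the inputs.
    p = softmax(X @ W_g)
    return np.mean(np.sum(target * np.log(target / p), axis=1))

W_g = np.zeros((5, 3))                         # global model
kl_init = mean_kl(W_g)
lr = 0.5
for _ in range(500):
    p = softmax(X @ W_g)
    # Gradient of the cross-entropy to the soft targets w.r.t. logits
    # is (p - target); this also minimizes KL(target || p).
    W_g -= lr * X.T @ (p - target) / len(X)
kl = mean_kl(W_g)
print(kl_init, "->", kl)  # divergence to the map average shrinks
```

Parameter averaging can fail when local optima lie in different loss basins; matching the averaged *function* sidesteps that, which is the motivation the abstract gives for DKD.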