no code implementations • 16 Mar 2024 • Mahdi Alehdaghi, Pourya Shamsolmoali, Rafael M. O. Cruz, Eric Granger
In particular, our method minimizes the cross-modal gap by identifying and aligning shared prototypes that capture key discriminative features across modalities, then uses multiple bridging steps based on this information to enhance the feature representation.
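The idea of aligning per-identity prototypes across modalities can be sketched as follows. This is a hypothetical illustration, not the paper's actual loss: it takes each identity's mean embedding in the visible and infrared streams and penalizes their cosine distance (function name and the simple mean/cosine choices are assumptions).

```python
import math

def prototype_alignment_loss(feats_v, feats_i, labels_v, labels_i):
    """Illustrative cross-modal prototype alignment (not the paper's exact loss).

    feats_v / feats_i: lists of feature vectors (lists of floats) from the
    visible and infrared streams; labels_v / labels_i: identity labels.
    For each identity seen in both modalities, the prototype is the mean
    feature vector; the loss is the average cosine distance between the
    two modality prototypes.
    """
    def mean_vec(rows):
        n = len(rows)
        return [sum(r[d] for r in rows) / n for d in range(len(rows[0]))]

    def normalize(v):
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / norm for x in v]

    loss, count = 0.0, 0
    for pid in set(labels_v):
        rows_v = [f for f, l in zip(feats_v, labels_v) if l == pid]
        rows_i = [f for f, l in zip(feats_i, labels_i) if l == pid]
        if rows_v and rows_i:
            pv = normalize(mean_vec(rows_v))
            pi = normalize(mean_vec(rows_i))
            loss += 1.0 - sum(a * b for a, b in zip(pv, pi))  # cosine distance
            count += 1
    return loss / max(count, 1)
```

When both modalities produce identical features for an identity, the loss is zero; orthogonal prototypes give a loss of 1 per identity, so minimizing it pulls the shared, discriminative directions together.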
no code implementations • 6 Jul 2023 • Mahdi Alehdaghi, Arthur Josi, Pourya Shamsolmoali, Rafael M. O. Cruz, Eric Granger
In this paper, the Adaptive Generation of Privileged Intermediate Information training approach is introduced to adapt and generate a virtual domain that bridges discriminant information between the V and I modalities.
1 code implementation • 29 Apr 2023 • Arthur Josi, Mahdi Alehdaghi, Rafael M. O. Cruz, Eric Granger
For realistic evaluation of multimodal (and cross-modal) V-I person ReID models, we propose new challenging corrupted datasets for scenarios where V and I cameras are co-located (CL) and not co-located (NCL).
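A minimal sketch of how a corrupted evaluation copy of an image might be produced — purely illustrative, assuming additive Gaussian noise scaled by a severity level; the actual benchmark uses its own corruption suite, and the function name, scaling constant, and severity scheme here are assumptions.

```python
import random

def corrupt_image(pixels, severity=1, seed=None):
    """Illustrative image corruption for robustness evaluation (assumed,
    not the benchmark's actual transform set).

    pixels: 2-D list of grayscale values in [0, 255].
    severity: integer level; noise standard deviation grows with it.
    seed: optional seed so corrupted test sets are reproducible.
    """
    rng = random.Random(seed)
    sigma = 0.04 * severity * 255  # assumed severity-to-noise mapping
    return [[min(255.0, max(0.0, v + rng.gauss(0, sigma))) for v in row]
            for row in pixels]
```

Fixing the seed makes the corrupted set deterministic, which matters when comparing CL and NCL models on identical inputs.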
1 code implementation • 22 Nov 2022 • Arthur Josi, Mahdi Alehdaghi, Rafael M. O. Cruz, Eric Granger
Several deep learning models have been proposed for visible-infrared (V-I) person ReID to recognize individuals from images captured using RGB and IR cameras.
1 code implementation • 19 Sep 2022 • Mahdi Alehdaghi, Arthur Josi, Rafael M. O. Cruz, Eric Granger
This paper introduces a novel approach for creating an intermediate virtual domain that acts as a bridge between the two main domains (i.e., RGB and IR modalities) during training.
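One simple way to picture such an intermediate domain is a per-image blend of the grayscale visible image and the infrared image. The sketch below is a hypothetical illustration of the bridging idea, not the paper's generator; the function name, grayscale conversion, and random mixing coefficient are assumptions.

```python
import random

def make_intermediate_image(rgb_pixels, ir_pixels, alpha=None):
    """Illustrative 'bridge' image between visible and infrared modalities.

    rgb_pixels: 2-D list of (r, g, b) tuples; ir_pixels: 2-D list of
    grayscale infrared intensities of the same shape. The RGB image is
    reduced to grayscale, then blended with the IR image using a mixing
    weight alpha (drawn at random if not given), yielding an image that
    lies between the two modalities.
    """
    if alpha is None:
        alpha = random.random()
    gray = [[(r + g + b) / 3.0 for (r, g, b) in row] for row in rgb_pixels]
    return [[alpha * gv + (1 - alpha) * iv
             for gv, iv in zip(grow, irow)]
            for grow, irow in zip(gray, ir_pixels)]
```

At `alpha=1` the output is the grayscale visible image and at `alpha=0` it is the infrared image, so intermediate values of alpha trace a path between the two modalities during training.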