no code implementations • 26 Apr 2024 • Maoxun Yuan, Bo Cui, Tianyi Zhao, Xingxing Wei
Semantic analysis of paired visible (RGB) and infrared (IR) images has gained attention because it remains accurate and robust under low-illumination and complex weather conditions.
1 code implementation • 19 Jan 2024 • Tianyi Zhao, Maoxun Yuan, Feng Jiang, Nan Wang, Xingxing Wei
Specifically, following this perspective, we design a Redundant Spectrum Removal module to coarsely remove interfering information within each modality and a Dynamic Feature Selection module to finely select the desired features for feature fusion.
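The feature-selection step described above can be illustrated with a minimal sketch. The gating scheme below (global average pooling plus a per-channel softmax between the two modalities) is a hypothetical stand-in for the learned Dynamic Feature Selection module; only the module's goal, selecting per-modality features before fusion, comes from the abstract.

```python
import numpy as np

def dynamic_feature_selection(rgb_feat, ir_feat):
    """Fuse two (C, H, W) feature maps with per-channel modality gates.

    Hypothetical sketch: the real module is learned; here the gate is a
    softmax over each channel's global-average-pooled descriptor.
    """
    # Global average pool each modality: (C, H, W) -> (C,)
    rgb_desc = rgb_feat.mean(axis=(1, 2))
    ir_desc = ir_feat.mean(axis=(1, 2))
    # Per-channel competition between modalities (softmax over the pair)
    logits = np.stack([rgb_desc, ir_desc])            # (2, C)
    exp = np.exp(logits - logits.max(axis=0))
    w = exp / exp.sum(axis=0)                         # (2, C), columns sum to 1
    # Convex combination of the two modalities, channel by channel
    return w[0][:, None, None] * rgb_feat + w[1][:, None, None] * ir_feat

rgb = np.random.rand(8, 16, 16)
ir = np.random.rand(8, 16, 16)
fused = dynamic_feature_selection(rgb, ir)
print(fused.shape)  # (8, 16, 16)
```

Because the gates sum to one per channel, the fused map is always a convex combination of the two inputs.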
1 code implementation • 28 Jun 2023 • Maoxun Yuan, Tianyi Zhao, Bo Li, Xingxing Wei
To address this issue, in this paper we observe that the spatial details in PAN images are mainly high-frequency cues, i.e., edges that reflect the contours of the input PAN images.
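The observation that spatial detail lives in the high frequencies can be demonstrated with any high-pass filter. The sketch below (image minus a 3x3 box blur) is illustrative only and is not the paper's method: the residual is large at edges and vanishes in flat regions.

```python
import numpy as np

def high_frequency_cues(pan):
    """High-pass residual of a 2-D image: image minus a 3x3 box blur."""
    pad = np.pad(pan, 1, mode="edge")
    # 3x3 box blur computed as the mean of nine shifted copies
    blur = sum(pad[i:i + pan.shape[0], j:j + pan.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return pan - blur

pan = np.zeros((8, 8))
pan[:, 4:] = 1.0                    # a vertical step edge
hf = high_frequency_cues(pan)
# The residual is zero in flat regions and peaks at the edge columns
print(np.abs(hf[:, 0]).max(), np.abs(hf[:, 4]).max())
```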
2 code implementations • 28 Jun 2023 • Maoxun Yuan, Xingxing Wei
In $\mathrm{C}^2$Former, we design an Inter-modality Cross-Attention (ICA) module to obtain calibrated and complementary features by learning the cross-attention relationship between the RGB and IR modalities.
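The core mechanism of inter-modality cross-attention can be sketched as scaled dot-product attention where queries come from one modality and keys/values from the other. This minimal version omits the learned projection matrices that the real ICA module would contain.

```python
import numpy as np

def cross_attention(q_feat, kv_feat):
    """Scaled dot-product attention across modalities.

    q_feat: (Nq, d) tokens of the querying modality (e.g. RGB)
    kv_feat: (Nk, d) tokens of the other modality (e.g. IR)
    Simplified: no learned Wq/Wk/Wv projections.
    """
    d = q_feat.shape[-1]
    scores = q_feat @ kv_feat.T / np.sqrt(d)        # (Nq, Nk)
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # rows sum to 1
    return attn @ kv_feat                           # (Nq, d)

rgb_tokens = np.random.rand(10, 32)   # flattened RGB feature tokens
ir_tokens = np.random.rand(12, 32)    # flattened IR feature tokens
rgb_enhanced = cross_attention(rgb_tokens, ir_tokens)
ir_enhanced = cross_attention(ir_tokens, rgb_tokens)
print(rgb_enhanced.shape, ir_enhanced.shape)  # (10, 32) (12, 32)
```

Applying it in both directions, as above, is one plausible way to obtain complementary features for each modality.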
no code implementations • 28 Sep 2022 • Maoxun Yuan, Yinyan Wang, Xingxing Wei
Then, we propose a Translation-Scale-Rotation Alignment (TSRA) module to address the problem by calibrating the feature maps from these two modalities.
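A translation-scale-rotation correction amounts to applying an inverse affine warp to one modality's feature map. The sketch below is a hypothetical illustration with given parameters and nearest-neighbor sampling; in the paper the TSRA module predicts the alignment rather than receiving it.

```python
import numpy as np

def tsra_warp(feat, tx=0.0, ty=0.0, scale=1.0, theta=0.0):
    """Warp a (H, W) feature map by translation, scale, and rotation
    about the map center, using inverse-mapped nearest-neighbor sampling.
    """
    h, w = feat.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Inverse map: for each output pixel, find its source coordinate
    c, s = np.cos(theta), np.sin(theta)
    x0, y0 = xs - cx - tx, ys - cy - ty
    sx = ( c * x0 + s * y0) / scale + cx
    sy = (-s * x0 + c * y0) / scale + cy
    sxi = np.clip(np.round(sx).astype(int), 0, w - 1)
    syi = np.clip(np.round(sy).astype(int), 0, h - 1)
    return feat[syi, sxi]

feat = np.arange(16, dtype=float).reshape(4, 4)
shifted = tsra_warp(feat, tx=1.0)     # shift content right by one pixel
print(shifted)
```

With `tx=1.0` each output pixel reads from one column to its left, so the content moves right by one pixel (edge columns clamp).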