no code implementations • 25 Nov 2019 • Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang, Chao Yao
For the same action, the knowledge learned from different media types, e.g., videos and images, may be related and complementary.
no code implementations • 18 Sep 2019 • Yang Liu, Zhaoyang Lu, Jing Li, Chao Yao, Yanzi Deng
However, infrared action data remain scarce to date, which degrades the performance of infrared action recognition.
no code implementations • 18 Sep 2019 • Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang, Chao Yao
Existing methods for infrared action recognition are based on either spatial or local temporal information; however, they do not consider global temporal information, which better describes the movements of body parts across the whole video.
no code implementations • 3 Sep 2018 • Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang
To make the feature representations of videos transferable across views, we simultaneously learn a transferable dictionary pair from pairs of videos captured at different views, encouraging corresponding action videos across views to share the same sparse representation.
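The core idea of such a transferable dictionary pair can be sketched as coupled dictionary learning: two view-specific dictionaries are fit jointly so that paired videos from the two views are encoded by one shared sparse code matrix. The sketch below is a minimal, assumption-laden illustration (not the authors' implementation): it uses a simple alternating scheme with an ISTA sparse-coding step on the shared codes and a gradient step with column renormalization on each dictionary; the function and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, t):
    # Proximal operator of the L1 norm; zeroes out small entries,
    # which is what makes the shared codes sparse.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def learn_coupled_dictionaries(X1, X2, n_atoms=32, lam=0.1, n_iter=50, lr=0.1):
    """Jointly learn dictionaries D1, D2 (one per view) so that paired
    columns of X1 and X2 (features x samples; column j in both matrices
    is the same action seen from two views) share one sparse code A.

    Objective (sketch): min over D1, D2, A of
        0.5*||D1 A - X1||_F^2 + 0.5*||D2 A - X2||_F^2 + lam*||A||_1
    """
    d1, n = X1.shape
    d2, _ = X2.shape
    D1 = rng.standard_normal((d1, n_atoms))
    D2 = rng.standard_normal((d2, n_atoms))
    D1 /= np.linalg.norm(D1, axis=0, keepdims=True)
    D2 /= np.linalg.norm(D2, axis=0, keepdims=True)
    A = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        # Sparse-coding step: one ISTA update on the SHARED codes,
        # accumulating the reconstruction gradients from both views.
        grad = D1.T @ (D1 @ A - X1) + D2.T @ (D2 @ A - X2)
        L = np.linalg.norm(D1, 2) ** 2 + np.linalg.norm(D2, 2) ** 2
        A = soft_threshold(A - grad / L, lam / L)
        # Dictionary step: gradient descent on each view's dictionary,
        # then renormalize atoms to unit norm to avoid scale drift.
        D1 -= lr * (D1 @ A - X1) @ A.T / n
        D2 -= lr * (D2 @ A - X2) @ A.T / n
        D1 /= np.maximum(np.linalg.norm(D1, axis=0, keepdims=True), 1e-8)
        D2 /= np.maximum(np.linalg.norm(D2, axis=0, keepdims=True), 1e-8)
    return D1, D2, A
```

Because A is shared, an action's sparse code is by construction the same regardless of which view produced the observation, which is the transferability property the abstract describes; at test time a classifier trained on codes from one view can be applied to codes from the other.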