no code implementations • 7 Jul 2021 • Sachihiro Youoku, Takahisa Yamamoto, Junya Saito, Akiyoshi Uchida, Xiaoyu Mi, Ziqiang Shi, Liu Liu, Zhongling Liu, Osafumi Nakayama, Kentaro Murase
Therefore, after learning the common features for each frame, we constructed a facial expression estimation model and a valence-arousal model on time-series data formed by combining the common features with the standardized features of each video.
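The combination step described above can be sketched as broadcasting a video-level feature vector to every frame and concatenating it with the per-frame features; the shapes and names below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

# Hypothetical dimensions for illustration only.
num_frames, frame_dim, video_dim = 120, 64, 16

frame_features = np.random.randn(num_frames, frame_dim)  # per-frame common features
video_features = np.random.randn(video_dim)              # standardized per-video features

# Repeat the video-level vector for every frame and concatenate along the
# feature axis, yielding a time series a sequence model could consume.
combined = np.concatenate(
    [frame_features, np.tile(video_features, (num_frames, 1))], axis=1
)
print(combined.shape)  # (120, 80)
```

The resulting array has one row per frame, so any standard time-series model can be fit to it directly.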
no code implementations • 7 Jul 2021 • Junya Saito, Xiaoyu Mi, Akiyoshi Uchida, Sachihiro Youoku, Takahisa Yamamoto, Kentaro Murase, Osafumi Nakayama
Facial Action Units (AUs) represent a set of facial muscular activities and various combinations of AUs can represent a wide range of emotions.
no code implementations • 1 Oct 2020 • Junya Saito, Ryosuke Kawamura, Akiyoshi Uchida, Sachihiro Youoku, Yuushi Toyoda, Takahisa Yamamoto, Xiaoyu Mi, Kentaro Murase
In this paper, we propose a new automatic Action Unit (AU) recognition method, which we used in the Affective Behavior Analysis in-the-wild (ABAW) competition.
1 code implementation • 29 Sep 2020 • Sachihiro Youoku, Yuushi Toyoda, Takahisa Yamamoto, Junya Saito, Ryosuke Kawamura, Xiaoyu Mi, Kentaro Murase
Then, we generated affective recognition models for each time window and ensembled them.
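A minimal sketch of such a time-window ensemble, assuming the per-window models are simply averaged; the stand-in "models" below (mean pooling over a trailing window) and the window lengths are hypothetical, not the authors' trained predictors:

```python
import numpy as np

def window_predict(signal, window):
    # Toy per-window "model": pool the last `window` timesteps.
    # A real system would use a model trained on that window length.
    return signal[-window:].mean(axis=0)

signal = np.arange(20, dtype=float).reshape(10, 2)  # 10 timesteps, 2 dims
windows = [2, 4, 8]                                 # hypothetical window sizes

# Ensemble by averaging the predictions from each time-window model.
ensemble = np.mean([window_predict(signal, w) for w in windows], axis=0)
print(ensemble)
```

Averaging is only one ensembling choice; weighted averaging or stacking over the per-window outputs would fit the same structure.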