1 code implementation • 13 Feb 2024 • Maneesh Bilalpur, Mert Inan, Dorsa Zeinali, Jeffrey F. Cohn, Malihe Alikhani
To improve the rapport-building capabilities of embodied agents, we annotated backchannel smiles in videos of intimate face-to-face conversations over topics such as mental health, illness, and relationships.
1 code implementation • 13 Jun 2023 • Torsten Wörtwein, Nicholas Allen, Lisa B. Sheeber, Randy P. Auerbach, Jeffrey F. Cohn, Louis-Philippe Morency
Empirically, we observe that NME improves performance across six unimodal and multimodal datasets, including a smartphone dataset to predict daily mood and a mother-adolescent dataset to predict affective state sequences where half the mothers experience at least moderate symptoms of depression.
1 code implementation • 12 Jan 2021 • Ognjen Rudovic, Nicolas Tobis, Sebastian Kaltwang, Björn Schuller, Daniel Rueckert, Jeffrey F. Cohn, Rosalind W. Picard
A potential approach to tackling this is Federated Learning (FL), which enables multiple parties to collaboratively learn a shared prediction model by using parameters of locally trained models while keeping raw training data locally.
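The federated setup described above — clients train on private data and share only model parameters, which a server averages — can be sketched in a few lines. This is a generic FedAvg-style illustration with a toy linear model and synthetic data, not the paper's actual training procedure; all names and hyperparameters are illustrative.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent update on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Each client trains locally; the server averages the parameters."""
    local = [local_train(global_w, X, y) for X, y in clients]
    return np.mean(local, axis=0)  # raw data never leaves the clients

# Three clients with private, noiseless data from the same true model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = federated_round(w, clients)
```

After 20 rounds the averaged model recovers the shared solution, even though the server never sees any client's raw training data.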
no code implementations • 14 Feb 2017 • Michel F. Valstar, Enrique Sánchez-Lozano, Jeffrey F. Cohn, László A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, Maja Pantic
The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Units occurrence and intensity under different camera views.
no code implementations • 2 Aug 2016 • Wen-Sheng Chu, Fernando de la Torre, Jeffrey F. Cohn
To model temporal dependencies, Long Short-Term Memory (LSTM) networks are stacked on top of these representations, regardless of input video length.
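The point that an LSTM handles videos of any length follows from the recurrence itself: the same cell weights are applied at every frame. A minimal numpy sketch of one LSTM layer over per-frame feature vectors (shapes, initialization, and dimensions are all illustrative, not the paper's architecture):

```python
import numpy as np

def lstm_layer(frames, params):
    """frames: (T, d_in) per-frame features; returns the final hidden state."""
    Wx, Wh, b = params  # gate weights: (4*d_h, d_in), (4*d_h, d_h), (4*d_h,)
    d_h = Wh.shape[1]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in frames:                 # same weights reused at every time step
        z = Wx @ x + Wh @ h + b
        i, f, o, g = np.split(z, 4)  # input, forget, output gates; candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

rng = np.random.default_rng(1)
d_in, d_h = 8, 4
params = (0.1 * rng.normal(size=(4 * d_h, d_in)),
          0.1 * rng.normal(size=(4 * d_h, d_h)),
          np.zeros(4 * d_h))

# Videos of different lengths yield fixed-size summaries from the same cell.
h_short = lstm_layer(rng.normal(size=(12, d_in)), params)
h_long = lstm_layer(rng.normal(size=(120, d_in)), params)
```

Both calls return a vector of size `d_h`, which is why downstream layers need no knowledge of the input video's length.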
no code implementations • 10 Jun 2016 • Ciprian Corneanu, Marc Oliu, Jeffrey F. Cohn, Sergio Escalera
Facial expressions are an important way through which humans interact socially.
no code implementations • CVPR 2016 • Zheng Zhang, Jeff M. Girard, Yue Wu, Xing Zhang, Peng Liu, Umur Ciftci, Shaun Canavan, Michael Reale, Andy Horowitz, Huiyuan Yang, Jeffrey F. Cohn, Qiang Ji, Lijun Yin
The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection.
no code implementations • CVPR 2016 • Sergey Tulyakov, Xavier Alameda-Pineda, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn, Nicu Sebe
Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured on face videos and, surprisingly, be used to estimate the heart rate (HR).
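The core idea — a periodic, sub-perceptual skin-color change whose dominant frequency is the heart rate — can be illustrated on a synthetic color trace. This is a generic spectral-peak sketch of remote photoplethysmography, not the method proposed in the paper; the "green-channel" signal and all constants are made up for illustration.

```python
import numpy as np

fps = 30.0
t = np.arange(0, 20, 1 / fps)            # 20 s of video at 30 fps
true_bpm = 72.0
# Synthetic mean green-channel trace: tiny pulse-driven oscillation + noise.
green = 0.5 + 0.01 * np.sin(2 * np.pi * (true_bpm / 60) * t)
green += 0.002 * np.random.default_rng(0).normal(size=t.size)

signal = green - green.mean()            # remove the DC (skin-tone) component
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

# Restrict to plausible heart rates (40-180 bpm) and pick the spectral peak.
band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
est_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
```

With 20 s of signal the frequency resolution is 3 bpm, so the estimate lands on the true rate despite the oscillation being invisible to an observer (amplitude 0.01 on a 0.5 baseline).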
no code implementations • ICCV 2015 • Wen-Sheng Chu, Jiabei Zeng, Fernando de la Torre, Jeffrey F. Cohn, Daniel S. Messinger
We evaluate the effectiveness of our approach on multiple databases, including human actions in the CMU Mocap dataset, and spontaneous facial behaviors in a group-formation task dataset and a parent-infant interaction dataset.
no code implementations • ICCV 2015 • Jiabei Zeng, Wen-Sheng Chu, Fernando de la Torre, Jeffrey F. Cohn, Zhang Xiong
Varied sources of error contribute to the challenge of facial action unit detection.
no code implementations • CVPR 2015 • Kaili Zhao, Wen-Sheng Chu, Fernando de la Torre, Jeffrey F. Cohn, Honggang Zhang
The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS).