no code implementations • 10 Apr 2024 • Xi Chen, Julien Cumin, Fano Ramparany, Dominique Vaufreydaz
This paper presents two models to address the problem of multi-person activity recognition using ambient sensors in a home.
no code implementations • 6 Dec 2023 • Anderson Augusma, Dominique Vaufreydaz, Frédérique Letué
Notably, our findings highlight that this accuracy level can be reached with privacy-compliant features using only 5 frames uniformly distributed over the video.
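The "5 frames uniformly distributed over the video" step can be sketched as follows; this is a minimal illustration with hypothetical names, assuming the common scheme of sampling the centers of equal-length segments (the paper's exact sampling rule is not given in the snippet):

```python
def uniform_frame_indices(n_frames, k=5):
    """Pick k frame indices evenly spread across a video of n_frames frames.

    Uses the center of each of k equal-length segments, a common
    uniform-sampling convention (assumption, not confirmed by the paper).
    """
    if k >= n_frames:
        return list(range(n_frames))
    return [int((i + 0.5) * n_frames / k) for i in range(k)]
```

For a 100-frame clip this yields indices spaced 20 frames apart, starting at frame 10.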
1 code implementation • 4 Jul 2023 • Louis Airale, Dominique Vaufreydaz, Xavier Alameda-Pineda
Animating still face images with deep generative models using a speech input signal is an active research topic and has seen important recent progress.
no code implementations • 30 Nov 2022 • Sotheara Leang, Eric Castelli, Dominique Vaufreydaz, Sethserey Sam
According to experimental results on the BRAF100 dataset, polar coordinates achieve significantly higher accuracy than angles in mixed- and cross-gender speech recognition, demonstrating that this representation is better suited to describing the acoustic trajectory of the speech signal.
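The polar representation of a 2-D acoustic trajectory can be sketched as below. This is a minimal illustration under assumptions: the function names are hypothetical, and which 2-D features are paired (e.g. formant pairs) is not specified in the snippet.

```python
import math

def to_polar(x, y):
    """Convert one 2-D acoustic feature point to polar coordinates (r, theta)."""
    r = math.hypot(x, y)        # radial distance from the origin
    theta = math.atan2(y, x)    # angle in radians, in (-pi, pi]
    return r, theta

def trajectory_to_polar(points):
    """Convert a whole feature trajectory, given as (x, y) pairs."""
    return [to_polar(x, y) for x, y in points]
```

Unlike angles alone, the pair (r, theta) keeps both the direction and the magnitude of each point, which is the property the paper's comparison highlights.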
1 code implementation • 2 Nov 2022 • Louis Airale, Xavier Alameda-Pineda, Stéphane Lathuilière, Dominique Vaufreydaz
In this work, we address the task of unconditional head motion generation to animate still human faces in a low-dimensional semantic space from a single reference pose.
no code implementations • 1 Sep 2022 • Yangtao Wang, Xi Shen, Yuan Yuan, Yuming Du, Maomao Li, Shell Xu Hu, James L Crowley, Dominique Vaufreydaz
This method also achieves competitive results on unsupervised video object segmentation tasks with the DAVIS, SegTrack-v2, and FBMS datasets.
Ranked #4 on Unsupervised Instance Segmentation on COCO val2017
1 code implementation • CVPR 2022 • Yangtao Wang, Xi Shen, Shell Hu, Yuan Yuan, James Crowley, Dominique Vaufreydaz
For unsupervised saliency detection, we improve IoU by 4.9%, 5.2%, and 12.9% on ECSSD, DUTS, and DUT-OMRON respectively, compared to the previous state of the art.
Ranked #1 on Weakly-Supervised Object Localization on CUB
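The IoU metric used in the saliency comparison above can be sketched as a simple ratio over binary masks; a minimal pure-Python illustration (the hypothetical helper takes masks as nested 0/1 lists):

```python
def iou(mask_a, mask_b):
    """Intersection-over-Union between two binary masks of equal shape.

    Masks are lists of rows of 0/1 values; returns a float in [0, 1].
    """
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```

A predicted mask covering half of a two-pixel ground truth, for instance, scores 0.5.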
no code implementations • 11 Oct 2021 • Niranjan Deshpande, Dominique Vaufreydaz, Anne Spalanzani
Urban autonomous driving in the presence of pedestrians as vulnerable road users remains a challenging and under-examined research problem.
no code implementations • 10 Mar 2021 • Louis Airale, Dominique Vaufreydaz, Xavier Alameda-Pineda
In this paper, we focus on a unimodal representation of interactions and propose to tackle interaction generation in a data-driven fashion.
no code implementations • 26 Oct 2020 • Niranjan Deshpande, Dominique Vaufreydaz, Anne Spalanzani
In this work, a deep reinforcement learning based decision-making approach for high-level driving behavior is proposed for urban environments in the presence of pedestrians.
no code implementations • 15 Sep 2020 • Anastasia Petrova, Dominique Vaufreydaz, Philippe Dessus
This article presents our unimodal, privacy-safe, and non-individual approach to the audio-video group emotion recognition subtask of the Emotion Recognition in the Wild (EmotiW) Challenge 2020.
no code implementations • 17 Apr 2019 • Justin Le Louedec, Thomas Guntz, James Crowley, Dominique Vaufreydaz
The visual attention model described in this article was created to generate saliency maps that capture hierarchical and spatial features of a chessboard, in order to predict the fixation probability for individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, revealing multiple relations between pieces.
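The multiscale fusion idea can be illustrated with a minimal sketch: per-scale saliency grids are upsampled to the finest resolution and combined. All names here are hypothetical, and the combination is a plain average, whereas the paper's actual model is a learned skip-layer autoencoder with a unified decoder:

```python
def upsample_nearest(grid, factor):
    """Nearest-neighbour upsampling of a 2-D grid by an integer factor."""
    return [[v for v in row for _ in range(factor)]
            for row in grid for _ in range(factor)]

def fuse_scales(maps):
    """Average per-scale saliency maps after upsampling each to the finest scale.

    `maps` is a list of square grids, coarsest first, whose sizes divide
    the size of the last (finest) grid.
    """
    target = len(maps[-1])
    up = [upsample_nearest(m, target // len(m)) for m in maps]
    h, w = len(up[0]), len(up[0][0])
    return [[sum(m[i][j] for m in up) / len(up) for j in range(w)]
            for i in range(h)]
```

In the learned model, the decoder plays the role of this averaging step, weighting each scale's contribution instead of treating them equally.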
no code implementations • 17 Sep 2018 • Pavan Vasishta, Dominique Vaufreydaz, Anne Spalanzani
Autonomous vehicles navigating urban areas need to understand and predict future pedestrian behavior for safer navigation.
no code implementations • 12 Oct 2017 • Thomas Guntz, Raffaella Balzarini, Dominique Vaufreydaz, James L. Crowley
In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems.
no code implementations • 12 Mar 2015 • Dominique Vaufreydaz, Wafa Johal, Claudine Combe
Within the context of engagement, non-verbal signals are used to communicate the intention of starting the interaction with a partner.