Search Results for author: Sarthak Malik

Found 3 papers, 2 papers with code

Synthesizing Sentiment-Controlled Feedback For Multimodal Text and Image Data

no code implementations · 12 Feb 2024 · Puneet Kumar, Sarthak Malik, Balasubramanian Raman, Xiaobai Li

The proposed system implements an interpretability technique to analyze the contributions of textual and visual features during the generation of uncontrolled and controlled feedback.
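One generic way to analyze per-modality contributions (a minimal sketch only; the paper's actual interpretability technique is not detailed in this snippet, and the model here is a hypothetical linear stand-in) is modality ablation: zero out one modality's features and measure how the model's score changes.

```python
import numpy as np

# Hypothetical illustration of modality-contribution analysis via
# ablation. W_text/W_image and score() are toy stand-ins, NOT the
# paper's model: we zero one modality at a time and record the drop.
rng = np.random.default_rng(0)
W_text = rng.normal(size=8)
W_image = rng.normal(size=8)

def score(text_feat, image_feat):
    # Toy linear scoring head over concatenated modality features.
    return float(W_text @ text_feat + W_image @ image_feat)

text_feat = rng.normal(size=8)
image_feat = rng.normal(size=8)

base = score(text_feat, image_feat)
text_contrib = base - score(np.zeros(8), image_feat)    # drop when text is ablated
image_contrib = base - score(text_feat, np.zeros(8))    # drop when image is ablated
# For this linear toy model the two contributions sum exactly to the
# base score; for a real nonlinear model they are only approximate.
```

Real systems typically use gradient- or attention-based attribution instead of plain ablation, but the ablation form makes the idea easiest to see.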

Decoder, Marketing, +2

Interpretable Multimodal Emotion Recognition using Hybrid Fusion of Speech and Image Data

1 code implementation · 25 Aug 2022 · Puneet Kumar, Sarthak Malik, Balasubramanian Raman

A new interpretability technique has been developed to identify the important speech and image features that lead to the prediction of particular emotion classes.

Multimodal Emotion Recognition

VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition

1 code implementation · 24 Aug 2022 · Puneet Kumar, Sarthak Malik, Balasubramanian Raman, Xiaobai Li

This paper proposes a multimodal emotion recognition system, VIsual Spoken Textual Additive Net (VISTANet), to classify emotions reflected by input containing image, speech, and text into discrete classes.
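The "additive" in the network's name suggests late fusion by summation: each modality branch produces class logits, which are added before the softmax. The sketch below assumes this form; the class count, class names, and logit values are illustrative, not taken from the paper.

```python
import numpy as np

NUM_CLASSES = 4  # e.g. angry, happy, sad, neutral (assumed labels)

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_fusion(image_logits, speech_logits, text_logits):
    # Sum per-modality logits; the additive form lets each modality's
    # contribution to every class be read off directly from its logits.
    fused = image_logits + speech_logits + text_logits
    return softmax(fused)

probs = additive_fusion(
    np.array([1.0, 0.2, -0.5, 0.0]),   # image branch (illustrative)
    np.array([0.8, 0.1, 0.0, -0.2]),   # speech branch (illustrative)
    np.array([0.5, 0.3, -0.1, 0.1]),   # text branch (illustrative)
)
predicted = int(probs.argmax())  # index of the predicted emotion class
```

An additive combination like this is what makes the fusion interpretable: unlike concatenation followed by a dense layer, the per-modality logits are directly comparable per class.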

Multimodal Emotion Recognition
