1 code implementation • 14 Jun 2023 • Jicheng Li, Vuthea Chheang, Pinar Kullu, Eli Brignac, Zhang Guo, Kenneth E. Barner, Anjana Bhat, Roghayeh Leila Barmaki
This work presents MMASD, a novel privacy-preserving, open-source MultiModal ASD benchmark dataset collected from play-therapy interventions with children with autism.
no code implementations • 27 Oct 2020 • Xinjie Lan, Kenneth E. Barner
Based on the probabilistic explanations for MLPs, we improve the information-theoretic interpretability of MLPs in three aspects: (i) the random variable associated with f is discrete and the corresponding entropy is finite; (ii) the information bottleneck theory cannot correctly explain the information flow in MLPs once back-propagation is taken into account; and (iii) we propose novel information-theoretic explanations for the generalization of MLPs.
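A minimal numpy sketch of point (i): discretize a unit's activations and compute a finite Shannon entropy. The 32-bin histogram and the bits unit are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def empirical_entropy(values, n_bins=32):
    """Shannon entropy (in bits) of a sample, after discretizing into bins."""
    counts, _ = np.histogram(values, bins=n_bins)
    probs = counts / counts.sum()
    probs = probs[probs > 0]  # drop empty bins so log2 is defined
    return -np.sum(probs * np.log2(probs))

# Toy "MLP output": activations of a hidden unit over a batch of inputs.
rng = np.random.default_rng(0)
activations = np.tanh(rng.normal(size=1000))
print(f"empirical entropy ~ {empirical_entropy(activations):.3f} bits")
```

With finitely many bins the sum has finitely many terms, so the entropy is finite by construction.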
no code implementations • 16 Jun 2020 • Xinjie Lan, Xin Guo, Kenneth E. Barner
We study PAC-Bayesian generalization bounds for Multilayer Perceptrons (MLPs) with the cross entropy loss.
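To illustrate the setting, here is a sketch of the classic McAllester PAC-Bayes bound with diagonal-Gaussian posterior and prior over the weights. This is the textbook bound for bounded losses, not the paper's cross-entropy-specific result, and all numbers are toy values:

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(Q || P) between diagonal Gaussians over the network weights."""
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def mcallester_bound(emp_risk, kl, n, delta=0.05):
    """McAllester PAC-Bayes bound on the expected risk of the posterior Q."""
    return emp_risk + np.sqrt((kl + np.log(2 * np.sqrt(n) / delta)) / (2 * n))

# Toy numbers: posterior centered near trained weights, prior at initialization.
d = 10_000                                   # number of weights
mu_p, var_p = np.zeros(d), np.ones(d)
mu_q, var_q = 0.1 * np.ones(d), 0.5 * np.ones(d)
kl = kl_diag_gaussians(mu_q, var_q, mu_p, var_p)
print(f"bound ~ {mcallester_bound(emp_risk=0.08, kl=kl, n=50_000):.3f}")
```

The KL term penalizes posteriors that drift far from the prior, which is what ties the bound to the trained network.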
no code implementations • 20 Feb 2020 • Xin Guo, Luisa F. Polanía, Kenneth E. Barner
This paper presents a hybrid network for audiovisual emotion recognition.
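A minimal PyTorch sketch of one plausible hybrid design, late fusion of per-modality encoders. The feature dimensions, fusion scheme, and seven-class emotion output are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Toy audiovisual fusion: encode each modality, concatenate, classify."""
    def __init__(self, audio_dim=128, video_dim=512, n_emotions=7):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        self.video_enc = nn.Sequential(nn.Linear(video_dim, 64), nn.ReLU())
        self.classifier = nn.Linear(64 + 64, n_emotions)

    def forward(self, audio_feats, video_feats):
        fused = torch.cat([self.audio_enc(audio_feats),
                           self.video_enc(video_feats)], dim=-1)
        return self.classifier(fused)  # emotion logits

model = LateFusionNet()
logits = model(torch.randn(4, 128), torch.randn(4, 512))  # batch of 4 clips
print(logits.shape)  # torch.Size([4, 7])
```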
no code implementations • 22 Oct 2019 • Xinjie Lan, Kenneth E. Barner
Generalization is essential for deep learning.
no code implementations • 25 Sep 2019 • Xinjie Lan, Kenneth E. Barner
Based on the probabilistic representation, we demonstrate that the entire architecture of DNNs can be explained as a Bayesian hierarchical model.
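A schematic reading of that claim: each layer's activation is treated as a latent variable conditioned on the layer below it. In the toy sketch below, the Gaussian noise scale and tanh nonlinearity are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-level hierarchy: each layer's activation is a latent variable whose
# distribution is conditioned on the layer below it.
x = rng.normal(size=5)                               # observed input
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(3, 4))

h1 = np.tanh(W1 @ x) + 0.1 * rng.normal(size=4)      # level-1 latent given x
h2 = np.tanh(W2 @ h1) + 0.1 * rng.normal(size=3)     # level-2 latent given h1
print(h2)
```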
1 code implementation • 19 Sep 2019 • Xin Guo, Luisa F. Polanía, Bin Zhu, Charles Boncelet, Kenneth E. Barner
This paper proposes a graph neural network (GNN) for image understanding based on multiple cues.
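A minimal PyTorch sketch of message passing over a small graph of cue nodes. The GRU-style update and the fully connected cue graph are illustrative assumptions, not the paper's exact layer:

```python
import torch
import torch.nn as nn

class CueGraphLayer(nn.Module):
    """One round of message passing over a graph of cue nodes."""
    def __init__(self, dim=64):
        super().__init__()
        self.message = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, h, adj):
        # h: (n_nodes, dim) cue embeddings; adj: (n_nodes, n_nodes) edge weights.
        msgs = adj @ self.message(h)     # aggregate neighbors' messages
        return self.update(msgs, h)      # GRU-style node update

# Three cue nodes (e.g. scene, faces, objects) on a fully connected graph.
h = torch.randn(3, 64)
adj = torch.ones(3, 3) / 3.0
layer = CueGraphLayer()
print(layer(h, adj).shape)  # torch.Size([3, 64])
```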
no code implementations • 26 Aug 2019 • Xinjie Lan, Kenneth E. Barner
In this work, we introduce a novel probabilistic representation of deep learning, which provides an explicit explanation for the Deep Neural Networks (DNNs) in three aspects: (i) neurons define the energy of a Gibbs distribution; (ii) the hidden layers of DNNs formulate Gibbs distributions; and (iii) the whole architecture of DNNs can be interpreted as a Bayesian neural network.
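A minimal numpy sketch of point (i): reading a layer's negated pre-activations as the energies of a Gibbs distribution, which recovers a softmax. This schematic identification is an assumption for illustration, not the paper's exact construction:

```python
import numpy as np

def gibbs(energies, temperature=1.0):
    """Gibbs/Boltzmann distribution p_i proportional to exp(-E_i / T)."""
    logits = -np.asarray(energies) / temperature
    logits -= logits.max()               # stabilize the exponentials
    probs = np.exp(logits)
    return probs / probs.sum()

# Point (i), schematically: negated pre-activations play the role of energies,
# so the resulting Gibbs distribution is exactly a softmax over the layer.
pre_activations = np.array([2.0, 0.5, -1.0, 0.0])
energies = -pre_activations              # lower energy <-> higher probability
print(gibbs(energies))                   # equals softmax(pre_activations)
```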
no code implementations • 17 Jan 2018 • Xin Guo, Luisa F. Polanía, Kenneth E. Barner
Compared to the large labeled databases available for face recognition, far less labeled data is available for training smile detection systems.
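A common response to such label scarcity is transfer from a face-recognition representation, sketched below. The frozen stand-in backbone, dimensions, and training step are hypothetical, not the paper's method:

```python
import torch
import torch.nn as nn

# Hypothetical frozen "face recognition" encoder standing in for a backbone
# pretrained on a large face database; only the small smile head sees the
# scarce smile labels.
face_backbone = nn.Linear(64 * 64, 256)  # stand-in for a pretrained encoder
for p in face_backbone.parameters():
    p.requires_grad = False              # freeze the transferred features

smile_head = nn.Linear(256, 1)           # binary smile / no-smile logit
opt = torch.optim.Adam(smile_head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 64 * 64)         # toy batch of flattened face crops
labels = torch.randint(0, 2, (8, 1)).float()
opt.zero_grad()
loss = loss_fn(smile_head(face_backbone(images)), labels)
loss.backward()
opt.step()
```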
no code implementations • 30 May 2017 • Luisa F. Polanía, Kenneth E. Barner
This paper proposes a compressed sensing (CS) scheme that exploits the representational power of restricted Boltzmann machines and deep learning architectures to model the prior distribution of the sparsity pattern of signals belonging to the same class.
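A minimal numpy sketch of the RBM side of this idea: block-Gibbs sampling of a binary sparsity pattern from an RBM. The weights here are random stand-ins, whereas in the paper's setting they would be learned from the supports of signals in the same class:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy RBM over binary sparsity patterns s in {0,1}^n (untrained weights).
n, m = 20, 8                              # visible (support) and hidden units
W = 0.1 * rng.normal(size=(n, m))
b_v, b_h = np.zeros(n), np.zeros(m)

def gibbs_step(s):
    """One block-Gibbs sweep: sample hidden given support, then support given hidden."""
    h = (sigmoid(s @ W + b_h) > rng.random(m)).astype(float)
    return (sigmoid(W @ h + b_v) > rng.random(n)).astype(float)

s = (rng.random(n) < 0.2).astype(float)   # initial sparse support guess
for _ in range(100):
    s = gibbs_step(s)
print("sampled support:", np.flatnonzero(s))
```

Once a support is inferred, the signal itself can be recovered by least squares restricted to that support, which is the usual division of labor in model-based CS.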