no code implementations • 11 Nov 2022 • Jing Yang, Jie Shen, Yiming Lin, Yordan Hristov, Maja Pantic
Our model consists of a hybrid network of convolution and transformer blocks to learn per-AU features and to model AU co-occurrences.
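The paper does not include code, but the transformer half of the idea — letting per-AU features attend to one another so that AU co-occurrences are modelled — can be sketched with a single self-attention step. This is a minimal NumPy illustration, not the authors' architecture; the convolutional features are assumed to be precomputed, and all function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def au_self_attention(tokens):
    """Single-head self-attention over per-AU feature tokens.

    tokens: (N, d) array, one d-dimensional feature per action unit (AU),
            assumed to come from a convolutional backbone (not shown).
    Each AU's feature attends to every other AU's feature, so the output
    mixes information across AUs -- a simple stand-in for modelling
    AU co-occurrence structure.
    """
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)     # (N, N) pairwise affinities
    weights = softmax(scores, axis=-1)          # rows sum to 1
    return weights @ tokens                     # (N, d) co-occurrence-aware features
```

In a real model the queries, keys, and values would be learned projections and the block would be stacked with feed-forward layers; the sketch keeps only the attention mixing step.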
no code implementations • ICLR 2021 • Yordan Hristov, Subramanian Ramamoorthy
We show that such alignment is best achieved through the use of labels from the end user, in an appropriately restricted vocabulary, in contrast to the conventional approach of the designer picking a prior over the latent variables.
no code implementations • 31 Jul 2019 • Yordan Hristov, Daniel Angelov, Michael Burke, Alex Lascarides, Subramanian Ramamoorthy
Learning from demonstration is an effective method for human users to instruct desired robot behaviour.
no code implementations • 18 Jul 2019 • Daniel Angelov, Yordan Hristov, Michael Burke, Subramanian Ramamoorthy
Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics.
1 code implementation • 9 Jul 2019 • Michael Burke, Yordan Hristov, Subramanian Ramamoorthy
This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification.
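The core trick named here — a categorical reparametrisation that lets a network pick among discrete dynamics modes while staying differentiable — is commonly realised with the Gumbel-softmax relaxation. The sketch below is a minimal NumPy illustration of that idea, not the paper's implementation; the function names, the linear per-mode dynamics, and the temperature value are assumptions for the example.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, seed=None):
    """Relaxed (differentiable) sample from a categorical distribution.

    Adds Gumbel noise to the logits and applies a temperature-scaled
    softmax; as tau -> 0 the sample approaches a one-hot vector.
    """
    rng = np.random.default_rng(seed)
    gumbel = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max(axis=-1, keepdims=True)
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

def switching_prediction(x, mode_logits, A, b, tau=0.5, seed=0):
    """Predict the next state as a soft mixture of K linear modes.

    x:           (d,) current state
    mode_logits: (K,) unnormalised scores for the K hybrid modes
    A, b:        (K, d, d) and (K, d) per-mode linear dynamics
    The relaxed categorical sample acts as the switching variable.
    """
    w = gumbel_softmax(mode_logits, tau=tau, seed=seed)  # (K,) soft one-hot
    preds = A @ x + b                                    # (K, d) per-mode next states
    return w @ preds                                     # convex combination
```

In the paper's setting the logits and dynamics would be network outputs trained end to end; the relaxation is what makes the discrete mode choice amenable to gradient descent.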
no code implementations • 24 Jun 2019 • Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy
Many realistic robotics tasks are best solved compositionally, through control architectures that sequentially invoke primitives and achieve error correction via loops and conditionals that return the system to alternative earlier states.
no code implementations • 4 Mar 2019 • Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy
In this work we show that it is possible to learn a generative model for distinct user behavioral types, extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space.
no code implementations • 17 Jul 2018 • Yordan Hristov, Alex Lascarides, Subramanian Ramamoorthy
Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to be able to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world.
no code implementations • WS 2017 • Yordan Hristov, Svetlin Penkov, Alex Lascarides, Subramanian Ramamoorthy
As robots begin to cohabit with humans in semi-structured environments, the need arises to understand instructions involving rich variability, for instance when learning to ground symbols in the physical world.