no code implementations • 16 Dec 2022 • Ashish Kumar, Ilya Kuzovkin
Although offline learning techniques can learn from data generated by a sub-optimal behavior agent, there is still an opportunity to improve the sample complexity of existing offline reinforcement learning algorithms by strategically introducing human demonstration data into the training process.
1 code implementation • 17 Oct 2020 • Ilya Kuzovkin
Combined with interpretability techniques, machine learning could replace the human modeler and shift the focus of human effort to extracting the knowledge from the ready-made models and articulating that knowledge into intuitive descriptions of reality.
no code implementations • 18 Oct 2019 • Ashish Kumar, Toby Buckley, John B. Lanier, Qiaozhi Wang, Alicia Kavelaars, Ilya Kuzovkin
The challenge that the community sets as a benchmark is usually the challenge that the community eventually solves.
1 code implementation • 19 Jul 2019 • Anti Ingel, Ilya Kuzovkin, Raul Vicente
The proposed method shows good performance in classifying targets of a BCI, outperforming previously reported results on the same dataset by a factor of 2 in terms of information transfer rate (ITR).
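The abstract reports the improvement in terms of ITR. As a point of reference, the standard way to compute a BCI's information transfer rate is the Wolpaw formula, which converts the number of selectable targets, the classification accuracy, and the trial duration into bits per minute; the sketch below assumes that definition (the paper's exact evaluation protocol may differ).

```python
import math

def itr_bits_per_trial(n_targets: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per trial for an n_targets-way BCI classifier."""
    if accuracy >= 1.0:
        return math.log2(n_targets)  # perfect accuracy: full log2(N) bits
    if accuracy <= 0.0:
        return 0.0
    p, n = accuracy, n_targets
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_targets: int, accuracy: float,
                        trial_seconds: float) -> float:
    """Scale per-trial ITR by the number of trials that fit in a minute."""
    return itr_bits_per_trial(n_targets, accuracy) * 60.0 / trial_seconds
```

Note that at chance-level accuracy (e.g. 25% over 4 targets) the formula yields 0 bits, so a "factor of 2 in ITR" reflects gains in accuracy, speed, or both.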
2 code implementations • NeurIPS 2019 • Himanshu Sahni, Toby Buckley, Pieter Abbeel, Ilya Kuzovkin
In this work, we show how visual trajectories can be hallucinated to appear successful by altering agent observations using a generative model trained on relatively few snapshots of the goal.
1 code implementation • 20 Dec 2017 • Anna Leontjeva, Ilya Kuzovkin
In real-life scenarios, however, it is often the case that both static and dynamic features are present, or can be extracted from the data.
4 code implementations • 27 Nov 2015 • Ardi Tampuu, Tambet Matiisen, Dorian Kodelja, Ilya Kuzovkin, Kristjan Korjus, Juhan Aru, Jaan Aru, Raul Vicente
In the present work we extend the Deep Q-Learning Network architecture proposed by Google DeepMind to multiagent environments and investigate how two agents controlled by independent Deep Q-Networks interact in the classic videogame Pong.
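The key idea above is that each agent runs its own independent Q-learner and treats the other agent as part of the environment. A minimal sketch of that setup, reduced from Deep Q-Networks on Pong to tabular learners in a toy zero-sum two-action game (all names and the game itself are illustrative, not from the paper):

```python
import random

class IndependentQLearner:
    """Tabular stand-in for one Deep Q-Network: learns only from its
    own reward, with no access to the other agent's policy."""
    def __init__(self, n_actions: int = 2, lr: float = 0.1, eps: float = 0.1):
        self.q = [0.0] * n_actions
        self.lr, self.eps = lr, eps

    def act(self) -> int:
        # epsilon-greedy action selection
        if random.random() < self.eps:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action: int, reward: float) -> None:
        # single-state Q-learning update toward the observed reward
        self.q[action] += self.lr * (reward - self.q[action])

def zero_sum_rewards(a0: int, a1: int) -> tuple:
    """Competitive scheme: what one agent wins, the other loses."""
    return (1.0, -1.0) if a0 != a1 else (-1.0, 1.0)

random.seed(0)
agents = [IndependentQLearner(), IndependentQLearner()]
for _ in range(1000):
    a0, a1 = agents[0].act(), agents[1].act()
    r0, r1 = zero_sum_rewards(a0, a1)
    agents[0].update(a0, r0)
    agents[1].update(a1, r1)
```

Swapping the reward function (e.g. giving both agents the same reward) is the lever the paper varies to move the pair from competitive to cooperative behavior.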