Highly accurate protein structure prediction with AlphaFold
5 code implementations • Nature 2021 • John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, Demis Hassabis
Accurate computational approaches are needed to close the gap between the number of known protein sequences and the far smaller number of experimentally determined structures, and to enable large-scale structural bioinformatics.
Monte Carlo Gradient Estimation in Machine Learning
2 code implementations • 25 Jun 2019 • Shakir Mohamed, Mihaela Rosca, Michael Figurnov, Andriy Mnih
This paper is a broad and accessible survey of the methods available for Monte Carlo gradient estimation in machine learning and across the statistical sciences: computing the gradient of an expectation of a function with respect to the parameters of the distribution being integrated over, a problem also known as sensitivity analysis.
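To make the central problem concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) comparing the two classic estimators the survey covers, the score-function and pathwise (reparameterization) estimators, on a Gaussian toy problem where the true gradient is known:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.5, 2.0, 200_000
f = lambda x: x ** 2                    # E[f(x)] = mu^2 + sigma^2, so d/dmu = 2*mu

# Score-function (REINFORCE) estimator: E[f(x) * d log p(x; mu) / d mu]
x = rng.normal(mu, sigma, n)
grad_score = np.mean(f(x) * (x - mu) / sigma ** 2)

# Pathwise (reparameterization) estimator: x = mu + sigma * eps, so estimate E[f'(x)]
eps = rng.standard_normal(n)
grad_path = np.mean(2 * (mu + sigma * eps))

print(grad_score, grad_path, 2 * mu)    # both approach the true gradient 3.0
```

Both converge to the true value 2*mu = 3.0; the pathwise estimator typically has far lower variance when f is differentiable.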
Variational Autoencoder with Arbitrary Conditioning
3 code implementations • ICLR 2019 • Oleg Ivanov, Michael Figurnov, Dmitry Vetrov
We propose a single neural probabilistic model, based on a variational autoencoder, that can be conditioned on an arbitrary subset of observed features and can then sample the remaining features in "one shot".
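A sketch of the interface such a model exposes (the names and the stand-in sampler are ours, purely illustrative): observed values plus a binary mask go in, and the unobserved entries come back filled in a single forward pass:

```python
import numpy as np

def impute_one_shot(model, x, mask, rng):
    """Fill in unobserved entries of x (mask == 0) in one forward pass.

    `model` is any conditional sampler taking (observed values, mask);
    here it is faked with a fixed prior rather than a trained network.
    """
    sample = model(x * mask, mask, rng)       # the model only sees observed entries
    return mask * x + (1 - mask) * sample     # keep observed values untouched

# Stand-in "model": sample unobserved features from a standard Gaussian prior.
toy_model = lambda x_obs, mask, rng: rng.standard_normal(x_obs.shape)

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0, 2.0, 0.0])
mask = np.array([1.0, 1.0, 0.0, 0.0])         # last two features unobserved
print(impute_one_shot(toy_model, x, mask, rng))
```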
Implicit Reparameterization Gradients
1 code implementation • NeurIPS 2018 • Michael Figurnov, Shakir Mohamed, Andriy Mnih
By providing a simple and efficient way of computing low-variance gradients of continuous random variables, the reparameterization trick has become the technique of choice for training a variety of latent variable models.
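The paper's point is that the trick extends to distributions without an explicitly invertible sampler by differentiating through the CDF instead. A small sketch, using an exponential distribution only because its CDF is invertible in closed form, so the implicit gradient -(∂F/∂λ)/(∂F/∂z) can be checked against the explicit one:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
u = rng.uniform(size=5)
z = -np.log1p(-u) / lam                 # explicit reparameterization: z = F^{-1}(u; lam)

# Explicit gradient: differentiate z = -log(1-u)/lam directly in lam.
dz_explicit = np.log1p(-u) / lam ** 2   # equals -z / lam

# Implicit gradient: dz/dlam = -(dF/dlam) / (dF/dz), with F(z; lam) = 1 - exp(-lam*z).
dF_dlam = z * np.exp(-lam * z)
dF_dz = lam * np.exp(-lam * z)
dz_implicit = -dF_dlam / dF_dz          # no inverse CDF needed

print(np.allclose(dz_explicit, dz_implicit))  # True
```

For distributions such as the Gamma or Dirichlet, whose CDFs have no closed-form inverse, only the implicit route is available; that is the setting the paper targets.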
Tensor Train decomposition on TensorFlow (T3F)
2 code implementations • 5 Jan 2018 • Alexander Novikov, Pavel Izmailov, Valentin Khrulkov, Michael Figurnov, Ivan Oseledets
Tensor Train decomposition is used across many branches of machine learning.
Mathematical Software • Numerical Analysis
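For readers unfamiliar with the format, a minimal NumPy sketch of what the decomposition stores (our illustration, independent of the paper's TensorFlow library): a d-dimensional tensor is kept as d small 3-way cores, and the full tensor is recovered by contracting them in sequence:

```python
import numpy as np

def tt_to_full(cores):
    """Contract TT cores of shapes (r_{k-1}, n_k, r_k), with r_0 = r_d = 1."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# A random 10x10x10x10 tensor in TT format with all intermediate ranks equal to 3.
rng = np.random.default_rng(0)
ranks, n = [1, 3, 3, 3, 1], 10
cores = [rng.standard_normal((ranks[k], n, ranks[k + 1])) for k in range(4)]

full = tt_to_full(cores)
print(full.shape)                                   # (10, 10, 10, 10)
print(sum(c.size for c in cores), "vs", full.size)  # 240 parameters vs 10000 entries
```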
Probabilistic Adaptive Computation Time
no code implementations • 1 Dec 2017 • Michael Figurnov, Artem Sobolev, Dmitry Vetrov
We present a probabilistic model with discrete latent variables that control the computation time in deep learning models such as ResNets and LSTMs.
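A toy sketch of the idea (ours, far simpler than the paper's model): a discrete latent variable chooses how many residual blocks to execute, so computation time is itself a random variable with a learnable distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_adaptive(blocks, x, depth_probs, rng):
    """Sample a depth from a categorical latent and run only that many blocks."""
    depth = rng.choice(len(blocks) + 1, p=depth_probs)   # latent: how many blocks to run
    for block in blocks[:depth]:
        x = x + block(x)                                 # residual update
    return x, depth

# Three toy "blocks" and a distribution over depths 0..3 (learned, in the real model).
blocks = [lambda x, s=s: np.tanh(s * x) for s in (0.5, 1.0, 1.5)]
depth_probs = np.array([0.1, 0.2, 0.3, 0.4])

out, depth = run_adaptive(blocks, np.ones(4), depth_probs, rng)
print(depth, out)
```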
Spatially Adaptive Computation Time for Residual Networks
1 code implementation • CVPR 2017 • Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, Ruslan Salakhutdinov
This paper proposes a deep learning architecture, based on residual networks, that dynamically adjusts the number of executed layers for different regions of the image.
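A rough sketch of the per-position halting rule this kind of architecture builds on (adapted from adaptive computation time; the halting scores below are random stand-ins for learned ones): each spatial position accumulates halting scores layer by layer and stops being updated once its total passes 1 - eps:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 4
n_layers, eps = 6, 0.01

features = rng.standard_normal((H, W))
cum_halt = np.zeros((H, W))        # cumulative halting score per spatial position
active = np.ones((H, W), bool)     # positions still being computed

for layer in range(n_layers):
    halt = rng.uniform(0, 0.5, (H, W))   # stand-in for a learned halting score map
    # Only still-active positions receive the residual update.
    features = np.where(active, features + np.tanh(features), features)
    cum_halt = np.where(active, cum_halt + halt, cum_halt)
    active &= cum_halt < 1 - eps         # halt positions that crossed the threshold

print(active.sum(), "of", H * W, "positions still active after", n_layers, "layers")
```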
Robust Variational Inference
no code implementations • 28 Nov 2016 • Michael Figurnov, Kirill Struminsky, Dmitry Vetrov
Variational inference is a powerful tool for approximate inference.
PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
2 code implementations • NeurIPS 2016 • Michael Figurnov, Aijan Ibraimova, Dmitry Vetrov, Pushmeet Kohli
We propose a novel approach to reducing the computational cost of evaluating convolutional neural networks, a factor that has hindered their deployment in low-power devices such as mobile phones.
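A minimal sketch of the perforation idea in its simplest grid variant (plain NumPy, not the authors' implementation): evaluate an expensive per-position operation only on a strided grid and fill the skipped positions from their nearest computed neighbor:

```python
import numpy as np

def perforated_map(op, x, stride=2):
    """Apply `op` per position only on a strided grid; nearest-neighbor-fill the rest."""
    H, W = x.shape
    coarse = op(x[::stride, ::stride])             # expensive op on ~1/stride^2 positions
    # Replicate each computed value over its stride x stride block (nearest neighbor).
    full = coarse.repeat(stride, axis=0).repeat(stride, axis=1)
    return full[:H, :W]

x = np.arange(36, dtype=float).reshape(6, 6)
expensive = lambda t: np.sqrt(t) * np.log1p(t)     # stand-in for a conv layer's work
approx = perforated_map(expensive, x)
exact = expensive(x)
print(np.abs(approx - exact).mean())               # small error for the speedup
```

With stride 2 this evaluates the operation at roughly a quarter of the positions, trading a small interpolation error for the reduction in compute.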