no code implementations • 5 Nov 2018 • Nikolas Ioannou, Celestine Dünner, Kornilios Kourtis, Thomas Parnell
The combined set of optimizations results in a consistent bottom-line speedup in convergence of up to 12x compared to the initial asynchronous parallel training algorithm, and up to 42x compared to state-of-the-art implementations (scikit-learn and H2O) on a range of multi-core CPU architectures.
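The snippet above refers to asynchronous parallel training but gives no implementation detail, so as context only, here is a minimal, hypothetical sketch of lock-free (Hogwild-style) asynchronous SGD on a shared model. It is not the paper's algorithm or its optimizations, and Python's GIL makes it illustrative rather than performant; the data and learning rate are made up for the example.

```python
# Minimal sketch of lock-free asynchronous parallel SGD (Hogwild-style).
# Illustrative only; not the paper's algorithm or its optimizations.
import numpy as np
from threading import Thread

rng = np.random.default_rng(0)
n, d = 10_000, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true > 0).astype(float)      # synthetic binary labels

w = np.zeros(d)                          # shared model, updated without locks

def worker(indices, lr=0.1, epochs=5):
    for _ in range(epochs):
        for i in indices:
            z = np.clip(X[i] @ w, -30, 30)       # clip to avoid exp overflow
            p = 1.0 / (1.0 + np.exp(-z))          # logistic prediction
            w[:] = w - lr * (p - y[i]) * X[i]     # racy in-place update

# four threads, each sweeping a strided slice of the data
threads = [Thread(target=worker, args=(np.arange(t, n, 4),)) for t in range(4)]
for t in threads: t.start()
for t in threads: t.join()

acc = np.mean(((X @ w) > 0) == y.astype(bool))
print(f"training accuracy: {acc:.3f}")
```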
no code implementations • ICML 2018 • Celestine Dünner, Aurelien Lucchi, Matilde Gargiani, An Bian, Thomas Hofmann, Martin Jaggi
Due to the rapid growth of data and computational resources, distributed optimization has become an active research area in recent years.
no code implementations • NeurIPS 2018 • Celestine Dünner, Thomas Parnell, Dimitrios Sarigiannis, Nikolas Ioannou, Andreea Anghel, Gummadi Ravi, Madhusudanan Kandasamy, Haralampos Pozidis
We describe a new software framework for fast training of generalized linear models.
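The abstract gives no API details, so as a point of reference only, the sketch below shows the kind of generalized linear model training workload such a framework targets, expressed with scikit-learn (one of the baselines mentioned above). It is not the paper's framework or its interface.

```python
# Baseline GLM training with scikit-learn, the kind of workload a fast
# GLM training framework targets. (Illustrative; not the paper's framework.)
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)
clf = LogisticRegression(solver="saga", max_iter=200).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.3f}")
```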
1 code implementation • NeurIPS 2017 • Celestine Dünner, Thomas Parnell, Martin Jaggi
We propose a generic algorithmic building block to accelerate training of machine learning models on heterogeneous compute systems.
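As an illustration of one plausible building block of this kind, the sketch below scores training coordinates by how strongly they violate the optimality conditions of a Lasso objective and keeps only the top-scoring subset in limited fast (accelerator) memory. The objective, scoring rule, and memory budget are assumptions made for the example, not the paper's exact method.

```python
# Hedged sketch: select the most "informative" data columns to keep in
# limited accelerator memory, scoring each by its violation of the Lasso
# optimality condition |x_j^T r| <= lam. (Hypothetical simplification.)
import numpy as np

rng = np.random.default_rng(0)
n, d, budget = 200, 1000, 100   # only `budget` columns fit in fast memory
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
w = np.zeros(d)                 # current model iterate
lam = 0.1

r = y - X @ w                           # residual at the current iterate
scores = np.abs(X.T @ r) - lam          # KKT-violation score per column
keep = np.argsort(scores)[-budget:]     # columns most worth training on
X_fast = X[:, keep]                     # "copy" to accelerator memory
print("selected columns:", np.sort(keep)[:10])
```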
no code implementations • 22 Feb 2017 • Thomas Parnell, Celestine Dünner, Kubilay Atasu, Manolis Sifalakis, Haris Pozidis
In this work we propose an accelerated stochastic learning system for very large-scale applications.
no code implementations • 5 Dec 2016 • Celestine Dünner, Thomas Parnell, Kubilay Atasu, Manolis Sifalakis, Haralampos Pozidis
We begin by analyzing the characteristics of a state-of-the-art distributed machine learning algorithm implemented in Spark and comparing it to an equivalent reference implementation built on the high-performance computing framework MPI.
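To make the contrast between the two programming models concrete, here is the same partial-result aggregation written for Spark and for MPI. These are two separate stand-alone programs, hypothetical rather than the paper's benchmark code; the Spark one runs locally, the MPI one under `mpiexec -n 4 python prog.py`.

```python
# --- Program 1: Spark (PySpark) ---
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[4]").getOrCreate()
partials = spark.sparkContext.parallelize([1.0, 2.0, 3.0, 4.0])  # per-partition results
total = partials.reduce(lambda a, b: a + b)   # aggregation driven by the master
print("Spark total:", total)

# --- Program 2: MPI (mpi4py), a separate script run via mpiexec ---
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
local = np.array([float(comm.Get_rank() + 1)])  # this rank's partial result
total = np.empty(1)
comm.Allreduce(local, total, op=MPI.SUM)        # collective all-reduce
if comm.Get_rank() == 0:
    print("MPI total:", total[0])
```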
1 code implementation • 7 Apr 2016 • Reinhard Heckel, Michail Vlachos, Thomas Parnell, Celestine Dünner
We consider the problem of generating interpretable recommendations by identifying overlapping co-clusters of clients and products, based only on positive or implicit feedback.
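As a toy illustration of co-clustering on implicit feedback, the sketch below recovers a single planted dense block in a binary client-product matrix by alternating row and column selection. This simplification finds one co-cluster only and does not capture the overlapping structure the paper addresses.

```python
# Toy sketch: recover one dense co-cluster in a binary client-product
# matrix by alternating row/column selection. (Simplification; not the
# paper's algorithm, which finds overlapping co-clusters.)
import numpy as np

rng = np.random.default_rng(0)
R = (rng.random((60, 80)) < 0.05).astype(int)    # sparse implicit feedback
R[10:25, 30:50] = (rng.random((15, 20)) < 0.8)   # planted co-cluster

rows = np.ones(60, dtype=bool)   # start with all clients/products selected
cols = np.ones(80, dtype=bool)
for _ in range(10):
    # keep rows denser than average on the selected columns, then vice versa
    rows = R[:, cols].mean(axis=1) > R[:, cols].mean()
    cols = R[rows].mean(axis=0) > R[rows].mean()

print("clients: ", np.where(rows)[0])
print("products:", np.where(cols)[0])
```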
no code implementations • 16 Feb 2016 • Celestine Dünner, Simone Forte, Martin Takáč, Martin Jaggi
We propose an algorithm-independent framework to equip existing optimization methods with primal-dual certificates.
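As a concrete instance of such a certificate, the sketch below computes a duality gap for the Lasso, assuming the standard objective P(w) = 0.5||y - Xw||^2 + lam*||w||_1. Rescaling the residual at any iterate yields a dual-feasible point, and the gap between primal and dual objectives upper-bounds the iterate's suboptimality; the Lasso setting here is an assumed example, not necessarily the one treated in the paper.

```python
# Worked sketch of a primal-dual certificate for the Lasso. For any iterate w,
# the dual of min_w 0.5*||y - Xw||^2 + lam*||w||_1 is
#   max_theta 0.5*||y||^2 - 0.5*||theta - y||^2  s.t.  ||X^T theta||_inf <= lam,
# so the rescaled residual is dual-feasible and the gap certifies suboptimality.
import numpy as np

def lasso_certificate(X, y, w, lam):
    r = y - X @ w                                   # residual at iterate w
    primal = 0.5 * (r @ r) + lam * np.abs(w).sum()
    # scale the residual so the dual constraint ||X^T theta||_inf <= lam holds
    theta = r * min(1.0, lam / np.abs(X.T @ r).max())
    dual = 0.5 * (y @ y) - 0.5 * ((theta - y) @ (theta - y))
    return primal - dual                            # >= P(w) - P(w*) >= 0

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = rng.standard_normal(100)
print(f"certified suboptimality at w = 0: {lasso_certificate(X, y, np.zeros(20), lam=0.5):.4f}")
```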