no code implementations • 16 Nov 2020 • Thomas Schmied, Diego Didona, Andreas Döring, Thomas Parnell, Nikolas Ioannou
Machine learning (ML) methods have recently emerged as an effective way to perform automated parameter tuning of databases.
2 code implementations • NeurIPS 2020 • Thomas Parnell, Andreea Anghel, Malgorzata Lazuka, Nikolas Ioannou, Sebastian Kurella, Peshal Agarwal, Nikolaos Papandreou, Haralampos Pozidis
At each boosting iteration, their goal is to find the base hypothesis, selected from some base hypothesis class, that is closest to the Newton descent direction in a Euclidean sense.
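A minimal sketch of what that selection step could look like, assuming a logistic-loss setting where the per-example Newton direction is -g_i / h_i; the function name, the loss choice, and the use of a scikit-learn tree as the base hypothesis class are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of one Newton-boosting iteration: fit a base learner
# to the per-example Newton direction -g_i / h_i, i.e. pick the hypothesis
# closest to that direction in the plain Euclidean (least-squares) sense.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def newton_boost_step(X, y, current_pred, learning_rate=0.1, max_depth=4):
    # Logistic-loss gradients and Hessian diagonal (illustrative choice).
    p = 1.0 / (1.0 + np.exp(-current_pred))
    g = p - y                      # first derivative of the loss
    h = p * (1.0 - p) + 1e-12      # second derivative, kept positive

    newton_dir = -g / h            # per-example Newton descent direction

    # Base hypothesis fit by least squares to the Newton direction,
    # i.e. the member of the class closest to it in Euclidean distance.
    base = DecisionTreeRegressor(max_depth=max_depth)
    base.fit(X, newton_dir)

    return current_pred + learning_rate * base.predict(X), base
```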
1 code implementation • 5 Mar 2020 • Kornilios Kourtis, Martino Dazzi, Nikolas Ioannou, Tobias Grosser, Abu Sebastian, Evangelos Eleftheriou
Computational memory (CM) is a promising approach for accelerating neural network (NN) inference by using enhanced memories that, in addition to storing data, can perform computations on the stored data in place.
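A toy emulation of that idea, purely as an assumption-laden sketch (the class name and the additive-noise model below are illustrative, not the hardware described in the paper): the layer's weights stay resident in the memory array and a matrix-vector product is produced in place.

```python
# Toy emulation of the computational-memory idea: weights are programmed
# once into the (analog) memory array and stay stationary; a matrix-vector
# product is returned as if computed inside the array, here emulated in
# numpy with read noise standing in for analog non-idealities.
import numpy as np

class CMCrossbar:
    def __init__(self, weights, noise_std=0.01, rng=None):
        self.weights = np.asarray(weights)   # programmed once, then stationary
        self.noise_std = noise_std           # stand-in for device non-idealities
        self.rng = rng or np.random.default_rng(0)

    def matvec(self, x):
        # Real CM hardware performs this inside the memory array;
        # we simply emulate the result plus noise.
        y = self.weights @ x
        return y + self.rng.normal(0.0, self.noise_std, size=y.shape)

# Usage: one noisy in-memory matrix-vector product for an inference layer.
layer = CMCrossbar(np.random.default_rng(1).standard_normal((4, 8)))
activation = layer.matvec(np.ones(8))
```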
no code implementations • NeurIPS 2019 • Nikolas Ioannou, Celestine Mendler-Dünner, Thomas Parnell
In this paper we propose a novel parallel stochastic coordinate descent (SCD) algorithm with convergence guarantees that exhibits strong scalability.
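For context, a minimal serial sketch of the underlying SCD update on ridge regression (the function name and the ridge objective are assumptions for illustration; the paper's contribution is how to parallelize such updates with convergence guarantees, which is not shown here):

```python
# Minimal serial sketch of stochastic coordinate descent (SCD) for ridge
# regression: visit coordinates in random order and apply the closed-form
# single-coordinate minimizer, keeping the residual updated incrementally.
import numpy as np

def scd_ridge(X, y, lam=1.0, epochs=20, rng=None):
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    residual = y - X @ w                 # maintained incrementally below
    col_sqnorm = (X ** 2).sum(axis=0)
    for _ in range(epochs):
        for j in rng.permutation(d):     # stochastic coordinate order
            x_j = X[:, j]
            # Closed-form minimizer of the objective along coordinate j.
            w_j_new = (x_j @ residual + col_sqnorm[j] * w[j]) / (col_sqnorm[j] + lam)
            residual += x_j * (w[j] - w_j_new)
            w[j] = w_j_new
    return w
```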
no code implementations • 15 Oct 2019 • Andreea Anghel, Nikolas Ioannou, Thomas Parnell, Nikolaos Papandreou, Celestine Mendler-Dünner, Haris Pozidis
In this paper we analyze, evaluate, and improve the performance of training Random Forest (RF) models on modern CPU architectures.
no code implementations • 5 Nov 2018 • Nikolas Ioannou, Celestine Dünner, Kornilios Kourtis, Thomas Parnell
The combined set of optimizations results in a consistent bottom-line speedup in convergence of up to 12x over the initial asynchronous parallel training algorithm, and of up to 42x over state-of-the-art implementations (scikit-learn and h2o), across a range of multi-core CPU architectures.
no code implementations • NeurIPS 2018 • Celestine Dünner, Thomas Parnell, Dimitrios Sarigiannis, Nikolas Ioannou, Andreea Anghel, Gummadi Ravi, Madhusudanan Kandasamy, Haralampos Pozidis
We describe a new software framework for fast training of generalized linear models.