no code implementations • 17 May 2024 • Erdem Kuş, Özgür Akgün, Nguyen Dang, Ian Miguel
In this work, we explore reducing this cost by choosing a subset of the training instances on which to train.
no code implementations • 23 Feb 2023 • Deyao Chen, Maxim Buzdalov, Carola Doerr, Nguyen Dang
Dynamic Algorithm Configuration (DAC) tackles the question of how to automatically learn policies to control parameters of algorithms in a data-driven fashion.
1 code implementation • 30 May 2022 • Nguyen Dang
Competitions such as the MiniZinc Challenges or the SAT Competitions have been very useful sources for comparing the performance of different solving approaches and for advancing the state of the art in their fields.
1 code implementation • 29 May 2022 • Nguyen Dang, Özgür Akgün, Joan Espasa, Ian Miguel, Peter Nightingale
This separation presents an opportunity for automated approaches to generate instance data defining instances that are graded (solvable at a certain difficulty level by a given solver) or that discriminate between two solving approaches.
1 code implementation • 7 Feb 2022 • André Biedenkapp, Nguyen Dang, Martin S. Krejca, Frank Hutter, Carola Doerr
We extend this benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values.
no code implementations • 23 Sep 2020 • Gökberk Koçak, Özgür Akgün, Nguyen Dang, Ian Miguel
The contribution of this work is to enable a native interaction between SAT solvers and the automated modelling system Savile Row to support efficient incremental modelling and solving.
no code implementations • 21 Sep 2020 • Patrick Spracklen, Nguyen Dang, Özgür Akgün, Ian Miguel
Augmenting a base constraint model with additional constraints can strengthen the inferences made by a solver and therefore reduce search effort.
no code implementations • 21 Sep 2020 • Özgür Akgün, Nguyen Dang, Joan Espasa, Ian Miguel, András Z. Salamon, Christopher Stone
Many of the core disciplines of artificial intelligence have sets of standard benchmark problems that are well known and widely used by the community when developing new algorithms.
1 code implementation • 9 Apr 2019 • Nguyen Dang, Carola Doerr
It is known that the $(1+(\lambda,\lambda))$~Genetic Algorithm (GA) with self-adjusting parameter choices achieves a linear expected optimization time on OneMax if its hyper-parameters are suitably chosen.
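As a rough illustration of the self-adjusting mechanism (a sketch, not the paper's exact algorithm), the following Python code runs a $(1+(\lambda,\lambda))$ GA on OneMax with a one-fifth-style success rule: $\lambda$ shrinks by a factor $F$ on improvement and grows by $F^{1/4}$ otherwise. The function name, the choice $F=1.5$, and the rounding of $\lambda$ to an integer offspring count are assumptions made for this sketch.

```python
import random


def one_max(x):
    """Fitness: number of one-bits."""
    return sum(x)


def self_adjusting_ollga(n, max_evals=200_000, F=1.5, seed=3):
    """Sketch of a self-adjusting (1+(lambda,lambda)) GA on OneMax.

    Mutation rate p = lambda/n, crossover bias c = 1/lambda, and the
    one-fifth success rule for updating lambda (illustrative choices).
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = one_max(x)
    lam, evals = 1.0, 1
    while fx < n and evals < max_evals:
        k = max(1, round(lam))          # integer offspring count
        p = k / n                       # mutation rate
        ell = sum(rng.random() < p for _ in range(n))  # Bin(n, p) strength
        # Mutation phase: k offspring, each flipping exactly ell random bits.
        best_mut, best_mut_f = x[:], -1
        for _ in range(k):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] ^= 1
            fy = one_max(y)
            evals += 1
            if fy > best_mut_f:
                best_mut, best_mut_f = y, fy
        # Crossover phase: biased uniform crossover, bias 1/lambda to mutant.
        best_co, best_co_f = x[:], fx
        for _ in range(k):
            y = [mi if rng.random() < 1.0 / lam else xi
                 for xi, mi in zip(x, best_mut)]
            fy = one_max(y)
            evals += 1
            if fy > best_co_f:
                best_co, best_co_f = y, fy
        # Selection and one-fifth success rule.
        if best_co_f > fx:
            x, fx = best_co, best_co_f
            lam = max(lam / F, 1.0)     # success: shrink lambda
        else:
            x = best_co                 # accept equal-fitness offspring
            lam = min(lam * F ** 0.25, float(n))  # failure: grow lambda
    return fx, evals
```

On OneMax this self-adjusting scheme is the setting in which the linear expected optimization time is known; the sketch above only demonstrates the parameter-control loop, not the proof's exact constants.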