no code implementations • 2 May 2024 • Pedro Mendes, Paolo Romano, David Garlan
In this work, we present Error-Driven Uncertainty Aware Training (EUAT), a novel technique that aims to enhance the ability of neural models to estimate their uncertainty correctly, namely to be highly uncertain when their predictions are inaccurate and to exhibit low uncertainty when their output is accurate.
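The abstract's goal — high uncertainty on errors, low uncertainty on correct outputs — can be illustrated with a small sketch. This is a hypothetical objective in that spirit, not the paper's actual EUAT loss; `predictive_entropy` and `error_driven_loss` are illustrative names, and the entropy-based uncertainty proxy is an assumption.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the predicted class distribution, a common uncertainty proxy."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def error_driven_loss(probs, labels):
    """Hypothetical EUAT-style objective (a sketch, not the paper's exact loss):
    minimizing it pushes uncertainty up on mispredicted samples
    and down on correctly predicted ones."""
    preds = np.argmax(probs, axis=-1)
    ent = predictive_entropy(probs)
    wrong = preds != labels
    # Negate entropy for wrong predictions, so lower loss = higher uncertainty there.
    return np.where(wrong, -ent, ent).mean()

confident = np.array([[0.9, 0.1]])   # low-entropy prediction for class 0
hesitant = np.array([[0.55, 0.45]])  # high-entropy prediction for class 0
right, wrong = np.array([0]), np.array([1])
```

Under this objective, a model that is uncertain when wrong and confident when right scores strictly better than the reverse, which is the behavior the paper targets.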
no code implementations • 28 Jul 2023 • Tiago Leon Melo, João Bravo, Marco O. P. Sampaio, Paolo Romano, Hugo Ferreira, João Tiago Ascensão, Pedro Bizarro
Adversarial attacks are a major concern in security-centered applications, where malicious actors continuously try to mislead Machine Learning (ML) models into wrongly classifying fraudulent activity as legitimate, whereas system maintainers try to stop them.
1 code implementation • 5 Apr 2023 • Pedro Mendes, Paolo Romano, David Garlan
This work focuses on the problem of hyper-parameter tuning (HPT) for robust (i.e., adversarially trained) models, shedding light on the new challenges and opportunities arising during the HPT process for robust models.
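A distinguishing feature of HPT for robust models is that configurations are scored on accuracy under attack rather than clean accuracy. The toy sketch below (not the paper's method; the data, cost model, and grid are made up) trains a linear classifier and evaluates each configuration's robust accuracy against an FGSM-style worst-case L-infinity perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, roughly linearly separable data (illustrative only).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def train_logreg(X, y, lr, epochs=200):
    """Plain gradient-descent logistic regression, a stand-in for 'training'."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def robust_accuracy(w, b, X, y, eps):
    """Accuracy under the worst-case L-inf perturbation of size eps.
    For a linear model this is eps * sign(w), directed against the label."""
    shift = eps * np.sign(w) * np.where(y == 1, 1, -1)[:, None]
    preds = ((X - shift) @ w + b > 0).astype(int)
    return (preds == y).mean()

# The HPT objective is robust (not clean) accuracy for each configuration.
grid = [(lr, eps) for lr in (0.1, 1.0) for eps in (0.1,)]
scores = {cfg: robust_accuracy(*train_logreg(X, y, cfg[0]), X, y, cfg[1]) for cfg in grid}
best_cfg = max(scores, key=scores.get)
```

Note that the perturbation budget `eps` itself behaves like a hyper-parameter of the robust training pipeline, which is part of what makes robust HPT a distinct problem.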
1 code implementation • 5 Aug 2021 • Pedro Mendes, Maria Casimiro, Paolo Romano, David Garlan
In the literature on hyper-parameter tuning, a number of recent solutions rely on low-fidelity observations (e.g., training with sub-sampled datasets) in order to efficiently identify promising configurations to be then tested via high-fidelity observations (e.g., using the full dataset).
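The low-fidelity-then-high-fidelity pattern described above can be sketched in a few lines. This is a generic illustration of the idea, not the paper's algorithm; the benchmark function, noise model, and shortlist size are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def evaluate(config, fidelity):
    """Hypothetical benchmark: true quality plus noise that shrinks as
    fidelity grows. fidelity in (0, 1] models the dataset fraction used."""
    true_quality = -(config - 0.7) ** 2          # unknown to the tuner
    noise = rng.normal(scale=0.05 * (1.0 - fidelity))
    return true_quality + noise

configs = list(np.linspace(0.0, 1.0, 21))

# Stage 1: cheap low-fidelity screening on a sub-sampled "dataset".
low_scores = {c: evaluate(c, fidelity=0.1) for c in configs}
shortlist = sorted(configs, key=low_scores.get, reverse=True)[:3]

# Stage 2: expensive high-fidelity runs only for the promising shortlist.
high_scores = {c: evaluate(c, fidelity=1.0) for c in shortlist}
best = max(high_scores, key=high_scores.get)
```

The cost saving comes from running the expensive full-dataset evaluation on three configurations instead of twenty-one, at the price of noisier screening.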
no code implementations • 9 Nov 2020 • Pedro Mendes, Maria Casimiro, Paolo Romano, David Garlan
This work introduces TrimTuner, the first system for optimizing machine learning jobs in the cloud that exploits sub-sampling techniques to reduce the cost of the optimization process while taking into account user-specified constraints.
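The constrained flavor of the problem can be illustrated as follows. This is a minimal sketch of the idea, not TrimTuner itself: the cost and accuracy models, prices, and candidate grid are all invented for illustration.

```python
# Hypothetical sketch: optimize a cloud training job over (VM count,
# sub-sampling rate) while discarding configurations that violate a
# user-specified cost constraint. All numbers are illustrative.

def job_cost(n_vms, sample_frac, price_per_vm_hour=0.5, full_hours=4.0):
    """Assumed cost model: runtime shrinks with sub-sampling, cost grows with VMs."""
    return n_vms * price_per_vm_hour * full_hours * sample_frac

def expected_accuracy(n_vms, sample_frac):
    """Assumed accuracy model: more data helps, parallelism adds mild overhead."""
    return 0.95 * sample_frac ** 0.3 - 0.005 * n_vms

budget = 2.0  # user-specified constraint: at most $2 per training run

candidates = [(v, s) for v in (1, 2, 4, 8) for s in (0.1, 0.25, 0.5, 1.0)]
feasible = [c for c in candidates if job_cost(*c) <= budget]
best = max(feasible, key=lambda c: expected_accuracy(*c))
```

Infeasible configurations are filtered out before the accuracy objective is ever consulted, so the constraint shapes the search space rather than merely penalizing the objective.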
2 code implementations • 6 Mar 2020 • David Gureya, João Neto, Reza Karimi, João Barreto, Pramod Bhatotia, Vivien Quema, Rodrigo Rodrigues, Paolo Romano, Vladimir Vlassov
Page placement is a critical problem for memory-intensive applications running on a shared-memory multiprocessor with a non-uniform memory access (NUMA) architecture.
Distributed, Parallel, and Cluster Computing
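To make the page-placement problem concrete, here is an illustrative toy model (not the paper's algorithm; the bandwidths, page count, and timing model are all made up) showing why splitting a memory-intensive application's pages in proportion to node bandwidth can beat both local-only and 1:1 interleaved placement on asymmetric nodes:

```python
# Two-node NUMA machine with asymmetric memory bandwidth (illustrative numbers).

def balanced_split(bw0, bw1, n_pages):
    """Place pages proportionally to node bandwidth (bandwidth-aware interleaving)."""
    on_node0 = round(n_pages * bw0 / (bw0 + bw1))
    return on_node0, n_pages - on_node0

def completion_time(pages0, pages1, bw0, bw1, bytes_per_page=4096):
    """Streaming-access time is bounded by the more heavily loaded memory node."""
    return max(pages0 * bytes_per_page / bw0, pages1 * bytes_per_page / bw1)

bw0, bw1 = 40e9, 20e9      # bytes/s per node's memory controller
pages = 1_000_000

local_only = completion_time(pages, 0, bw0, bw1)                       # all local
even = completion_time(pages // 2, pages - pages // 2, bw0, bw1)       # 1:1 interleave
weighted = completion_time(*balanced_split(bw0, bw1, pages), bw0, bw1)
```

In this toy model the weighted split keeps both memory controllers equally busy, so neither saturates first, which is the intuition behind bandwidth-aware placement.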
no code implementations • 19 Oct 2014 • Diego Didona, Paolo Romano
Performance modeling typically relies on two antithetical methodologies: white-box models, which exploit knowledge of a system's internals and capture its dynamics using analytical approaches, and black-box techniques, which infer relations among a system's input and output variables based on the evidence gathered during an initial training phase.
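The contrast between the two methodologies can be sketched with a small example. Here the white-box model is the textbook M/M/1 response-time formula and the black-box model is a polynomial fitted to observations; the service rate, training range, and polynomial degree are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# White-box model: an M/M/1 queue gives response time analytically,
# R = 1 / (mu - lambda) for arrival rate lambda < mu.
MU = 100.0  # assumed service rate (requests/s)

def white_box_response_time(arrival_rate):
    return 1.0 / (MU - arrival_rate)

# Black-box model: fit a polynomial to observed (rate, response-time) pairs
# with no knowledge of the system's internals.
train_rates = np.linspace(10, 80, 15)
observed = white_box_response_time(train_rates)   # stand-in for measurements
coeffs = np.polyfit(train_rates, observed, deg=4)

def black_box_response_time(arrival_rate):
    return np.polyval(coeffs, arrival_rate)

# The black box interpolates well inside the observed training range...
inside_err = abs(black_box_response_time(50.0) - white_box_response_time(50.0))
# ...but extrapolates poorly near saturation, where the analytical model still holds.
outside_err = abs(black_box_response_time(99.0) - white_box_response_time(99.0))
```

This illustrates the trade-off the abstract describes: the black-box model needs no internal knowledge but is only trustworthy near its training data, while the white-box model generalizes by construction at the cost of requiring an accurate analytical description of the system.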