no code implementations • 19 Apr 2024 • Diego Calanzone, Stefano Teso, Antonio Vergari
Large language models (LLMs) are a promising avenue for natural language understanding and generation tasks.
no code implementations • 12 Apr 2024 • Emile van Krieken, Pasquale Minervini, Edoardo M. Ponti, Antonio Vergari
Many such systems assume that the probabilities of the considered symbols are conditionally independent given the input to simplify learning and reasoning.
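To make the independence assumption concrete, here is a minimal illustrative sketch (names and numbers are mine, not the paper's): under conditional independence, the joint probability of a symbol assignment factorizes into per-symbol marginals, which also limits which joint distributions can be represented.

```python
from itertools import product

# Per-symbol Bernoulli marginals p(s_i = 1 | x), e.g. as predicted by a network.
marginals = [0.9, 0.8]

def joint_prob(assignment, marginals):
    """p(s | x) = prod_i p(s_i | x) under conditional independence."""
    p = 1.0
    for s, m in zip(assignment, marginals):
        p *= m if s == 1 else (1.0 - m)
    return p

# The factorized joint is a valid distribution: it sums to 1 over all states.
total = sum(joint_prob(a, marginals) for a in product([0, 1], repeat=2))
print(round(total, 10))  # 1.0
# Note the limitation: with marginals strictly in (0, 1), every assignment
# gets nonzero mass, so mass cannot be placed on, say, only (1,0) and (0,1).
```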
1 code implementation • 19 Feb 2024 • Emanuele Marconato, Samuele Bortolotti, Emile van Krieken, Antonio Vergari, Andrea Passerini, Stefano Teso
Neuro-Symbolic (NeSy) predictors that conform to symbolic knowledge (encoding, e.g., safety constraints) can be affected by Reasoning Shortcuts (RSs): they learn concepts consistent with the symbolic knowledge by exploiting unintended semantics.
no code implementations • 6 Jan 2024 • Yintao Tai, Xiyang Liao, Alessandro Suglia, Antonio Vergari
However, these pixel-based LLMs are limited to discriminative tasks (e.g., classification) and, similar to BERT, cannot be used to generate text.
no code implementations • 7 Nov 2023 • Filippo Corponi, Bryan M. Li, Gerard Anmella, Clàudia Valenzuela-Pascual, Ariadna Mas, Isabella Pacchiarotti, Marc Valentí, Iria Grande, Antonio Benabarre, Marina Garriga, Eduard Vieta, Allan H Young, Stephen M. Lawrie, Heather C. Whalley, Diego Hidalgo-Mazzei, Antonio Vergari
In this paper, we overcome this data bottleneck and advance the detection of MD acute episodes vs. stable states from wearables data on the back of recent advances in self-supervised learning (SSL).
no code implementations • 25 Oct 2023 • Gennaro Gala, Cassio de Campos, Robert Peharz, Antonio Vergari, Erik Quaeghebeur
In contrast, probabilistic circuits (PCs) are hierarchical discrete mixtures represented as computational graphs composed of input, sum and product units.
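The sentence above can be made concrete with a toy example (illustrative code, not from the paper): a probabilistic circuit over two binary variables built from input units (leaf distributions), product units (factorizations over disjoint scopes), and sum units (mixtures).

```python
from itertools import product as states

def bernoulli(p, i):
    """Input unit over variable X_i with p(X_i = 1) = p."""
    return lambda x: p if x[i] == 1 else 1.0 - p

def prod(f, g):
    """Product unit: multiplies children defined over disjoint variables."""
    return lambda x: f(x) * g(x)

def mix(w, f, g):
    """Sum unit: convex combination of child distributions."""
    return lambda x: w * f(x) + (1.0 - w) * g(x)

# A small smooth, decomposable PC: a mixture of two factorized components.
pc = mix(0.3,
         prod(bernoulli(0.9, 0), bernoulli(0.2, 1)),
         prod(bernoulli(0.1, 0), bernoulli(0.7, 1)))

# Sanity check: the circuit output sums to 1 over all joint states.
total = sum(pc(x) for x in states([0, 1], repeat=2))
print(round(total, 10))  # 1.0
```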
1 code implementation • 16 Oct 2023 • Andreas Grivas, Antonio Vergari, Adam Lopez
We then show that they can be prevented in practice by introducing a Discrete Fourier Transform (DFT) output layer, which guarantees that all sparse label combinations with up to $k$ active labels are argmaxable.
2 code implementations • 1 Oct 2023 • Lorenzo Loconte, Aleksanteri M. Sladek, Stefan Mengel, Martin Trapp, Arno Solin, Nicolas Gillis, Antonio Vergari
Mixture models are traditionally represented and learned by adding several distributions as components.
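A hedged sketch of the contrast the sentence sets up (all numbers illustrative): the classical convention adds components with convex weights, while a subtractive alternative takes a difference of components and squares it, giving a nonnegative function that can be renormalized into a valid density.

```python
import math

def gauss(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def additive(x):
    # Classical mixture: convex sum of components.
    return 0.5 * gauss(x, -1, 1) + 0.5 * gauss(x, 1, 1)

def squared_diff(x):
    # Subtractive mixture: a difference of components, squared to stay >= 0.
    return (gauss(x, -1, 1) - gauss(x, 1, 1)) ** 2

# Crude grid normalization of the squared mixture on [-8, 8].
xs = [i * 0.01 for i in range(-800, 801)]
Z = sum(squared_diff(x) for x in xs) * 0.01
density = [squared_diff(x) / Z for x in xs]
mass = sum(d * 0.01 for d in density)
print(round(mass, 3))  # 1.0
```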
1 code implementation • 31 May 2023 • Aryo Pradipta Gema, Dominik Grabarczyk, Wolf De Wulf, Piyush Borole, Javier Antonio Alfaro, Pasquale Minervini, Antonio Vergari, Ajitha Rajan
We achieve a three-fold improvement in HITS@10 over previous work on the same biomedical knowledge graph.
1 code implementation • NeurIPS 2023 • Emanuele Marconato, Stefano Teso, Antonio Vergari, Andrea Passerini
Neuro-Symbolic (NeSy) predictive models hold the promise of improved compliance with given constraints, systematic generalization, and interpretability, as they allow inferring labels that are consistent with some prior knowledge by reasoning over high-level concepts extracted from sub-symbolic inputs.
1 code implementation • NeurIPS 2023 • Lorenzo Loconte, Nicola Di Mauro, Robert Peharz, Antonio Vergari
Some of the most successful knowledge graph embedding (KGE) models for link prediction -- CP, RESCAL, TuckER, ComplEx -- can be interpreted as energy-based models.
Ranked #3 on Link Property Prediction on ogbl-biokg
1 code implementation • 16 Mar 2023 • Kamil Faber, Dominik Zurek, Marcin Pietron, Nathalie Japkowicz, Antonio Vergari, Roberto Corizzo
Continual learning (CL) is one of the most promising trends in recent machine learning research.
no code implementations • 5 Oct 2022 • Andrea Valenti, Davide Bacciu, Antonio Vergari
Measuring the robustness of reasoning in machine learning models is challenging as one needs to provide a task that cannot be easily shortcut by exploiting spurious statistical correlations in the data, while operating on complex objects and constraints.
1 code implementation • 1 Jun 2022 • Kareem Ahmed, Stefano Teso, Kai-Wei Chang, Guy Van Den Broeck, Antonio Vergari
We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints.
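The core idea can be sketched in a few lines (this is an illustrative toy, not the paper's actual layer): renormalize the output distribution over only those label assignments that satisfy the symbolic constraint, so every prediction is consistent by construction. The constraint and logits here are made up for illustration.

```python
from itertools import product
import math

def constraint(y):
    # Hypothetical example constraint: at least one label is active.
    return sum(y) >= 1

def consistent_distribution(logits):
    """Softmax restricted to constraint-satisfying assignments only."""
    valid = [y for y in product([0, 1], repeat=len(logits)) if constraint(y)]
    scores = [math.exp(sum(l for l, yi in zip(logits, y) if yi)) for y in valid]
    Z = sum(scores)
    return {y: s / Z for y, s in zip(valid, scores)}

dist = consistent_distribution([0.5, -1.0])
assert all(constraint(y) for y in dist)   # no invalid assignment carries mass
print(round(sum(dist.values()), 10))      # 1.0
```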
no code implementations • 17 Feb 2022 • Stefano Teso, Antonio Vergari
In this position paper, we study interactive learning for structured output spaces, with a focus on active learning, in which labels are unknown and must be acquired, and on skeptical learning, in which the labels are noisy and may need relabeling.
1 code implementation • AKBC 2021 • Agnieszka Dobrowolska, Antonio Vergari, Pasquale Minervini
In this work, we investigate how to learn novel concepts in Knowledge Graphs (KGs) in a principled way, and how to effectively exploit them to produce more accurate neural link prediction models.
1 code implementation • NeurIPS 2021 • Antonio Vergari, YooJung Choi, Anji Liu, Stefano Teso, Guy Van Den Broeck
Circuit representations are becoming the lingua franca to express and reason about tractable generative and discriminative models.
1 code implementation • 21 Feb 2021 • Wenzhe Li, Zhe Zeng, Antonio Vergari, Guy Van Den Broeck
Computing the expectation of kernel functions is a ubiquitous task in machine learning, with applications from classical support vector machines to exploiting kernel embeddings of distributions in probabilistic modeling, statistical inference, causal discovery, and deep learning.
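As a self-contained illustration of the task described above (parameters are mine, not from the paper), the expectation of an RBF kernel under two independent Gaussians admits a closed form, which we can check against a Monte Carlo estimate.

```python
import math
import random

m1, s1, m2, s2, ell = 0.0, 1.0, 1.0, 0.5, 1.0

# Closed form: with d = x - x' ~ N(m1 - m2, s1^2 + s2^2),
# E[exp(-d^2 / (2 ell^2))] = ell / sqrt(ell^2 + var) * exp(-mu^2 / (2 (ell^2 + var))).
var = s1 ** 2 + s2 ** 2
mu = m1 - m2
exact = ell / math.sqrt(ell ** 2 + var) * math.exp(-mu ** 2 / (2 * (ell ** 2 + var)))

# Monte Carlo estimate of the same expectation.
random.seed(0)
N = 200_000
mc = sum(math.exp(-((random.gauss(m1, s1) - random.gauss(m2, s2)) ** 2)
                  / (2 * ell ** 2)) for _ in range(N)) / N
print(round(exact, 4), round(mc, 4))  # the two values agree closely
```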
no code implementations • EACL 2021 • Alessandro Suglia, Yonatan Bisk, Ioannis Konstas, Antonio Vergari, Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon
Guessing games are a prototypical instance of the "learning by interacting" paradigm.
no code implementations • NeurIPS 2020 • Zhe Zeng, Paolo Morettin, Fanqi Yan, Antonio Vergari, Guy Van Den Broeck
Weighted model integration (WMI) is a framework to perform advanced probabilistic inference on hybrid domains, i.e., on distributions over mixed continuous-discrete random variables in the presence of complex logical and arithmetic constraints.
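A toy WMI instance makes this concrete (illustrative example, not the paper's algorithm): integrate a weight function over the models of an SMT-like formula mixing a Boolean b and a real x in [0, 1], under the constraint b -> (x > 0.5).

```python
def formula(b, x):
    """SMT-like formula over a Boolean b and a real x: b implies x > 0.5."""
    return (not b) or (x > 0.5)

def weight(b, x):
    return x  # weight on the continuous variable; Boolean weight taken as 1

# Exact by direct integration: for b=True, the integral of x over (0.5, 1]
# is 0.375; for b=False, the integral of x over [0, 1] is 0.5; WMI = 0.875.
# Numerical check with a midpoint Riemann sum:
n = 100_000
dx = 1.0 / n
wmi = sum(weight(b, (i + 0.5) * dx) * dx
          for b in (True, False)
          for i in range(n)
          if formula(b, (i + 0.5) * dx))
print(round(wmi, 3))  # 0.875
```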
no code implementations • COLING 2020 • Alessandro Suglia, Antonio Vergari, Ioannis Konstas, Yonatan Bisk, Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon
However, as shown by Suglia et al. (2020), existing models fail to learn truly multi-modal representations, relying instead on gold category labels for objects in the scene both at training and inference time.
1 code implementation • 18 Jul 2020 • Meihua Dang, Antonio Vergari, Guy Van Den Broeck
Probabilistic circuits (PCs) represent a probability distribution as a computational graph.
no code implementations • 29 Jun 2020 • Pasha Khosravi, Antonio Vergari, YooJung Choi, Yitao Liang, Guy Van Den Broeck
As such, handling missing data in decision trees is a well-studied problem.
1 code implementation • ICML 2020 • Robert Peharz, Steven Lang, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Guy Van Den Broeck, Kristian Kersting, Zoubin Ghahramani
Probabilistic circuits (PCs) are a promising avenue for probabilistic modeling, as they permit a wide range of exact and efficient inference routines.
1 code implementation • ICML 2020 • Zhe Zeng, Paolo Morettin, Fanqi Yan, Antonio Vergari, Guy Van Den Broeck
Weighted model integration (WMI) is a very appealing framework for probabilistic inference: it allows expressing the complex dependencies of real-world problems where variables are both continuous and discrete via the language of Satisfiability Modulo Theories (SMT), as well as computing probabilistic queries with complex logical and arithmetic constraints.
1 code implementation • NeurIPS 2019 • Pasha Khosravi, YooJung Choi, Yitao Liang, Antonio Vergari, Guy Van Den Broeck
In this paper, we identify a pair of generative and discriminative models that enables tractable computation of expectations, as well as moments of any order, of the latter with respect to the former in case of regression.
no code implementations • 20 Sep 2019 • Zhe Zeng, Fanqi Yan, Paolo Morettin, Antonio Vergari, Guy Van Den Broeck
Weighted model integration (WMI) is a very appealing framework for probabilistic inference: it allows expressing the complex dependencies of real-world hybrid scenarios, where variables are heterogeneous in nature (both continuous and discrete), via the language of Satisfiability Modulo Theories (SMT), as well as computing probabilistic queries with arbitrarily complex logical constraints.
no code implementations • 21 May 2019 • Xiaoting Shao, Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Thomas Liebig, Kristian Kersting
In contrast, deep probabilistic models such as sum-product networks (SPNs) capture joint distributions in a tractable fashion, but still lack the expressive power of intractable models based on deep neural networks.
4 code implementations • ICLR 2020 • Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf
Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models.
1 code implementation • 11 Jan 2019 • Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Pranav Subramani, Nicola Di Mauro, Pascal Poupart, Kristian Kersting
We introduce SPFlow, an open-source Python library providing a simple interface to inference, learning and manipulation routines for deep and tractable probabilistic models called Sum-Product Networks (SPNs).
no code implementations • 24 Jul 2018 • Antonio Vergari, Alejandro Molina, Robert Peharz, Zoubin Ghahramani, Kristian Kersting, Isabel Valera
Classical approaches for exploratory data analysis are usually not flexible enough to deal with the uncertainty inherent to real-world data: they are often restricted to fixed latent interaction models and homogeneous likelihoods; they are sensitive to missing, corrupt and anomalous data; moreover, their expressiveness generally comes at the price of intractable inference.
no code implementations • 5 Jun 2018 • Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Kristian Kersting, Zoubin Ghahramani
The need for consistent treatment of uncertainty has recently triggered increased interest in probabilistic deep learning methods.
no code implementations • 9 Oct 2017 • Alejandro Molina, Antonio Vergari, Nicola Di Mauro, Sriraam Natarajan, Floriana Esposito, Kristian Kersting
While all kinds of mixed data, from personal data through panel and scientific data to public and commercial data, are collected and stored, building probabilistic graphical models for these hybrid domains becomes more difficult.
no code implementations • 29 Aug 2016 • Antonio Vergari, Nicola Di Mauro, Floriana Esposito
Sum-Product Networks (SPNs) are recently introduced deep tractable probabilistic models that can answer several kinds of inference queries exactly and in tractable time.
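A hedged sketch of why such queries are tractable (this toy SPN and its numbers are illustrative, not from the paper): marginalizing a variable amounts to setting its leaves to 1 and evaluating the same graph once, so a marginal costs no more than a single forward pass.

```python
def bern(p, i):
    # Leaf over X_i; x[i] set to None denotes "marginalized out".
    return lambda x: 1.0 if x[i] is None else (p if x[i] == 1 else 1.0 - p)

def spn(x):
    """A tiny SPN: mixture (sum node) of two factorized components (product nodes)."""
    f, g = bern(0.9, 0), bern(0.2, 1)
    h, k = bern(0.1, 0), bern(0.7, 1)
    return 0.3 * f(x) * g(x) + 0.7 * h(x) * k(x)

# p(X1 = 1) by brute-force summation vs. by one marginalized evaluation.
p_joint = spn((1, 0)) + spn((1, 1))
p_marg = spn((1, None))
print(round(p_marg, 3))  # 0.34, and it matches the brute-force sum
```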
no code implementations • 8 Aug 2016 • Antonio Vergari, Nicola Di Mauro, Floriana Esposito
Probabilistic models learned as density estimators can be exploited for representation learning, besides serving as toolboxes for answering inference queries.