no code implementations • 22 Feb 2024 • Christian Toth, Christian Knoll, Franz Pernkopf, Robert Peharz
Specifically, we decompose the problem of inferring the causal structure into (i) inferring a topological order over variables and (ii) inferring the parent sets for each variable.
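The two-stage decomposition can be illustrated with a minimal sketch (illustrative, not the paper's implementation): once a topological order is fixed, each variable's candidate parents are restricted to its predecessors in that order, which drastically shrinks the space of parent sets to consider.

```python
from itertools import combinations

def candidate_parent_sets(order, var, max_parents=2):
    """Parent sets consistent with a topological order: only
    predecessors of `var` in `order` may be its parents."""
    preds = order[:order.index(var)]
    sets = []
    for k in range(min(max_parents, len(preds)) + 1):
        sets.extend(combinations(preds, k))
    return sets

# Stage (i): assume some topological order over three variables.
order = ["A", "B", "C"]
# Stage (ii): enumerate admissible parent sets per variable.
parent_sets = {v: candidate_parent_sets(order, v) for v in order}
```

Under this order, "A" can only have the empty parent set, while "C" may choose parents among {"A", "B"}; inference over structures then reduces to inference over orders plus per-variable parent-set choices.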
no code implementations • 25 Oct 2023 • Gennaro Gala, Cassio de Campos, Robert Peharz, Antonio Vergari, Erik Quaeghebeur
In contrast, probabilistic circuits (PCs) are hierarchical discrete mixtures represented as computational graphs composed of input, sum and product units.
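A toy circuit makes the computational-graph view concrete (a minimal sketch with a hand-chosen structure and parameters, not taken from the paper): input units return leaf likelihoods, product units factorize over disjoint variable scopes, and sum units form mixtures with normalized weights.

```python
import math

def leaf(p, x):
    """Input unit: Bernoulli likelihood of a binary observation."""
    return p if x == 1 else 1.0 - p

def product(*vals):
    """Product unit: factorization over disjoint scopes."""
    return math.prod(vals)

def weighted_sum(weights, vals):
    """Sum unit: mixture with normalized weights."""
    return sum(w * v for w, v in zip(weights, vals))

def circuit(x0, x1):
    # Two mixture components over binary variables X0, X1.
    c1 = product(leaf(0.9, x0), leaf(0.2, x1))
    c2 = product(leaf(0.1, x0), leaf(0.7, x1))
    return weighted_sum([0.4, 0.6], [c1, c2])

# Tractability check: the circuit encodes a normalized joint
# distribution, so probabilities over all states sum to one.
total = sum(circuit(a, b) for a in (0, 1) for b in (0, 1))
```

Because sum units have normalized weights and product units have disjoint scopes, marginals and the partition function come out of a single feed-forward pass, which is the key contrast with intractable deep generative models.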
1 code implementation • NeurIPS 2023 • Lorenzo Loconte, Nicola Di Mauro, Robert Peharz, Antonio Vergari
Some of the most successful knowledge graph embedding (KGE) models for link prediction -- CP, RESCAL, TuckER, ComplEx -- can be interpreted as energy-based models.
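The energy-based reading can be sketched for the simplest of these models, a CP (canonical polyadic) factorization (illustrative dimensions and random embeddings, not the paper's setup): a triple (s, r, o) is scored by a trilinear product, and the negated score plays the role of an energy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 2, 4

# CP factorization: separate embedding tables for subjects,
# relations, and objects (illustrative sizes).
S = rng.normal(size=(n_entities, rank))
R = rng.normal(size=(n_relations, rank))
O = rng.normal(size=(n_entities, rank))

def cp_score(s, r, o):
    """Trilinear CP score of a triple (s, r, o)."""
    return float(np.sum(S[s] * R[r] * O[o]))

def energy(s, r, o):
    """Energy-based reading: low energy = plausible triple."""
    return -cp_score(s, r, o)

# Link prediction for a (s, r, ?) query: rank all candidate objects.
scores = np.array([cp_score(0, 1, o) for o in range(n_entities)])
best_object = int(np.argmax(scores))
```

Normalizing such scores over all candidate triples would require summing exponentiated scores, which is exactly the partition-function problem that the energy-based interpretation makes explicit.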
1 code implementation • 23 Feb 2023 • Yang Yang, Gennaro Gala, Robert Peharz
Probabilistic circuits (PCs) are a prominent representation of probability distributions with tractable inference.
1 code implementation • 21 Sep 2022 • Alvaro H. C. Correia, Gennaro Gala, Erik Quaeghebeur, Cassio de Campos, Robert Peharz
Meanwhile, tractable probabilistic models such as probabilistic circuits (PCs) can be understood as hierarchical discrete mixture models, and thus are capable of performing exact inference efficiently but often show subpar performance in comparison to continuous latent-space models.
1 code implementation • 4 Jun 2022 • Christian Toth, Lars Lorch, Christian Knoll, Andreas Krause, Franz Pernkopf, Robert Peharz, Julius von Kügelgen
In this work, we propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning, which jointly infers a posterior over causal models and queries of interest.
1 code implementation • 11 Jul 2020 • Alvaro H. C. Correia, Robert Peharz, Cassio de Campos
Decision Trees and Random Forests are among the most widely used machine learning models, and often achieve state-of-the-art performance on tabular, domain-agnostic datasets.
1 code implementation • NeurIPS 2020 • Alvaro H. C. Correia, Robert Peharz, Cassio de Campos
Decision Trees (DTs) and Random Forests (RFs) are powerful discriminative learners and tools of central importance to the everyday machine learning practitioner and data scientist.
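The claim about off-the-shelf strength is easy to reproduce (a quick illustrative baseline using scikit-learn on a bundled tabular dataset; the dataset and hyperparameters are arbitrary choices, not the paper's experiments):

```python
# Random Forest as a strong tabular baseline (illustrative setup).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)   # typically well above 0.9 here
```

With essentially no tuning, such a forest is competitive on many tabular benchmarks, which is what makes DTs and RFs the everyday workhorses the abstract describes.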
1 code implementation • ICML 2020 • Robert Peharz, Steven Lang, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Guy Van Den Broeck, Kristian Kersting, Zoubin Ghahramani
Probabilistic circuits (PCs) are a promising avenue for probabilistic modeling, as they permit a wide range of exact and efficient inference routines.
no code implementations • 7 Jan 2020 • Wolfgang Roth, Günther Schindler, Bernhard Klein, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches.
no code implementations • 20 Dec 2019 • Cory J. Butz, Jhonatan S. Oliveira, Robert Peharz
Due to this dichotomy, tools to convert between Bayesian networks (BNs) and sum-product networks (SPNs) are desirable.
1 code implementation • 10 Oct 2019 • Martin Trapp, Robert Peharz, Franz Pernkopf, Carl E. Rasmussen
Gaussian Processes (GPs) are powerful non-parametric Bayesian regression models that allow exact posterior inference, but exhibit high computational and memory costs.
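The trade-off in this excerpt can be seen directly in the exact GP posterior equations (a minimal NumPy sketch with an RBF kernel; the kernel, noise level, and data are illustrative assumptions): inference is exact, but it requires factorizing an n-by-n kernel matrix, hence the O(n^3) time and O(n^2) memory costs.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-vector inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Exact GP regression posterior mean and variance at Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))   # O(n^2) memory
    L = np.linalg.cholesky(K)                # O(n^3) time
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs)) - (v * v).sum(0)
    return mean, var

X = np.linspace(0, 1, 10)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mean, var = gp_posterior(X, y, X)   # predict at the training inputs
```

At (noisy) training inputs the posterior variance is bounded by the noise level, a small sanity check that the Cholesky-based computation is the exact posterior rather than an approximation.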
1 code implementation • NeurIPS 2019 • Martin Trapp, Robert Peharz, Hong Ge, Franz Pernkopf, Zoubin Ghahramani
While parameter learning in SPNs is well developed, structure learning leaves something to be desired: even though there is a plethora of SPN structure learners, most of them are ad hoc, based on intuition rather than a clear learning principle.
no code implementations • 21 May 2019 • Xiaoting Shao, Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Thomas Liebig, Kristian Kersting
In contrast, deep probabilistic models such as sum-product networks (SPNs) capture joint distributions in a tractable fashion, but still lack the expressive power of intractable models based on deep neural networks.
1 code implementation • 20 May 2019 • Martin Trapp, Robert Peharz, Franz Pernkopf
It seems to be a pearl of conventional wisdom that parameter learning in deep sum-product networks is surprisingly fast compared to shallow mixture models.
1 code implementation • 11 Jan 2019 • Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Pranav Subramani, Nicola Di Mauro, Pascal Poupart, Kristian Kersting
We introduce SPFlow, an open-source Python library providing a simple interface to inference, learning and manipulation routines for deep and tractable probabilistic models called Sum-Product Networks (SPNs).
no code implementations • 5 Dec 2018 • Franz Pernkopf, Wolfgang Roth, Matthias Zoehrer, Lukas Pfeifenberger, Guenther Schindler, Holger Froening, Sebastian Tschiatschek, Robert Peharz, Matthew Mattina, Zoubin Ghahramani
In that way, we provide an extensive overview of the current state of the art in robust and efficient machine learning for real-world systems.
2 code implementations • ICLR 2019 • Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato
While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.
1 code implementation • 12 Sep 2018 • Martin Trapp, Robert Peharz, Carl E. Rasmussen, Franz Pernkopf
In this paper, we introduce a natural and expressive way to tackle these problems, by incorporating GPs in sum-product networks (SPNs), a recently proposed tractable probabilistic model allowing exact and efficient inference.
no code implementations • 24 Jul 2018 • Antonio Vergari, Alejandro Molina, Robert Peharz, Zoubin Ghahramani, Kristian Kersting, Isabel Valera
Classical approaches for exploratory data analysis are usually not flexible enough to deal with the uncertainty inherent to real-world data: they are often restricted to fixed latent interaction models and homogeneous likelihoods; they are sensitive to missing, corrupt and anomalous data; moreover, their expressiveness generally comes at the price of intractable inference.
no code implementations • 5 Jun 2018 • Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Kristian Kersting, Zoubin Ghahramani
The need for consistent treatment of uncertainty has recently triggered increased interest in probabilistic deep learning methods.
1 code implementation • 10 Oct 2017 • Martin Trapp, Tamas Madl, Robert Peharz, Franz Pernkopf, Robert Trappl
In several domains, obtaining class annotations is expensive, while unlabelled data are abundant.
no code implementations • 22 Jan 2016 • Robert Peharz, Robert Gens, Franz Pernkopf, Pedro Domingos
We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights and give an interpretation of augmented SPNs as Bayesian networks.