no code implementations • 25 Mar 2024 • Busra Asan, Abdullah Akgül, Alper Unal, Melih Kandemir, Gozde Unal
Seasonal forecasting is crucial for detecting the extreme heat and cold events that occur due to climate change.
no code implementations • 5 Feb 2024 • Bahareh Tasdighi, Nicklas Werge, Yi-Shan Wu, Melih Kandemir
We introduce Probabilistic Actor-Critic (PAC), a novel reinforcement learning algorithm with improved continuous control performance thanks to its ability to manage the exploration-exploitation trade-off.
no code implementations • 19 Oct 2023 • Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, Melih Kandemir, Xin Li
Additionally, our analyses allow us to measure the density of the $\epsilon$-stationary points in the final iterates of SGD, and we recover the classical $O(\frac{1}{\sqrt{T}})$ asymptotic rate under various existing assumptions on the objective function and the bounds on the stochastic gradient.
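For reference, the classical nonconvex SGD guarantee alluded to in this excerpt is usually stated, under smoothness and bounded gradient-variance assumptions, as

```latex
\min_{0 \le t < T} \; \mathbb{E}\!\left[\,\big\|\nabla f(x_t)\big\|^2\,\right] \;=\; O\!\left(\frac{1}{\sqrt{T}}\right),
```

where a point $x_t$ is called $\epsilon$-stationary when $\|\nabla f(x_t)\| \le \epsilon$; the notation here is the standard textbook form, not necessarily the paper's.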
1 code implementation • 9 Oct 2023 • Gulcin Baykal, Melih Kandemir, Gozde Unal
Using an evidential formulation, we monitor the significance of attaining a probability distribution over the codebook embeddings, in contrast to the usual softmax.
no code implementations • 15 Sep 2023 • Juliane Weilbach, Sebastian Gerwinn, Melih Kandemir, Martin Fraenzle
This ambiguity is particularly challenging in continuous settings in which a continuum of explanations exist for the same observation.
no code implementations • 15 Sep 2023 • Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters
Many real-world dynamical systems can be described as State-Space Models (SSMs).
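In generic discrete-time form (an illustrative notation, not necessarily the one used in the paper), an SSM pairs a latent transition model with an observation model:

```latex
x_{t+1} = f(x_t) + w_t, \qquad w_t \sim \mathcal{N}(0, Q),\\
y_t = g(x_t) + v_t, \qquad\;\; v_t \sim \mathcal{N}(0, R),
```

where $x_t$ is the unobserved state, $y_t$ the measurement, and $f$, $g$ the transition and observation functions.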
no code implementations • 7 Jul 2023 • Nicklas Werge, Abdullah Akgül, Melih Kandemir
We propose a novel Bayesian-Optimistic Frequentist Upper Confidence Bound (BOF-UCB) algorithm for stochastic contextual linear bandits in non-stationary environments.
1 code implementation • 2 May 2023 • Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters
Furthermore, we propose structured approximations to the covariance matrices of the Gaussian components in order to scale up to systems with many agents.
no code implementations • 30 Jan 2023 • Bahareh Tasdighi, Abdullah Akgül, Kenny Kazimirzak Brink, Melih Kandemir
Actor-critic algorithms address the dual goals of reinforcement learning (RL), policy evaluation and improvement, via two separate function approximators.
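To illustrate the two-approximator structure described above, here is a minimal one-step actor-critic sketch on a toy 2-state MDP; the toy dynamics, learning rates, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 2, 2
theta = np.zeros((n_states, n_actions))  # actor: softmax policy parameters
v = np.zeros(n_states)                   # critic: state-value estimates
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.9

def policy(s):
    """Softmax policy over actions in state s."""
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def step(s, a):
    """Toy dynamics: action 0 in state 0 pays off, everything else does not."""
    reward = 1.0 if (s == 0 and a == 0) else 0.0
    return (s + a) % n_states, reward

s = 0
for _ in range(2000):
    p = policy(s)
    a = rng.choice(n_actions, p=p)
    s_next, r = step(s, a)
    td_error = r + gamma * v[s_next] - v[s]        # evaluation signal
    v[s] += alpha_critic * td_error                # policy evaluation (critic)
    grad_log = -p
    grad_log[a] += 1.0                             # grad of log softmax
    theta[s] += alpha_actor * td_error * grad_log  # policy improvement (actor)
    s = s_next
```

After training, the actor strongly prefers the rewarding action in state 0, showing how the critic's TD error drives the actor's update.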
no code implementations • 29 Nov 2022 • Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters
We find that PAC-Bayes bounds are a useful tool for designing offline bandit algorithms with performance guarantees.
1 code implementation • 24 May 2022 • Çağatay Yıldız, Melih Kandemir, Barbara Rakitsch
We study uncertainty-aware modeling of the continuous-time dynamics of interacting objects.
no code implementations • 7 Mar 2022 • Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters
We present a PAC-Bayesian analysis of lifelong learning.
no code implementations • 2 Mar 2022 • Abdullah Akgül, Gozde Unal, Melih Kandemir
The learning model is aware when a new mode appears, but it cannot access the true modes of individual training sequences.
no code implementations • 6 Dec 2021 • Krista Longi, Jakob Lindinger, Olaf Duennbier, Melih Kandemir, Arto Klami, Barbara Rakitsch
These models have a natural interpretation as discretized stochastic differential equations, but inference for long sequences with fast and slow transitions is difficult.
no code implementations • 5 Jul 2021 • Juliane Weilbach, Sebastian Gerwinn, Christian Weilbach, Melih Kandemir
Understanding physical phenomena oftentimes means understanding the underlying dynamical system that governs observational measurements.
2 code implementations • ICLR 2022 • Melih Kandemir, Abdullah Akgül, Manuel Haussmann, Gozde Unal
A probabilistic classifier with reliable predictive uncertainties i) fits successfully to the target domain data, ii) provides calibrated class probabilities in difficult regions of the target domain (e.g., class overlap), and iii) accurately identifies queries coming out of the target domain and rejects them.
no code implementations • 14 Oct 2020 • Andreas Look, Simona Doneva, Melih Kandemir, Rainer Gemulla, Jan Peters
In this paper, we introduce an efficient backpropagation scheme for non-constrained implicit functions.
no code implementations • 17 Jun 2020 • Manuel Haussmann, Sebastian Gerwinn, Andreas Look, Barbara Rakitsch, Melih Kandemir
Neural Stochastic Differential Equations model a dynamical environment with neural nets assigned to their drift and diffusion terms.
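A neural SDE of this kind is typically simulated with the Euler-Maruyama scheme. The sketch below uses simple hand-written functions (`drift_net`, `diffusion_net`) as hypothetical stand-ins for the neural nets; everything here is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift_net(x):
    """Stand-in for the drift network: a mean-reverting drift."""
    return -x

def diffusion_net(x):
    """Stand-in for the diffusion network: constant diffusion."""
    return 0.1 * np.ones_like(x)

def euler_maruyama(x0, dt, steps):
    """Simulate dX = f(X) dt + g(X) dW with the Euler-Maruyama scheme."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + drift_net(x) * dt + diffusion_net(x) * dw
        path.append(x.copy())
    return np.stack(path)

path = euler_maruyama([1.0], dt=0.01, steps=100)
```

In a real neural SDE, `drift_net` and `diffusion_net` would be trained networks; the integration loop stays the same.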
no code implementations • 16 Jun 2020 • Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters
Our deterministic approximation of the transition kernel is applicable to both training and prediction.
no code implementations • 2 Dec 2019 • Andreas Look, Melih Kandemir
Neural Ordinary Differential Equations (N-ODEs) are a powerful building block for learning systems, which extend residual networks to a continuous-time dynamical system.
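The residual-to-continuous connection mentioned here can be seen by comparing a residual block with an Euler discretization of the ODE it limits to. The vector field `f` below is a hypothetical stand-in for a neural net.

```python
import numpy as np

def f(x):
    """Stand-in vector field; in an N-ODE this would be a neural net."""
    return np.tanh(x)

def residual_step(x):
    """Residual block: x_{k+1} = x_k + f(x_k)."""
    return x + f(x)

def euler_ode(x, t0=0.0, t1=1.0, steps=100):
    """Continuous-time view: dx/dt = f(x), solved with explicit Euler.
    A residual block is the special case of a single step with dt = 1."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        x = x + dt * f(x)
    return x

x0 = np.array([0.5])
one_residual = residual_step(x0)   # one discrete residual update
continuous = euler_ode(x0)         # many small steps over the same interval
```

Shrinking the step size turns the stack of residual updates into an ODE solve, which is the continuous-time limit the excerpt refers to.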
1 code implementation • 27 Jun 2019 • Manuel Haussmann, Fred A. Hamprecht, Melih Kandemir
As active learning operates in a scarce-data regime, we bootstrap from a well-known heuristic that filters out the bulk of data points on which all heuristics would agree, and learn a policy that warps the top portion of this ranking in the way most beneficial to the character of a specific data distribution.
1 code implementation • AABI Symposium 2021 • Manuel Haussmann, Sebastian Gerwinn, Melih Kandemir
We propose a novel method for closed-form predictive distribution modeling with neural nets.
10 code implementations • NeurIPS 2018 • Murat Sensoy, Lance Kaplan, Melih Kandemir
Deterministic neural nets have been shown to learn effective predictors on a wide range of machine learning problems.
1 code implementation • 19 May 2018 • Manuel Haussmann, Fred A. Hamprecht, Melih Kandemir
We propose a new Bayesian Neural Net formulation that affords variational inference for which the evidence lower bound is analytically tractable subject to a tight approximation.
1 code implementation • CVPR 2017 • Manuel Haussmann, Fred A. Hamprecht, Melih Kandemir
Gaussian Processes (GPs) are effective Bayesian predictors.