no code implementations • 29 May 2024 • Abishek Sriramulu, Nicolas Fourrier, Christoph Bergmeir
Graph Neural Networks (GNNs) have gained significant traction in the forecasting domain, especially for their capacity to account simultaneously for intra-series temporal correlations and inter-series relationships.
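As a hedged illustration of this dual modelling, the minimal PyTorch sketch below pairs a shared GRU (intra-series temporal correlations) with one graph-convolution-style mixing layer (inter-series relationships). All names (`SeriesGNN`, `hidden_dim`, `adj`) and sizes are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' architecture) of a GNN forecaster that
# combines per-series temporal encoding with message passing across series.
import torch
import torch.nn as nn

class SeriesGNN(nn.Module):
    def __init__(self, n_series: int, hidden_dim: int = 32):
        super().__init__()
        # Shared GRU encodes each series' own history (intra-series correlations).
        self.temporal = nn.GRU(input_size=1, hidden_size=hidden_dim, batch_first=True)
        # One graph-convolution-style layer mixes information across related series.
        self.mix = nn.Linear(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)  # one-step-ahead forecast per series

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n_series, window, 1), adj: (n_series, n_series) row-normalised graph.
        _, h = self.temporal(x)                # h: (1, n_series, hidden_dim)
        h = h.squeeze(0)
        h = torch.relu(adj @ self.mix(h) + h)  # neighbour aggregation + skip
        return self.head(h).squeeze(-1)        # (n_series,) next-step forecasts

model = SeriesGNN(n_series=5)
x = torch.randn(5, 24, 1)                      # 5 series, 24 past observations
adj = torch.softmax(torch.randn(5, 5), dim=1)  # placeholder dependency graph
print(model(x, adj).shape)                     # torch.Size([5])
```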
1 code implementation • 6 Dec 2023 • Abishek Sriramulu, Nicolas Fourrier, Christoph Bergmeir
In this paper, we propose a hybrid approach that combines neural networks with statistical structure-learning models to self-learn dependencies and construct a dynamically changing dependency graph from multivariate data, enabling the use of GNNs for multivariate forecasting even when a well-defined graph does not exist.
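One common way to self-learn such an adjacency, shown in the sketch below, is to make node embeddings learnable and define the graph by their sparsified, row-normalised similarity, so the graph is trained end-to-end with the forecaster. This reflects only the neural side of the hybrid described above; the statistical structure-learning component is omitted, and all names here are assumptions, not the paper's exact formulation.

```python
# A hedged sketch of a self-learned dependency graph: learnable node embeddings
# whose similarity defines a sparse, row-normalised adjacency matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedGraph(nn.Module):
    def __init__(self, n_series: int, emb_dim: int = 16, k: int = 3):
        super().__init__()
        self.src = nn.Parameter(torch.randn(n_series, emb_dim))
        self.dst = nn.Parameter(torch.randn(n_series, emb_dim))
        self.k = k  # keep only the k strongest neighbours per series

    def forward(self) -> torch.Tensor:
        scores = F.relu(self.src @ self.dst.t())          # pairwise affinities
        topk = scores.topk(self.k, dim=1).values[:, -1:]  # per-row threshold
        scores = scores.masked_fill(scores < topk, float('-inf'))
        return torch.softmax(scores, dim=1)               # row-normalised adjacency

graph = LearnedGraph(n_series=5)
adj = graph()     # differentiable: can be trained jointly with the forecaster
print(adj.shape)  # torch.Size([5, 5])
```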
no code implementations • 29 Sep 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen
Neural networks usually excel at learning a single task.
1 code implementation • 6 Mar 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Neural networks suffer from catastrophic forgetting and are unable to learn new tasks sequentially without guaranteed stationarity in the data distribution.
no code implementations • 1 Jan 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Catastrophic forgetting occurs when a neural network is trained sequentially on multiple tasks: its weights are continuously modified and, as a result, the network loses its ability to solve previously learned tasks.
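The effect is easy to reproduce. The self-contained toy below fits a small network to task A, then to task B, and shows the loss on task A climbing back up; the tasks and model are illustrative assumptions only.

```python
# A minimal demonstration of catastrophic forgetting: sequential training on a
# second task overwrites the weights fitted to the first.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.linspace(-1, 1, 64).unsqueeze(1)
task_a, task_b = torch.sin(3 * x), torch.cos(3 * x)  # two toy regression tasks

def fit(target, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), target).backward()
        opt.step()

fit(task_a)
print('task A loss after training A:', loss_fn(net(x), task_a).item())  # small
fit(task_b)  # sequential training on B continuously modifies the weights
print('task A loss after training B:', loss_fn(net(x), task_a).item())  # large
```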
1 code implementation • 18 Jul 2020 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen
Learning to learn (L2L) trains a meta-learner to assist the learning of a task-specific base learner.
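As a hedged sketch of this setup, the snippet below uses a tiny MLP as the meta-learner: it maps each raw gradient value of the base learner to an update step, replacing a hand-designed rule such as SGD. In full L2L the meta-learner would itself be trained so the base learner improves quickly; names and sizes here are illustrative, not the paper's meta-learner.

```python
# A minimal L2L sketch: a meta-learner proposes per-parameter updates for a
# task-specific base learner instead of a fixed optimisation rule.
import torch
import torch.nn as nn

base = nn.Linear(4, 1)                                               # base learner
meta = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # meta-learner

x, y = torch.randn(32, 4), torch.randn(32, 1)
loss = nn.functional.mse_loss(base(x), y)
grads = torch.autograd.grad(loss, base.parameters())

with torch.no_grad():
    for p, g in zip(base.parameters(), grads):
        # The meta-learner maps each raw gradient value to an update step; in
        # full L2L it is trained so that the base learner's loss falls quickly.
        update = meta(g.reshape(-1, 1)).reshape(g.shape)
        p -= update
```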
no code implementations • 25 Sep 2019 • Nicholas I-Hsien Kuo, Mehrtash T. Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen
This paper contrasts the two canonical recurrent neural networks (RNNs), the long short-term memory (LSTM) and the gated recurrent unit (GRU), to propose our novel lightweight RNN, the Extrapolated Input for Network Simplification (EINS).
no code implementations • 27 Sep 2018 • Nicholas I.H. Kuo, Mehrtash T. Harandi, Hanna Suominen, Nicolas Fourrier, Christian Walder, Gabriela Ferraro
It is unclear whether the extensively applied long short-term memory (LSTM) is an optimised architecture for recurrent neural networks.
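To make concrete what the fixed LSTM architecture commits to, the sketch below writes out the standard cell update; a study like the one hinted at above would question each of these hard-coded gates and nonlinearities. The formulation is the textbook convention, not anything specific to this paper.

```python
# The standard LSTM cell update, written out explicitly.
import torch

def lstm_step(x, h, c, W, U, b):
    # x: input; h, c: hidden and cell state; W, U, b stack all four gates.
    gates = x @ W + h @ U + b
    i, f, g, o = gates.chunk(4, dim=-1)
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
    c_new = f * c + i * torch.tanh(g)  # gated update of the cell state
    h_new = o * torch.tanh(c_new)      # exposed hidden state
    return h_new, c_new

d_in, d_h = 8, 16
x, h, c = torch.randn(1, d_in), torch.zeros(1, d_h), torch.zeros(1, d_h)
W, U, b = torch.randn(d_in, 4 * d_h), torch.randn(d_h, 4 * d_h), torch.zeros(4 * d_h)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)  # torch.Size([1, 16]) torch.Size([1, 16])
```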