no code implementations • 1 May 2024 • Enrico Lopedoto, Maksim Shekhunov, Vitaly Aksenov, Kizito Salako, Tillman Weyde
Our regularizer, called DLoss, penalises differences between the model's derivatives and the derivatives of the data-generating function as estimated from the training data.
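The idea in this entry can be sketched in code. The following is an illustrative reconstruction, not the authors' implementation: it assumes 1-D inputs, estimates the data-generating function's derivative by finite differences between neighbouring training points, and penalises the squared gap to the model's (numerically estimated) derivative. The function name `dloss` and all details are hypothetical.

```python
import numpy as np

def dloss(model_f, x_train, y_train, eps=1e-4):
    """Hypothetical sketch of a derivative-matching regulariser.

    The data derivative is estimated by finite differences between
    neighbouring (sorted) training points; the model derivative is
    estimated numerically at the same midpoints; the penalty is the
    mean squared difference between the two.
    """
    order = np.argsort(x_train)
    x, y = x_train[order], y_train[order]
    # Finite-difference estimate of the data-generating function's derivative.
    data_deriv = (y[1:] - y[:-1]) / (x[1:] - x[:-1])
    mid = 0.5 * (x[1:] + x[:-1])
    # Central-difference estimate of the model's derivative at the midpoints.
    model_deriv = (model_f(mid + eps) - model_f(mid - eps)) / (2 * eps)
    return np.mean((model_deriv - data_deriv) ** 2)
```

Under this sketch, a model whose slope matches the data everywhere incurs zero penalty, while a model with the wrong slope is penalised even if it interpolates the training points.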
no code implementations • 27 Nov 2023 • Szymon Kubiak, Tillman Weyde, Oleksandr Galkin, Dan Philps, Ram Gopal
We present a novel process for generating synthetic datasets tailored to assess asset allocation methods and construct portfolios within the fixed income universe.
no code implementations • 22 May 2023 • David Herron, Ernesto Jiménez-Ruiz, Giacomo Tarroni, Tillman Weyde
NeSy4VRD is a multifaceted resource designed to support the development of neurosymbolic AI (NeSy) research.
no code implementations • 7 Apr 2023 • Nadine El-Naggar, Pranava Madhyastha, Tillman Weyde
We conduct a theoretical analysis of linear RNNs and identify conditions for the models to exhibit exact counting behaviour.
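One sufficient configuration for exact counting in a linear RNN can be sketched as follows. This is an illustrative example, not the paper's analysis: with a recurrent weight of 1 and input weights of +1 and −1 for opening and closing brackets, the hidden state tracks the bracket balance exactly, for sequences of any length.

```python
def linear_rnn_counter(tokens):
    """Illustrative linear RNN h_t = w_h * h_{t-1} + w_x * x_t.

    With w_h = 1 and input weights +1 for '(' and -1 for ')',
    the hidden state equals the exact bracket count at every step.
    """
    h = 0.0
    for tok in tokens:
        h = 1.0 * h + (1.0 if tok == "(" else -1.0)
    return h
```

Any deviation of the recurrent weight from 1 makes the state decay or explode with sequence length, which is why such conditions matter for generalisation to longer sequences.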
no code implementations • 25 Jan 2023 • Chenxi Whitehouse, Tillman Weyde, Pranava Madhyastha
The field of visual question answering (VQA) has recently seen a surge in research focused on providing explanations for predicted answers.
no code implementations • 29 Nov 2022 • Nadine El-Naggar, Pranava Madhyastha, Tillman Weyde
Despite this and some positive empirical results for LSTMs on Dyck-1 languages, our experimental results show that LSTMs fail to learn correct counting behaviour for sequences that are significantly longer than in the training data.
no code implementations • 11 May 2022 • Simon Colton, Maria Teresa Llano, Rose Hepworth, John Charnley, Catherine V. Gale, Archie Baron, Francois Pachet, Pierre Roy, Pablo Gervas, Nick Collins, Bob Sturm, Tillman Weyde, Daniel Wolff, James Robert Lloyd
During 2015 and early 2016, the cultural application of Computational Creativity research and practice took a big leap forward, with a project where multiple computational systems were used to provide advice and material for a new musical theatre production.
1 code implementation • 5 Apr 2022 • Eric Guizzo, Tillman Weyde, Simone Scardapane, Danilo Comminiello
On the one hand, the classifier makes it possible to optimise each latent axis of the embeddings for the classification of a specific emotion-related characteristic: valence, arousal, dominance, and overall emotion.
1 code implementation • 1 Apr 2022 • Chenxi Whitehouse, Tillman Weyde, Pranava Madhyastha, Nikos Komninos
The predominant state-of-the-art approaches are based on fine-tuning PLMs on labelled fake news datasets.
no code implementations • 10 Mar 2021 • Radha Kopparti, Tillman Weyde
Abstract patterns are the best-known examples of a hard problem for neural networks in terms of generalisation to unseen data.
1 code implementation • 11 Jun 2020 • Eric Guizzo, Tillman Weyde, Giacomo Tarroni
While transfer learning assumes that the learning process for a target task will benefit from re-using representations learned for another task, anti-transfer avoids learning representations that have been learned for an orthogonal task, i.e., one that is not relevant and is potentially misleading for the target task, such as speaker identity for speech recognition or speech content for emotion recognition.
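The anti-transfer idea can be sketched as a loss term. This is a hedged illustration, not the paper's exact objective: the task loss is augmented with a penalty on the similarity between the target network's features and those of a frozen network pretrained on the orthogonal task, so the learner is pushed away from the orthogonal representation. The function names and the choice of squared cosine similarity are assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def anti_transfer_loss(task_loss, feat_target, feat_pretrained, beta=1.0):
    """Illustrative anti-transfer objective: the task loss plus a penalty
    on the (squared) similarity to features from a frozen network trained
    on the orthogonal task. beta weights the penalty."""
    penalty = cosine_sim(feat_target, feat_pretrained) ** 2
    return task_loss + beta * penalty
```

When the two feature vectors are orthogonal the penalty vanishes and only the task loss remains; when they are aligned the penalty is maximal.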
1 code implementation • 6 Mar 2020 • Eric Guizzo, Tillman Weyde, Jack Barnett Leveson
We evaluate MTS and standard convolutional layers in different architectures for emotion recognition from speech audio, using 4 datasets of different sizes.
no code implementations • 6 Mar 2020 • Radha Kopparti, Tillman Weyde
In this work, we extend RBP by realizing it as a Bayesian prior on network weights to model the identity relations.
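A Gaussian prior on weights is, as a MAP training term, equivalent to an L2 penalty pulling the weights towards the prior mean. A minimal sketch of that term, assuming the RBP prior mean is a weight pattern that hard-codes the identity relation (the function name and precision parameter are hypothetical):

```python
import numpy as np

def rbp_prior_penalty(W, W_prior, tau=10.0):
    """Illustrative MAP term for a Gaussian (Bayesian) prior on weights,
    centred on a weight pattern W_prior encoding the identity relation.
    Equivalent to an L2 penalty towards the prior mean; tau is the
    prior precision controlling how strongly the weights are pulled."""
    return 0.5 * tau * float(np.sum((W - W_prior) ** 2))
```

The penalty is zero exactly when the learned weights match the prior mean, and grows quadratically with their distance from it.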
no code implementations • 11 Nov 2019 • Daniel Philps, Artur d'Avila Garcez, Tillman Weyde
We examine an alternative called Continual Learning (CL), a memory-augmented approach which can provide transparent explanations, i.e., which memory did what and when.
1 code implementation • 22 Oct 2019 • Joaquin Perez-Lapillo, Oleksandr Galkin, Tillman Weyde
In recent years, deep learning has surpassed traditional approaches to the problem of singing voice separation.
no code implementations • 19 Jun 2019 • Roberto Confalonieri, Tillman Weyde, Tarek R. Besold, Fermín Moscoso del Prado Martín
Whilst a plethora of approaches have been developed for post-hoc explainability, only a few focus on how to use domain knowledge, and how this influences the understandability of global explanations from the users' perspective.
no code implementations • 13 Jun 2019 • Radha Kopparti, Tillman Weyde
In this work, we explore whether various factors in the neural network architecture and learning process make a difference to the generalisation of neural networks on equality detection, without and with DR units, in early and mid fusion architectures.
no code implementations • 6 Dec 2018 • Daniel Philps, Tillman Weyde, Artur d'Avila Garcez, Roy Batchelor
Investment decisions can benefit from incorporating an accumulated knowledge of the past to drive future decision making.
no code implementations • 6 Dec 2018 • Tillman Weyde, Radha Manisha Kopparti
We propose a new approach to modify standard neural network architectures, called Relation Based Patterns (RBP), with different variants for classification and prediction.
no code implementations • 4 Dec 2018 • Tillman Weyde, Radha Manisha Kopparti
The DR units create an inductive bias in the networks, so that they do learn to generalise, even from small numbers of examples, and we have found no negative effect of their inclusion in the network.
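A minimal sketch of a DR unit, under the assumption (one plausible reading of the entry above, not the authors' exact definition) that each unit outputs the absolute difference of a pair of corresponding inputs, which is zero exactly when the inputs are equal:

```python
import numpy as np

def dr_units(x1, x2):
    """Illustrative DR units: for each pair of corresponding inputs,
    output the absolute difference. The output is zero exactly when
    the two inputs are equal, giving downstream layers an explicit
    equality signal and hence an inductive bias towards generalising
    equality detection."""
    return np.abs(np.asarray(x1) - np.asarray(x2))
```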
3 code implementations • 27 Nov 2018 • Craig Macartney, Tillman Weyde
We study the use of the Wave-U-Net architecture for speech enhancement; the model was introduced by Stoller et al. for the separation of music vocals and accompaniment.
Ranked #25 on Speech Enhancement on VoiceBank + DEMAND
no code implementations • 19 Nov 2018 • Tim Laibacher, Tillman Weyde, Sepehr Jalali
In this paper, we present a novel neural network architecture for retinal vessel segmentation that improves over the state of the art on two benchmark datasets, is the first to run in real time on high resolution images, and its small memory and processing requirements make it deployable in mobile and embedded systems.
1 code implementation • International Society for Music Information Retrieval 2017 • Andreas Jansson, Eric Humphrey, Nicola Montecchio, Rachel Bittner, Aparna Kumar, Tillman Weyde
The decomposition of a music audio signal into its vocal and backing track components is analogous to image-to-image translation, where a mixed spectrogram is transformed into its constituent sources.
Ranked #1 on Speech Separation on iKala
no code implementations • 6 Oct 2017 • Son N. Tran, Srikanth Cherla, Artur Garcez, Tillman Weyde
Also, the experimental results on optical character recognition, part-of-speech tagging and text chunking demonstrate that our model is comparable to recurrent neural networks with complex memory gates while requiring far fewer parameters.
no code implementations • 6 Apr 2016 • Srikanth Cherla, Son N. Tran, Tillman Weyde, Artur d'Avila Garcez
Results show that each of the three compared models outperforms the remaining two in one of the three datasets, thus indicating that the proposed theoretical generalisation of the DRBM may be valuable in practice.
no code implementations • 6 Nov 2014 • Siddharth Sigtia, Emmanouil Benetos, Nicolas Boulanger-Lewandowski, Tillman Weyde, Artur S. d'Avila Garcez, Simon Dixon
We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance.