Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling

14 Nov 2019  ·  Daniel Stoller, Mi Tian, Sebastian Ewert, Simon Dixon ·

Convolutional neural networks (CNNs) with dilated filters such as the Wavenet or the Temporal Convolutional Network (TCN) have shown good results in a variety of sequence modelling tasks. However, efficiently modelling long-term dependencies in these sequences is still challenging. Although the receptive field of these models grows exponentially with the number of layers, computing the convolutions over very long sequences of features in each layer is time- and memory-intensive, prohibiting the use of longer receptive fields in practice. To increase efficiency, we make use of the "slow feature" hypothesis stating that many features of interest are slowly varying over time. For this, we use a U-Net architecture that computes features at multiple time-scales and adapt it to our auto-regressive scenario by making convolutions causal. We apply our model ("Seq-U-Net") to a variety of tasks including language and audio generation. In comparison to TCN and Wavenet, our network consistently saves memory and computation time, with speed-ups for training and inference of over 4x in the audio generation experiment in particular, while achieving a comparable performance in all tasks.
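The key ingredient the abstract mentions is making every convolution causal, so that the output at step t depends only on inputs up to t. A minimal sketch of this idea, assuming NumPy (the function name `causal_conv1d` is illustrative, not from the paper's code):

```python
import numpy as np

def causal_conv1d(x, w):
    """Causal 1-D convolution: output at step t uses only inputs x[0..t].

    Causality is enforced by left-padding the input with k-1 zeros, so the
    filter never looks into the future (as in WaveNet/TCN-style causal layers).
    """
    k = len(w)
    xp = np.concatenate([np.zeros(k - 1), x])  # pad on the left only
    # y[t] = sum_j w[j] * x[t - j]
    return np.array([xp[t:t + k] @ w[::-1] for t in range(len(x))])

# Impulse response: the filter taps appear at t >= 0 only, never earlier.
y = causal_conv1d(np.array([1., 0., 0., 0.]), np.array([1., 2., 3.]))
# y == [1., 2., 3., 0.]
```

Seq-U-Net stacks such causal layers with strided downsampling and upsampling so that deeper layers operate on coarser ("slower") time-scales, which is where the memory and compute savings over dilation-only models come from.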


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Music Modeling | JSB Chorales | Seq-U-Net | NLL | 8.173 | #9 |
| Music Modeling | JSB Chorales | Seq-U-Net | Parameters | 522K | #1 |
| Music Modeling | JSB Chorales | TCN | NLL | 8.154 | #8 |
| Music Modeling | JSB Chorales | TCN | Parameters | 534K | #1 |
| Music Modeling | Nottingham | TCN | NLL | 2.783 | #2 |
| Music Modeling | Nottingham | TCN | Parameters | 1.7M | #1 |
| Music Modeling | Nottingham | Seq-U-Net | NLL | 2.97 | #3 |
| Music Modeling | Nottingham | Seq-U-Net | Parameters | 1.7M | #1 |
| Language Modelling | Penn Treebank (Character Level) | Seq-U-Net | Bit per Character (BPC) | 1.3 | #16 |
| Language Modelling | Penn Treebank (Character Level) | Seq-U-Net | Parameters | 5.9M | #13 |
| Language Modelling | Penn Treebank (Character Level) | TCN | Bit per Character (BPC) | 1.31 | #18 |
| Language Modelling | Penn Treebank (Character Level) | TCN | Parameters | 5.9M | #13 |
| Language Modelling | Penn Treebank (Word Level) | Seq-U-Net | Test perplexity | 107.95 | #42 |
| Language Modelling | Penn Treebank (Word Level) | Seq-U-Net | Parameters | 14.9M | #29 |
| Language Modelling | Penn Treebank (Word Level) | TCN | Test perplexity | 108.47 | #43 |
| Language Modelling | Penn Treebank (Word Level) | TCN | Parameters | 14.7M | #30 |

Methods