Dimension Mixer: A Generalized Method for Structured Sparsity in Deep Neural Networks

30 Nov 2023 · Suman Sapkota, Binod Bhattarai

The recent success of multiple neural architectures such as CNNs, Transformers, and MLP-Mixers motivated us to look for similarities and differences between them. We found that these architectures can be interpreted through the lens of a general concept of dimension mixing. Research on coupling flows and the butterfly transform shows that partial and hierarchical signal mixing schemes are sufficient for efficient and expressive function approximation. In this work, we study group-wise sparse, non-linear, multi-layered and learnable mixing schemes of inputs and find that they are complementary to many standard neural architectures. Following these observations and drawing inspiration from the Fast Fourier Transform, we generalize the butterfly structure to use non-linear mixer functions, allowing an MLP to serve as the mixing function; we call this Butterfly MLP. The same scheme can also mix along the sequence dimension of Transformer-based architectures, yielding Butterfly Attention. Experiments on the CIFAR and LRA datasets demonstrate that the proposed Non-Linear Butterfly Mixers are efficient and scale well when the host architectures are used as the mixing function. Additionally, we propose a Patch-Only MLP-Mixer for processing spatial 2D signals, demonstrating a different dimension mixing strategy.
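To make the butterfly-style mixing concrete, below is a minimal sketch (an assumed interface, not the authors' released code) of a radix-r "Butterfly MLP": each layer partitions the d input dimensions into strided groups of size r, mixes every group with a small MLP, and the stride grows by a factor of r per layer, so after roughly log_r(d) layers every input dimension can influence every output dimension, mirroring the FFT butterfly. Class names, hyperparameters, and weight sharing here are illustrative assumptions.

```python
# Hypothetical sketch of a non-linear butterfly mixer; not the paper's implementation.
import torch
import torch.nn as nn


class ButterflyMLPLayer(nn.Module):
    def __init__(self, dim, radix, stride, hidden_mult=2):
        super().__init__()
        assert dim % (radix * stride) == 0, "dim should be a power of the radix for this sketch"
        self.radix, self.stride = radix, stride
        # One small MLP per layer, applied to every strided group of `radix` values.
        self.mlp = nn.Sequential(
            nn.Linear(radix, radix * hidden_mult),
            nn.GELU(),
            nn.Linear(radix * hidden_mult, radix),
        )

    def forward(self, x):                      # x: (batch, dim)
        b, d = x.shape
        r, s = self.radix, self.stride
        # Regroup into (batch, blocks, stride, radix): within each block of r*s
        # dimensions, elements spaced `stride` apart form one group of size r.
        x = x.view(b, d // (r * s), r, s).transpose(2, 3)
        x = self.mlp(x)                        # non-linear mixing within each group
        return x.transpose(2, 3).reshape(b, d)


class ButterflyMLP(nn.Module):
    def __init__(self, dim, radix=4):
        super().__init__()
        layers, stride = [], 1
        while stride < dim:                    # about log_radix(dim) layers
            layers.append(ButterflyMLPLayer(dim, radix, stride))
            stride *= radix
        self.layers = nn.Sequential(*layers)

    def forward(self, x):
        return self.layers(x)


# Usage example: 3 layers of radix-4 mixing fully connect 64 dimensions.
x = torch.randn(8, 64)
print(ButterflyMLP(dim=64, radix=4)(x).shape)  # torch.Size([8, 64])
```

Butterfly Attention follows the same idea along the sequence axis: attention is restricted to the strided groups of each butterfly stage instead of the full sequence, trading dense mixing for log-depth sparse mixing.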

PDF Abstract

Results from the Paper


Ranked #14 on Long-range modeling on LRA (Pathfinder-X metric)

Task: Long-range modeling · Dataset: LRA · Model: Butterfly Attention

Metric        Value   Global Rank
ListOps       37.2    #21
Text          65.4    #20
Retrieval     81.51   #17
Image         43.82   #21
Pathfinder    71.23   #24
Pathfinder-X  76.72   #14
