Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities

2 Feb 2024 · Zhifeng Kong, Arushi Goel, Rohan Badlani, Wei Ping, Rafael Valle, Bryan Catanzaro

Augmenting large language models (LLMs) to understand audio -- including non-speech sounds and non-verbal speech -- is critically important for diverse real-world applications of LLMs. In this paper, we propose Audio Flamingo, a novel audio language model with 1) strong audio understanding abilities, 2) the ability to quickly adapt to unseen tasks via in-context learning and retrieval, and 3) strong multi-turn dialogue abilities. We introduce a series of training techniques, architecture design, and data strategies to enhance our model with these abilities. Extensive evaluations across various audio understanding tasks confirm the efficacy of our method, setting new state-of-the-art benchmarks. Our demo website is https://audioflamingo.github.io/ and the code is open-sourced at https://github.com/NVIDIA/audio-flamingo.
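The retrieval-augmented in-context learning the abstract describes can be pictured as a simple pipeline: embed the query audio, fetch the most similar training clips from a datastore, and prepend them as few-shot examples ahead of the query. Below is a minimal sketch of that general technique in Python; the names (`datastore`, `retrieve_top_k`, `build_icl_prompt`) are hypothetical, random vectors stand in for a real audio encoder's embeddings, and this is an illustration of the idea, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical datastore of (audio embedding, caption) pairs from a training
# set. In practice the embeddings would come from an audio encoder; here
# random vectors are placeholders.
rng = np.random.default_rng(0)
datastore = [(rng.normal(size=128), f"caption for training clip {i}")
             for i in range(100)]

def retrieve_top_k(query_emb, k=4):
    """Return the k datastore entries most similar to the query
    embedding, ranked by cosine similarity."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(datastore, key=lambda pair: cosine(query_emb, pair[0]),
                    reverse=True)
    return ranked[:k]

def build_icl_prompt(query_emb, k=4):
    """Assemble an interleaved few-shot prompt: k retrieved audio/caption
    pairs as in-context examples, then the query audio with an open slot."""
    parts = []
    for _emb, caption in retrieve_top_k(query_emb, k):
        parts.append(f"<audio>\nCaption: {caption}")  # retrieved example
    parts.append("<audio>\nCaption:")  # query clip; model completes this
    return "\n\n".join(parts)

query = rng.normal(size=128)  # stand-in for the query clip's embedding
print(build_icl_prompt(query, k=4))
```

In this framing, the "4-shot" setting reported in the results below corresponds to `k=4` retrieved examples in the prompt.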

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Retrieval-augmented Few-shot In-context Audio Captioning | AudioCaps | Audio Flamingo (4-shot) | CIDEr | 0.518 | #1 |
| Zero-shot Audio Captioning | AudioCaps | Audio Flamingo | BLEU-4 | 14.3 | #1 |
| | | | METEOR | 20.5 | #1 |
| | | | ROUGE-L | 40.8 | #1 |
| | | | CIDEr | 50.2 | #1 |
| | | | SPICE | 15.1 | #1 |
| | | | SPIDEr | 32.6 | #1 |
| Audio Captioning | Clotho | Audio Flamingo (Pengi trainset) | CIDEr | 0.489 | #2 |
| | | | SPIDEr | 0.312 | #2 |
| | | | SPICE | 0.134 | #3 |
| | | | BLEU-4 | 17.4 | #2 |
| | | | METEOR | 18.7 | #2 |
| | | | ROUGE-L | 39.4 | #2 |
| Acoustic Scene Classification | CochlScene | Audio Flamingo | 1:1 Accuracy | 0.830 | #1 |
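Values are reproduced as listed on the source page; note that the Zero-shot Audio Captioning row reports metrics on a ×100 scale, while the other captioning rows use raw scores (hence CIDEr appears as 50.2 in one row and 0.518 in another). As a consistency check, SPIDEr is defined as the arithmetic mean of SPICE and CIDEr, which matches both captioning rows above up to rounding:

```python
# SPIDEr is the arithmetic mean of SPICE and CIDEr.
def spider(spice: float, cider: float) -> float:
    return (spice + cider) / 2

print(spider(15.1, 50.2))    # 32.65  -> reported as 32.6 (zero-shot AudioCaps)
print(spider(0.134, 0.489))  # 0.3115 -> reported as 0.312 (Clotho)
```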
