SwG-former: A Sliding-Window Graph Convolutional Network for Simultaneous Spatial-Temporal Information Extraction in Sound Event Localization and Detection

Sound event localization and detection (SELD) comprises sound event detection (SED) and direction-of-arrival (DoA) estimation. SED mainly relies on temporal dependencies to distinguish sound classes, while DoA estimation depends on spatial correlations to estimate source directions. This paper addresses the need to extract spatial and temporal information from audio signals simultaneously to improve SELD performance. A novel block, the sliding-window graph-former (SwG-former), is designed to learn the temporal context of sound events based on their spatial correlations. The SwG-former block transforms audio signals into a graph representation and constructs graph vertices that capture spatial correlations at higher levels of abstraction. It uses sliding windows of different sizes to adapt to varying sound event durations, aggregates temporal features with similar spatial information, and incorporates multi-head self-attention (MHSA) to model global information. Furthermore, as the cornerstone of message passing, a robust Conv2dAgg function is proposed and embedded into the block to aggregate the features of neighboring vertices. The resulting SwG-former model, which stacks SwG-former blocks, outperforms recent advanced SELD models. The SwG-former block is also integrated into the event-independent network version 2 (EINV2), yielding SwG-EINV2, which surpasses state-of-the-art (SOTA) methods under the same acoustic conditions.
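The core idea of the block can be illustrated with a small sketch: within a sliding window over time frames, each frame (vertex) selects its most spatially similar neighbors and aggregates their features. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the function name `sliding_window_graph_agg`, the cosine-similarity neighbor selection, and the mean aggregation (standing in for the proposed Conv2dAgg) are all hypothetical simplifications.

```python
import numpy as np

def sliding_window_graph_agg(x, window=5, k=2):
    """Hypothetical sketch: for each time frame, pick the k most
    spatially similar frames inside a sliding window and aggregate
    their features with the centre frame by a simple mean (a
    stand-in for the paper's Conv2dAgg aggregation).

    x      : (T, F) array of frame-level spatial features
    window : sliding-window length in frames (odd)
    k      : number of neighbour vertices to aggregate
    """
    T, F = x.shape
    half = window // 2
    # Unit-normalise rows so dot products give cosine similarity.
    xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    out = np.empty_like(x)
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        cand = [i for i in range(lo, hi) if i != t]
        sims = xn[cand] @ xn[t]                 # similarity to centre frame
        top = np.argsort(sims)[::-1][:k]        # k most similar neighbours
        nbrs = [cand[i] for i in top]
        out[t] = x[[t] + nbrs].mean(axis=0)     # aggregate centre + neighbours
    return out
```

In the full block, several such windows of different sizes would run in parallel to match sound events of different durations, and MHSA would then mix the aggregated features globally across time.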
