Decoding of Selective Attention to Speech From Ear-EEG Recordings

10 Jan 2024 · Mike Thornton, Danilo Mandic, Tobias Reichenbach

Many people with hearing loss struggle to comprehend speech in crowded auditory scenes, even when they are using hearing aids. Future hearing technologies that can identify the focus of a listener's auditory attention, and selectively amplify that sound alone, could improve the experience that this patient group has with their hearing aids. In this work, we present the results of our experiments with an ultra-wearable in-ear electroencephalography (EEG) monitoring device. Participants listened to two competing speakers in an auditory attention experiment whilst their EEG was recorded. We show that typical neural responses to the speech envelope, as well as to its onsets, can be recovered from such a device, and that the morphology of the recorded responses is indeed modulated by selective attention to speech. Features of the attended and ignored speech streams can also be reconstructed from the EEG recordings, with the reconstruction quality serving as a marker of selective auditory attention. Using the stimulus-reconstruction method, we show that with this device auditory attention can be decoded from short segments of EEG recordings, just a few seconds in duration. The results provide further evidence that ear-EEG systems offer good prospects for wearable auditory monitoring as well as future cognitively-steered hearing aids.
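
To make the stimulus-reconstruction approach concrete, below is a minimal sketch of how attention decoding from EEG is commonly implemented: a linear backward model with ridge regularisation maps time-lagged EEG channels onto the speech envelope, and the decoded attention target is the speaker whose envelope correlates most strongly with the reconstruction. The function names, lag window, and ridge parameter here are illustrative assumptions and do not necessarily match the pipeline used in the paper.

```python
import numpy as np

def lag_matrix(eeg, max_lag):
    """Stack copies of the EEG channels shifted forward in time by 0..max_lag samples.

    For a backward (stimulus-reconstruction) model, the envelope at time t is
    predicted from EEG samples at times t..t+max_lag, since the neural response
    follows the stimulus. eeg has shape (n_samples, n_channels).
    """
    n_samples, n_channels = eeg.shape
    lagged = np.zeros((n_samples, n_channels * (max_lag + 1)))
    for lag in range(max_lag + 1):
        lagged[: n_samples - lag, lag * n_channels : (lag + 1) * n_channels] = eeg[lag:]
    return lagged

def train_decoder(eeg, attended_envelope, max_lag=32, ridge=1e3):
    """Fit a linear backward model by ridge regression (illustrative hyperparameters)."""
    X = lag_matrix(eeg, max_lag)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    Xty = X.T @ attended_envelope
    return np.linalg.solve(XtX, Xty)

def decode_attention(eeg, env_a, env_b, weights, max_lag=32):
    """Reconstruct the envelope from a short EEG segment and compare correlations."""
    recon = lag_matrix(eeg, max_lag) @ weights
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return ("A" if r_a > r_b else "B"), (r_a, r_b)
```

In practice the decoder would be trained on held-out data (e.g. with cross-validation over trials), and the EEG and envelopes would be band-pass filtered and downsampled to a common rate before fitting; the decision window length then determines how few seconds of ear-EEG are needed for a reliable attention decision.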
