Constantin Jehn et al.

Understanding speech in the presence of background noise, such as competing speech streams, is difficult for people with hearing impairment, and in particular for users of cochlear implants (CIs). To improve their listening experience, auditory attention decoding (AAD) aims to identify the speaker a listener attends to from electroencephalography (EEG) and to use this information to steer an auditory prosthesis towards that speech signal. In normal-hearing individuals, deep neural networks (DNNs) have been shown to improve AAD compared to simpler linear models. AAD has also been shown to be feasible in CI users with linear models; however, it had not yet been demonstrated that DNNs can yield enhanced decoding accuracies for this patient group. Here we show that attention decoding in CI users can be significantly improved through the use of a convolutional neural network (CNN). To this end, we first collected an EEG dataset on selective auditory attention from 25 CI users and then implemented both a linear model and a CNN for attention decoding. We observed superior performance of the CNN across all considered decision window sizes, ranging from 1 s to 60 s. Combined with a support vector machine (SVM) as a trainable classifier, the CNN decoder achieved a maximum mean decoding accuracy of 74% at the population level for a decision window of 60 s. Our findings illustrate that the progress made in AAD for normal-hearing participants, facilitated by the integration of DNNs, extends to CI users.
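For readers unfamiliar with this type of pipeline, the sketch below illustrates one common way a CNN-plus-SVM attention decoder can be organized: a CNN reconstructs an estimate of the attended speech envelope from an EEG decision window, the reconstruction is correlated with the two candidate speech envelopes, and an SVM classifies the attended speaker from those correlation features. This is a minimal sketch under assumed conventions; all names, shapes, and hyperparameters (sampling rate, channel count, window length, network layers) are illustrative assumptions and not the architecture reported in the paper.

```python
# Illustrative AAD sketch: CNN envelope reconstruction + SVM decision stage.
# All shapes and hyperparameters are assumptions, not the paper's settings.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

FS = 64          # assumed EEG/envelope sampling rate (Hz)
N_CHANNELS = 31  # assumed number of EEG channels
WIN_S = 10       # decision window length in seconds (the paper sweeps 1-60 s)
WIN = FS * WIN_S

class EnvelopeCNN(nn.Module):
    """Toy CNN mapping an EEG window (channels x time) to a 1-D
    reconstruction of the attended speech envelope."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        # eeg: (batch, channels, time) -> (batch, time)
        return self.net(eeg).squeeze(1)

def corr(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two 1-D signals."""
    return float(np.corrcoef(a, b)[0, 1])

def window_features(model, eeg, env_a, env_b):
    """Correlate the CNN's envelope reconstruction with both candidate
    speech envelopes; the 2-D feature vector is what the SVM classifies."""
    with torch.no_grad():
        rec = model(torch.from_numpy(eeg[None]).float()).numpy()[0]
    return [corr(rec, env_a), corr(rec, env_b)]

# --- synthetic stand-in data; real EEG and envelopes would be loaded here ---
rng = np.random.default_rng(0)
model = EnvelopeCNN(N_CHANNELS)  # assume the CNN was trained beforehand
X, y = [], []
for _ in range(40):  # 40 decision windows
    eeg = rng.standard_normal((N_CHANNELS, WIN)).astype(np.float32)
    env_a = rng.standard_normal(WIN)
    env_b = rng.standard_normal(WIN)
    X.append(window_features(model, eeg, env_a, env_b))
    y.append(int(rng.integers(0, 2)))  # attended-speaker label per window

# SVM as a trainable decision stage on top of the CNN's features.
clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("held-out decoding accuracy:", clf.score(X[30:], y[30:]))
```

In such a setup, the decision window size trades off latency against accuracy: longer windows give more stable correlation estimates (hence the peak accuracy at 60 s reported above) but respond more slowly to switches of attention, which is why performance is typically swept across window lengths.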