Electroencephalography (EEG)-based motor imagery (MI) decoding has established a novel experimental paradigm for brain-computer interface (BCI) applications that offer effective treatment to stroke-paralyzed patients. However, existing MI-EEG-based BCI systems face deployment issues because of nonstationary EEG signals, suboptimal features, and limited multi-class scalability. To tackle these issues, we propose an enhanced sparse swarm decomposition method (ESSDM) based on selfish-herd optimization and the sparse spectrum, which resolves the choice of uniform decomposition and hyper-parameters in swarm decomposition and is applied to enhance MI-EEG classification. ESSDM adopts improved swarm filtering to automatically deliver optimal frequency bands in the sparse spectrum with optimized hyper-parameters, extracting dominant oscillatory components (OCs) that significantly enhance MI activation-related sub-bands. In addition, a new fitness criterion is designed based on the Kullback–Leibler divergence of the spectral kurtosis of the obtained modes to select hyper-parameters that optimize the decomposition, avoid excessive iterations, and provide fast convergence to optimal modes. Further, fused time-frequency graph (FTFG) features are derived from the computed time-frequency representation to capture cross-channel mutual spectral information. Experimental results on the 2-class BCI III-4a and 4-class BCI IV-2a datasets reveal that the proposed FTFG feature with CapsNet classifier framework (ESSDM-FTFG-CapsNet) outperforms existing methods in subject-specific and cross-subject scenarios.
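The Kullback–Leibler fitness criterion is not given in closed form here; the sketch below illustrates one plausible reading, in which each candidate mode's spectral-kurtosis profile is compared against that of the raw signal and the mean divergence serves as the hyper-parameter fitness driving the selfish-herd optimizer. The helper names (`spectral_kurtosis`, `essdm_fitness`), the per-bin kurtosis definition, and the mean aggregation are assumptions, not the published ESSDM formulation.

```python
import numpy as np
from scipy.signal import stft

def spectral_kurtosis(x, fs, nperseg=256):
    """Excess kurtosis of the STFT magnitude in each frequency bin (one value per bin)."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    mu = mag.mean(axis=1, keepdims=True)
    sigma = mag.std(axis=1, keepdims=True) + 1e-12
    return np.mean(((mag - mu) / sigma) ** 4, axis=1) - 3.0

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two non-negative profiles after normalization."""
    p = np.clip(p, eps, None); p = p / p.sum()
    q = np.clip(q, eps, None); q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def essdm_fitness(modes, signal, fs):
    """Hypothetical fitness: mean KL distance between each mode's spectral-kurtosis
    profile and that of the raw signal. The optimizer would select decomposition
    hyper-parameters that minimize (or maximize) this score; the sign convention
    used in ESSDM is an open assumption here."""
    sk_ref = np.abs(spectral_kurtosis(signal, fs))
    return np.mean([kl_divergence(np.abs(spectral_kurtosis(m, fs)), sk_ref)
                    for m in modes])
```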
In brain-computer interface (BCI) applications, imagined speech (IMS) decoding based on electroencephalography (EEG) has established a new neuro-paradigm that offers an intuitive communication tool for physically impaired patients. However, existing IMS-EEG-based BCI systems remain difficult to deploy due to nonstationary EEG signals, suboptimal feature extraction, and constrained multi-class scalability. To address these challenges, we present a novel multivariate swarm-sparse decomposition method (MSSDM) for joint time-frequency (JTF) analysis and develop a feasible end-to-end framework for imagined speech detection from multichannel IMS-EEG signals. MSSDM employs improved multivariate swarm filtering and sparse spectrum techniques to design optimal filter banks for extracting an ensemble of channel-aligned oscillatory components (CAOCs), significantly enhancing IMS activation-related sub-bands. To enhance channel-aligned information, multivariate JTF images are constructed from the obtained CAOCs using the joint instantaneous frequency and instantaneous amplitude across channels. Further, JTF-based deep features (JTFDF) are computed using different pretrained neural networks, and the most discriminant features are mapped using two well-known feature correlation techniques: canonical correlation analysis and Hellinger distance-based correlation. The proposed method is tested on the 5-class BCI Competition DB and 6-class Coretto DB IMS datasets. Cross-subject experimental findings reveal that the novel JTFDF-feature-based classification model, MSSDM-SqueezeNet-JTFDF, achieves the highest classification performance against all existing state-of-the-art methods in imagined speech recognition.
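As a minimal sketch of the JTFDF stage only, the code below extracts deep features from JTF images with a pretrained SqueezeNet (via torchvision) and maps two feature views through canonical correlation analysis (scikit-learn). The pairing of the two views, the pooled 512-dimensional embedding, the 20-component choice, and the omission of the Hellinger distance-based correlation step are assumptions made for brevity, not the exact MSSDM-SqueezeNet-JTFDF pipeline.

```python
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.cross_decomposition import CCA

# Pretrained SqueezeNet as a fixed feature extractor for JTF images
# (one RGB image per trial, built from joint IF/IA of the CAOCs -- not shown here).
backbone = models.squeezenet1_1(weights="DEFAULT")
backbone.classifier = torch.nn.AdaptiveAvgPool2d(1)  # pool feature maps to a 512-dim vector
backbone.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224), antialias=True),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def jtf_deep_features(jtf_images):
    """jtf_images: iterable of HxWx3 uint8 arrays. Returns an (N, 512) feature matrix."""
    batch = torch.stack([preprocess(img) for img in jtf_images])
    return backbone(batch).numpy()

def cca_mapped_features(feat_a, feat_b, n_components=20):
    """Project two JTFDF views onto their top canonical directions and concatenate."""
    cca = CCA(n_components=n_components)
    a_c, b_c = cca.fit_transform(feat_a, feat_b)
    return np.hstack([a_c, b_c])
```

In this sketch the two views `feat_a` and `feat_b` could come from two different pretrained networks or two channel groups; the abstract does not specify the pairing, so treat it as illustrative.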
In visual object decoding, magnetoencephalogram (MEG) and electroencephalogram (EEG) activation patterns enable highly discriminative cognitive analysis due to their multivariate oscillatory nature. However, high noise in the recorded EEG-MEG signals and subject-specific variability make it extremely difficult to classify a subject's cognitive responses to different visual stimuli. The proposed method is a multivariate extension of the swarm-sparse decomposition method (MSSDM) for multivariate pattern analysis of EEG-MEG-based visual activation signals: an advanced technique that decomposes non-stationary multi-component signals into a finite number of channel-aligned oscillatory components, significantly enhancing visual activation-related sub-bands. MSSDM adopts multivariate swarm filtering and the sparse spectrum to automatically deliver optimal frequency bands in channel-specific sparse spectrums, resulting in improved filter banks. By combining the advantages of the multivariate SSDM and the Riemann correlation-assisted fusion feature (RCFF), the MSSDM-RCFF algorithm is investigated to improve the visual object recognition ability of EEG-MEG signals. The proposed MSSDM is evaluated on multivariate synthetic signals and multivariate EEG-MEG signals using five classifiers. The proposed fusion feature and linear discriminant analysis classifier-based framework (MSSDM-FF-LDA) outperforms all existing state-of-the-art methods used for visual object detection, achieving the highest accuracy of 81.82% under 10-fold cross-validation on multichannel EEG-MEG signals.
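The RCFF feature is not defined in this abstract, so the sketch below only illustrates the downstream classification protocol: a log-Euclidean covariance vectorization stands in for the Riemann correlation-assisted fusion feature and feeds a shrinkage LDA classifier under stratified 10-fold cross-validation. The shrinkage value and the helper names are illustrative assumptions, not the published MSSDM-FF-LDA feature.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

def logm_cov_vector(trial, shrinkage=1e-3):
    """Map one trial (channels x samples) to a log-Euclidean covariance vector."""
    c = np.cov(trial)
    c += shrinkage * np.trace(c) / c.shape[0] * np.eye(c.shape[0])  # keep the matrix SPD
    L = logm(c).real
    return L[np.triu_indices_from(L)]

def evaluate_lda_10fold(trials, labels):
    """trials: (n_trials, n_channels, n_samples) CAOC-filtered EEG-MEG epochs."""
    X = np.array([logm_cov_vector(t) for t in trials])
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(clf, X, labels, cv=cv).mean()
```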