Classification of motor imagery tasks in brain-computer interface using ensemble learning
Subject
Motor imagery tasks
Brain–computer interface
Signal processing
Artificial Intelligence
Ensemble learning
Machine learning
Multiscale principal component analysis (MSPCA)
Wavelet packet decomposition (WPD)
Date
2025-02
Abstract
Brain–computer interfaces (BCIs) use brain activity rather than the usual neuromuscular channels to enable interaction with the environment, offering a promising means of communication through which users with disabilities can operate smart home systems and other devices. Motor imagery (MI), the mental rehearsal of movements, has been shown to improve motor skills and to aid rehabilitation from movement disorders, and MI training becomes more effective when BCIs provide real-time feedback. Human–machine interactions (HMIs) enabled by BCIs can substantially improve quality of life for people with physical disabilities, allowing them to grasp objects, switch on lights, or adjust fan speed using brain signals alone. Machine learning is essential for recognizing the intentions encoded in brain signals and converting them into control commands for assistive devices. To help bring BCI-based assistive technology closer to practice, this chapter focuses on brain signals recorded from the scalp with electroencephalography (EEG). Multiscale principal component analysis (MSPCA) is employed for noise reduction, wavelet packet decomposition (WPD) for feature extraction, and subband statistical analysis for dimensionality reduction. Ensemble learning-based classifiers then process the resulting feature set to identify the MI tasks.
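The processing chain summarized in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the chapter's implementation: it uses PyWavelets for wavelet packet decomposition and scikit-learn for an ensemble classifier, the MSPCA denoising stage is replaced here by a simplified wavelet-thresholding stand-in, and all function names, parameter values, and the synthetic data are hypothetical.

```python
# Hypothetical sketch: wavelet denoising (stand-in for MSPCA), WPD feature
# extraction, subband statistics, and an ensemble classifier for MI trials.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def denoise_channel(x, wavelet="db4", level=4):
    """Simplified denoising: soft-threshold wavelet detail coefficients.
    (Stands in for MSPCA, which combines wavelet decomposition with PCA.)"""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))             # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def wpd_features(x, wavelet="db4", level=3):
    """WPD to the given level, then per-subband statistics as features."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = node.data
        feats += [np.mean(c), np.std(c), np.mean(np.abs(c)), np.sum(c ** 2)]
    return feats

def trial_features(trial):
    """Concatenate subband statistics across all EEG channels of one trial."""
    return np.hstack([wpd_features(denoise_channel(ch)) for ch in trial])

# Synthetic stand-in data: 100 trials x 8 channels x 512 samples, two MI classes.
rng = np.random.default_rng(0)
trials = rng.standard_normal((100, 8, 512))
labels = rng.integers(0, 2, size=100)

X = np.array([trial_features(t) for t in trials])
clf = RandomForestClassifier(n_estimators=200, random_state=0)  # ensemble learner
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```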
Department
Electrical and Computer Engineering
Publisher
Elsevier
Book title
Artificial Intelligence Applications for Brain–Computer Interfaces
https://doi.org/10.1016/B978-0-443-33414-6.00006-X