Classification of motor imagery tasks in brain-computer interface using ensemble learning
Subasi, Abdulhamit ; Mian Qaisar, Saeed
Date
2025-02
Abstract
Brain–computer interfaces (BCIs) use brain activity, rather than the usual neuromuscular channels, to enable interaction with the environment. BCIs therefore offer a promising communication channel that allows users with disabilities to operate smart home systems and devices. Motor imagery (MI), the mental rehearsal of movements, has been shown to improve motor skills and to support rehabilitation from movement disorders, and MI training becomes more effective when BCIs provide real-time feedback. Human–machine interactions (HMIs) enabled by BCIs can greatly improve the quality of life of people with physical disabilities, allowing them to grasp objects, turn on lights, or change a fan's speed using only their brain signals. Machine learning is essential for recognizing intentions in brain signals and converting them into control commands for assistive devices. To help bring BCI-based assistive technology closer to practice, this chapter focuses on brain signals recorded from the scalp with the electroencephalogram (EEG). Multiscale principal component analysis (MSPCA) is employed for noise reduction, and wavelet packet decomposition (WPD) is used for feature extraction. Subband statistical features are then computed to reduce dimensionality, and ensemble learning-based classifiers process the resulting feature set to identify the MI tasks.
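To illustrate the kind of pipeline the abstract describes, the following is a minimal Python sketch of WPD-based subband statistical features followed by an ensemble classifier. The wavelet choice (db4), decomposition level, the specific statistics, the random forest as the ensemble method, and the synthetic stand-in data are all assumptions for illustration; the MSPCA denoising stage discussed in the chapter is omitted here for brevity.

```python
# Sketch only: parameters and data are illustrative, not the chapter's exact setup.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def wpd_features(trial, wavelet="db4", level=4):
    """Wavelet packet decomposition followed by per-subband statistics."""
    wp = pywt.WaveletPacket(data=trial, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = node.data
        # Reduce each subband's coefficient vector to a few summary statistics.
        feats.extend([np.mean(np.abs(c)), np.std(c), skew(c), kurtosis(c)])
    return np.array(feats)

# Synthetic stand-in for single-channel MI trials: 200 trials x 512 samples, 2 classes.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 512))
y = rng.integers(0, 2, size=200)

X = np.vstack([wpd_features(t) for t in X_raw])

# Ensemble learning stage: a random forest as one possible ensemble classifier.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```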
Book title
Artificial Intelligence Applications for Brain–Computer Interfaces