Audio assisted EEG segmentation for training of Imagined speech classification model
Journal
International Conference on Electrical, Computer, and Energy Technologies, ICECET 2021
Date Issued
2021-01-01
Author(s)
Varshney, Yash Vardhan
Khan, Azizuddin
Abstract
Imagined speech segmentation is one of the major challenges in training a machine learning model. This study proposes a method of EEG segmentation for imagined speech events, which is then used to train machine learning models for EEG decoding. An experiment was performed in which EEG data were collected from four subjects performing imagined as well as overt speech tasks for six words. Audio signals were also recorded during the overt speech task to extract the onset/offset times of each uttered word. An EEG template was created using the onset/offset timestamps obtained from the audio annotation. The EEG recorded during the imagined speech task was then segmented from the continuous data using template matching. Features from the delta, theta, alpha, beta, gamma, and high-gamma frequency bands were computed from the resulting segments and used to train the classifiers. The proposed segmentation method shows an improvement of 9.38% in classification accuracy over the existing method.
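The pipeline described in the abstract (template matching for segmentation followed by band-power feature extraction) can be sketched roughly as below. This is not the authors' implementation: the sampling rate, band cutoffs, normalized cross-correlation criterion, and all function names are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code) of template-matching
# segmentation of imagined-speech EEG plus band-power feature extraction.
import numpy as np
from scipy.signal import butter, sosfiltfilt, correlate

FS = 256  # assumed sampling rate (Hz)
BANDS = {  # bands named in the abstract; cutoff values are assumptions
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 70), "high_gamma": (70, 120),
}

def segment_by_template(eeg: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Return the EEG window that best matches the template.

    eeg: (channels, samples) continuous recording from the imagined-speech task.
    template: (channels, template_samples) EEG template built from the
              audio-derived onset/offset timestamps of overt speech.
    """
    score = np.zeros(eeg.shape[1] - template.shape[1] + 1)
    for ch in range(eeg.shape[0]):
        # Cross-correlate each channel with its template and accumulate a
        # normalized score across channels.
        c = correlate(eeg[ch], template[ch], mode="valid")
        score += c / (np.linalg.norm(eeg[ch]) * np.linalg.norm(template[ch]) + 1e-12)
    start = int(np.argmax(score))
    return eeg[:, start:start + template.shape[1]]

def band_power_features(segment: np.ndarray, fs: int = FS) -> np.ndarray:
    """Mean band power per channel in each frequency band, flattened to a vector."""
    feats = []
    for low, high in BANDS.values():
        sos = butter(4, [low, min(high, fs / 2 - 1)], btype="bandpass",
                     fs=fs, output="sos")
        filtered = sosfiltfilt(sos, segment, axis=1)
        feats.append(np.mean(filtered ** 2, axis=1))  # average power per channel
    return np.concatenate(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((8, 10 * FS))      # 8 channels, 10 s of synthetic EEG
    template = eeg[:, 3 * FS:4 * FS].copy()      # stand-in for an overt-speech template
    seg = segment_by_template(eeg, template)
    print(band_power_features(seg).shape)        # (channels * bands,) feature vector
```

The resulting feature vectors would then feed a standard classifier (e.g. an SVM or LDA); the abstract does not specify which classifiers were used, so that choice is left open here.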
Subjects