Analysis of artifacts in EEG signals for building BCIs
A THESIS submitted by
SRIHARI MARUTHACHALAM
for the award of the degree of
MASTER OF SCIENCE (by Research)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
INDIAN INSTITUTE OF TECHNOLOGY MADRAS
February 2020

THESIS CERTIFICATE
This is to certify that the thesis titled
Analysis of artifacts in EEG signals for building BCIs, submitted by Srihari Maruthachalam (CS16S024), to the Indian Institute of Technology Madras, for the award of the degree of Master of Science (by Research), is a bona fide record of the research work done by him under my supervision. The contents of this thesis, in full or in parts, have not been submitted to any other Institute or University for the award of any degree or diploma.
Prof. Hema A Murthy
Research Guide
Professor
Department of Computer Science and Engineering
Indian Institute of Technology, Madras
Chennai, 600036

Place: Chennai
Date:
ACKNOWLEDGEMENTS
Firstly, I would like to thank my advisor, Prof. Hema A Murthy, for trusting me and my ability to pursue research studies. She is one of the strongest, most inspiring, and most intellectual women I know, and she consistently supported me throughout my journey at the Indian Institute of Technology Madras. She never failed to amaze me with her rational ideas, problem-solving ability, and immense writing skills. She always motivated me and kept me on track. This thesis is possible because of her generous support. I am indeed fortunate to work with her.

I want to extend my gratitude to Prof. Mriganka Sur, Massachusetts Institute of Technology, for his wise ideas, suggestions, and support. With his guidance, my research studies in neuroscience gained potential. It is my pleasure and honor to work with him. I thank Prof. Sriram Ganapathy, Indian Institute of Science, Bangalore, for ideating the artifact classification work. He is down to earth and retains the energy to improve the solution with his intellectual views. I thank him again.

I thank all my General Test Committee members, Prof. N.S. Narayanaswamy, Prof. V. Srinivasa Chakravarthy, and Prof. Mitesh Khapra. I extend my gratitude to all the faculty of the Department of Computer Science and Engineering for their enormous expertise, guidance, and assistance during my research studies.

I want to thank my friends and colleagues of the Speech and Music Technology (SMT) Lab. Special thanks to Mari Ganesh Kumar, Siddharth Agarwal, Rini Sharon, Karthik Pandia, and Saranya M S for their presence and support during my research studies. I extend my gratitude to my friends, Nauman Dawalatabad, Jom Kuriacose, Jeena J Prakash, Jilt Sebastian, P V Krishnaraj Sekhar, Arun Baby, Anju Leela Thomas, Anusha Prakash, Gayathri, Mahesh, Ashish Mishra, Vinoth Kumar, and Manish Jain.

Last but not least, I want to thank my family for their care and love.
My eternal respect and love to my father, Maruthachalam, to my mother, Kalavathy, and to my little sister, Abinayaa, for their support during my research studies.
ABSTRACT
KEYWORDS: Brain-Computer Interface; Electroencephalography; Text-To-Speech synthesis; Artifacts; Time warping techniques

A Brain-Computer Interface (BCI) is an essential mechanism that interprets human brain signals. It provides an assistive technology that enables persons with motor disabilities to communicate with the world and empowers them to lead independent lives. Common BCI devices use electroencephalography (EEG), the electrical activity recorded from the scalp. EEG signals are noisy owing to the presence of many artifacts, namely eye blinks, head movements, and jaw movements. Such artifacts corrupt the EEG signal and make EEG analysis challenging. This issue is usually addressed by locating the artifacts and excluding those EEG segments from the analysis, which can lead to a loss of useful information. In contrast, we propose a practical BCI that uses the artifacts in the low signal-to-noise-ratio EEG signal.

The objective of our work is to classify different types of artifacts, namely eye blinks, head nods, head turns, and jaw movements, in the EEG signal. The occurrences of the artifacts are first located in the EEG signal. The located artifacts are then classified using linear and dynamic time warping techniques. The located artifacts can be used by a person with a motor disability to control a smartphone. Speech synthesis applications that use eye blinks in a single-channel EEG system and jaw clenches in a four-channel EEG system are developed. Word prediction models are used for word completion, thus reducing the number of artifacts required.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
ABBREVIATIONS
NOTATION
1 Introduction
Brain-Computer Interface using artifacts signatures in Electroencephalogram
LIST OF TABLES
LIST OF FIGURES
ABBREVIATIONS
ALS
Amyotrophic Lateral Sclerosis
BCI
Brain-Computer Interface
BSS
Blind Source Separation
CCA
Canonical Correlation Analysis
CSF
Cerebrospinal Fluid
CT
Computed Tomography
DP
Dynamic Programming
DTW
Dynamic Time Warping
ECG
Electrocardiogram
ECoG
Electrocorticography
EEG
Electroencephalography
EMG
Electromyography
EOG
Electrooculography
ERD
Event Related Desynchronization
ERP
Event-Related Potentials
ERS
Event Related Synchronization
FMRI
Functional Magnetic Resonance Imaging
HCI
Human-Computer Interface
HMM
Hidden Markov Model
Hz
Hertz
ICA
Independent Component Analysis
IFCN
International Federation of Clinical Neurophysiology
LTW
Linear Time Warping
MEG
Magnetoencephalography
MRI
Magnetic Resonance Imaging
ms
Milliseconds
PCA
Principal Component Analysis
PET
Positron Emission Tomography
RNN
Recurrent Neural Network
SMR
Sensorimotor Rhythms
SNR
Signal to Noise Ratio
SSVEP
Steady-State Visually Evoked Potentials
TTS
Text To Speech
NOTATION

δ  Delta wave in the electroencephalogram signal
θ  Theta wave in the electroencephalogram signal
α  Alpha wave in the electroencephalogram signal
β  Beta wave in the electroencephalogram signal
γ  Gamma wave in the electroencephalogram signal
Ω  Impedance of the electroencephalography electrodes
η  The threshold used to detect artifacts in EEG
σ  The standard deviation of the artifact signal
M_n  The n-th sample in the moving average of the electroencephalogram signal

CHAPTER 1
Introduction
Assistive technology enables disabled people to communicate and function more or less normally. It can be a device or software. Further, assistive technology lessens the workload of caregivers. Assistive technology protects people from isolation, exclusion, and being locked in; it also helps diminish the influence of disease, limitation, and disability on a person, their family, and the community. One variant of assistive technology is the brain-computer interface (BCI), where neural electrical signals from the human brain are captured and interpreted as commands to control real-world devices. In this thesis, we propose and develop an innovative artifact-based BCI.
The human brain is the principal organ of the human nervous system. It manages most of the activities of the human body, namely, receiving, processing, integrating, and coordinating the information from the senses. Moreover, it plays a crucial role in decision making and sends instructions to the body. The brain is enclosed in, and shielded by, the skull of the head.

In recent times, a family of imaging techniques has been applied for examining the brain functions of humans. These techniques include positron emission tomography (PET), magnetic resonance imaging (MRI), and computed tomography (CT). The measurements obtained from these techniques can produce two- or three-dimensional human brain images with exceptional spatial resolution. On the other hand, electroencephalography (EEG), magnetoencephalography (MEG), and electrocorticography (ECoG) have an excellent temporal resolution for data processing compared with the imaging techniques.

Among the mentioned brain signal sensing techniques, EEG is comparatively the cheapest to record. EEG is a non-invasive technique to record the electrical activity of the human brain from the scalp. EEG measures potential differences resulting from ionic current flow among the neurons of the brain. In the year 1929, Hans Berger measured the electrical activity over the human scalp and coined the term electroencephalogram for representing human brain electric potentials. It was also inferred that the EEG signal fluctuates over time and is related to the prevailing cognitive states of subjects. Over the last few decades, researchers have actively studied the relationship between cognitive processes and the EEG signal.

EEG electrodes measure the relative electric potentials directly on the human scalp. When the brain nerve cells are activated, ionic potentials are produced. The electrodes placed over the scalp convert these ionic potentials into electrical potentials, which can be measured and stored.
Since EEG is collected directly from the human scalp surface, the method can be applied frequently to healthy adults, children, and patients without risk or limitation. Since the EEG signal contains significant information corresponding to various physiological states of the brain, it is a commonly used tool for observing various neurological disorders.
A brain-computer interface (BCI) is a communication technique between an individual and external devices, in which captured brain signals are translated into commands. A BCI is also referred to as a brain-machine interface (BMI), a direct neural interface (DNI), a neural-control interface (NCI), or a mind-machine interface (MMI). Currently, there are only a limited number of interfaces even for simple tasks, primarily because brain signals measured across sessions and across subjects are noisy and challenging to interpret. On the other hand, standard human-computer interfaces (HCI) like the touchpad, buttons, gestures, and voice recognition are more robust. Nevertheless, BCIs are often beneficial to people who have lost control over most of the muscles that could otherwise have been used as a medium of communication. For people who suffer from total or partial locked-in syndrome, BCIs allow communication with their surroundings. BCIs also improve a person's independence and confidence.

1.3 Artifacts
Typically, the recorded neural EEG signals are in the range of microvolts, and they can be concealed by potentials generated from non-cerebral sources, which are called artifacts. So far, clinical applications and academic studies have considered the presence of artifacts a constant problem during the recording of brain activity. Among all the types of artifacts, electromyography (EMG) and electrooculography (EOG) artifacts are two notable contributors to physiological artifacts.

The most popular method for ocular artifact removal is based on linear combination and regression. In clinical and academic practice, the data affected by artifacts is discarded in most situations, although this leads to significant information loss. There exists a variety of techniques for artifact separation and removal. By using a simple filtering technique, such as bandpass filters, the separation can be achieved in the frequency domain. Methods like blind source separation (BSS) [Joyce et al., 2004], canonical correlation analysis (CCA) [Lin et al., 2018], and independent component analysis (ICA) [Delorme et al., 2007], [LeVan et al., 2006], [Castellanos and Makarov, 2006], [Wang et al., 2015], [Tong et al., 2001] are currently employed in signal processing for eliminating the artifacts.

Minimal muscular movements like eye blinks, jaw movements, or head movements produce huge deflections in the EEG signal. Building an artifact detection and classification model can help augment a BCI. For example, associating a set of eye blink patterns with commands or responses to the environment enables people with partial locked-in syndrome to communicate.
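To make the idea of large artifact deflections concrete, a deliberately simple sketch of how such deflections might be located is shown below. This is our illustrative example, not the method proposed in this thesis: the window length and threshold factor are arbitrary choices, only NumPy is assumed, and the moving average and threshold merely echo the M_n and η symbols of the notation list.

```python
import numpy as np

def detect_artifacts(eeg, fs, win=0.25, k=3.0):
    """Flag samples whose moving-average magnitude exceeds a threshold.

    A hedged sketch: `win` (window length in seconds) and `k` (threshold
    in standard deviations above the mean) are illustrative values, not
    parameters taken from the thesis.
    """
    n = max(1, int(win * fs))
    # Moving average of the rectified signal (the M_n of the notation list).
    smooth = np.convolve(np.abs(eeg), np.ones(n) / n, mode="same")
    eta = smooth.mean() + k * smooth.std()   # threshold (eta)
    return smooth > eta

# Example: low-amplitude background with one large blink-like deflection.
fs = 128
x = 0.1 * np.random.default_rng(0).standard_normal(fs * 4)
x[200:230] += 10.0                            # simulated artifact
mask = detect_artifacts(x, fs)
assert mask[200:230].any() and not mask[:100].any()
```

The point of the sketch is only that artifacts are large relative to background EEG, so even a crude amplitude threshold can locate candidate segments before any classification step.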
The objective of this thesis is three-fold.
• The first objective is to analyze the artifacts generated from voluntary minimal motions of subjects in a controlled setup and to study the properties and structure of the artifacts.
• The second objective is to build a detection and classification model to classify the type of artifacts, which is achievable with the knowledge obtained from the first objective.
• The third and final objective is to map the artifacts to a speller to enable real-world communication.
The organization of the thesis is as follows. This chapter provides an introduction to the area of brain-computer interfaces (BCI). Chapter 2 conveys a brief outline of the background material in terms of neuroscience, artifacts, electroencephalography (EEG) devices, and existing techniques in BCI, which serves as background knowledge for Chapters 3 and 4. Chapter 3 proposes an efficient way to detect and classify the artifacts in EEG. Chapter 4 proposes and elaborates the working of a BCI using single- and four-electrode mobile EEG devices. Chapter 5 concludes the thesis.
CHAPTER 2
Background Material

2.1 Introduction
The design and development of brain-computer interfaces require a good understanding of the brain and its functions. Using EEG signals for building BCIs requires an understanding of the EEG signal too. A good understanding of EEG signals is required since the signals are weak and quite noisy. Further, different lobes in the brain are responsible for different activities. It is equally important to study and understand the different time warping techniques, such as linear and dynamic time warping, used to classify the artifacts. In this thesis, we investigate brain electrical activity and artifacts from multiple electroencephalogram (EEG) recording devices for building an artifact-based BCI.

The rest of the chapter is organized as follows. Sections 2.2 and 2.3 provide an overview of the human brain and its anatomy, respectively. Section 2.4 describes the various frequency bands in electroencephalography (EEG). Sections 2.5, 2.6, and 2.7 elaborate on artifacts, their types, and the proposed techniques to classify them, respectively. Section 2.8 outlines the multiple electroencephalogram devices employed in the thesis. Sections 2.9 and 2.10 discuss existing BCI techniques and their applications, respectively. The proposed BCI model is reported in Section 2.11.
The human brain is the principal organ of the human nervous system. It contains a large volume of nerve tissue. Together with the spinal cord, it makes up the central nervous system at the anterior end of the human body. The brain is also the center of learning, interpretation, comprehension, thinking, and language processing.

The brain contains the cerebellum, the brainstem, and the cerebrum. It manages most of the actions of the human body by integrating, processing, and organizing the information it collects from various sense organs and making logical decisions. The brain is enclosed in and shielded by the skull bones of the head. Within the skull, the brain is suspended in the cerebrospinal fluid, also called CSF, and it is secluded from the bloodstream by the blood-brain barrier. Nevertheless, the brain is still sensitive to disease, damage, and infection. Damage can be induced by trauma, an injury, or a lack of blood supply, referred to as a stroke. The brain is sensitive to degenerative diseases, such as Parkinson's disease, dementia (which includes Alzheimer's disease), and multiple sclerosis. Psychiatric medical conditions, including schizophrenia and clinical depression, are thought to be affiliated with brain dysfunctions. The pathological history of people with brain impairments has given much insight into the purpose and capacity of every section of the brain. Moreover, brain study has evolved and grown with thoughtful philosophical, empirical, innovative experimental, and theoretical aspects.

Many emerging techniques are employed to investigate and examine the brain. Medical imaging techniques include functional neuroimaging or functional magnetic resonance imaging (fMRI), computed tomography (CT), and positron emission tomography (PET); imaging techniques are strong in spatial resolution. Medical monitoring techniques such as electroencephalography (EEG) and electrocorticography (ECoG) have a better temporal resolution. To build a practical BCI, the temporal resolution is more critical.
These imaging and monitoring techniques have helped analyze, to a certain extent, how different parts of the human brain function. This understanding helps distinguish regions of the brain as specific lobes that can be associated with specific cognitive functions. In the following subsections, various parts of the human brain are discussed briefly.
The major part of the human brain is the telencephalon, which can be split into lobes. Based on the anatomical distribution and the various functions of the brain, the cerebrum consists of six lobes [Ribas, 2010]. There are four significant lobes of the cerebral cortex in the human brain: the frontal, parietal, temporal, and occipital lobes, as shown in Figure 2.1. Their locations and functions are briefly discussed below.

Figure 2.1: Lobes of the human brain
The area at the front of the cerebral hemisphere is the frontal lobe. It usually includes dopamine-sensitive neurons and is responsible for short-term memory tasks, attention, motivation, and planning.
The parietal lobe is positioned above the occipital lobe and next to the frontal lobe. Sensory information from various modalities is aggregated in this lobe, including spatial sense and the tactile sense of the skin. Besides, some regions in the parietal lobe play a crucial role in language processing.
The region underneath the lateral gap of the cerebral hemispheres is the temporal lobe. It is involved in visual memories, emotion association, and language comprehension.
The occipital lobe is the visual processing core of the brain. It is also responsible for diverse tasks, such as motion perception, visuospatial processing, and color differentiation.
Brain scientists and clinical specialists describe the continuously recorded electrical activity of the brain using two fundamental parameters, namely amplitude and frequency. Some EEG patterns are considered reliable for visual examination. In general, there are five standard brain waves, classified by different frequency bands, and these rhythms are named by the Greek letters δ (delta), θ (theta), α (alpha), β (beta), and γ (gamma). Berger discovered the alpha and beta waves in the year 1929 [Millett, 2001]. Jasper and Andrews coined the gamma wave in the year 1938 [Jasper and Andrews, 1938]. Delta and theta waves were introduced by [Walter, 1953]. Delta activity belongs to EEG activity in the range of 0.5-3 Hz. It is often affiliated with synchronized sleep EEG in people. In the first few years of a human infant's life, the dominant frequency is the delta wave. Theta activity can be observed in the low-frequency range of 3-8 Hz. [Schacter, 1977] shows that the theta wave is related to two psychological phenomena: the first is a low level of attentiveness and sleep-loss states; the second is perceptual processing and problem-solving. The theta band is also responsible for active and effective processing. Alpha activity occurs across the posterior areas of the human head and appears when individuals are awake and relaxed. The alpha wave consists of comparatively high voltage, usually smaller than 50 µV, across the occipital regions in the range of 8 to 13 Hz. The presence of alpha activity is associated with physical relaxation with closed eyes [Schomer and Da Silva, 2017]. The general conscious rhythm of the brain is the beta wave, and it corresponds to active attention, active thinking, and problem-solving in healthy adults. Usually, the beta wave has low-voltage changes in the range of 13 to 30 Hz. A higher frequency spectrum, from 30 to 70 Hz or more, with lower voltage changes, is the gamma wave.
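As a rough illustration of how these bands can be separated in practice (a minimal sketch of ours, not taken from the thesis; the band boundaries follow the ranges quoted above, and NumPy is assumed), the power in each band can be estimated from the FFT periodogram:

```python
import numpy as np

# EEG frequency bands (Hz), using the boundaries quoted in the text.
BANDS = {"delta": (0.5, 3), "theta": (3, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 70)}

def band_powers(signal, fs):
    """Return the power in each EEG band via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Example: a 10 Hz sinusoid should be dominated by the alpha band.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)
powers = band_powers(x, fs)
assert max(powers, key=powers.get) == "alpha"
```

In real EEG analysis a windowed estimator (e.g. Welch's method) would be preferred, but the band boundaries themselves are exactly those described above.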
As per the International Federation of Clinical Neurophysiology (IFCN), an artifact is described as any potential difference due to an extra-cerebral origin that is seen in the EEG signal [Kane et al., 2017]. The impact of artifacts on EEG signals is a well-recognized challenge for experimental and clinical electroencephalography. For a long period of time, the artifact in EEG activity was regarded as an obscuration [Klass, 1995]. Sometimes, artifacts may have similar parameters in frequency, rhythmicity, and recurrence compared to the recorded electric potentials of cerebral origin, so it becomes considerably challenging to distinguish between artifactual and cerebral electrical activity [Brittenham, 1974]. Because EEG signals are in the range of microvolts, they can easily be masked by artifacts. Typically, based on their source, artifacts can be classified into two categories: physiological and non-physiological artifacts. The origins of physiological artifacts are the non-neural movements of the subjects, such as muscle activities or eye movements. Non-physiological artifacts, in contrast, originate from outside the human body, such as from the environment or equipment; improper attachment of the electrodes in a controlled environment is one of the common causes of such technical artifacts [Anderer et al., 1999]. In brain-computer interface (BCI) studies, electromyography (EMG) and electrooculography (EOG) artifacts are the most commonly identified sources of physiological artifacts.

Numerous research works have reported that EMG and EOG activities can influence the neurological aspects involved in a BCI system. For example, the early target-related EMG-based artifact manifested during the initial stages of BCI training is discussed in [McFarland et al., 2005].
In this section, some popular types of artifact measurement interfaces are summarized, and it is argued why EEG is adequate to measure these artifacts.

2.6.1 Electromyogram (EMG)
Muscle tissue contraction moves various parts of the human body. Muscles are classified as smooth, skeletal, and cardiac muscles. The electromyogram (EMG) exhibits the electrical activity of skeletal muscles, where the action potentials originate between the muscles and the nervous system. Generally, a noninvasive electrode or a needle electrode can be employed to measure myoelectric signals. In research studies, surface EMG is examined with the aid of surface electrodes on the skin. The recording of surface EMG can be affected by various types of noise, such as electrode motion artifacts, which may limit signal quality [Sörnmo and Laguna, 2005]. Cranial EMG has numerous characteristics that are accountable for its harmful effects on EEG activity. First, it was reported that EMG has a wide frequency range, from 0 to 200 Hz [Goncharova et al., 2003], which indicates that EMG activity influences all the EEG bands, including the delta, alpha, and beta bands. EMG is topographically distributed over the whole human body, so when the energy of muscle contraction rises, EMG influences and affects the whole scalp.
2.6.2 Electrooculogram (EOG)

The electrooculogram (EOG) is the electrical activity produced by eye movement, which has a significant effect on EEG recordings. The potential difference between the cornea and the retina can be altered by eye movement, which exists not only in the awake state but also during sleep. The strength of the EOG mostly relies on the proximity of the electrode to the eyes and the direction of eye movement (horizontal or vertical). Besides, the potential difference can be influenced by blinking, which is generated by the muscle movement of the eyelid. This type of ocular activity produces a different waveform, and it may only occur during awake periods. The blinking artifact has a high frequency, and its amplitude is significantly larger in the frontal electrodes. For artifact processing, EOG signals can in practice be measured by using reference electrodes placed near the eye. As a common type of artifact, EOG can present severe complications in EEG analysis due to the proximity of the eyes to the brain [Sörnmo and Laguna, 2005].

2.6.3 Electrocardiogram (ECG)
The electrical activity of the heart can be measured by the electrocardiogram (ECG). Regular heartbeats can be described by a repetitive, commonly occurring wave pattern, which makes the appearance of the ECG artifact easy to recognize. The amplitude of cardiac activity on the scalp is weaker than that of the EEG signals. However, if the ECG is visible in the background EEG signals, the cardiac artifact may be mistaken for brain activity. Like the eye-related artifacts, the ECG can be measured separately by placing several reference electrodes alongside those recording cerebral activity.
Being an uncontrolled variation in the experimental settings, artifacts can be introduced at the time of data acquisition, primarily by the equipment that connects the subjects and the EEG instruments. Common sources of such artifacts are the electrode, the electrode-scalp interface, the jack plug, and the input cable. These types of experimental artifacts are nearly impossible to avoid or reduce. One potential cause of artifacts is the electrode cable, which connects the electrode with the data acquisition equipment. Owing to inadequate shielding in practice, the electric current running through nearby powerlines or electrical appliances produces electromagnetic fields. Consequently, 50/60 Hz powerline interference modifies the EEG signals. The movement of electrodes can modify the DC connection potential and produce an artifact called the electrode-pop artifact. This technical artifact may occur not only in EEG signals but also in any bioelectric signals recorded on the body surface. The electrode-pop artifact frequently appears as an abrupt shift in the baseline level. A massive movement by the subject can change the location of the electrode on the scalp. The changes in distance between the skin and the recording electrode result in signal distortion. Moreover, the conduction capacity among the electrodes can be altered by the movement of the recording tools with respect to the underlying skin [Sweeney et al., 2010]. It has been explained that the artifact produced by the electrode-scalp interface hugely depends upon the skin condition of the subject, as well as the kind of conductive gel applied.

2.6.5 EEG for artifacts
Muscle-tissue-contraction-based artifacts are measured with EMG. However, [Goncharova et al., 2003] showed that EEG is sufficient to capture these artifacts. EOG measures the electrical activity produced by eye movements; nevertheless, EEG is capable of measuring the same [Wang et al., 2015], [Zeng and Song, 2014]. Similarly, ECG can be captured with the aid of EEG [Tong et al., 2001]. Technological artifacts can be detected and eliminated from the EEG signal with signal processing techniques [Tatum et al., 2011]. It is evident that EEG is sufficient to measure most of the artifacts produced.
Simple time-warping techniques are used to detect the type of artifact. The EEG signal pattern corresponding to an artifact is quite strong and evident. As a first step, we attempt to match the EEG signal potentials across different realizations of the artifact. As two different realizations of an artifact need not be of the same duration and amplitude, we employ different types of warping techniques.

In order to compare two time-varying signals, the use of elastic distance measures is more justified than lock-step distance measures. Elastic distance measures allow one-to-many, many-to-one, or one-to-none point matching. This property makes it viable for elastic distance measures to warp in time and be more robust in estimating similarities and dissimilarities. Two of the crucial elastic distance measures in existence are Linear Time Warping [Zone, 2017] and Dynamic Time Warping [Berndt and Clifford, 1994], which aid in finding the alignment between two time series. These two techniques are discussed in the following subsections.
2.7.1 Linear Time Warping (LTW)

Linear time warping (LTW) works on the underlying principle of linear interpolation. Given any two known points (a_1, b_1) and (a_2, b_2), the linear interpolant is the straight line between the points. For any value a in the interval (a_1, a_2), the corresponding value b can be estimated by

b = b_1 (a_2 - a)/(a_2 - a_1) + b_2 (a - a_1)/(a_2 - a_1)    (2.1)

Figure 2.2: Illustration of linear interpolation of sequences

LTW is a technique applied to determine the alignment between two temporal series. The two temporal series are interpolated to a particular length, and the Euclidean distance between the series is computed. The estimated distance gives us the similarity or dissimilarity between the two series. Consider two time series X and Y, of lengths n and m respectively, and let n > m:

X = x_1, x_2, x_3, ..., x_n
Y = y_1, y_2, y_3, ..., y_m

The interpolation of the series in LTW can be performed in three different ways: first, the interpolation of the smaller sequence to the length of the larger sequence, that is, making Y = y_1, y_2, y_3, ..., y_n; second, the interpolation of the larger sequence to the length of the smaller sequence, that is, making X = x_1, x_2, x_3, ..., x_m (however, this interpolation is not recommended, as there might be some information loss); third, the interpolation of both series to a predefined length, say k, making X = x_1, x_2, x_3, ..., x_k and Y = y_1, y_2, y_3, ..., y_k. In the literature, the first and third interpolation techniques are widely used. Once the two series are interpolated to a fixed length, as shown in Figure 2.2, the Euclidean distance metric is employed to estimate the similarity between the series.

2.7.2 Dynamic Time Warping (DTW)

Dynamic time warping (DTW) is an algorithm applied to determine the alignment between two temporal series (a template and a query). It was widely used in early speech recognition applications. Speech recognition typically implies the translation of spoken utterances into textual words.
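The LTW procedure described above (interpolate both series to a common length via Equation 2.1, then take the Euclidean distance) can be sketched as follows. This is a minimal illustration of ours, assuming NumPy; the function name and the default choice of warping to the longer length are illustrative, not the thesis's.

```python
import numpy as np

def ltw_distance(x, y, length=None):
    """Linearly warp two sequences to a common length, then compare.

    Both series are resampled by linear interpolation to `length` points
    (default: the length of the longer series, the first strategy in the
    text) and the Euclidean distance between them is returned.
    """
    k = length or max(len(x), len(y))
    grid = np.linspace(0.0, 1.0, k)
    xw = np.interp(grid, np.linspace(0.0, 1.0, len(x)), x)
    yw = np.interp(grid, np.linspace(0.0, 1.0, len(y)), y)
    return float(np.linalg.norm(xw - yw))

# Two realizations of the same shape at different durations score low;
# a different shape scores high.
a = np.sin(np.linspace(0, np.pi, 50))
b = np.sin(np.linspace(0, np.pi, 80))
c = np.cos(np.linspace(0, np.pi, 60))
assert ltw_distance(a, b) < ltw_distance(a, c)
```

Because LTW stretches the time axis uniformly, it captures duration differences but not local tempo variation; that is what motivates DTW next.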
In DTW-based speech recognition, audio information is transformed into templates. The input query is matched against every template under some constraints. The best-matching template perpetually has the least distance from the input query. Nowadays, the application of DTW is no longer confined to speech recognition; as a matter of fact, it can be practiced with any temporal data that can be interpreted as a linear series. The originally proposed DTW algorithm has quadratic space and time complexity and is explained in the literature [Keogh and Pazzani, 2001], [Senin, 2008], [Ratanamahatana and Keogh, 2004], [Salvador and Chan, 2007], [Berndt and Clifford, 1994].

As discussed earlier, DTW can be practiced on any temporal data that can be presented as a linear series, which can vary in speed and time. To comprehend DTW, let us consider two time series X and Y, of lengths n and m respectively:

X = x_1, x_2, x_3, ..., x_n
Y = y_1, y_2, y_3, ..., y_m

DTW adopts a dynamic programming strategy to obtain the alignment between the two time series, which aligns them based on an optimally minimized distance. Dynamic programming, also called DP, is a robust methodology that breaks a massive problem into smaller sub-problems. The outputs of the smaller sub-problems are determined and then aggregated to compute the solution of the original problem. Due to this characteristic, it is also appreciated as a divide-and-conquer strategy: the past outputs, i.e., the outputs of the sub-problems, are remembered and contribute to answering the original problem. It can be accomplished by employing two different methods: top-down and bottom-up. In the top-down method, the problem is resolved by dividing it into smaller sub-problems. The sub-problems are resolved independently, and the outcome of every sub-problem is collected and stored. This process ultimately contributes to an overall resolution.
In contrast, in the bottom-up strategy of DTW, the outcomes of sub-problems are applied to resolve the given original problem progressively. For illustration, in DTW, the sub-problems D(i, j) are solved first as given in Equation 2.4, and those results are applied progressively to compute the solution for D(n, m).

The initial step in estimating the DTW alignment between two time series is to form an n-by-m cost matrix where every (i, j)-th element corresponds to the distance estimated between x_i and y_j. Distance can be measured by applying various distance metrics, say, the simple Manhattan difference d(x_i, y_j) = |x_i − y_j|, the squared distance d(x_i, y_j) = (x_i − y_j)^2, or any other distance metric function. [Akila and Chandra, 2013] reviews various distance functions which can be used in DTW. Based on the aggregated distance along every path in the cost matrix, the best-suited match between the time series can be obtained using the Euclidean measure, which is one of the most popular distance metrics.

D(i, 1) = D(i − 1, 1) + d(i, 1)    (2.2)

D(1, j) = D(1, j − 1) + d(1, j)    (2.3)

D(i, j) = d(i, j) + min[D(i − 1, j), D(i − 1, j − 1), D(i, j − 1)]    (2.4)

The dynamic programming formulation can be asymmetric or symmetric. In an asymmetric formulation, one of the cells around the diagonal, that is, D(i − 1, j) or D(i, j − 1), is skipped or given a higher weight. Equation 2.4 can be considered a symmetric formulation, as both cells around the diagonal of the current cell are given the same weight. Research analyses report that the symmetric formulation yields more stable results than the asymmetric one in speech recognition [Berndt and Clifford, 1994]. A DP formulation such as Equation 2.4 provides the aggregated measure for every cell by adding the local distance measure to the least of the aggregated measures of the three predecessor cells. The global cost matrix D is filled, starting with the first row and first column, in the manner given in Equations 2.2, 2.3, and 2.4, by initializing D(0, 0) = 0.

Once the global cost matrix has been filled with aggregated distances, the following step is obtaining the warping path between the two time series through the cost matrix. Variants of DTW are proposed in the literature; the pros and cons of these variants are discussed in Section 3.8. An optimal path consisting of a series of contiguous matrix cells passing through the cumulative cost matrix, which defines the alignment between the two time series, is called the warping path. The warping path can be determined by implementing the dynamic programming formulation, which is also called a step pattern. The formulation is provided in Equation 2.4.
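The recurrences of Equations 2.2 to 2.4 and the recovery of the warping path can be sketched as follows. This is an illustrative implementation, not the code used in the thesis; a sentinel row and column stand in for the D(0, 0) = 0 initialization.

```python
def dtw(x, y):
    """Compute the DTW cost matrix with the symmetric step pattern of
    Equation 2.4 and backtrack the warping path (1-based indices)."""
    n, m = len(x), len(y)
    INF = float("inf")
    # sentinel row/column (index 0) so that D[1][1] = d(1, 1), i.e. D(0, 0) = 0
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])      # Manhattan local distance
            D[i][j] = d + min(D[i - 1][j], D[i - 1][j - 1], D[i][j - 1])
    # backtrack from (n, m) to (1, 1), always moving to the cheapest
    # predecessor among the left, lower, and diagonal cells
    path, i, j = [(n, m)], n, m
    while (i, j) != (1, 1):
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((s for s in steps if s[0] >= 1 and s[1] >= 1),
                   key=lambda s: D[s[0]][s[1]])
        path.append((i, j))
    return D[n][m], path[::-1]

cost, path = dtw([1, 2, 3], [1, 2, 2, 3])
print(cost, path[0], path[-1])  # → 0.0 (1, 1) (3, 4)
```

The example shows a query that warps exactly onto a template of a different length, giving zero cumulative cost and a path obeying the boundary, continuity, and monotonicity constraints discussed below.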
The warping path exploration begins from D(n, m) and backtracks by evaluating the neighboring cells to the left, below, and diagonally to the bottom-left. The neighboring cell holding the least value is appended to the warping path, and this continues until D(1, 1) is reached.

Figure 2.3 shows the execution of DTW to determine the warping path between the template and the query time series. The DTW algorithm begins from position D(0, 0) and propagates to the highest position, which is D(n, m). At every position D(i, j), the aggregated distance is determined by adding the local distance d(i, j) to the smallest aggregated distance of the predecessor positions, as given in Equation 2.4. A symmetric step pattern is used, where all the neighboring positions participate on an equal basis. The DP formulation is evaluated for all positions, and the cost matrix is formed with the aggregated distance measures. It is then feasible to obtain the optimal warping path by backtracking from position (n, m). Figure 2.3 presents a potential warping path obtained by backtracking the cost matrix. From observations on the optimal warping paths obtained from the cost matrix, several enhancements have been proposed in the literature and introduced as constraints.

Figure 2.3: Dynamic Time Warping and warping path

Boundary constraint
The boundary condition concerns the start and end of the warping path. It requires that the first position of the warping path be w_1 = (1, 1) and the last position be w_k = (n, m), where n and m denote the lengths of the query and template time series respectively. Warping paths that do not satisfy the boundary condition are marked as incorrect paths.

A graphical illustration of the boundary constraint is presented in Figure 2.4, where the solid line originates from the first position of the cost matrix and ends at the last position w_k = (n, m). In contrast, the dashed line begins from the first position but ends at w_k = (n − 1, m), which breaks the boundary constraint; it can therefore be termed an incorrect warping path.

Continuity constraint
The continuity constraint takes care of maintaining a valid warping path. In other words, it ensures the participation of every position in both the query and template series. It can also be stated as follows: every cell in the warping path should be restricted to its adjacent cells, or the previous position of any point (i_k, j_k) in the cost matrix must be (i_{k-1}, j_k), (i_k, j_{k-1}), or (i_{k-1}, j_{k-1}).

Figure 2.5 illustrates the continuity constraint; it can be observed that the path represented by the dashed line satisfies the boundary constraint but violates the continuity constraint. An optimal path should obey all the constraints.

Figure 2.5: Continuity constraint in Dynamic Time Warping

Monotonicity constraint

Figure 2.6: Monotonicity constraint in Dynamic Time Warping

A proper warping path needs to be continuous, have valid boundary points, and also be monotonic. The monotonicity constraint demands that the points in the warping path have a nondecreasing trend. In other words, a warping path cannot go backward in time; it can stay level or increase, that is, i_k ≥ i_{k-1} and j_k ≥ j_{k-1} for all steps. The warping path outlined by the dashed line in Figure 2.6 meets all the other DTW constraints but decreases in time for a while, which is sufficient to reject it. An accurate warping path needs to satisfy all three constraints discussed above: the continuity constraint, the boundary constraint, and the monotonicity constraint.

In this thesis, we carried out experiments with three different EEG data collection setups. In this section, the data acquisition apparatus are explained in detail.
NeuroSky Mindwave Mobile [NeuroSky, 2019] consists of eight components: ear arm, ear clip, battery area, adjustable headband, power switch, electrode arm, electrode tip, and ThinkGear disk, as shown in Figure 2.7. The Mindwave Mobile needs a AAA battery that runs for 8 hours continuously. Bluetooth 2.1 is employed, with 1.0 V minimum expected voltage, 10 mA power consumption, and 10 m connectivity range. The electrode on the forehead picks up the electrical signal from the frontal lobe of the human brain. The second electrode is an ear clip, which is employed as a ground to filter out electrical and ambient noise. The Mindwave Mobile estimates and outputs the EEG power spectra, such as the alpha, beta, theta, delta, and gamma waves. Moreover, it surpasses traditional wet electrodes with conduction gel or saline for extended-time EEG measurement [Lin et al., 2010].

Figure 2.7: Single electrode EEG by Mindwave Mobile

The thesis utilizes the NeuroSky Mindwave Mobile for several reasons. First, the project strives to contribute a low-cost setup that can be employed by everyone. Second, its signal is transmitted in digitized form through Bluetooth [Blondet et al., 2013]. One principal shortcoming is the accuracy of the EEG signal obtained, because the NeuroSky Mindwave Mobile possesses only one electrode, FP1, which is over the frontal lobe. Another issue is comfort: subjects report that it is uncomfortable to wear for a long time.

2.8.2 Four electrodes EEG setup
Muse [InteraXon, 2019] is a brainwave-sensing headband designed by InteraXon, Inc. that is marketed to consumers as a system to improve attention, focus, and control. The headband weighs 57 grams and includes four recording electrodes: two on the forehead and two behind the ears. They are FP1 (left forehead), FP2 (right forehead), TP9 (behind the left ear), and TP10 (behind the right ear). It does not need the application of conductive gel during recording. Muse records EEG and transfers the data to a mobile or a computer through Bluetooth. The wireless connection is carried over Bluetooth 2.1 + EDR, and data is sampled and transmitted at 500 Hz. The electrodes are made of silver for FP1 and FP2, and of conductive silicone rubber for TP9 and TP10. The battery is capable of running for 5 hours continuously. Muse measures the behavior of five brainwaves, delta, alpha, beta, gamma, and theta, and shows the participant's brainwave activity as active, neutral, or calm.

Figure 2.8: Four electrodes EEG by Muse

One of the limitations of this device is that it is not capable of changing the wearer's brainwave patterns, reading his or her thoughts, or helping the wearer to move things using thoughts alone.

2.8.3 128 electrodes EEG setup
The EGI (now acquired by Philips) [Electrical Geodesics, 2018], [Philips, 2019] Geodesic Sensor Net (GSN) is devised to obtain dense-array EEG data utilizing a Geodesic EEG System (GES), and is shown in Figure 2.9. The electrodes that make up the GSN 400 are bound into a geodesic construction employing long-lasting polyurethane elastomer fibers that form the tension lines of various icosahedra. When the GSN 400 is stretched over a subject's scalp, the electrodes make electrical contact with the scalp and are continuously held in place. The tension is equally spread across all electrodes, giving a comfortable array that can be worn for a few hours. The scalp point of the electrode pedestal is expanded, making the pedestal part and flatten the scalp hair as it is pushed against the head. This setup provides a self-seating effect, so that the pedestal sponge, with its load of electrolyte, settles below the hair, immediately upon the scalp. All electrodes, including the reference and the solitary common, are held in the Net's construction. This design makes the application of the Nets a quick process.

Figure 2.9: 128 electrodes EEG by EGI

Standard application times are shorter than 10 minutes, with impedances in the 10 to 50 kΩ range. The GSN 400 is designed for use with HydroCel Saline electrolyte, which is EGI's standard potassium chloride saline and surfactant solution. Enclosing each electrode pellet is a sponge which, during the application of the GSN 400, is soaked with HydroCel Saline electrolyte. The wet sponge swells from one edge of the pedestal, and the lead cable rises from the other end into a small space in the pedestal caplet. The electrode array is attached to an amplifier. The amplifier measures the EEG signals that are picked up by the electrode array and samples them at millisecond intervals. The scalp should be washed and kept dry while recording.
A Brain-Computer Interface (BCI) is a system that maps central nervous system signals and interprets the data into an output suitable for a machine to employ as an input signal. Alternatively, a BCI is a communication conduit that provides for primary control of a machine using one's thoughts. Although the BCI is usually considered a fairly new area, the term and idea "Brain-Computer Interface" was coined by Jacques J. Vidal as early as 1973 [Vidal, 1973]. The primary objective of a BCI is to map brain signals, investigate and understand the recorded data, and transmute the interpretation into operations [Dornhege et al., 2007], [Wolpaw and Wolpaw, 2012].

An illustration of a BCI setup could be a subject who is seated in front of a pinball machine with the task of controlling the flippers of the pinball device by his/her thoughts [Tangermann et al., 2008]. The subject's brain activity could be mapped using EEG. For that, the subject wears an EEG net with electrodes, which measure the electrical activity on the scalp of the subject, produced by the brain activity beneath the scalp. The computer accepts continuous data from all EEG electrodes and interprets the data. The computer can render the subject's imagined left- or right-hand action into a signal that classifies "right" or "left" whenever the subject imagines the corresponding hand movement. The computer is attached to the pinball device. Whenever it receives the "right" or "left" signal, the right or left flipper flips. The subject can thus operate the pinball machine using the BCI.

Development of a BCI is dependent on three crucial tasks, namely, i) data acquisition, ii) analysis of the brain signal, and iii) converting the same to an appropriate action as requested by the subject. In the given example, the feedback is a flipper response. However, the feedback can occur in many shapes or colors or both.
It can be the moving flipper of the pinball device, or it could be a prosthesis [Muller-Putz and Pfurtscheller, 2007], a virtual keyboard [Farwell and Donchin, 1988], [Maruthachalam et al., 2018], or even the steering wheel of an automobile [Zhao et al., 2009]. The critical characteristic of feedback is that it transmutes the output signal into some desirable action.
Event-related responses are responses elicited by a particular event: for events such as an auditory or visual stimulus or a motor action, there exist stimulus-specific brain responses. Unlike raw EEG, event-related potentials (ERPs) need averaging across trials to elicit useful information. Typically, the initial step of the ERP method is to identify the time of each event and mark the surrounding segments as epochs. Every epoch has a definite duration. After repeating the stimulus several times, the epochs are averaged across trials. The averaged event-related response can be obtained at both the individual and group levels. The classification of an ERP component can state its polarity (negative or positive deflection), scalp distribution, and timing. ERPs have a precise temporal resolution: human brain activity can be measured on a scale of tens of milliseconds, capturing many aspects of perception and attention in operation [Woodman, 2010]. One of the common challenges in ERP analysis is the misinterpretation of the relationship between the visible peaks and the underlying components. A set of rules was proposed by [Luck, 2005] to avoid misunderstanding this relationship.
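The epoching-and-averaging step described above can be sketched as follows. The function and variable names are illustrative assumptions, not thesis code.

```python
def erp_average(signal, event_indices, epoch_len):
    """Average fixed-length epochs time-locked to each event marker."""
    epochs = [signal[i:i + epoch_len] for i in event_indices
              if i + epoch_len <= len(signal)]   # keep only complete epochs
    # point-wise mean across trials: activity that is not time-locked to
    # the event averages out, leaving the event-related response
    return [sum(e[t] for e in epochs) / len(epochs)
            for t in range(epoch_len)]

# two identical epochs at samples 0 and 5 average to the evoked shape
sig = [0, 1, 2, 1, 0, 0, 1, 2, 1, 0]
print(erp_average(sig, [0, 5], 5))  # → [0.0, 1.0, 2.0, 1.0, 0.0]
```

In practice the epoch would start slightly before the event to provide a baseline, but the core operation is this point-wise mean across trials.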
SSVEPs are brain responses that are precisely synchronized with flickering visual stimuli. When the subject pays attention to a visual stimulus, for example, a blinking LED, the response produced in the occipital lobe exhibits quasi-sinusoidal rhythms that match the frequency of the blinking stimulus. Consequently, frequency components at the stimulus frequency and its harmonics are detectable using spectral analysis techniques. The strength of the frequency component depends directly on how focused the subject is on the stimulus. The advantage of this method is that the SSVEP response is strong across diverse subjects and use cases and can provide relatively fast communication. Furthermore, this paradigm does not need a significant amount of training: a subject can produce a strong SSVEP response even without proper training [Graimann et al., 2010]. However, the disadvantage of the SSVEP is that the subject needs to maintain a constant gaze at the flicker on the screen, which can lead to fatigue.
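The spectral-analysis idea can be illustrated with a brute-force DFT magnitude scan that locates the strongest frequency component. This is only a sketch of the principle; a real SSVEP detector would use an FFT and also inspect harmonics.

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the strongest spectral component."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):                 # positive-frequency bins
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)               # DFT magnitude at bin k
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n                     # convert bin index to Hz

# half a second of a 10 Hz "flicker response" sampled at 250 Hz
fs = 250
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(125)]
print(dominant_frequency(sig, fs))  # → 10.0
```

Matching the recovered frequency against the known flicker frequencies of the stimuli is what lets an SSVEP-based BCI decide which stimulus the subject was attending.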
Attention-based BCI works by presenting subjects with different stimuli. Such stimuli can be auditory [Klobassa et al., 2009], [Schreuder et al., 2010], visual [Farwell and Donchin, 1988], [Citi et al., 2008], [Allison et al., 2010], or tactile [Muller-Putz et al., 2006]. Each stimulus is correlated with a particular action, like the movement of a part of a prosthesis, or the choice of a character from the alphabet. By concentrating his/her attention on the desired stimulus and neglecting the others, the subject provides the brain signal patterns that the BCI system requires to "interpret" his/her intention and perform the desired action [Dornhege et al., 2007], [Wolpaw and Wolpaw, 2012]. Of all the attention-based BCIs, the visual attention-based BCI is the most widely used. In this case, two different brain signal patterns are used: event-related potentials (ERP) and steady-state visually evoked potentials (SSVEP). In ERP-based BCI, the stimuli are displayed successively and for a short amount of time. The span of a presentation is normally a few milliseconds, and the time between two stimuli is around 100 ms. When a particular stimulus is presented and the subject focuses his/her attention on it, the subject generates a brain signal that is distinct from when he/she is not performing a focused activity.

The most prominent pattern is an event-related positivity in the centro-parietal areas around 300 to 500 milliseconds after the display of the stimulus. This phenomenon is called a P300, P400, or P500, based on the latency after the stimulus. The ERP-based BCI system can identify the ERP and therefore distinguish between the "targets", which are the stimuli the subject was attending, and the "non-targets". In contrast to ERP-based BCI, where the stimuli are displayed successively, in SSVEP-based BCI the stimuli are displayed continuously, all at the very same time, flickering with various frequencies between 6 and 30 Hz.
The subject selects a target by concentrating on a particular stimulus. The flickering frequency of the stimulus produces an SSVEP with the corresponding frequency in the visual cortex region. For illustration, if the subject focuses on a stimulus flickering with a frequency of 23 Hz, the SSVEP will oscillate with a frequency of 23 Hz too. The BCI system can identify the stimulus the subject was attending to by comparing the frequency of the SSVEP with the frequencies of the stimuli.

Visual attention-based BCI works reliably across various subjects. A speller-based model can be built with visual attention-based BCI. Nevertheless, visual paradigms might need the subject to have control over his/her eye gaze in order to work accurately, which is not always the case for patients, particularly people suffering from locked-in syndrome. A possible solution could be gaze-independent visual spellers [Treder and Blankertz, 2010]. Another fairly serious problem with visual spellers is that several people despise the constant flickering on the display screen and find it extremely exhausting to use over an extended time.
When a muscle in the human body is voluntarily moved, there are variations in the brain's electrical activity in the motor cortex and sensorimotor regions. This is referred to as the sensorimotor rhythm (SMR). These variations are comparatively localized, following the homuncular structure of the cortical area [Woolsey et al., 1979]. The fall of oscillations is called event-related desynchronization (ERD) and usually arises during the movement or the preparation of movement. The rise of oscillations is called event-related synchronization (ERS) and usually arises after the movement or at rest [Pfurtscheller and Da Silva, 1999]. Imagining such physical movements produces substantially similar ERD or ERS signal patterns as the real movements would, as discussed in [Pfurtscheller and Neuper, 1997]. Motor imagery-based BCI works by expecting the subject to imagine the physical movement of particular limbs, like gripping with the left or right hand or moving the feet, and measuring the ERD/ERS signal patterns across the corresponding cortical areas. Matching a specific imagined limb movement with a distinct action that gets performed helps in building a motor imagery-based BCI. Although the motor imagery paradigm feels extremely natural compared to the attention-based BCI, the number of muscles a person can move and the number of complicated actions each muscle can perform make motor imagery-based BCI challenging to build.

As opposed to attention-based BCI, motor imagery-based BCI has a considerable error rate. About 15% to 30% of subjects are not capable of obtaining control employing motor imagery-based BCI without proper training [Blankertz et al., 2010a]. Another critical limitation for healthy subjects is that one cannot use a motor imagery-based BCI while doing something else. Since every movement the subject performs creates ERD or ERS signal patterns, the subject has to avoid any movement in order to use a motor imagery-based BCI accurately.
The most critical application, and the purpose, of BCI is assistive technology for severely physically impaired people. The standard instance, found in numerous BCI publications, is subjects who suffer from Amyotrophic Lateral Sclerosis (ALS) [Charcot, 1874], a neurodegenerative medical condition that gradually paralyzes the victim until he/she is utterly locked in in his/her paralyzed body. In the latter stage, the patient is conscious, yet unable to twitch even a single muscle in his/her body (locked-in syndrome). With the help of BCI, we can implement spellers [Farwell and Donchin, 1988], [Birbaumer et al., 1999] that enable such patients to communicate with the outer world without any, or with minimal, muscular movement. Along with communication, mobility is another critical application for BCI. Stroke victims might use BCI to visualize the current state of the brain in order to determine how to suppress undesired signal patterns [Daly and Wolpaw, 2008]. Alternatively, subjects with spinal cord lesions can use BCI to command a wheelchair [Galán et al., 2008], a telepresence device [Escolano et al., 2010], or even a prosthesis [Birbaumer and Cohen, 2007]. Nevertheless, the low information transfer rate of a non-invasive BCI is the principal barrier for complex applications using BCI. For illustration, a hand prosthesis that is managed through BCI will typically not enable the subject to achieve low-level movements like specific thumb control, but only a small collection of high-level controls like "open hand" and "grasp". A subject who is capable of voluntarily twitching some muscles in his/her body will often be quicker and more reliable using those muscles to manage a device than using a BCI [Mak and Wolpaw, 2009]. Apart from assistive technology, BCI has also found importance in various fields [Blankertz et al., 2010b]. Researchers have assessed how BCI technology can be applied as calibration equipment to measure mental states like attention [Schubert et al.
, 2008], [Haufe et al., 2011], or workload [Kohlmorgen et al., 2007], [Müller et al., 2008], [Venthur et al., 2010] in order to predict and perhaps prevent human errors in critical circumstances. BCI technology is employed for quality evaluation by measuring the subconscious perception of noise in visual or auditory signals [Porbadnigk et al., 2010], [Porbadnigk et al., 2011]. BCI can further be employed in entertainment and gaming [Krepki et al., 2007], [Nijholt et al., 2009], and the gaming business has commenced producing games which are "mind-controlled."

As we discussed in Section 2.5, a minimal muscular artifact can influence the EEG signal to a large extent. Hence, most studies aim to reduce or eliminate the artifacts in the EEG signal. Taking advantage of the influence of artifacts on the EEG signal, an attempt is made here to detect artifacts in the EEG signal and build a BCI using the artifacts. In the following chapters, simple but effective threshold-based artifact detection and robust time-warping techniques to classify the artifacts are proposed; with the aid of these detection and classification models, effective and easy-to-use BCIs are built and demonstrated.
Summary
In this chapter, we discussed the basic physiology of the human brain, the electrical activity in the cerebral region of the brain, and the methodologies to capture this electrical activity. We reviewed the various bands in the electrical activity of the brain. We discussed the artifacts and their types. We briefly studied the linear and dynamic time warping techniques. We explained the fundamentals of Brain-Computer Interfacing and its literature. We outlined the importance of the proposed artifact-based Brain-Computer Interfacing.
CHAPTER 3

Time Warping solutions for classification of artifacts

3.1 Introduction
In the previous chapter, we briefly discussed the proposed artifact-classification-based Brain-Computer Interface (BCI). In this chapter, we study the effects of minimal muscular artifacts, such as a head turn, a head nod, a jaw movement, and an eye-blink, and their data collection process. It is observed that the artifacts have temporal signatures. We take advantage of these temporal signatures and develop methods to detect and classify them.

Section 3.2 deals with past work on the analysis of artifacts. Section 3.3 describes the experimental setup and data collection techniques. Preprocessing of the acquired EEG data is discussed in Section 3.4. The detection of artifacts in the preprocessed EEG signal is detailed in Section 3.5. Sections 3.6, 3.7, and 3.8 discuss the classification of artifacts with Linear and Dynamic Time Warping techniques. Section 3.9 explains the methodologies to detect and classify the artifacts in a continuous EEG signal.
EEG signals comprise valuable information about the brain [Anderson et al., 1995]. In addition to this valuable information, EEG also captures artifacts, which are undesirable electrical potentials that arise from non-cerebral sources. Artifacts are commonly addressed by first finding the affected EEG segment and eliminating it from the study [Whitton et al., 1978]. This approach can lead to a loss of valuable data in the EEG signal. An alternative strategy is to diminish the impact of the artifact in the EEG signal. Occasionally, the artifacts are employed to develop BCIs [Maruthachalam et al., 2018], [Ma et al., 2014]; Prof. Hawking used cheek twitches to convey messages to the world [haw, 2012]. In [Nolan et al., 2010], an automated artifact rejection procedure based on a threshold on the amplitude of the EEG signal was introduced. Wavelet, kurtosis, and Renyi's entropy-based examinations to identify the artifacts in the EEG signal were presented in [Inuso et al., 2007]. In [Rohál'ová et al., 2001], techniques for EEG artifact detection based on a Kalman filter autoregressive model and radial basis function neural networks were introduced. All the procedures discussed above are agnostic to the class of artifacts in the EEG signal. Our work not only discovers the artifact; it also classifies the type of artifact. Independent Component Analysis (ICA) of EEG signals has been broadly practiced for artifact elimination, as can be found in [Jung et al., 1998], [Mammone et al., 2012], [Winkler et al., 2011]. An effort has been made to identify and eliminate blink artifacts and eyeball movement from EEG signal data applying blind component separation in [Joyce et al., 2004]. In [Jiang et al., 2007], detection and elimination of heartbeat artifacts from EEG signal data utilizing wavelet analysis was introduced. In [Brunner et al., 1996], detection of muscular artifacts using spectral analysis was proposed.
In this chapter, we propose a novel thresholding method to detect the artifacts in the EEG signal. Following artifact detection, time warping algorithms such as Linear Time Warping (LTW) and Dynamic Time Warping (DTW) are adopted to classify the artifacts.
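The thresholding idea can be previewed with a toy sketch: a mean-energy trace is smoothed with a moving average and thresholded at its mean plus a multiple η of its standard deviation (the actual detection procedure is detailed in Section 3.5). All names and the value η = 1.0 used here are illustrative assumptions, not the thesis's settings.

```python
def detect_artifact(mean_energy, eta, win=100):
    """Return (onset, offset) sample indices where the smoothed mean
    energy exceeds mean + eta * std, or None if it never does."""
    n = len(mean_energy)
    # causal moving-average smoothing of the energy trace
    smooth = [sum(mean_energy[max(0, i - win + 1):i + 1])
              / (i - max(0, i - win + 1) + 1) for i in range(n)]
    mu = sum(smooth) / n
    sigma = (sum((s - mu) ** 2 for s in smooth) / n) ** 0.5
    threshold = mu + eta * sigma
    above = [i for i, s in enumerate(smooth) if s > threshold]
    return (above[0], above[-1]) if above else None

# toy energy trace with an artifact burst between samples 200 and 249
energy = [1.0] * 200 + [10.0] * 50 + [1.0] * 200
print(detect_artifact(energy, eta=1.0, win=1))  # → (200, 249)
```

The hyper-parameter η trades off sensitivity against false detections; its working value is determined empirically in the thesis.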
Electroencephalography (EEG) signals were acquired using a saline-based 128 electrode EEG array setup built by Electrical Geodesics, Inc. (EGI) [egi, 2018]. The clean and dried EEG net was carefully immersed and soaked in potassium chloride electrolyte for five minutes. After five minutes, the EEG net was carefully placed so as to cover the entire scalp of the subject. The EEG net was connected to an amplifier. The output of the amplifier was fed to a computer to quantize and store the output EEG signal. EEG signals were acquired at a sampling rate of 250 Hz. The impedances of all the electrodes were maintained below kΩ during the experiment. Throughout the experiment, the subjects were asked to keep their eyes closed unless explicitly stated in the instruction. The experiments were conducted in a controlled setting where explicit instructions were given to the subjects. Subjects were expected to voluntarily make the muscular action corresponding to a given artifact instruction. Subjects were requested to perform four kinds of muscular movements: mouth open and close, head nod, eye open and close, and head turn left and right.

Instructions to perform the muscular movements were given to the subject through a loudspeaker, in a random order. Before every trial, the subject was given a two-second interval to rest. The EEG signal corresponding to the resting time is considered the baseline or resting-state EEG signal. The timeline for data acquisition throughout a trial, with the resting state, is shown in Figure 3.1. Subjects were asked to pay full attention to the instruction and make a muscular movement accordingly. Three seconds were given to the subjects to perform the muscular action after listening to the instruction playback. Following the muscular movement, the subjects were requested to produce a mouse click. This mouse click is to guarantee that they indeed made the muscular movement.
These controlled experiments were conducted with nine subjects.

Figure 3.1: Illustrative timeline of an artifact acquisition trial

Out of the nine subjects, four appeared for another session after six months. All the subjects were acquainted with the purpose and scope of the research, and signed consent was received to acquire their EEG data. An average of 22 trials was obtained in each session. EEG artifacts of two separate subjects are presented in Figure 3.2, and from the figure it can be observed that the temporal signatures of the various artifacts are different even across subjects. The collected EEG signals were filtered with a 0.3 to 60 Hz band-pass filter, and a 50 Hz notch was employed to overcome the line noise. The resultant EEG signals were mean-centered. The three-second periods provided for artifact production were extracted.

The Ethics Committee of the Indian Institute of Technology Madras approved the study. The EEG setup is supported by the project CSE/12-13/132/UPFX/HEMA. Supplementary plots for artifacts of multiple subjects are accessible at http://bit.ly/EEGPlot.
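The preprocessing chain above (0.3 to 60 Hz band-pass, 50 Hz notch, mean-centering) could be sketched with SciPy as follows. The Butterworth order (4) and the notch quality factor (30) are assumed values, not specified in the thesis.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt

def preprocess(eeg, fs=250.0):
    """Band-pass 0.3-60 Hz, notch out 50 Hz line noise, mean-center.
    Filter order and notch Q are illustrative assumptions."""
    sos = butter(4, [0.3, 60.0], btype="bandpass", output="sos", fs=fs)
    eeg = sosfiltfilt(sos, eeg)               # zero-phase band-pass
    b, a = iirnotch(50.0, 30.0, fs=fs)        # narrow 50 Hz notch
    eeg = filtfilt(b, a, eeg)                 # zero-phase notch
    return eeg - eeg.mean()                   # mean-centering
```

`sosfiltfilt`/`filtfilt` apply each filter forward and backward, so the temporal signatures of the artifacts are not shifted by filter phase delay.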
Figure 3.3: Preprocessing of the electroencephalographic signal

Artifacts in the EEG signal influence the electrodes and exhibit large amplitudes. Utilizing this understanding, we devise a threshold-based procedure to detect the artifacts in the epoched EEG signal. Within a three-second time window, a subject voluntarily performs the expected artifact. However, the subject has the liberty to initiate and wind up the artifact at any time within the three seconds. Moreover, the duration of each artifact might not be the same. To detect the onset and completion of the artifacts within the given three-second window, we introduced a threshold-based technique. The energy of the epoched three-second EEG signal from each of the 128 electrodes was estimated, and the mean energy over the 128 electrode signals was calculated. In order to smooth the resultant mean energy signal, we applied a moving average filter of length 100 samples (0.4 seconds). This signal provides a signature of the artifact and will be referred to hereafter as the "artifact-signal". Using a threshold on the energy of this smoothed EEG signal, the location of the artifacts in the EEG signal was first localized. The onset and completion of the artifacts in the EEG can be detected by
Threshold = mean(artifact-signal) + η · σ(artifact-signal)

where η is a hyper-parameter and σ is the standard deviation of the artifact-signal. Empirically, we determined that a small negative value of η operates adequately for this task. The points at which the amplitude of the smoothened mean-energy signal crosses the threshold determine the onset and completion of the artifacts in the epoched EEG signals. Figure 3.4 illustrates this process for a 128-channel EEG recording. Once the onset and decay of an artifact have been identified, its type can be reliably identified using time-warping techniques. The duration of each artifact can differ from trial to trial and from person to person, so time warping is crucial; Figure 3.2 shows the variation in artifact duration across subjects.

During data acquisition, the subjects were instructed to produce the artifacts within a three-second time window. In the last section, we discussed the technique to detect the onset and completion of the artifacts within this window. In this section, we discuss the methodologies used to classify the detected artifacts effectively. Since a subject can initiate and complete an artifact at any time, and the artifacts are of varying duration, we adopted time-warping techniques to classify them.

Table 3.1: Number of subjects and trials used in the detection and classification of artifacts in 128-electrode EEG (Train/Test counts for the Single Session, Inter Sessions, and Inter Subjects conditions).

3.7 Linear Time Warping algorithm
Linear Time Warping (LTW) is a method in sequential pattern recognition in which both series are interpolated to a common, fixed length and then compared with a distance metric such as the Euclidean distance. The onset and completion of an artifact in the EEG signal are first identified using the threshold method. Nevertheless, the duration of the artifacts can differ from instance to instance. We therefore first interpolate the test and reference series to the maximum of their two lengths and then compare them using the Euclidean distance. Before comparison, the variability in amplitude is normalized. Finally, the LTW distance is employed as a score to classify an artifact of a specific type using a distance-based k-Nearest Neighbour classifier, as given in Algorithm 1.

Input: evaluationTrial, referenceTrials, noOfNeighbours
Result: predictedLabel
predictedLabel := nil;
costFromReferenceTrials[] := Inf;
forall referenceTrial in referenceTrials do
    maxLength := max(len(evaluationTrial), len(referenceTrial));
    evaluationTrial := linearInterpolate(evaluationTrial, maxLength);
    referenceTrial := linearInterpolate(referenceTrial, maxLength);
    costFromReferenceTrials[referenceTrial] := ||evaluationTrial - referenceTrial||;
end
sort(costFromReferenceTrials);
predictedLabel := argmin(costFromReferenceTrials[1 : noOfNeighbours]);
Algorithm 1:
Linear Time Warping algorithm based artifacts classification
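As a concrete sketch of Algorithm 1, the LTW distance and the k-NN decision can be written in a few lines of Python. This is an illustrative re-implementation, not the thesis code: it assumes one-dimensional artifact-signals, normalizes amplitude by standardization, and resolves the top-k selection by majority vote among the k nearest labels.

```python
import numpy as np
from collections import Counter

def linear_interpolate(series, length):
    """Resample a 1-D series to `length` points by linear interpolation."""
    old = np.linspace(0.0, 1.0, num=len(series))
    new = np.linspace(0.0, 1.0, num=length)
    return np.interp(new, old, series)

def ltw_distance(a, b):
    """LTW distance: stretch both series to a common length, then Euclidean."""
    n = max(len(a), len(b))
    a, b = linear_interpolate(a, n), linear_interpolate(b, n)
    a = (a - a.mean()) / (a.std() + 1e-12)   # normalize amplitude variability
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.linalg.norm(a - b)

def classify_ltw(trial, references, k=1):
    """k-NN over LTW distances; `references` is a list of (series, label)."""
    dists = sorted((ltw_distance(trial, r), lbl) for r, lbl in references)
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]
```

Because both series are stretched to the same length before comparison, the score is insensitive to a uniform (linear) difference in artifact duration, which is exactly the variability LTW is meant to absorb.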
Intra-Session artifacts classification
In this experiment, single-session data of all nine subjects were utilized. From every session, a random 50% of the trials were utilized as the reference series, and the remaining trials were employed as the test series. The outcome of this classification for different values of k is provided in Table 3.2. We observed that the classification accuracy of eye blink is highest, followed by jaw movement, head nod, and head turn. We also observed that head nod and head turn were difficult to distinguish.

Table 3.2: Artifacts classification results of Linear Time Warping in Intra-Session (per-artifact accuracies for Jaw Movement, Head Nod, Head Turn, and Eye Blink, along with model accuracy, for each k).

Inter-Session artifacts classification
In this subsection, the data of the four subjects with multiple sessions have been utilized. An arbitrarily chosen session of each subject was utilized as the reference series, and the other session was utilized as the test series. The outcome of this arrangement is shown in Table 3.3. The entries in the table show the individual accuracy of each artifact, obtained by the four-class classification models, along with the total accuracy of the designed classification model.

Table 3.3: Artifacts classification results of Linear Time Warping in Inter-Session.

Inter-Subject artifacts classification
In this subsection, artifact signatures were examined across subjects. From the nine subjects, four subjects' EEG data were arbitrarily picked as the reference series, and the remaining five subjects' EEG data were utilized as the test series. The outcome of this arrangement is presented in Table 3.4. The entries in the table show the individual accuracy of each artifact, obtained by the four-class classification models, along with the total accuracy of the designed classification model.

Table 3.4: Artifacts classification results of Linear Time Warping in Inter-Subject.

From Tables 3.2, 3.3, and 3.4, we observe a graceful degradation in performance across these conditions. Nevertheless, the performance is much better than chance, which suggests that artifact signatures are indeed present in the EEG signal.
In contrast to LTW, the Dynamic Time Warping (DTW) algorithm attempts to find a non-linear warp between the test and reference series [Sakoe and Chiba, 1978]. This warping between two time series can be utilized to ascertain their similarities and differences by computing the DTW distance. The DTW distance is employed as a score to classify an artifact of a specific type using distance-based k-Nearest Neighbour classifiers, as given in Algorithm 2. The following sections deal with variants of the DTW algorithm and their application to classifying artifacts in EEG signals. The data split for these classification experiments is identical to that of the LTW experiments, as mentioned in Section 3.7.

The vanilla Dynamic Time Warping computes the DTW distance between two given series. The following recurrence determines the DTW distance of two signals:

r_{i,j} = dist(x̄_i, ȳ_j) + min(r_{i−1,j−1}, r_{i−1,j}, r_{i,j−1})    (3.1)

where x̄_i and ȳ_j are features of the two time series being compared, and dist(x̄_i, ȳ_j) is the Euclidean distance between x̄_i and ȳ_j. r_{0,0} is initialized with 0, and the remaining r_{i,j} are initialized with infinity. The signals are normalized before comparison.

Input: evaluationTrial, referenceTrials, noOfNeighbours
Result: predictedLabel
leastCost := Inf;
predictedLabel := nil;
costFromReferenceTrials[] := Inf;
evalLength := length(evaluationTrial);
forall referenceTrial in referenceTrials do
    refLength := length(referenceTrial);
    for iter1 := 1 to refLength do DTW[iter1, 0] := Inf; end
    for iter2 := 1 to evalLength do DTW[0, iter2] := Inf; end
    DTW[0, 0] := 0;
    for iter1 := 1 to refLength do
        for iter2 := 1 to evalLength do
            cost := d(referenceTrial[iter1], evaluationTrial[iter2]);
            DTW[iter1, iter2] := cost + min(DTW[iter1 - 1, iter2], DTW[iter1, iter2 - 1], DTW[iter1 - 1, iter2 - 1]);
        end
    end
    costFromReferenceTrials[referenceTrial] := DTW[refLength, evalLength];
end
sort(costFromReferenceTrials);
predictedLabel := argmin(costFromReferenceTrials[1 : noOfNeighbours]);
Algorithm 2:
Dynamic Time Warping algorithm based artifacts classification

The DTW distance is employed as a score to classify or detect an artifact of a specific type using a distance-based k-Nearest Neighbour classifier. The classification accuracies for intra-session, inter-session, and inter-subject conditions are shown in Tables 3.5, 3.6, and 3.7, respectively. The entries in the tables show the individual accuracy of each artifact, obtained by the four-class classification models, along with the total accuracy of the designed classification model.

Table 3.5: Artifacts classification results of vanilla Dynamic Time Warping in Intra-Session.
Table 3.6: Artifacts classification results of vanilla Dynamic Time Warping in Inter-Session.

The normalized Dynamic Time Warping is a variant of the DTW algorithm. The DTW matrix computation is identical to that of Subsection 3.8.1; however, the final DTW distances are normalized by the length of the warping path. The normalized DTW distance is employed as a score to classify an artifact of a specific type using a distance-based k-Nearest Neighbour classifier. The classification accuracies for intra-session, inter-session, and inter-subject conditions are shown in Tables 3.8, 3.9, and 3.10, respectively. The entries in the tables show the individual accuracy of each artifact, obtained by the four-class classification models, along with the total accuracy of the designed classification model.

The time synchronized Dynamic Time Warping computes the DTW distance between two given series using the following recurrence:

r_{i,j} = dist(x̄_i, ȳ_j) + min(r_{i−1,j−1}, r_{i−1,j})    (3.2)

where x̄_i and ȳ_j are features of the two time series being compared, and dist(x̄_i, ȳ_j) is the Euclidean distance between x̄_i and ȳ_j.
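The recurrences in equations (3.1) and (3.2) differ only in the set of allowed predecessor cells, and the normalized variant differs only in a final division by the warping-path length. As an illustrative sketch (not the thesis code; it assumes 1-D inputs and an absolute-difference local distance), all three DTW variants can be computed as:

```python
import numpy as np

def dtw_distance(x, y, normalized=False, time_synchronized=False):
    """DTW distance per equations (3.1)/(3.2).

    r[0, 0] = 0 and the remaining cells start at infinity, as in the text.
    `normalized` divides the final cost by the warping-path length;
    `time_synchronized` drops the horizontal predecessor r[i, j-1],
    restricting the warp to vertical and diagonal moves (equation 3.2).
    """
    n, m = len(x), len(y)
    r = np.full((n + 1, m + 1), np.inf)
    steps = np.zeros((n + 1, m + 1))          # warping-path length per cell
    r[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cand = [(r[i - 1, j - 1], steps[i - 1, j - 1]),
                    (r[i - 1, j], steps[i - 1, j])]
            if not time_synchronized:
                cand.append((r[i, j - 1], steps[i, j - 1]))
            best_cost, best_steps = min(cand)   # cheapest predecessor cell
            r[i, j] = abs(x[i - 1] - y[j - 1]) + best_cost
            steps[i, j] = best_steps + 1
    return r[n, m] / steps[n, m] if normalized else r[n, m]
```

Tracking the path length alongside the cost is one simple way to obtain the normalization denominator without an explicit backtracking pass.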
The initialization of the r matrix is the same as that of Subsection 3.8.1. Here, the warping path is restricted to vertical and diagonal moves. The signals are normalized to zero mean and unit variance before comparison. The estimated DTW distance is employed as a score to classify or detect an artifact of a specific type using a distance-based k-Nearest Neighbour classifier. The classification accuracies for intra-session, inter-session, and inter-subject conditions are shown in Tables 3.11, 3.12, and 3.13, respectively. The entries in the tables show the individual accuracy of each artifact, obtained by the four-class classification models, along with the total accuracy of the designed classification model.

Table 3.7: Artifacts classification results of vanilla Dynamic Time Warping in Inter-Subject.
Table 3.8: Artifacts classification results of normalized Dynamic Time Warping in Intra-Session.

From Tables 3.5 through 3.13, it is evident that the normalized DTW performed best on the task at hand. This follows from the fact that the normalized DTW takes the length of the sequences into consideration, whereas the other DTW variants are indifferent to length. For example, if two arbitrary artifacts are both short in duration, the DTW distance between them will be small irrespective of their classes, which is a drawback for the kNN classifier.

From Tables 3.8, 3.9, and 3.10, it can be remarked that the proposed method classifies the artifacts with high accuracy when all sessions are utilized for both training and testing. The accuracy drops when testing across sessions and decreases further when testing across subjects. It is crucial to note that only the "Head Nod" class is classified with reduced accuracy.
The principal cause of this phenomenon is that both head turn and head nod affect an identical set of electrodes (see Figure 3.2); consequently, their signatures are not separable across sessions and subjects. Furthermore, LTW provides more reliable performance in the inter-session and inter-subject settings. This result suggests that the time warping in EEG artifact signals is more linear than non-linear.

Table 3.9: Artifacts classification results of normalized Dynamic Time Warping in Inter-Session.
Table 3.10: Artifacts classification results of normalized Dynamic Time Warping in Inter-Subject.

In the earlier sections, the three-second artifact windows, in which the subject was instructed to produce the artifacts, were extracted as epochs, and threshold-based detection of the onset and decay of the artifacts was employed. However, in a real-world setup, prior information about the occurrence of artifacts is not available. So, in this section, we attempt to tackle the more challenging problem of detecting and classifying artifacts in the continuous EEG signal. Here, the entire acquired EEG record was used, and the onset and completion of all artifacts were identified using the threshold technique discussed in Section 3.5. A detected artifact was considered true if the period between its detected onset and completion overlapped 60% or more with the three-second ground-truth artifact window.
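The detection step on the continuous signal and the 60% overlap criterion can be sketched as follows. This is an illustrative simplification, not the thesis code: the smoothing length and η value are placeholder settings, and the 60% rule is interpreted here as the intersection covering at least 60% of the ground-truth window.

```python
import numpy as np

def detect_artifacts(eeg, win=100, eta=0.2):
    """Detect artifact spans in a continuous (channels x samples) EEG record.

    Per Section 3.5: per-sample energy averaged over electrodes, smoothed
    with a `win`-sample moving average, thresholded at mean + eta * std.
    Returns (onset, offset) sample indices of contiguous supra-threshold runs.
    """
    energy = np.mean(eeg ** 2, axis=0)
    smooth = np.convolve(energy, np.ones(win) / win, mode="same")
    above = smooth > smooth.mean() + eta * smooth.std()
    edges = np.flatnonzero(np.diff(above.astype(int)))
    bounds = np.concatenate(([0], edges + 1, [len(above)]))
    return [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)
            if above[bounds[k]]]

def is_true_detection(det, gt, min_overlap=0.6):
    """60% rule: the intersection must cover >= min_overlap of the ground truth."""
    inter = max(0, min(det[1], gt[1]) - max(det[0], gt[0]))
    return inter >= min_overlap * (gt[1] - gt[0])
```

In practice η would be tuned on held-out data, exactly as described for Table 3.14 below.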
The hyper-parameter η was tuned; the values that operate most reliably in this setup are reported in Table 3.14, and the best value was adopted in the rest of the subsection.

Table 3.11: Artifacts classification results of time synchronized Dynamic Time Warping in Intra-Session.
Table 3.12: Artifacts classification results of time synchronized Dynamic Time Warping in Inter-Session.

From the last section, it is evident that the normalized Dynamic Time Warping (DTW) performs best among its variants. So, in this section, we compare the normalized DTW with Linear Time Warping (LTW). The outcomes are presented for the Intra-Session, Inter-Session, and
Inter-Subject conditions in Tables 3.15, 3.16, and 3.17, respectively. From Table 3.15, it is evident that the proposed model works well for seen subjects, with accuracy around 80%. The performance for the inter-session and inter-subject conditions is presented in Tables 3.16 and 3.17. From the outcomes, it is observed that, despite the performance degradation in the inter-session and inter-subject settings, the pattern of the EEG artifacts is consistent. Furthermore, given the intra-subject performance, the artifacts can be productively employed in BCIs.

Table 3.13: Artifacts classification results of time synchronized Dynamic Time Warping in Inter-Subject.
Table 3.14: F1 scores for values of η from −0.1 to 0.6.
Table 3.15: Artifacts classification results of Linear Time Warping and normalized Dynamic Time Warping in Intra-Session after detection.
Table 3.16: Artifacts classification results of Linear Time Warping and normalized Dynamic Time Warping in Inter-Session after detection.
Table 3.17: Artifacts classification results of Linear Time Warping and normalized Dynamic Time Warping in Inter-Subject after detection.

Vanilla DTW performs well in the Intra-Session study. However, LTW outperforms vanilla DTW in the Inter-Session and Inter-Subject analyses. Normalized DTW yields remarkable results in the Intra-Session setup; despite that, LTW outperforms normalized DTW. Time synchronized DTW yields comparable results across varying numbers of neighbours in the classifier, so its metric space is robust; however, normalized DTW and LTW outperform it. After detecting the onset and completion of artifacts, the performance of LTW was better in the Inter-Session setting. Moreover, in the Inter-Subject setup, LTW outperforms normalized DTW with a minimal number of neighbours, while normalized DTW surpassed LTW when the number of neighbours in the classifier was increased.
Summary
This chapter proposed an intelligent threshold-based detection method to identify the onset and completion of artifacts in the EEG, along with LTW and DTW distance-based methods for classifying the artifacts. It was also evident that LTW and normalized DTW work best in the classification of artifacts. Further, the proposed models were found to be robust enough to analyze and classify the EEG artifacts of unseen subjects for three classes. In the subsequent chapters, we discuss developing a feasible interface for the speech- and motor-challenged.
CHAPTER 4
Brain-Computer Interface using artifact signatures in Electroencephalogram

4.1 Introduction
People with a speech impediment as a consequence of medical conditions like paralysis or cerebral palsy need personal assistant devices that understand and interpret their motor-disabled gestures, so that they can interact and communicate with the external environment. Assuming a person is capable of employing at least one muscle to interact, in the contemporary state of technology, muscle-based interfaces are favored owing to the sparse data-transfer rate of Brain-Computer Interfaces (BCI) [Nicolas-Alonso and Gomez-Gil, 2012]. As discussed in the previous chapter, artifacts can be prominently recognized in the EEG signal and can be efficiently predicted using simple time series and time warping analysis.

In this chapter, we propose two working BCIs, built with different electroencephalography (EEG) electrode setups and a smartphone. The first proposed BCI employs an eye blink detector as the input mechanism; the second uses both eye blinks and jaw movements. With these interfaces, a person may communicate with smartphone applications to get simple chores completed. In particular, mapping EEG artifact signal patterns to keyboard activities can be employed to produce words and sentences. To improve the efficiency of the proposed BCIs, we use word completion models to decrease the number of artifacts required. The system communicates with a text-to-speech synthesis (TTS) system and outputs the speech corresponding to the word the user intended to deliver.

The remainder of the chapter is organized as follows. Section 4.2 briefs the conventional T9 keyboard. Section 4.3 outlines the Google trillion-word corpus, which is used as the dictionary of the proposed BCIs. Section 4.4 describes the Android TTS system employed in the BCI. Section 4.5 briefs the types of eye blinks. The first proposed eye-blink-based BCI is explained in Section 4.6. Section 4.7 discusses the second proposed BCI, which uses eye blinks and jaw movements.
T9 stands for Text on nine keys. T9 is a predictive text technology for mobile phones that comprise a 3x4 numeric keypad. The objective of the T9 keyboard is to make it simpler to enter text. It enables words to be entered with a single keypress per character, which is a substantial advancement over the multi-tap method used in traditional mobile-phone text entry, in which several letters are associated with each key and choosing one letter frequently demands multiple keypresses.

T9 combines the groups of characters on each phone key with a fast-access dictionary of words, powered by Web 1T 5-gram Version 1, contributed by Google Inc. [Brants and Franz, 2006]. It looks up in the dictionary all words matching the sequence of keypresses and ranks them by frequency of use. For example, in English, 4663 matches "good", "home", "gone", "hood", etc. Such sequences are identified as textonyms; "home" is a textonym of "good". T9 is designed to favor the most common textonym, such as "good" over "gone" or "home", "hand" over "game", or "bad" over "ace" or "cad". As the user enters matching keypresses, along with words and stems, the system also offers word completions. On a mobile phone with a numeric keypad, every time a key is pressed, the algorithm yields the letters that are most suitable for the keys pressed so far. For instance, to enter the word 'the', the user would press 8, followed by 4 and 3; the display would present 't', followed by 'th', then 'the'. When the less-common word "Felix" is intended, while entering 33549 the display exhibits 'E', followed by 'De', 'Del', 'Deli', and finally 'Felix'. This is an illustration of letters switching while a word is being entered.
Google Research released a word n-gram model for a family of research and development projects, such as speech recognition, statistical machine translation, spelling correction, information extraction, entity detection, and so on. Usually, such models have been computed from training corpora comprising several billion words, and Google has been leveraging the immense capacity of data centers and distributed processing to prepare more extensive training corpora. Google scaled up the volume of their data by orders of magnitude, resulting in a training corpus of one trillion words from public web pages. The dataset holds 1,024,908,267,229 tokens of running text with 13,588,391 distinct words, after dropping words that occur fewer than 200 times. The distinct words were sorted by number of occurrences. Based on the textonyms, the best five words were used to build the dictionary for the proposed BCIs.
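Building this dictionary amounts to grouping the frequency-sorted vocabulary by T9 digit sequence and keeping the five most frequent textonyms per sequence. The sketch below illustrates the idea with made-up word counts; the real dictionary is built from the Web 1T corpus counts.

```python
from collections import defaultdict

# Standard T9 letter groups on keys 2-9.
T9_KEYS = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}.items() for c in letters}

def to_keypresses(word):
    """Map a word to its T9 digit sequence, e.g. 'good' -> '4663'."""
    return "".join(T9_KEYS[c] for c in word.lower())

def build_dictionary(word_counts, top=5):
    """Group words by digit sequence; keep the `top` most frequent textonyms."""
    buckets = defaultdict(list)
    for word, count in word_counts.items():
        buckets[to_keypresses(word)].append((count, word))
    return {seq: [w for _, w in sorted(ws, reverse=True)[:top]]
            for seq, ws in buckets.items()}
```

At lookup time, the BCI only needs the digit sequence entered so far to retrieve the five ranked suggestions.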
Text-to-Speech (TTS), also known as speech synthesis, is a powerful feature in Android that can be used to supplement Android applications, increasing comfort and making applications more useful. Devices and applications employing TTS technology span a wide variety of fields such as education, screen readers, mobile technologies, accessibility, and communications. The proposed BCIs utilize the Android Text-To-Speech synthesizer API [Google, 2018] for speech synthesis, as detailed in the following sections.
A single blink is defined by the closing and opening of the eyelid. Blinking is a fundamental function of the eye that spreads tears over, and eliminates irritants from, the surface of the cornea. There are three kinds of eye blinks, as discussed below.
Spontaneous blink
Spontaneous blinking is performed without any external or visible stimulus and without intentional effort. This kind of eye blinking is controlled in the pre-motor brain stem. Humans blink their eyes about 15 times per minute.

Reflex blink
A reflex blink happens in response to an outside stimulus, for example contact with the cornea or an object that appears quickly in front of the eye. A reflex blink occurs faster than a spontaneous blink.
Voluntary blink
A voluntary blink is a conscious eye blink, which has a larger amplitude than a reflex blink. The voluntary eye blink is used for building the BCI.
In this section, we discuss the eye blink artifact-based BCI using a single electrodeelectroencephalogram (EEG) device.
In this section, the proposed BCI employs a MindWave Mobile+ headset [NeuroSky, 2015] with a Bluetooth interface for EEG data retrieval at a sampling rate of 512 Hz. The raw signal obtained from the device is the quantized potential difference estimated between the frontal electrode and the earlobe, as discussed in Section 2.8.1.
Since the raw EEG signal is acquired over a Bluetooth connection, noise may exist in the signal. To suppress the noise, a moving average (M_n) is employed to smoothen the signal:

M_n = (Σ_{i=n−m}^{n} t_i) / m

where t_i is the quantized EEG signal at time instant i, n is the current time instant, and m is the number of samples employed for the moving average. Empirically, we determined that m = 50 operates best. An illustrative plot of the raw and smoothed EEG signal is shown in Figure 4.1 (raw EEG signal and corresponding moving average).

A small misplacement of an EEG electrode on the scalp may affect the signal to a substantial degree. The first twenty seconds are therefore utilized to calibrate a personalized threshold for every subject:

Personalized Threshold, P_t = μ_{M_n} + (2 × σ_{M_n})

where μ_{M_n} and σ_{M_n} are the mean and standard deviation of the moving average M_n, respectively. Whenever the amplitude of the EEG signal crosses the estimated personalized threshold, the model registers an occurrence of an eye blink.

An empirical study of these samples reveals that a human eye blink is shorter than 500 ms, and the pause between two voluntary eye blinks is shorter than 1000 ms. The moving average (M_n) over an interval of 1000 ms is obtained. Once it passes the threshold, a timer thread is started with a waiting interval of 1000 ms. Each time M_n passes the threshold, the blink count is incremented and the timer is reset to 1000 ms. Once the timer expires, the number of eye blinks is delivered to the interface. The procedure for eye blink prediction is given in Algorithm 3.

Result: noOfBlinks
noOfBlinks = 0;
timer = 1000 ms;
while timer not elapsed do
    if M_n with positive slope crosses P_t then
        noOfBlinks++;
        Reset timer with 1000 ms;
    end
end
Algorithm 3: Algorithm for eye blink counting

Empirically, it is observed that two standard deviations above the mean are effective in predicting the eye blinks accurately.
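An offline sketch of this pipeline, smoothing, the personalized threshold P_t = μ + 2σ, and the timer-based counting of Algorithm 3, is given below. It is illustrative only: it processes a finished buffer rather than a live stream, estimates μ and σ from the whole buffer instead of a twenty-second calibration phase, and expresses the 1000 ms timer in sample counts.

```python
import numpy as np

def count_blinks(signal, fs=512, m=50, gap_ms=1000):
    """Count voluntary eye blinks in a single-channel EEG buffer.

    Moving average of m samples, personalized threshold mu + 2*sigma,
    and upward threshold crossings no further than gap_ms apart counted
    as one burst of blinks (the burst ends when the timer would elapse).
    """
    avg = np.convolve(signal, np.ones(m) / m, mode="same")
    p_t = avg.mean() + 2.0 * avg.std()                       # personalized threshold
    rising = np.flatnonzero((avg[1:] > p_t) & (avg[:-1] <= p_t))  # positive-slope crossings
    gap = int(fs * gap_ms / 1000)
    blinks, prev = 0, None
    for c in rising:
        if prev is not None and c - prev > gap:
            break                                            # timer elapsed; burst over
        blinks += 1
        prev = c
    return blinks
```

In the real interface this loop runs on the live Bluetooth stream, and the count is delivered to the keyboard once the timer expires.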
Figure 4.2: Screenshot of virtual T9 prediction keyboard

T9, which stands for Text on Nine keys, uses nine keys to cover the English alphabet, as in Figure 4.2. It does not need the multi-tap method employed in a conventional mobile-phone keyboard. The virtual T9 keyboard is partitioned into four regions, namely the keypad region, suggestion region, current word region, and phrase region. Fundamental navigation is aided by highlighting each key cyclically with a three-second timer. The highlighting starts from the keypad region and moves to the other regions based on the user's eye blinks. In the keypad region, the user can pick the highlighted key by reacting with two voluntary eye blinks. Once a character is chosen, it is appended to the current word region, and five suggestion words are presented in the suggestion region. These top five words are predicted using the Google web trillion-word corpus, discussed in Section 4.3. The words in the suggestion list are then highlighted one after the other with a three-second timer, and the user can pick any highlighted word with two voluntary eye blinks. The chosen words are stored in the phrase region. The backspace button is then highlighted for three seconds; blinking twice in that time eliminates the last character in the current word region. Then the phrase region is highlighted for three seconds. If the user makes two voluntary eye blinks in these three seconds, the words in the phrase region are sent to a TTS system, and the synthesized speech is played, as discussed in Section 4.4. An illustrative flow chart of the process is shown in Figure 4.3. This process repeats until the application exits.

The configuration of the virtual ABC keyboard, presented in Figure 4.4, is comparable to that of the virtual T9 keyboard.
The virtual ABC keyboard includes all the features of the virtual T9 keyboard, along with the convenience of speaking non-dictionary words, such as names. The mechanisms of highlighting, typing, the partial word completion model, navigating the suggestion list, picking a recommended word, appending words to the phrase region, and presenting the TTS-synthesized speech from the phrase region via eye blinks are as discussed in subsection 4.6.5.
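The cyclic highlighting and two-blink selection described above can be modeled as a simple timed state machine. The sketch below is purely illustrative of the interface logic, not the application code: keys and dwell time are placeholders, and double-blink events are given as precomputed timestamps rather than detected live.

```python
import itertools

def run_t9_cycle(keys, blink_events, dwell=3.0):
    """Simulate cyclic highlighting: each key is highlighted for `dwell`
    seconds; a double blink during the highlight selects that key.

    `blink_events` is a sorted list of double-blink timestamps in seconds.
    """
    selections = []
    events = iter(blink_events)
    pending = next(events, None)
    for slot, key in enumerate(itertools.cycle(keys)):
        if pending is None or slot > 100:      # stop when no input remains
            break
        start, end = slot * dwell, (slot + 1) * dwell
        if start <= pending < end:             # blink fell in this highlight window
            selections.append(key)
            pending = next(events, None)
    return selections
```

The same loop structure serves both keyboards; only the key layout and the extra non-dictionary entry mode differ.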
In the previous section, we discussed the proposed eye blink based BCI with a single-electrode EEG. However, people with partial motor disabilities can produce more than eye blinks alone; the second proposed BCI therefore also exploits jaw movements. A demo video is available at https://goo.gl/PSrsYe.
Figure 4.4: Screenshot of virtual ABC prediction keyboard

In this section, a four-channel Muse EEG headset [cho, 2018] is used to acquire and stream the brain EEG signal, with a wireless Bluetooth connection to transfer the EEG signal to the Android mobile phone. A brief preface about the Muse EEG headset was presented in subsection 2.8.2. The acquired EEG signal was preprocessed as discussed in subsection 4.6.2.
The crucial part of building an artifact-based BCI is detecting the presence of artifacts in the EEG signal and predicting the type of artifact. A simple yet powerful threshold technique is used to detect the artifacts, and Dynamic Time Warping (DTW) is employed to classify the type of artifact. The application is equipped with templates for the eye blink and jaw movement artifacts, which serve as the reference sequences in the DTW distance estimation.
Detection
The Android application is equipped with a template for eye blink and jaw clench. The threshold for the artifacts is estimated from the template sequences, as discussed in Section 3.5, with the difference that the number of electrodes in the Muse EEG device is four. Whenever the real-time EEG crosses the threshold, we count the event as an appearance of an artifact.

Figure 4.5: Flow chart of virtual prediction keyboard with eye blinks and jaw clenches
Classification
Once an artifact is detected, a chunk of the EEG signal with a duration of 500 ms, starting from the detection point, is extracted. The extracted EEG segment is classified to obtain the type of artifact using the DTW-based classification technique described in Section 3.8. Based on the type of artifact, actions are mapped in the virtual keyboards, as discussed in the following sections.

4.7.3 Virtual Prediction Keyboards
A brief review of the T9 and ABC keyboards, their importance, the developed virtual T9 and ABC keyboards and their four regions, the navigation of each key, the operation of picking the highlighted key with two consecutive eye blinks, character appending, the suggestion-word mechanism, and the backspace mechanism was given in subsections 4.6.5 and 4.6.6.

In contrast to the phrase region being highlighted for three seconds to await two eye blinks, whenever the application encounters two consecutive jaw clenches, the words in the phrase region are sent to the TTS, and the synthesized speech is presented. An illustrative flow chart of this process is shown in Figure 4.5. This process repeats until the application exits.
Summary
In this chapter, we proposed two BCIs for people with speech or motor impediments. The first uses eye blinks in the EEG signal, whereas the second uses eye blinks and jaw movements in the EEG signal, to produce a series of English words that are sequentially synthesized to generate speech output. It is also evident from the chapter that artifact-based BCIs can be incorporated into various EEG apparatus.
CHAPTER 5
Summary and Conclusions
The Brain-Computer Interface (BCI) is a remarkable tool for interpreting brain signals. Systems devised in this fashion can be useful for people paralyzed by spinal cord injury or Amyotrophic Lateral Sclerosis (ALS). Such a system enables the operation of a computer or any other electronic device, as well as providing a viable means of communication. The thesis deals with the online and offline detection and classification of artifacts in Electroencephalography (EEG) signals, to develop and enhance new and existing BCI systems.
In the thesis, two primary challenges have been tackled. The first is to detect artifacts and predict the type of artifact that appears in the EEG signal. A simple and effective threshold-based method is employed to detect the artifacts. It is evident from Section 3.9 that the proposed threshold method is reliable in detecting the appearance of any artifact. Once the artifacts have been detected, time warping techniques are used to classify them into various types based on their nature. The proposed linear and dynamic time warping techniques for classifying the type of artifact are effective in terms of classification accuracy.

The second aim is to utilize the proposed artifact detection and classification techniques to build working, real-time, easy-to-use BCIs for people with partial or complete motor disabilities. A simple BCI with threshold-based eye blink detection was developed with the aid of a single-electrode EEG headset. Further, as an extension of this work, an efficient time-warping-classification-based BCI was developed with a four-electrode EEG headset.

5.2 Criticism

• In Chapter 3, in contrast to the conventional data split of 80% for training or reference and 20% for testing, the split was 50% for training or reference and 50% for testing. There are two reasons for this. First, the time warping techniques have time complexity O(n²) per reference, so increasing the reference data would slow down the estimation of time warping distances. Second, the distance-based k-Nearest Neighbour classifier used in the chapter needs all the reference data in memory, which also leads to memory constraints.

• How is it justified to designate an artifacts-based speller as a Brain-Computer Interface (BCI), given that artifacts originate from non-cerebral areas?
The people withAmyotrophic lateral sclerosis (ALS) have partial or limited muscular movements.Furthermore, the last set of organs to paralyze is the facial muscles. So, it is sen-sible to augment the existing and proposing BCIs with the aid of the artifacts. • Since the thesis work is on minimal muscular movement or artifacts based BCI,why don’t we use a camcorder to record the streaming video to detect and clas-sify the artifacts? Artifacts like eye blinks are shorter in nature, hardly 500 ms.The data acquisition system used in the proposed model uses a sampling rate of512 Hz, whereas the framerate of a camcorder is relatively way less. Moreover,artifacts impact the EEG signal to a more significant extent; so, it is comparativelysimple to detect and classify the artifacts in the EEG signal. • In chapter 4, the proposed BCIs uses artifacts detection and classification algo-rithm. However, the proposed algorithm assumes that no two artifacts co-occur.What happens to the proposed BCIs, if subject clenches the jaw and blink the eyesimultaneously? The subjects are partially paralyzed and posses limited muscularmovement and use the proposed BCIs to communicate to the outer world. Hav-ing said that, the subject seldom taunts the proposed models with simultaneousartifacts. • Dynamic Time Warping (DTW) is a conventional technique in signal process-ing and sequential pattern analysis. Moreover, DTW is a non-parametric pat-tern recognization methodology. Why can’t we extend the proposed models with58arametric techniques such as Hidden Markov Model (HMM) or connectionistmodels such as a Recurrent Neural Network (RNN)? From Section 3.4, it is clearthat the dimension of each trial is 128x750, and from the Table 3.1, it is appar-ent that we have 206 trials in total. Given the dimension and size of the trial,it is implausible to build a parametric model, as it leads to the curse of dimen-sionality. However, the counter-argument can be reducing the dimension of thetrials. 
We have a collection of dimension reduction techniques; Principal Compo-nent Analysis (PCA) from statistics and Autoencoder from connectionist models.PCA demands the covariance matrix, which inturns require a substantial numberof trials. Autoencoder necessitates training of the weight parameters, and we lacka considerable number of trials. • In the thesis, it is evident that the threshold and the time-warping based techniquesare robust for classifying eye blinks, head turn, head nod, and jaw movement.However, the study can be extended to various other facial muscle movements. • An extensive study on classification on various other artifacts can be aided inbuilding a more sophisticated BCI. • In the thesis, the BCIs developed work robustly in English, whereas the interfacecan be extended to various other regional languages across the globe.59
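The discussion above rests on DTW-distance-based k-Nearest Neighbour classification. The thesis's exact DTW variant and features are not reproduced here; the following is a minimal sketch with toy one-dimensional templates standing in for real EEG trials, and with a plain Euclidean frame cost assumed.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences.

    a, b : arrays of shape (T, d) -- T frames of d-dimensional features.
    Runs in O(len(a) * len(b)) time per pair, which is why a large
    reference set makes distance estimation slow.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, references, labels, k=1):
    """Label a query trial by majority vote of its k nearest references."""
    dists = np.array([dtw_distance(query, r) for r in references])
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy example: two artifact "templates" with different temporal shapes.
t = np.linspace(0, 1, 50)[:, None]
blink = np.exp(-((t - 0.5) ** 2) / 0.01)   # single bump
jaw = np.sin(2 * np.pi * 4 * t)            # oscillatory pattern
refs, labels = [blink, jaw], ["blink", "jaw"]
query = np.exp(-((t - 0.6) ** 2) / 0.01)   # a time-shifted bump
print(knn_classify(query, refs, labels))   # prints "blink"
```

The sketch also makes the 50/50 split rationale concrete: every query trial must be compared against every reference trial at quadratic cost, and every reference must be held in memory, so halving the reference set directly halves both burdens.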
LIST OF PAPERS BASED ON THESIS

1. Srihari Maruthachalam, Sidharth Aggarwal, Mari Ganesh Kumar, Mriganka Sur, Hema A Murthy, "Brain-Computer Interface using Electroencephalogram Signatures of Eye Blinks", Interspeech 2018, Hyderabad, India, pp. 1059-1060, 2018.

2. Srihari Maruthachalam, Mari Ganesh Kumar, Hema A Murthy, "Time Warping Solutions for Classifying Artifacts in EEG", pp. 4537-4540, 2019.

GRADUATE TEST COMMITTEE
Chairperson: Prof. N. S. Narayanaswamy
Department of Computer Science and Engineering
Indian Institute of Technology, Madras

Guide: Prof. Hema A. Murthy
Department of Computer Science and Engineering
Indian Institute of Technology, Madras