Yunjun Nam
Pohang University of Science and Technology
Publication
Featured research published by Yunjun Nam.
IEEE Signal Processing Letters | 2009
Hyohyeong Kang; Yunjun Nam; Seungjin Choi
Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most existing CSP-based methods exploit covariance matrices on a subject-by-subject basis, so inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of the subjects in consideration. We develop two methods to determine a composite covariance matrix that is a weighted sum of the covariance matrices of the subjects involved, leading to composite CSP. Numerical experiments on dataset IVa of BCI competition III confirm that our composite CSP methods improve classification performance over the standard CSP (on a subject-by-subject basis), especially for subjects with a small number of training samples.
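For illustration, a minimal sketch of the composite CSP idea is given below, assuming per-subject, per-class covariance matrices have already been estimated; the fixed weights stand in for the weight-selection methods described in the paper, and the function names are placeholders.

```python
# Composite CSP sketch: combine covariance matrices across subjects, then solve
# the usual CSP generalized eigenvalue problem on the composite matrices.
import numpy as np
from scipy.linalg import eigh

def composite_csp(covs_class1, covs_class2, weights, n_filters=3):
    """covs_class{1,2}: lists of (channels x channels) covariance matrices,
    one per subject; weights: non-negative weights summing to 1."""
    c1 = sum(w * c for w, c in zip(weights, covs_class1))
    c2 = sum(w * c for w, c in zip(weights, covs_class2))
    # Generalized eigenvalue problem: c1 w = lambda (c1 + c2) w
    eigvals, eigvecs = eigh(c1, c1 + c2)
    # Filters with the largest/smallest eigenvalues are the most discriminative
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return eigvecs[:, picks]  # (channels x 2*n_filters) spatial filters
```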
Journal of Neuroscience Methods | 2015
Bonkon Koo; Hwan-Gon Lee; Yunjun Nam; Hyohyeong Kang; Chin Su Koh; Hyung-Cheul Shin; Seungjin Choi
BACKGROUND For a self-paced motor imagery based brain-computer interface (BCI), the system should be able to recognize the occurrence of a motor imagery as well as its type. However, because detecting the occurrence of a motor imagery is difficult, most motor imagery based BCI studies have focused on the cued motor imagery paradigm. NEW METHOD In this paper, we present a novel hybrid BCI system that uses near-infrared spectroscopy (NIRS) and electroencephalography (EEG) together to achieve an online self-paced motor imagery based BCI. We designed a unique sensor frame that records NIRS and EEG simultaneously for the realization of our system. Based on this hybrid system, we propose a novel analysis method that detects the occurrence of a motor imagery with the NIRS system and classifies its type with the EEG system. RESULTS An online experiment demonstrated that our hybrid system had a true positive rate of about 88% and a false positive rate of 7%, with an average response time of 10.36 s. COMPARISON WITH EXISTING METHOD(S) To the best of our knowledge, no previous report has explored a hemodynamic brain switch for self-paced motor imagery based BCI with a hybrid EEG and NIRS system. CONCLUSIONS Our experimental results show that the hybrid system is reliable enough for use in a practical self-paced motor imagery based BCI.
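A schematic sketch of the two-stage NIRS-switch / EEG-classifier idea is shown below; the helper names, the thresholding rule, and the classifier interface are assumptions for illustration, not the paper's exact detection and classification procedures.

```python
# Hybrid decoding sketch: the NIRS channel acts as a 'brain switch' that gates
# the EEG classifier, enabling self-paced (asynchronous) operation.
import numpy as np

def hybrid_decode(nirs_hbo, eeg_epoch, eeg_classifier, hbo_threshold=0.5):
    """nirs_hbo: recent oxy-hemoglobin samples over motor cortex (assumed
    baseline-corrected); eeg_epoch: the concurrent EEG segment.
    Returns None when no motor imagery is detected (idle state)."""
    # NIRS stage: a sustained hemodynamic increase signals motor imagery onset
    if np.mean(nirs_hbo) < hbo_threshold:
        return None
    # EEG stage: classify the type of motor imagery (e.g., left vs. right hand)
    return eeg_classifier.predict(eeg_epoch.reshape(1, -1))[0]
```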
IEEE Transactions on Biomedical Engineering | 2014
Yunjun Nam; Bonkon Koo; Andrzej Cichocki; Seungjin Choi
We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) the glossokinetic potential (GKP), which involves tongue movement; 2) the electrooculogram (EOG), which involves eye movement; and 3) the electromyogram, which involves teeth clenching. Each potential has been used individually for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all of these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
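A rough sketch of separating tongue (GKP) and eye (EOG) activity by a CSP-like decomposition of the two single-modality covariance matrices is given below; this form of the feature extraction is an assumption, not the exact procedure from the paper.

```python
# GKP/EOG separation sketch: find spatial filters whose output variance is
# dominated by tongue-only or by eye-only calibration data, respectively.
import numpy as np
from scipy.linalg import eigh

def tongue_eye_filters(cov_tongue, cov_eye, n_filters=2):
    """cov_tongue / cov_eye: (channels x channels) covariances estimated from
    tongue-movement-only and eye-movement-only calibration recordings."""
    eigvals, eigvecs = eigh(cov_tongue, cov_tongue + cov_eye)
    order = np.argsort(eigvals)
    # Components dominated by tongue activity vs. by eye activity
    return eigvecs[:, order[-n_filters:]], eigvecs[:, order[:n_filters]]
```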
IEEE Transactions on Biomedical Engineering | 2012
Yunjun Nam; Qibin Zhao; Andrzej Cichocki; Seungjin Choi
Glossokinetic potentials (GKPs) are electric potential responses generated by tongue movement. In this study, we use these GKPs to automatically detect and estimate tongue positions, and we develop a tongue-machine interface. We show that a specific configuration of electrode placement yields discriminative GKPs that vary depending on the direction of the tongue. We develop a linear model to determine the direction of the tongue from GKPs, where we seek linear features that are robust to the baseline drift problem by maximizing the ratio of intertask covariance to intersession covariance. We apply our method to the task of wheelchair control, developing a tongue-machine interface for wheelchair control referred to as tongue-rudder. A teeth-clenching detection system using electromyography was also implemented so that teeth clenching could be assigned as the stop command. Experiments on offline cursor control and online wheelchair control confirm the unique advantages of our method: 1) noninvasiveness, 2) fine controllability, and 3) the ability to integrate with other EEG-based interface systems.
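The drift-robust criterion (maximizing intertask over intersession covariance) can be written as a generalized eigenvalue problem, as sketched below; how the two covariance matrices are estimated from sessions and tasks is an assumption here.

```python
# Drift-robust feature sketch: projections w maximizing w' S_task w / w' S_session w.
import numpy as np
from scipy.linalg import eigh

def drift_robust_features(s_task, s_session, n_features=2):
    """s_task: covariance capturing differences between tongue-direction tasks;
    s_session: covariance capturing session-to-session baseline drift."""
    eigvals, eigvecs = eigh(s_task, s_session)
    order = np.argsort(eigvals)[::-1]        # largest ratios first
    return eigvecs[:, order[:n_features]]    # (channels x n_features) projections
```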
International Conference of the IEEE Engineering in Medicine and Biology Society | 2013
Yunjun Nam; Andrzej Cichocki; Seungjin Choi
Steady-state somatosensory evoked potential (SSSEP) is a recently developed brain-computer interface (BCI) paradigm that uses the brain response to tactile stimulation at a specific frequency. Thus far, spatial information has not been examined in depth in SSSEP BCI, because frequency information was regarded as the main concern of SSSEP analysis. However, given that the somatosensory cortex areas, each of which corresponds to a different body part, are well clustered, we can assume that spatial information could be beneficial for SSSEP analysis. Based on this assumption, we apply the common spatial pattern (CSP) method, the spatial feature extraction method most widely used in the motor imagery BCI paradigm, to SSSEP BCI. Experimental results show that our approach, in which two CSP methods are applied to the signal of each frequency band, improves classification performance from 70% to 75%.
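A sketch of applying CSP band by band, as the abstract describes, is shown below; the band edges, filter order, and the pre-trained CSP filters are placeholders.

```python
# Band-wise CSP features for SSSEP: band-pass filter around each stimulation
# frequency, apply that band's CSP filters, and take log-variance features.
import numpy as np
from scipy.signal import butter, filtfilt

def bandwise_csp_features(trial, fs, bands, csp_filters_per_band):
    """trial: (channels x samples) EEG; bands: list of (low, high) Hz around
    each tactile stimulation frequency; csp_filters_per_band: list of
    (channels x k) CSP filter matrices, one per band (pre-trained)."""
    feats = []
    for (lo, hi), w in zip(bands, csp_filters_per_band):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        x = filtfilt(b, a, trial, axis=1)            # band-limited signal
        z = w.T @ x                                   # spatially filtered signal
        feats.append(np.log(np.var(z, axis=1)))       # log-variance features
    return np.concatenate(feats)
```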
Intelligent Robots and Systems | 2013
Jiaxin Ma; Yu Zhang; Yunjun Nam; Andrzej Cichocki; Fumitoshi Matsuno
Electrooculogram (EOG) signals are potential responses generated by eye movements, and the event-related potential (ERP) is a special electroencephalogram (EEG) pattern evoked by external stimuli. Both EOG and ERP have been used separately to implement human-machine interfaces that can assist disabled patients in performing daily tasks. In this paper, we present a novel EOG/ERP hybrid human-machine interface that integrates the traditional EOG and ERP interfaces. Eye movements such as blinks, winks, gazes, and frowns are detected from EOG signals using a double-threshold algorithm. Multiple ERP components, i.e., N170, VPP, and P300, are evoked by inverted-face stimuli and classified by linear discriminant analysis (LDA). Based on this hybrid interface, we also design a control scheme for the humanoid robot NAO (Aldebaran Robotics, Inc.). Online experimental results show that the proposed hybrid interface can effectively control the robot's basic movements and order it to perform various behaviors. While operating the robot manually by hand takes 49.1 s to complete the experimental sessions, the subject is able to finish the sessions in 54.1 s using the proposed EOG/ERP interface.
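A simplified illustration of double-threshold (hysteresis) detection on a single EOG channel is given below; the threshold values and the mapping from waveform shape to blink, wink, gaze, or frown are assumptions, not the paper's exact rules.

```python
# Double-threshold EOG event detection: an event starts when the rectified
# signal exceeds the upper threshold and ends when it falls below the lower one.
import numpy as np

def detect_eog_events(eog, high=100.0, low=40.0):
    """eog: 1-D baseline-corrected EOG samples (microvolts).
    Returns a list of (onset_index, offset_index) pairs."""
    events, start = [], None
    for i, v in enumerate(np.abs(eog)):
        if start is None and v > high:
            start = i                      # upper threshold: event onset
        elif start is not None and v < low:
            events.append((start, i))      # lower threshold: event offset
            start = None
    return events
```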
International Conference of the IEEE Engineering in Medicine and Biology Society | 2015
Bonkon Koo; Hwan-Gon Lee; Yunjun Nam; Seungjin Choi
In this paper we present an immersive brain-computer interface (BCI) in which we use a virtual reality head-mounted display (VRHMD) to invoke SSVEP responses. Compared to visual stimuli on a monitor display, we demonstrate that visual stimuli in the VRHMD indeed improve user engagement for BCI. To this end, we validate our method with experiments on a VR maze game, the goal of which is to guide a ball to the destination in a 2D grid map embedded in a 3D space by successively choosing one of four neighboring cells using SSVEP evoked by visual stimuli on those cells. Experiments indicate that the average information transfer rate is improved by 10% for the VRHMD compared to the monitor display, and users find the game easier to play with the proposed system.
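For context, an illustrative SSVEP target detector based on canonical correlation analysis (CCA) with sine-cosine references is sketched below; the paper does not state which detection method was used, so this is a generic sketch of how one of the four flickering cells could be selected, not the authors' method.

```python
# Generic CCA-based SSVEP frequency detection sketch.
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_target(eeg, fs, stim_freqs, n_harmonics=2):
    """eeg: (samples x channels) segment; stim_freqs: flicker frequencies
    assigned to the four neighboring cells. Returns the attended frequency."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in stim_freqs:
        # Sine/cosine reference signals at the stimulation frequency and harmonics
        ref = np.column_stack([fn(2 * np.pi * (h + 1) * f * t)
                               for h in range(n_harmonics)
                               for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1).fit(eeg, ref)
        u, v = cca.transform(eeg, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))]
```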
IEEE Systems, Man, and Cybernetics Magazine | 2016
Yunjun Nam; Bonkon Koo; Andrzej Cichocki; Seungjin Choi
Glossokinetic potentials (GKPs) refer to electrical responses involving tongue movements that are measured at electrodes placed on the scalp when the tip of the tongue touches tissue inside the mouth. GKP has been considered an electroencephalography (EEG) artifact to be removed in order to minimize interference with signals from cerebral regions for successful EEG analysis. In this article, we emphasize a different side of GKP: we analyze its spatial patterns to trace tongue movements for developing tongue-machine interfaces. We begin with a brief overview of GKP and its spatial patterns and then describe its potential applications to man-machine interfaces. First, we describe the spatial pattern of GKP for horizontal tongue movements and explain how it can be utilized to identify the position of the tongue. We also introduce a tongue-rudder system in which this technique enables smooth control of an electric wheelchair. Then we describe GKP patterns for vertical and frontal tongue movements, which are closely related to speech production. Based on these patterns, we discuss their application to silent speech recognition, which allows speech communication without producing sound.
Systems, Man and Cybernetics | 2014
Yunjun Nam; Bonkon Koo; Seungjin Choi
The glossokinetic potential (GKP) is the potential response generated by tongue movements. The tongue is one of the main articulators determining the sound of spoken language, and tongue-related GKP can severely interfere with electroencephalography (EEG) signals recorded during language-related tasks. To clarify the relation between GKP and language, and to provide information on which kinds of phonemes evoke GKP responses and where the responses can be observed, we investigate GKP responses for various phonemes. Specifically, we record EEG signals while the tongue touches each place of articulation (the reference points used for the categorization of consonants in phonetics) and then analyze their spatial and scale patterns. The results show that pronouncing dental, palato-alveolar, and palatal consonants evokes a potential decrease in the frontal region and a potential increase in the occipital region. On the other hand, pronouncing retroflex consonants evokes a potential increase in the frontal region and a decrease in the occipital region. We believe that the surveyed GKP patterns will be useful for developing artifact removal techniques that eliminate language-related artifacts. GKP removal could benefit the neuroscience of language processing, the implementation of brain-computer interfaces in real-world conditions, and the development of novel silent speech recognition techniques.
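The regional averaging that this kind of analysis implies can be sketched as below: mean potential change in frontal versus occipital channel groups for each place of articulation. The channel groupings, epoching, and data layout are assumptions for illustration.

```python
# Regional GKP pattern sketch: average potential per scalp region and phoneme class.
import numpy as np

def regional_gkp_pattern(epochs, frontal_idx, occipital_idx):
    """epochs: dict mapping place of articulation -> (trials x channels x samples)
    baseline-corrected EEG epochs; *_idx: lists of channel indices per region."""
    pattern = {}
    for place, x in epochs.items():
        mean_over_time = x.mean(axis=(0, 2))          # average trials and samples
        pattern[place] = {
            "frontal": mean_over_time[frontal_idx].mean(),
            "occipital": mean_over_time[occipital_idx].mean(),
        }
    return pattern
```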
Systems, Man and Cybernetics | 2012
Sangmin Lee; Yunjun Nam; Seungjin Choi
The P300 wave refers to a positive peak with a latency of 300 ms, produced in response to task-relevant stimuli. It is a widely used event-related potential (ERP) in practical brain-computer interface (BCI) systems, where a classifier is trained using discriminative features extracted from a set of labeled examples in order to detect the presence of P300. Given a small number of training examples, the intra- and inter-subject variations in the amplitude and latency of P300 degrade classifier performance. Thus, a longer training period (calibration time) with more labeled examples is required for satisfactory performance. In this paper we present a self-labeling method in which confident unlabeled data, together with their predicted labels, are gradually added to the training set in order to re-train the classifier. Linear discriminant analysis with singular value decomposition is used to progressively extract discriminative features. Experiments demonstrate the high performance of our method, especially when only a small number of training examples are available. We also apply the method to zero-calibration P300-based BCI, which removes subject-dependent calibration procedures by using training sets already recorded from other subjects.
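A condensed sketch of the self-labeling loop is given below: the most confident unlabeled epochs, with their predicted labels, are repeatedly added to the training set and the classifier is retrained. The confidence measure and scikit-learn's SVD-solver LDA stand in for the paper's specific LDA variant and selection rule.

```python
# Self-labeling (self-training) sketch for P300 detection with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def self_labeling(x_train, y_train, x_unlabeled, n_rounds=5, n_add=20):
    """x_train: (n x d) labeled feature vectors; x_unlabeled: (m x d) pool."""
    clf = LinearDiscriminantAnalysis(solver="svd")
    pool = x_unlabeled.copy()
    for _ in range(n_rounds):
        clf.fit(x_train, y_train)
        if len(pool) == 0:
            break
        scores = np.abs(clf.decision_function(pool))   # distance from boundary
        top = np.argsort(scores)[::-1][:n_add]          # most confident epochs
        x_train = np.vstack([x_train, pool[top]])
        y_train = np.concatenate([y_train, clf.predict(pool[top])])
        pool = np.delete(pool, top, axis=0)
    clf.fit(x_train, y_train)                            # final retraining
    return clf
```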