Bonkon Koo
Pohang University of Science and Technology
Publication
Featured researches published by Bonkon Koo.
Journal of Neuroscience Methods | 2015
Bonkon Koo; Hwan-Gon Lee; Yunjun Nam; Hyohyeong Kang; Chin Su Koh; Hyung-Cheul Shin; Seungjin Choi
BACKGROUND For a self-paced motor imagery based brain-computer interface (BCI), the system must recognize both the occurrence of a motor imagery and its type. However, because detecting the occurrence of a motor imagery is difficult, motor imagery based BCI studies have generally focused on the cued motor imagery paradigm. NEW METHOD In this paper, we present a novel hybrid BCI system that uses near-infrared spectroscopy (NIRS) and electroencephalography (EEG) together to achieve an online self-paced motor imagery based BCI. We designed a unique sensor frame that records NIRS and EEG simultaneously for the realization of our system. Based on this hybrid system, we propose a novel analysis method that detects the occurrence of a motor imagery with the NIRS system and classifies its type with the EEG system. RESULTS An online experiment demonstrated that our hybrid system achieved a true positive rate of about 88% and a false positive rate of 7%, with an average response time of 10.36 s. COMPARISON WITH EXISTING METHOD(S) To the best of our knowledge, no prior report has explored a hemodynamic brain switch for self-paced motor imagery based BCI with a hybrid EEG-NIRS system. CONCLUSIONS Our experimental results show that the hybrid system is reliable enough for use in a practical self-paced motor imagery based BCI.
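The two-stage logic described in the abstract — NIRS as a brain switch detecting *that* a motor imagery occurred, EEG classifying *which* one — can be sketched as a simple gate. This is a hypothetical illustration of the general idea, not the authors' pipeline; all names and thresholds are invented:

```python
def hybrid_step(nirs_score, eeg_features, threshold, classify):
    """Stage 1: the NIRS-based brain switch decides whether any motor
    imagery occurred. Stage 2: the EEG classifier decides which one."""
    if nirs_score < threshold:
        return None  # idle state: no imagery detected, EEG stage is skipped
    return classify(eeg_features)

# toy classifier: the sign of a single EEG feature picks the imagery class
clf = lambda f: "left_hand" if f > 0 else "right_hand"
print(hybrid_step(0.2, 1.0, 0.5, clf))   # None (switch stays off)
print(hybrid_step(0.9, 1.0, 0.5, clf))   # left_hand
```

Gating on the hemodynamic signal first is what makes the interface self-paced: the slower NIRS response only has to answer a binary on/off question, while the finer-grained EEG classification runs only when needed.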
IEEE Transactions on Biomedical Engineering | 2014
Yunjun Nam; Bonkon Koo; Andrzej Cichocki; Seungjin Choi
We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) glossokinetic potential (GKP), which involves tongue movement; 2) electrooculogram (EOG), which involves eye movement; and 3) electromyogram, which involves teeth clenching. Each potential has been used individually for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all of these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
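Deriving discriminative features from two class covariance matrices, as the abstract describes, is commonly done with a CSP-style generalized eigendecomposition: spatial filters that maximize variance for one source while minimizing it for the other. A minimal sketch of that general technique (illustrative only, not the authors' exact method; the toy covariances are invented):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_tongue, cov_eye, n_pairs=1):
    """Spatial filters w maximizing w' C_tongue w / w' (C_tongue + C_eye) w
    (and vice versa), via a generalized eigendecomposition."""
    vals, vecs = eigh(cov_tongue, cov_tongue + cov_eye)
    order = np.argsort(vals)
    # eigenvectors at both ends of the spectrum separate the two sources
    return np.column_stack([vecs[:, order[:n_pairs]], vecs[:, order[-n_pairs:]]])

# toy covariances: tongue energy in channel 0, eye energy in channel 2
W = csp_filters(np.diag([10.0, 1.0, 1.0]), np.diag([1.0, 1.0, 10.0]))
print(W.shape)  # (3, 2): one eye-dominated filter, one tongue-dominated filter
```

Projecting the signal through such filters yields features in which the tongue-movement and eye-movement components no longer interfere, which is the effect the abstract reports.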
International Conference of the IEEE Engineering in Medicine and Biology Society | 2015
Bonkon Koo; Hwan-Gon Lee; Yunjun Nam; Seungjin Choi
In this paper we present an immersive brain-computer interface (BCI) in which we use a virtual reality head-mounted display (VRHMD) to invoke SSVEP responses. We demonstrate that visual stimuli presented in a VRHMD, compared to those on a monitor display, indeed improve user engagement for BCI. To this end, we validate our method with experiments on a VR maze game, the goal of which is to guide a ball to a destination in a 2D grid map in 3D space by successively choosing one of four neighboring cells using SSVEPs evoked by visual stimuli on those cells. Experiments indicate that the average information transfer rate improved by 10% with the VRHMD compared to the monitor display, and that users found the game easier to play with the proposed system.
IEEE Systems, Man, and Cybernetics Magazine | 2016
Yunjun Nam; Bonkon Koo; Andrzej Cichocki; Seungjin Choi
Glossokinetic potentials (GKPs) refer to electrical responses involving tongue movements that are measured at electrodes placed on the scalp when the tip of the tongue touches tissue inside the mouth. GKP has traditionally been treated as an electroencephalography (EEG) artifact to be removed, to minimize interference with signals from cerebral regions in EEG analysis. In this article, we emphasize a different side of GKP: we analyze its spatial patterns to trace tongue movements for developing tongue-machine interfaces. We begin with a brief overview of GKP and its spatial patterns and then describe its potential applications to man-machine interfaces. First, we describe the spatial pattern of GKP for horizontal tongue movements and explain how it can be used to identify the position of the tongue. We also introduce a tongue-rudder system in which this technique enables smooth control of an electric wheelchair. Then we describe GKP patterns for vertical and frontal tongue movements, which are closely related to speech production. Based on these patterns, we discuss applications to silent speech recognition, which allows speech communication without producing sound.
The 3rd International Winter Conference on Brain-Computer Interface | 2015
Bonkon Koo; Seungjin Choi
Brain-computer interfaces (BCIs) that exploit the steady-state visually evoked potential (SSVEP) have advanced through the use of various display devices. One of the most recent types of display device is the virtual reality head-mounted display (VRHMD). In this paper, we investigate an SSVEP BCI via a VRHMD. We first identified feasible stimulation frequencies by testing six frequencies that can be generated with the Oculus Rift, an off-the-shelf VRHMD. To evaluate SSVEP detection performance as a function of stimulation frequency, we employed canonical correlation analysis and minimum energy combination methods. According to our experimental results, the system using only low stimulation frequencies (less than 10 Hz) showed higher SSVEP detection performance than systems using low and high stimulation frequencies together. The results show that VRHMDs can be a reliable display device for an SSVEP BCI when the stimulation frequencies are below 10 Hz.
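Canonical correlation analysis for SSVEP detection, as used here, is a standard scheme: for each candidate stimulation frequency, a reference set of sine/cosine harmonics is correlated against the multichannel EEG, and the frequency with the highest canonical correlation wins. A minimal sketch on synthetic data (illustrative only, not the authors' code):

```python
import numpy as np

def canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def detect_ssvep(eeg, fs, candidate_freqs, n_harmonics=2):
    """eeg: (samples, channels). Returns the candidate frequency whose
    sine/cosine reference set best correlates with the recording."""
    t = np.arange(eeg.shape[0]) / fs
    scores = [
        canonical_corr(eeg, np.column_stack(
            [f(2 * np.pi * h * freq * t)
             for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)]))
        for freq in candidate_freqs
    ]
    return candidate_freqs[int(np.argmax(scores))]

# synthetic 2-second, 4-channel recording with an 8 Hz SSVEP plus noise
rng = np.random.default_rng(0)
fs = 256
t = np.arange(2 * fs) / fs
eeg = (np.outer(np.sin(2 * np.pi * 8 * t), np.ones(4))
       + 0.5 * rng.standard_normal((2 * fs, 4)))
print(detect_ssvep(eeg, fs, [6.0, 8.0, 10.0]))  # detects 8.0
```

The minimum energy combination method the abstract also mentions differs in how it weights channels, but shares the same per-frequency scoring structure.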
Systems, Man and Cybernetics | 2014
Yunjun Nam; Bonkon Koo; Seungjin Choi
Glossokinetic potential (GKP) is the potential response generated by tongue movements. The tongue is one of the main articulators determining the sound of spoken language, and tongue-generated GKP can severely interfere with electroencephalography (EEG) signals recorded during language-related tasks. To clarify the relation between GKP and language, and to provide information about which phonemes evoke a GKP response and where the response can be observed, we investigate GKP responses for various phonemes. Specifically, we record EEG signals while the tongue touches each place of articulation (the reference points used for categorizing consonants in phonetics), then analyze their spatial and scale patterns. The results showed that pronouncing dental, palato-alveolar, and palatal consonants evokes a potential decrease in the frontal region and a potential increase in the occipital region, while pronouncing retroflex consonants evokes a potential increase in the frontal region and a decrease in the occipital region. We believe that the surveyed GKP patterns will be useful for developing artifact removal techniques that eliminate language-related artifacts. GKP removal could benefit the neuroscience of language processing, the implementation of brain-computer interfaces in real-world conditions, and the development of novel silent speech recognition techniques.
Systems, Man and Cybernetics | 2016
Hanh Vu; Bonkon Koo; Seungjin Choi
Canonical correlation analysis (CCA) has been used successfully to extract frequency components of the steady-state visual evoked potential (SSVEP) in electroencephalography (EEG). Recently, a few CCA-based SSVEP methods have demonstrated benefits for brain-computer interfaces (BCIs), but most of these methods are limited to linear CCA. In this paper we consider a deep extension of CCA in which input data are processed through multiple layers before their correlations are computed. To the best of our knowledge, this is the first application of deep CCA (DCCA) to frequency component extraction in SSVEP. Our empirical study demonstrates that DCCA extracts more robust features with a significantly higher signal-to-noise ratio (SNR) than those of CCA, resulting in better classification performance with an average accuracy of 92%.
PLOS ONE | 2018
InSeok Seo; Hwan-Gon Lee; Bonkon Koo; Chin Su Koh; Hae-Yong Park; Changkyun Im; Hyung-Cheul Shin
Although several studies have detected cancer using canine olfaction, none have investigated whether dogs trained on the specific odor of one cancer can detect the odor of another, unfamiliar cancer. To address this question, we used breast and colorectal cancer in vitro and tested whether dogs trained on odors related to metabolic waste from breast cancer could detect the corresponding odor from colorectal cancer, and vice versa. Culture liquid samples used in the cultivation of cancerous cells (4T1 and CT26) served as the experimental group. Two dogs of different breeds were each trained on one of the two cancer odors. The dogs were then evaluated with a double-blind method and cross-test to determine whether they could correctly identify the experimental group, which contained the specific odor of metabolic waste from the familiar or the unfamiliar cancer. For both cancers, both dogs, regardless of which odor they were trained on, achieved accuracy above 90% and sensitivity and specificity above 0.9. These results verify that the superior olfactory ability of dogs can discriminate the odor of metabolic waste from cancer cells from that of benign cells, and that the specific odor of metabolic waste from breast cancer does not differ significantly from that of colorectal cancer. That is, metabolic waste from breast and colorectal cancer shares a common specific odor in vitro. Accordingly, a dog trained to detect the odor of metabolic waste from breast cancer can perceive that of colorectal cancer, and vice versa. In future work, we plan in vivo experiments on the two cancers and will investigate which other cancers share this common specific odor. Furthermore, the relationship between breast and colorectal cancer should be investigated using other research methods.
Scientific Reports | 2017
Bonkon Koo; Chin Su Koh; Hae Yong Park; Hwan Gon Lee; Jin Woo Chang; Seungjin Choi; Hyung Cheul Shin
Here, we report the development of a brain-to-brain interface (BBI) system that enables a human user to manipulate rat movement without any prior training. In our model, remotely guided rats (known as ratbots) successfully navigated a T-maze via contralateral turning behaviour induced by electrical stimulation of the nigrostriatal (NS) pathway, driven by a brain-computer interface (BCI) based on the human controller's steady-state visually evoked potentials (SSVEPs). The system allowed human participants to manipulate rat movement with an average success rate of 82.2% and at an average rat speed of approximately 1.9 m/min. The ratbots had no directional preference, showing average success rates of 81.1% and 83.3% for the left- and right-turning tasks, respectively. This is the first study to demonstrate the use of NS stimulation for developing a highly stable ratbot that requires no prior training, and the first instance of a training-free BBI for rat navigation. The results of this study will facilitate the development of borderless communication between humans and untrained animals, which could not only improve human understanding of animals but also allow untrained animals to more effectively provide humans with information obtained through their superior perception.
Scientific Reports | 2017
Bonkon Koo; Chin Su Koh; Hae-Yong Park; Hwan-Gon Lee; Jin Woo Chang; Seungjin Choi; Hyung-Cheul Shin
A correction to this article has been published and is linked from the HTML version of this paper. The error has been fixed in the paper.