Mincheol Whang
Sangmyung University
Publication
Featured research published by Mincheol Whang.
Neuroscience Letters | 2012
Sungchul Mun; Min-Chul Park; Sangin Park; Mincheol Whang
The purpose of this study was to identify steady-state visually evoked potential (SSVEP) and event-related potential (ERP) correlates of 3D cognitive fatigue. Twenty-one participants (11 females) completed a cognitive test before and after exposure to a stereoscopic 3D environment and were categorized into two groups, fatigued and unfatigued, based on their response times and subjective data. The fatigued group exhibited significantly reduced P600 amplitudes and delayed P600 latencies in the post-viewing condition compared with the pre-viewing condition. Significant fatigue effects for the fatigued group were also observed at the P4 and O2 sites during the 8.57 Hz attended task; attend/ignore ratios at these right-hemisphere sites were smaller after 3D viewing than before.
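The attend/ignore ratio reported above is a narrow-band power ratio at the 8.57 Hz stimulation frequency. A minimal NumPy sketch of how such a ratio could be computed is shown below; the window length, bandwidth, and synthetic signals are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def band_power(signal, fs, target_hz, half_bw=0.3):
    """Power in a narrow band around target_hz, from the raw periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return psd[np.abs(freqs - target_hz) <= half_bw].sum()

def attend_ignore_ratio(attended, ignored, fs, stim_hz=8.57):
    """SSVEP power at the stimulation frequency while attending,
    divided by the power at the same frequency while ignoring."""
    return band_power(attended, fs, stim_hz) / band_power(ignored, fs, stim_hz)

# Synthetic check: the attended trace carries a stronger 8.57 Hz component.
fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
attended = 2.0 * np.sin(2 * np.pi * 8.57 * t) + rng.normal(0, 1, t.size)
ignored = 0.5 * np.sin(2 * np.pi * 8.57 * t) + rng.normal(0, 1, t.size)
ratio = attend_ignore_ratio(attended, ignored, fs)  # > 1 for the attended trace
```

A ratio above 1 indicates the attended condition drove the SSVEP more strongly; the fatigue effect reported above corresponds to this ratio shrinking after 3D viewing.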
Journal of Neuroscience Methods | 2010
Eui Chul Lee; Jin Cheol Woo; Jong Hwa Kim; Mincheol Whang; Kang Ryoung Park
With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has also grown. Although eye tracking has been widely used as an interaction interface for hand-disabled persons, it cannot be used for depth-directional navigation. To solve this problem, we propose a new brain-computer interface (BCI) method in which the BCI and eye tracking are combined to handle depth navigation (including selection) and two-dimensional (2D) gaze direction, respectively. The proposed method is novel in the following five ways compared with previous work. First, a device that measures both gaze direction and electroencephalogram (EEG) patterns is proposed, with the EEG sensors attached to a head-mounted eye-tracking device. Second, the reliability of the BCI interface is verified by demonstrating that the EEG power spectrum shows no difference between real and imaginary movements for the same task. Third, depth control for the 3D interaction interface is implemented by an imaginary arm-reaching movement. Fourth, a selection method is implemented by an imaginary hand-grabbing movement. Finally, for independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing pupil accommodation speed, which is not affected by the operation of either gazing or the BCI. Experimental results confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI.
IEEE Transactions on Consumer Electronics | 2010
Hwan Heo; Eui Chul Lee; Kang Ryoung Park; Chi Jung Kim; Mincheol Whang
This study proposes a realistic game system using a multi-modal interface that includes gaze tracking, hand gesture recognition, and bio-signal analysis. Our research is novel in the following four ways compared with previous game systems. First, a highly immersive and realistic game is implemented on a head-mounted display (HMD) with a gaze tracker, a gesture recognizer, and a bio-signal analyzer. Second, since the camera module for eye tracking is attached below the HMD, a user's gaze position on the HMD display can be calculated without wearing any additional eye-tracking device. Third, an aiming cursor in the game system is controlled by gaze tracking, while grabbing and throwing behaviors toward a target are performed by the user's hand gestures using a data glove. Finally, the level of difficulty in the game is adaptively controlled according to the measurement and analysis of the user's bio-signals. Experimental results show that the proposed method provides a stronger sense of immersion and interest than conventional devices such as a keyboard and mouse.
International Journal of Psychophysiology | 2014
Sangin Park; Myoung Ju Won; Sungchul Mun; Eui Chul Lee; Mincheol Whang
Most investigations into the negative effects of viewing stereoscopic 3D content on human health have addressed 3D visual fatigue and visually induced motion sickness (VIMS). Very few, however, have looked into changes in autonomic balance and heart rhythm, homeostatic factors that ought to be taken into consideration when assessing the overall impact of 3D video viewing on human health. In this study, 30 participants were randomly assigned to two groups: one watching a 2D video (2D-group) and the other watching a 3D video (3D-group). Subjects in the 3D-group showed a significantly increased heart rate (HR), indicating arousal, and an increased VLF/HF (Very Low Frequency/High Frequency) ratio, a measure of autonomic balance, compared with those in the 2D-group, indicating that autonomic balance was not stable in the 3D-group. Additionally, a more disordered heart rhythm pattern and an increasing heart rate (as determined by the R-peak to R-peak (RR) interval) were observed among subjects in the 3D-group, further indicating that 3D viewing induces lasting activation of the sympathetic nervous system and disrupts autonomic balance.
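The HR and VLF/HF measures above come from standard heart-rate-variability analysis of the RR-interval series. A minimal NumPy sketch of how these two quantities could be derived from RR intervals follows; the 4 Hz resampling rate, the band edges, and the synthetic data are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def hrv_metrics(rr_ms, fs_interp=4.0):
    """Mean heart rate (bpm) and VLF/HF ratio from successive RR intervals (ms).
    The RR tachogram is resampled to a uniform grid before the periodogram."""
    rr_s = np.asarray(rr_ms, dtype=float) / 1000.0
    beat_times = np.cumsum(rr_s)
    mean_hr = 60.0 / rr_s.mean()
    # Uniformly resample the irregularly sampled RR series for spectral analysis.
    t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_interp)
    rr_uniform = np.interp(t_uniform, beat_times, rr_s)
    rr_uniform = rr_uniform - rr_uniform.mean()
    freqs = np.fft.rfftfreq(len(rr_uniform), d=1.0 / fs_interp)
    psd = np.abs(np.fft.rfft(rr_uniform)) ** 2
    vlf = psd[(freqs >= 0.0033) & (freqs < 0.04)].sum()
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return mean_hr, vlf / hf

# Synthetic 5-minute RR series: slow (VLF, 0.01 Hz) and fast (HF, 0.25 Hz) modulation.
i = np.arange(375)
rr = (800 + 40 * np.sin(2 * np.pi * 0.01 * (i * 0.8))
          + 10 * np.sin(2 * np.pi * 0.25 * (i * 0.8)))
hr, ratio = hrv_metrics(rr)  # hr near 75 bpm; VLF dominates, so ratio > 1
```

A rising VLF/HF ratio of this kind is what the study interprets as a shift toward sympathetic dominance in the 3D-group.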
Sensors | 2013
Chi Jung Kim; Sangin Park; Myeung Ju Won; Mincheol Whang; Eui Chul Lee
Previous research has indicated that viewing 3D displays may induce greater visual fatigue than viewing 2D displays. Whether viewing 3D displays can evoke measurable emotional responses, however, is uncertain. In the present study, we examined autonomic nervous system responses in subjects viewing 2D or 3D displays. Autonomic responses were quantified in each subject by heart rate, galvanic skin response, and skin temperature. Viewers of both 2D and 3D displays showed strong positive correlations in heart rate, indicating little difference between groups. In contrast, galvanic skin response and skin temperature showed only weak positive correlations, with an average difference between 2D and 3D viewing. We suggest that galvanic skin response and skin temperature can be used to measure and compare autonomic nervous responses in subjects viewing 2D and 3D displays.
Sensors | 2013
Jong-Suk Choi; Jae Won Bang; Kang Ryoung Park; Mincheol Whang
Speller UI systems tend to be inaccurate because of individual variation and the noise in EEG signals. We therefore propose a new method that combines EEG signals with gaze tracking. This research is novel in the following four aspects. First, two wearable devices are combined to simultaneously measure both the EEG signal and the gaze position. Second, a speller UI usually has a 6 × 6 matrix of alphanumeric characters, which limits the number of characters to 36; we instead use a 12 × 12 matrix containing 144 characters. Third, to reduce the highlighting time across the 12 rows and 12 columns, only three rows and three columns (determined from the 3 × 3 area centered on the user's gaze position) are highlighted. Fourth, by analyzing the P300 EEG signal obtained only when each of these three rows and columns is highlighted, the accuracy of selecting the correct character is enhanced. Experimental results showed that the accuracy of the proposed method was higher than that of the other methods.
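The gaze-dependent highlighting step reduces the 12 × 12 search to a 3 × 3 block around the gaze point. A minimal sketch of that index mapping is given below; the clamping behavior at the matrix border is our assumption, as the abstract does not spell it out:

```python
def highlight_block(gaze_row, gaze_col, size=12):
    """Row and column indices of the 3 x 3 block centered on the gaze cell,
    clamped so the block stays inside the size x size speller matrix."""
    def span(center):
        start = min(max(center - 1, 0), size - 3)
        return list(range(start, start + 3))
    return span(gaze_row), span(gaze_col)

rows, cols = highlight_block(5, 5)    # ([4, 5, 6], [4, 5, 6])
rows, cols = highlight_block(0, 11)   # clamped: ([0, 1, 2], [9, 10, 11])
```

Only these six row/column highlights need P300 analysis per selection, instead of all 24, which is the source of the speed-up the paper describes.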
Frontiers in Human Neuroscience | 2013
Daekeun Kim; Kyung-Mi Lee; Jongwha Kim; Mincheol Whang; Seung Wan Kang
This study aimed to determine significant physiological parameters of the brain and heart in the meditative state, both in each activity and in their dynamic correlations. Electrophysiological changes in response to meditation were explored in 12 healthy volunteers who completed 8 weeks of a basic training course in autogenic meditation. Heart coherence, representing the degree of ordering in the oscillation of heart rhythm intervals, increased significantly during meditation. Relative EEG alpha power and alpha lagged coherence also increased, and a significant slowing of the parietal peak alpha frequency was observed. Parietal peak alpha power increased with increasing heart coherence during meditation, but no such relationship was observed at baseline. Average alpha lagged coherence also increased with increasing heart coherence during meditation, but a weak opposite relationship was observed at baseline. Relative alpha power increased with increasing heart coherence during both the meditation and baseline periods. Heart coherence can thus serve as a cardiac marker for the meditative state, and may be a general marker for it, since heart coherence is strongly correlated with EEG alpha activities. It is expected that increasing heart coherence and the accompanying EEG alpha activation, i.e., heart-brain synchrony, would help recover physiological synchrony following a period of homeostatic depletion.
IEEE Transactions on Consumer Electronics | 2011
Dong Keun Kim; Jonghwa Kim; Eui Chul Lee; Mincheol Whang; Yongjoo Cho
In this paper, we implemented an interactive emotional content communication system using a portable wireless biofeedback device to support convenient emotion recognition and immersive emotional content representation for users. The system consists of the portable wireless biofeedback device and a novel emotional content rendering system. The former acquires three physiological signals (photoplethysmography, skin temperature, and galvanic skin response) and transmits them to the remote emotional content rendering system via Bluetooth links in real time. The latter displays video content concurrently manipulated using feedback of the user's emotional state. Evaluation showed that the response time of the system was nearly instantaneous and that the rendered emotional content corresponded to the emotional states inferred from the physiological signals. Users' concentration increased while watching the measured-emotion-based rendered visual stimuli. In the near future, users of the proposed system will be able to create further user-oriented content based on emotional changes.
Human Factors | 2003
Mincheol Whang; Joa Sang Lim; Wolfram Boucsein
Despite rapid advances in technology, computers remain incapable of responding to human emotions. An exploratory study was conducted to find out what physiological parameters might be useful to differentiate among 4 emotional states, based on 2 dimensions: pleasantness versus unpleasantness and arousal versus relaxation. The 4 emotions were induced by exposing 26 undergraduate students to different combinations of olfactory and auditory stimuli, selected in a pretest from 12 stimuli by subjective ratings of arousal and valence. Changes in electroencephalographic (EEG), heart rate variability, and electrodermal measures were used to differentiate the 4 emotions. EEG activity separates pleasantness from unpleasantness only in the aroused but not in the relaxed domain, where electrodermal parameters are the differentiating ones. All three classes of parameters contribute to a separation between arousal and relaxation in the positive valence domain, whereas the latency of the electrodermal response is the only differentiating parameter in the negative domain. We discuss how such a psychophysiological approach may be incorporated into a systemic model of a computer responsive to affective communication from the user.
Multimedia Tools and Applications | 2016
Muhammad Hameed Siddiqi; Rahman Ali; Muhammad Idris; Adil Mehmood Khan; Eun Soo Kim; Mincheol Whang; Sungyoung Lee
To recognize expressions accurately, facial expression recognition (FER) systems require robust feature extraction and feature selection methods. In this paper, a normalized mutual information based feature selection technique is proposed for FER systems. The technique is derived from an existing method, the max-relevance and min-redundancy (mRMR) method; we propose, however, to normalize the mutual information used in that method so that neither the relevance nor the redundancy term can dominate. For feature extraction, the curvelet transform is used. After feature extraction and selection, the feature space is reduced by linear discriminant analysis (LDA). Finally, a hidden Markov model (HMM) is used to recognize the expressions. The proposed FER system (CNF-FER) is validated on four publicly available standard datasets, using a 10-fold cross-validation scheme for each. CNF-FER outperformed well-known statistical and state-of-the-art methods, achieving a weighted average recognition rate of 99% across all datasets.
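The selection step above balances relevance against redundancy on a common scale. A minimal sketch of a normalized-MI variant of greedy mRMR on discrete features follows; the plug-in MI estimator and normalization by the smaller entropy are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np
from math import log

def mutual_information(x, y):
    """Plug-in estimate of MI (nats) between two discrete arrays."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def entropy(x):
    x = np.asarray(x)
    return -sum(p * log(p) for v in np.unique(x) for p in [np.mean(x == v)])

def nmi(x, y):
    """MI normalized by the smaller entropy, putting the relevance and
    redundancy terms on a common [0, 1] scale."""
    h = min(entropy(x), entropy(y))
    return mutual_information(x, y) / h if h > 0 else 0.0

def select_features(features, labels, k):
    """Greedy mRMR with normalized MI: pick the feature maximizing
    NMI(feature, labels) minus its mean NMI to the features already chosen."""
    chosen, remaining = [], list(range(len(features)))
    while len(chosen) < k and remaining:
        def score(j):
            relevance = nmi(features[j], labels)
            redundancy = (np.mean([nmi(features[j], features[c]) for c in chosen])
                          if chosen else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

With two identical, fully informative features and a third, complementary one, the greedy step keeps the first copy and then prefers the complementary feature over the redundant duplicate, which is the behavior the min-redundancy term exists to produce.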