Mun-Taek Choi
Sungkyunkwan University
Publications
Featured research published by Mun-Taek Choi.
PLOS ONE | 2015
Geon Ha Kim; Seun Jeon; Kiho Im; Hunki Kwon; Byung Hwa Lee; Ga Young Kim; Hana Jeong; Noh Eul Han; Sang Won Seo; Hanna Cho; Young Noh; Sang Eon Park; Hojeong Kim; Jung Won Hwang; Cindy W. Yoon; Hee-Jin Kim; Byoung Seok Ye; Ju Hee Chin; Jung-Hyun Kim; Mee Kyung Suh; Jong-Min Lee; Sung Tae Kim; Mun-Taek Choi; Munsang Kim; Kenneth M. Heilman; Jee Hyang Jeong; Duk L. Na
The purpose of this study was to investigate whether multi-domain cognitive training, especially robot-assisted training, alters cortical thickness in the brains of elderly participants. A controlled trial was conducted with 85 volunteers aged 60 years or older without cognitive impairment. Participants were first randomized into two groups: 48 who would receive cognitive training and 37 who would not. The cognitive training group was then randomly divided into two subgroups of 24 each, one receiving traditional cognitive training and the other robot-assisted cognitive training. Training for both groups consisted of daily 90-min sessions, five days a week, for a total of 12 weeks. The primary outcome was the change in cortical thickness. Compared to the control group, both groups who underwent cognitive training demonstrated attenuation of age-related cortical thinning in the frontotemporal association cortices. When the robot and traditional interventions were compared directly, the robot group showed less cortical thinning in the anterior cingulate cortices. Our results suggest that cognitive training can mitigate age-associated structural brain changes in the elderly. Trial Registration: ClinicalTrials.gov NCT01596205.
Journal of Sports Sciences | 2016
Ahnryul Choi; In-Kwang Lee; Mun-Taek Choi; Joung Hwan Mun
ABSTRACT Understanding the inter-joint coordination between the rotational movements of each hip and the trunk in golf would provide basic knowledge of how the neuromuscular system organises the related joints to perform a successful swing. In this study, we evaluated the inter-joint coordination characteristics between the rotational movements of the hips and trunk during golf downswings. Twenty-one right-handed male professional golfers were recruited. Infrared cameras were installed to capture the swing motion. The axial rotation angle, angular velocity, and inter-joint coordination were calculated using Euler angles, numerical differentiation, and continuous relative phase, respectively. More typical inter-joint coordination was demonstrated in the leading hip/trunk than in the trailing hip/trunk. Three coordination characteristics of the leading hip/trunk showed a significant relationship with clubhead speed at impact (r < −0.5). An increased rotation difference between the leading hip and trunk over the overall downswing phase, as well as faster rotation of the leading hip relative to the trunk in the early downswing, play important roles in increasing clubhead speed. These novel inter-joint coordination strategies have great potential to serve as a biomechanical guideline for improving the swing performance of unskilled golfers.
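For readers unfamiliar with continuous relative phase, the sketch below shows one common way to compute it from two joint-angle time series, with angular velocity obtained by numerical differentiation as named in the abstract; the normalization scheme and function names are our assumptions, not taken from the paper.

```python
import numpy as np

def phase_angle(theta, omega):
    """Phase angle of one segment from its rotation angle and angular velocity.
    Both signals are rescaled to [-1, 1] over the downswing so the phase plane
    is dimensionless before taking the four-quadrant arctangent."""
    theta_n = 2 * (theta - theta.min()) / (theta.max() - theta.min()) - 1
    omega_n = 2 * (omega - omega.min()) / (omega.max() - omega.min()) - 1
    return np.degrees(np.arctan2(omega_n, theta_n))

def continuous_relative_phase(theta_hip, theta_trunk, dt):
    """CRP = hip phase angle minus trunk phase angle at each sample.
    Values near 0 deg mean in-phase rotation; larger magnitudes mean the
    two segments are out of phase at that instant of the downswing."""
    omega_hip = np.gradient(theta_hip, dt)      # numerical differentiation
    omega_trunk = np.gradient(theta_trunk, dt)
    return phase_angle(theta_hip, omega_hip) - phase_angle(theta_trunk, omega_trunk)
```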
Computer Methods and Programs in Biomedicine | 2018
Mohammed A. Al-masni; Mugahed A. Al-antari; Mun-Taek Choi; Seung-Moo Han; Tae-Seong Kim
BACKGROUND AND OBJECTIVE Automatic segmentation of skin lesions in dermoscopy images remains a challenging task due to the large shape variations and indistinct boundaries of the lesions. Accurate segmentation of skin lesions is a key prerequisite for any computer-aided diagnostic system to recognize skin melanoma. METHODS In this paper, we propose a novel segmentation methodology via full resolution convolutional networks (FrCN). The proposed FrCN method directly learns the full resolution features of each individual pixel of the input data without the need for pre- or post-processing operations such as artifact removal, low contrast adjustment, or further enhancement of the segmented skin lesion boundaries. We evaluated the proposed method on two publicly available databases, the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 Challenge and PH2 datasets, and compared its segmentation performance with the latest deep learning segmentation approaches, namely the fully convolutional network (FCN), U-Net, and SegNet. RESULTS The proposed FrCN method segmented the skin lesions with an average Jaccard index of 77.11% and an overall segmentation accuracy of 94.03% on the ISBI 2017 test dataset, and 84.79% and 95.08%, respectively, on the PH2 dataset. The proposed FrCN outperformed FCN, U-Net, and SegNet by 4.94%, 15.47%, and 7.48% in the Jaccard index and by 1.31%, 3.89%, and 2.27% in segmentation accuracy, respectively. Furthermore, the proposed FrCN achieved a segmentation accuracy of 95.62% for representative clinical benign cases, 90.78% for melanoma cases, and 91.29% for seborrheic keratosis cases in the ISBI 2017 test dataset, exhibiting better performance than FCN, U-Net, and SegNet. CONCLUSIONS We conclude that using the full spatial resolution of the input image enables the network to learn more specific and prominent features, leading to improved segmentation performance.
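The Jaccard index and segmentation accuracy quoted above are standard pixel-wise measures that can be reproduced from binary masks; the minimal sketch below shows how (the function names are ours, for illustration).

```python
import numpy as np

def jaccard_index(pred, gt):
    """Jaccard (IoU) between binary lesion masks: |A ∩ B| / |A ∪ B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def segmentation_accuracy(pred, gt):
    """Fraction of all pixels (lesion and background) labeled correctly."""
    return (pred.astype(bool) == gt.astype(bool)).mean()
```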
International Journal of Medical Informatics | 2018
Mugahed A. Al-antari; Mohammed A. Al-masni; Mun-Taek Choi; Seung-Moo Han; Tae-Seong Kim
A computer-aided diagnosis (CAD) system requires detection, segmentation, and classification in one framework to assist radiologists efficiently in an accurate diagnosis. In this paper, a completely integrated CAD system is proposed to screen digital X-ray mammograms, involving detection, segmentation, and classification of breast masses via deep learning methodologies. To detect breast masses in entire mammograms, You-Only-Look-Once (YOLO), a regional deep learning approach, is used. To segment the masses, a full resolution convolutional network (FrCN), a new deep network model, is proposed and utilized. Finally, a deep convolutional neural network (CNN) is used to recognize each mass and classify it as either benign or malignant. To evaluate the proposed integrated CAD system in terms of detection, segmentation, and classification accuracy, the publicly available and annotated INbreast database was utilized. Four-fold cross-validation tests show that the proposed system achieves a mass detection accuracy of 98.96%, a Matthews correlation coefficient (MCC) of 97.62%, and an F1-score of 99.24% on the INbreast dataset. Mass segmentation via FrCN produced an overall accuracy of 92.97%, an MCC of 85.93%, a Dice (F1) score of 92.69%, and a Jaccard similarity coefficient of 86.37%. The detected and segmented masses were then classified via CNN with an overall accuracy of 95.64%, an AUC of 94.78%, an MCC of 89.91%, and an F1-score of 96.84%. Our results demonstrate that the proposed CAD system outperforms the latest conventional deep learning methodologies at all stages of detection, segmentation, and classification, and could be used to assist radiologists in each of these stages.
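The three-stage flow described above can be pictured with the hypothetical sketch below; `detector`, `segmenter`, and `classifier` stand in for trained YOLO, FrCN, and CNN models, and none of these interfaces come from the paper.

```python
def cad_pipeline(mammogram, detector, segmenter, classifier):
    """Chain the three stages over one mammogram (a 2-D intensity array).
    Each callable is a stand-in for a trained model, not the authors' API."""
    findings = []
    for (x0, y0, x1, y1) in detector(mammogram):   # 1. YOLO-style mass boxes
        roi = mammogram[y0:y1, x0:x1]              # crop the detected region
        mask = segmenter(roi)                      # 2. FrCN-style pixel mask
        label = classifier(roi, mask)              # 3. "benign" or "malignant"
        findings.append(((x0, y0, x1, y1), mask, label))
    return findings
```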
International Journal of Pharma Medicine and Biological Sciences | 2017
Patricio Rivera; Edwin Valarezo; Mun-Taek Choi; Tae-Seong Kim
Recognition of hand activities could provide new information for daily human activity logging and gesture interface applications. However, it remains technically challenging due to delicate hand motions and complex movement contexts. In this work, we propose hand activity recognition (HAR) based on a single inertial measurement unit (IMU) worn at one wrist, using a deep recurrent neural network. The proposed HAR works directly with the signals from the tri-axial accelerometer, gyroscope, and magnetometer within one IMU. We evaluated its performance on a public human hand activity database covering six hand activities: Open Door, Close Door, Open Fridge, Close Fridge, Clean Table, and Drink from Cup. Our results show an overall recognition accuracy of 80.09% with discrete standard epochs and 74.92% with noise-added epochs. With continuous time-series epochs, an accuracy of 71.75% was obtained.
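As a rough illustration of this kind of model, the sketch below defines a recurrent classifier over 9-channel IMU epochs in PyTorch; the layer sizes and epoch length are our assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class HARNet(nn.Module):
    """Minimal recurrent classifier over raw IMU epochs shaped
    (batch, time, 9): tri-axial accelerometer, gyroscope, magnetometer."""
    def __init__(self, n_classes=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=9, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, (h, _) = self.lstm(x)   # h holds the final hidden state
        return self.head(h[-1])    # logits over the six hand activities

model = HARNet()
logits = model(torch.randn(4, 128, 9))  # 4 epochs of 128 IMU samples each
```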
Journal of Institute of Control, Robotics and Systems | 2013
Sang Seok Yun; Munsang Kim; Mun-Taek Choi; Jae Bok Song
Abstract: According to cognitive science research, the interaction intent of humans can be estimated through an analysis of their expressive behaviors. This paper proposes a novel methodology for reliable analysis of human intention based on this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and we outline a set of core components of nonverbal human behavior. These nonverbal behaviors are associated with recognition modules across multiple sensor modalities: localizing the speaker's sound source in the audition part, recognizing the frontal face and facial expression in the vision part, and estimating human trajectories, body pose and leaning, and hand gestures in the spatial part. As a post-processing step, temporal confidential reasoning is used to improve recognition performance, and an integrated human model quantitatively classifies intention from the multi-dimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction. Keywords: human-robot interaction, human intention analysis, multiple-person interactions, confidential reasoning

I. INTRODUCTION

In everyday life, the intellectual process by which a human interacts with others works as follows: through diverse spatiotemporal cognitive activities drawing on the five senses, a person continuously picks up the other's verbal and nonverbal signals, infers the other's intention from the interaction situation through reasoning and learning, makes a voluntary decision, and expresses a corresponding response behavior [1]. In particular, grasping another person's intention rests on psychological reasoning ability and mental decision-making, for which there is functional brain-imaging evidence obtained while observing others' actions [2]. Among the various behavioral means of expressing a user's intention or emotion, nonverbal elements are more effective and more communicative than verbal ones, because vocal intensity and tremor, gaze, gesture, intonation, facial expression, and posture carry more of the inner state than the content of speech does [3]. Likewise, a study of the factors influencing likability in face-to-face interaction found that nonverbal elements account for the overwhelming share (93%): words 7%, tone of voice 38%, and body language 55% [4]. Nonverbal communication is thus not only an auxiliary means of conveying verbally expressed information more accurately, but can also serve a higher-order communicative function, expressing more than limited verbal ability allows [5].

For an interactive robot to understand such complex situations and provide diverse intelligent services, it needs capabilities similar to the human cognitive process. In particular, in dynamic environments visited by many people, such as museums and shopping malls, actively identifying the conversational intent of multiple people and responding appropriately to the situation is a cognitive capability a robot must have to interact effectively with users. However, most research on social interaction has focused on social learning from a human teacher [6,7] or on fostering emotional rapport with a user based on affect models [8,9], and is therefore limited in forming social relationships with multiple users. Accordingly, for a robot to effectively identify the interaction intent of multiple users, robust and reliable recognition of the users' diverse nonverbal behaviors and a composite per-user evaluation model are essential. On the recognition side, multimodality compensates for the weaknesses of any single modality and has been applied in many ways to improve the reliability of user information, such as detecting and tracking users with vision and range sensors [10] and selecting the speaker among multiple users with vision and audio sensors [11]. However, …
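The weight-factor fusion mentioned in the abstract can be pictured roughly as in the sketch below; the cue names, weights, and engagement threshold are illustrative assumptions, not values from the paper.

```python
def intention_score(cues, weights):
    """Weighted sum of per-modality cue confidences (each in [0, 1]) for one
    person; a rough stand-in for the paper's integrated human model."""
    return sum(weights[name] * conf for name, conf in cues.items())

# Hypothetical cues from the audition, vision, and spatial modules.
cues = {"sound_source": 0.8, "frontal_face": 0.9, "body_pose": 0.6}
weights = {"sound_source": 0.3, "frontal_face": 0.4, "body_pose": 0.3}

engage = intention_score(cues, weights) > 0.7  # illustrative engagement threshold
print(engage)  # True: 0.24 + 0.36 + 0.18 = 0.78 exceeds the threshold
```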
Computer Methods and Programs in Biomedicine | 2018
Mohammed A. Al-masni; Mugahed A. Al-antari; Jeong-min Park; Geon Gi; T.-S. Kim; Patricio Rivera; Edwin Valarezo; Mun-Taek Choi; Seung-Moo Han; Tae-Seong Kim
Journal of Motor Behavior | 2017
Taeyong Sim; Hakje Yoo; Ahnryul Choi; Ki Young Lee; Mun-Taek Choi; Soeun Lee; Joung Hwan Mun
Alzheimer's & Dementia | 2013
Geon Ha Kim; Seun Jeon; Kiho Im; Sang Won Seo; Hanna Cho; Young Noh; Cindy W. Yoon; Hee-Jin Kim; Byoung Seok Ye; Ju Hee Chin; Byoung Hwa Lee; Sang Eon Park; Ho Jeong Kim; Mun-Taek Choi; Munsang Kim; Duk L. Na
Journal of Intelligent and Robotic Systems | 2018
Mun-Taek Choi; Jinseob Yeom; Yunhee Shin; Injun Park