Young-Sook Jeong
Electronics and Telecommunications Research Institute
Publication
Featured research published by Young-Sook Jeong.
international conference on advanced communication technology | 2017
Mi-Young Cho; Young-Sook Jeong
Intelligent service robots use gesture recognition technology based on the MS Kinect sensor to facilitate natural interaction between humans and robots. To evaluate gesture recognition performance in a real-life environment, we constructed a new gesture database that takes into account cluttered backgrounds, various distances and poses, and movement of the robots, and then evaluated the gesture recognition performance of commercial robots. In this paper, we seek to help consumers, robot manufacturers, and gesture recognition engine developers by providing comparable results on the gesture recognition capabilities of service robots.
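The evaluation described above amounts to tallying recognition results per test condition so that different robots or engines can be compared under the same conditions. A minimal sketch, assuming a hypothetical list of (condition, ground-truth gesture, recognized gesture) records rather than the authors' actual database format:

```python
from collections import defaultdict

# Hypothetical evaluation records: (condition, ground-truth label, recognized label).
# The condition tags (background, distance, robot motion) mirror the factors the
# database is said to cover; the values here are illustrative only.
results = [
    ("cluttered_bg/2m/moving", "wave", "wave"),
    ("cluttered_bg/2m/moving", "wave", "point"),
    ("plain_bg/1m/static", "point", "point"),
]

correct = defaultdict(int)
total = defaultdict(int)
for condition, truth, predicted in results:
    total[condition] += 1
    correct[condition] += int(truth == predicted)

# Per-condition recognition rate, comparable across engines and robots.
for condition in sorted(total):
    rate = correct[condition] / total[condition]
    print(f"{condition}: {rate:.2%} ({correct[condition]}/{total[condition]})")
```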
research in adaptive and convergent systems | 2015
Mi-Young Cho; Young-Sook Jeong; Byung-Tae Chun
Face recognition technology is more direct, user friendly, and convenient compared to other biometric methods. Recently, it has become a widely applied intelligent service robots. However, the performance of facial recognition engines does not always satisfy the expectations of users. This can affect the reliability of robots, for example, that implement a face recognition function. A majority of facial recognition performance test methods use static images that cannot reflect dynamic factors. Consequently, there is a disparity between the performance of the algorithms and performance in real service environments. Moreover, the performance of a face recognition engine cannot guarantee the performance of a robot. Although user demand for performance is increasing, it is difficult to evaluate owing to a lack of test methods and testing environments. In this paper, we demonstrate the similarity between real faces and facial videos from the perspective of face recognition and prove the effectiveness of the evaluation method using a display device.
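A hedged sketch of the kind of comparison this abstract describes: computing recognition rates separately for real-face trials and display-device trials and reporting the gap. The score lists and the accept threshold below are illustrative placeholders, not the paper's actual protocol.

```python
# Hypothetical per-trial similarity scores from a face recognition engine,
# split by whether the probe was a real face or a video shown on a display.
real_face_scores = [0.91, 0.88, 0.79, 0.95, 0.84]
display_scores = [0.89, 0.86, 0.74, 0.93, 0.81]

THRESHOLD = 0.80  # illustrative accept threshold


def accept_rate(scores, threshold):
    """Fraction of genuine trials whose score clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)


real_rate = accept_rate(real_face_scores, THRESHOLD)
display_rate = accept_rate(display_scores, THRESHOLD)
print(f"real faces:    {real_rate:.2%}")
print(f"display video: {display_rate:.2%}")
print(f"gap:           {abs(real_rate - display_rate):.2%}")
```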
soft computing | 2017
Mi-Young Cho; Young-Sook Jeong
Face recognition is a widely used biometric technology because it is both user friendly and more convenient than other biometric approaches. However, naïve face recognition systems that do not support any type of liveness detection can be easily spoofed using just a photograph of a valid user. Face liveness detection is a key issue in the field of security systems that use a camera. Unfortunately, it is not easy to detect face liveness with existing methods, which assume the presence of print failures and overall image blur. With the development of display devices and image capturing technology, it is possible to reproduce face images similar to real faces. Therefore, the number of attacks using a photograph or video displayed on a screen rather than on paper will increase. In this study, we compare test results using live faces and high-definition face videos from light-emitting diode (LED) display devices and analyze the changes in face recognition performance according to the lighting direction. Experimental results show that there is no significant difference between live and non-live faces under good lighting conditions. We suggest the use of gamma correction to reduce the performance gap between the two under poor lighting conditions. From these results, we can provide key solutions to the issues associated with texture-based approaches.
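As an illustration of the gamma adjustment mentioned above, here is a minimal OpenCV sketch that brightens an under-lit face image via a per-intensity lookup table; the gamma value and file names are assumptions for illustration, not the paper's settings.

```python
import cv2
import numpy as np


def apply_gamma(image, gamma=1.8):
    """Apply gamma correction to an 8-bit image via a lookup table."""
    inv = 1.0 / gamma
    table = np.array([((i / 255.0) ** inv) * 255 for i in range(256)],
                     dtype=np.uint8)
    return cv2.LUT(image, table)


# Hypothetical usage: brighten a face image captured under poor lighting
# before passing it to a recognition engine.
face = cv2.imread("face_under_poor_lighting.jpg")  # placeholder path
if face is not None:
    corrected = apply_gamma(face, gamma=1.8)
    cv2.imwrite("face_gamma_corrected.jpg", corrected)
```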
international conference on control automation and systems | 2016
Mi-Young Cho; Young-Sook Jeong
With the development of face recognition technology, it is already being applied in various fields such as security and robotics. However, products based on face recognition are difficult to commercialize because of the absence of evaluation methods that reflect the actual service environment. The best way to evaluate face recognition performance is evaluation by real people or by methods that can guarantee objectivity and reproducibility. Recently, with the development of capture and display devices, it has become possible to reproduce colors similar to those of real faces. Consequently, we previously proposed a performance evaluation method that uses a high-definition display device in place of a real face and verified the proposed method from the point of view of face recognition performance. In this paper, we measure the difference between a real face and face images from the display device from the point of view of skin color as a low-level feature.
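A minimal sketch of the kind of skin-color comparison this abstract points to: averaging chrominance inside a face region for a real-face capture and the corresponding display-device capture, then comparing the means. The YCrCb space, region coordinates, and file names are assumptions for illustration, not the paper's measurement setup.

```python
import cv2
import numpy as np


def mean_skin_chroma(image_bgr, face_box):
    """Mean Cr/Cb chrominance inside a face bounding box (x, y, w, h)."""
    x, y, w, h = face_box
    roi = image_bgr[y:y + h, x:x + w]
    ycrcb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)
    return ycrcb[:, :, 1].mean(), ycrcb[:, :, 2].mean()


# Hypothetical inputs: a capture of a real face and a capture of the same face
# shown on the display device, with manually annotated face boxes.
real = cv2.imread("real_face_capture.jpg")
shown = cv2.imread("display_face_capture.jpg")
if real is not None and shown is not None:
    cr_r, cb_r = mean_skin_chroma(real, (120, 80, 200, 200))
    cr_d, cb_d = mean_skin_chroma(shown, (110, 90, 200, 200))
    diff = np.hypot(cr_r - cr_d, cb_r - cb_d)
    print(f"chrominance difference (Cr/Cb): {diff:.2f}")
```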
international conference on ubiquitous robots and ambient intelligence | 2015
Mi-Young Cho; Young-Sook Jeong; Hyun-Suh Kim; Howard Kim
As face recognition technology is conveniently accessible with only the use of a camera, it is widely used in human-robot interaction. In face recognition, skin color serves as a cue for separating targets from the background, and its usefulness depends on the color dissimilarity between the targets and the background. In this paper, we measure the color difference between real faces and facial images from an sRGB monitor and verify their similarity.
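One conventional way to quantify such a color difference is the CIE76 ΔE distance in CIELAB space. The sketch below assumes two aligned BGR patches (real face vs. monitor capture) and is illustrative rather than the paper's exact measurement procedure.

```python
import cv2
import numpy as np


def mean_lab(patch_bgr):
    """Mean CIELAB color of a patch (OpenCV expects float32 BGR in [0, 1])."""
    lab = cv2.cvtColor(patch_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2Lab)
    return lab.reshape(-1, 3).mean(axis=0)


def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (CIE76 delta-E)."""
    return float(np.linalg.norm(lab1 - lab2))


# Hypothetical aligned patches cropped from a real-face photo and from a
# capture of the same face rendered on an sRGB monitor.
real_patch = cv2.imread("real_face_patch.png")
monitor_patch = cv2.imread("monitor_face_patch.png")
if real_patch is not None and monitor_patch is not None:
    de = delta_e_cie76(mean_lab(real_patch), mean_lab(monitor_patch))
    print(f"CIE76 delta-E between mean colors: {de:.2f}")
```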
Information and Communication Technology - EurAsia Conference | 2015
Mi-Young Cho; Young-Sook Jeong
Face recognition technology, unlike other biometric methods, is conveniently accessible with the use of only a camera. Consequently, it has created enormous interest in a variety of applications, including face identification, access control, security, surveillance, smart cards, law enforcement, and human-computer interaction. However, face recognition systems are still not robust enough, especially in unconstrained environments, and recognition accuracy is still not acceptable. In this paper, to measure the performance reliability of face recognition systems, we expand the performance comparison test between real faces and face images from the recognition perspective and verify the adequacy of performance test methods that use an image display device.
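One simple way to check whether a display-based test tracks real-face testing (a hedged sketch of adequacy checking, not the paper's verification protocol) is to correlate per-subject recognition scores obtained under the two conditions:

```python
import numpy as np

# Hypothetical per-subject recognition scores, one value per subject,
# measured once with real faces and once with the display-device surrogate.
real_scores = np.array([0.92, 0.85, 0.78, 0.90, 0.83, 0.88])
display_scores = np.array([0.90, 0.84, 0.75, 0.91, 0.80, 0.86])

# A high Pearson correlation suggests the display-based test ranks subjects
# similarly to real-face testing, i.e., it behaves as an adequate surrogate.
r = np.corrcoef(real_scores, display_scores)[0, 1]
print(f"Pearson correlation between conditions: {r:.3f}")
```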
international conference on ubiquitous robots and ambient intelligence | 2014
Minjong Kim; Suyoung Chi; Young-Sook Jeong
This paper presents human body segmentation using parts of the human body shape, intended for recognizing human pose on a sports simulator. It is very difficult to segment the human body in unconstrained environments with varying clothing styles, illumination, and body shapes. Principal feature points are extracted for seven parts of the human body: head, shoulder, elbow, hand, hip, knee, and foot. Five of these parts (head, hand, hip, knee, and foot) are segmented using hierarchical template matching and a shape-average image. The remaining parts are segmented using shadow edges and geometric information, because the shoulder and elbow boundary lines are ambiguous and these inner parts require more delicate processing. To evaluate the proposed algorithm, 200 pose images of 5 people from the ETRI database were used, and we obtained an encouraging experimental result of 85 percent accuracy on average.
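A minimal sketch of the template-matching step for one body part, using a coarse-to-fine search over an image pyramid with OpenCV's matchTemplate; the pyramid scales, template file, and usage are assumptions for illustration, not the ETRI system's actual parameters.

```python
import cv2


def locate_part(image_gray, template_gray, scales=(0.5, 0.75, 1.0)):
    """Search for a body-part template over an image pyramid and return the
    best-matching location, score (normalized cross-correlation), and scale."""
    best = (None, -1.0, 1.0)
    for scale in scales:
        resized = cv2.resize(image_gray, None, fx=scale, fy=scale)
        if (resized.shape[0] < template_gray.shape[0]
                or resized.shape[1] < template_gray.shape[1]):
            continue
        result = cv2.matchTemplate(resized, template_gray, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[1]:
            best = ((int(max_loc[0] / scale), int(max_loc[1] / scale)),
                    max_val, scale)
    return best


# Hypothetical usage with a head template; a full pipeline would repeat this
# per part and refine ambiguous parts (shoulder, elbow) with edge/geometric cues.
scene = cv2.imread("pose_frame.png", cv2.IMREAD_GRAYSCALE)
head_template = cv2.imread("head_template.png", cv2.IMREAD_GRAYSCALE)
if scene is not None and head_template is not None:
    location, score, scale = locate_part(scene, head_template)
    print(f"head found at {location} (score {score:.2f}, scale {scale})")
```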
J. Internet Serv. Inf. Secur. | 2014
Mi-Young Cho; Young-Sook Jeong
Journal of the Institute of Electronics Engineers of Korea | 2013
Mi-Young Cho; Young-Sook Jeong; Byung-Tae Chun
Proceedings of the Institute of Control, Robotics and Systems (ICROS) International Conference | 2014
Minjong Kim; Young-Sook Jeong; Suyoung Chi; Mi-Young Cho