Publication


Featured research published by Sang-Seok Yun.


International Conference on Social Robotics | 2011

Engkey: tele-education robot

Sang-Seok Yun; Jongju Shin; Daijin Kim; Chang Gu Kim; Munsang Kim; Mun-Taek Choi

In this paper, we introduce a new form of English education system that uses a teleoperated robot controlled by a native-speaking human teacher at a remote site. Through a unique operation interface that incorporates non-contact vision recognition technologies, the teacher can easily and naturally control the robot from a distance while giving lectures. Because the robot effectively represents the human teacher, students show great interest in the robot and feel comfortable learning English with it. In a real field pilot study, the participating elementary school students achieved clear improvements on standardized tests, which demonstrates the effectiveness of the teleoperated robot system.


Robotics and Autonomous Systems | 2016

A robot-assisted behavioral intervention system for children with autism spectrum disorders

Sang-Seok Yun; Hyuksoo Kim; JongSuk Choi; Sung-Kee Park

The purpose of this paper is to propose and examine the feasibility of a robot-assisted intervention system capable of facilitating social training for children with autism spectrum disorder (ASD) via a human-robot interaction (HRI) architecture. Based on the well-known discrete trial teaching (DTT) protocol for the therapy of children with ASD, our control architecture comprises four modules (human perception, user input, the interaction manager, and the robot effector) so that the robot system generates differentiated training stimuli using motivation and Stroop paradigms and automatically copes with the child's response using reliable human recognition and interaction technologies. With this configuration, the proposed system trains two basic social skills in children with ASD: making eye contact and reading emotions. Through reliable performance evaluations and the positive effects of the training process on high-functioning preschoolers, we verify that the proposed system can induce a positive change in the responses of children with ASD and may offer a labor-saving effect in delivering autism treatments.

Highlights:
- Robot-assisted behavioral intervention based on automated interaction technologies is proposed.
- A socially validated therapeutic protocol is applied to child-robot interaction.
- Reliable eye contact detection is reviewed as clinical evidence.
- Differentiated robotic stimulation is designed to attract children's attention.
- A reinforcement procedure with a coping strategy is verified in repetitive training routines for clinical trials.
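
As a rough illustration of how such a DTT trial loop might be organized, the sketch below wires the four modules into one training routine. The function names, the escalation policy (switching from a motivation stimulus to a Stroop-style stimulus after repeated non-responses), and the randomized perception result are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a four-module DTT control loop (assumed structure).
import random

def human_perception():
    """Stand-in for the perception module: did the child make eye
    contact during this trial? Randomized here for illustration."""
    return random.random() > 0.5

def select_stimulus(consecutive_failures):
    """Interaction manager: escalate from a motivation stimulus to a
    Stroop-style stimulus after repeated non-responses (assumed policy)."""
    return "stroop" if consecutive_failures >= 2 else "motivation"

def robot_effector(stimulus):
    print(f"[robot] presenting {stimulus} stimulus")

def run_dtt_session(n_trials=5):
    failures = 0
    for trial in range(1, n_trials + 1):
        robot_effector(select_stimulus(failures))
        if human_perception():
            print(f"trial {trial}: eye contact -> reinforce")
            failures = 0
        else:
            print(f"trial {trial}: no response -> adjust stimulus")
            failures += 1

if __name__ == "__main__":
    run_dtt_session()
```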


International Journal of Social Robotics | 2013

Easy Interface and Control of Tele-education Robots

Sang-Seok Yun; Munsang Kim; Mun-Taek Choi

In this paper, we propose a new form of teaching robot system for English classes that uses a tele-operated robot controlled by a teacher at a remote site. Through a unique operation interface that incorporates non-contact vision recognition technologies, a teacher can easily control the robot from a distance while giving lectures. A robot mechanism with a 3D facial avatar also allows dynamic behavior and movement to be controlled remotely. Because the robot can act in a human-like way, students show great interest and feel at ease learning English from the tele-operated robot. In a field pilot study, the participating elementary school students showed improvements on standardized tests, indicating that the tele-operated robot system can contribute effectively to educational programs, particularly English education.
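
The abstract does not spell out how recognized teacher behavior is mapped to robot commands, but a minimal sketch of that mapping might look as follows; the tracked state fields, gesture labels, and command set are hypothetical.

```python
# Illustrative mapping from non-contact vision recognition results
# (teacher's head pose and gesture) to avatar/robot commands.
from dataclasses import dataclass

@dataclass
class TeacherState:
    head_yaw_deg: float   # estimated from face tracking
    gesture: str          # e.g. "point_left", "wave", "none"

def to_robot_command(state: TeacherState) -> dict:
    """Translate the tracked teacher state into a command for the
    robot's 3D facial avatar and its base (assumed command names)."""
    cmd = {"avatar_gaze_yaw": state.head_yaw_deg, "base_motion": "idle"}
    if state.gesture == "point_left":
        cmd["base_motion"] = "turn_left"
    elif state.gesture == "wave":
        cmd["avatar_expression"] = "greeting"
    return cmd

print(to_robot_command(TeacherState(head_yaw_deg=15.0, gesture="wave")))
```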


Robot and Human Interactive Communication | 2014

A robotic treatment approach to promote social interaction skills for children with autism spectrum disorders

Sang-Seok Yun; Sung-Kee Park; JongSuk Choi

In this paper, we propose a robot-assisted behavioral intervention system to help improve children's social capabilities. The system, aimed at children with autism spectrum disorders (ASD), follows the discrete trial teaching (DTT) protocol with three task modes (therapy, encouragement, and pause) in social training scenarios. In the child-robot interaction architecture, the robot first offers the therapeutic training elements of mutual greeting and an interplay game, and evaluates the child's level of reactivity with recognition modules for frontal faces and touch features. In the decision-making process, the system then determines the task mode for the subsequent action by assessing the child's behavioral state, and it responds appropriately to each individual by combining kinesic acts and displayable content in its robotic stimuli. In clinical trials with children with and without ASD for each robotic stimulus, the system showed the potential to increase attention and activeness during social training, and we believe the proposed system has a positive effect on developing children's social skills.
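
One way to picture the three-mode decision step is as a small state selector driven by the two recognition signals mentioned above (frontal face and touch). The thresholds and rules below are invented for illustration; the paper's actual decision logic is not given in the abstract.

```python
# Toy selector for the three task modes (therapy, encouragement, pause)
# from the child's measured reactivity (assumed rules).
def select_mode(face_engaged: bool, touch_responded: bool) -> str:
    if face_engaged and touch_responded:
        return "therapy"        # child is reactive: continue training
    if face_engaged or touch_responded:
        return "encouragement"  # partial response: motivate the child
    return "pause"              # no response: give the child a break

for face, touch in [(True, True), (True, False), (False, False)]:
    print(face, touch, "->", select_mode(face, touch))
```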


Autism Research | 2017

Social skills training for children with autism spectrum disorder using a robotic behavioral intervention system

Sang-Seok Yun; JongSuk Choi; Sung-Kee Park; Gui‐Young Bong; Hee-Jeong Yoo

We designed a robot system to assist in behavioral intervention programs for children with autism spectrum disorder (ASD). The eight-session intervention program was based on the discrete trial teaching protocol and focused on two basic social skills: eye contact and facial emotion recognition. The robotic interactions occurred in four modules: training element query, recognition of human activity, coping-mode selection, and follow-up action. Children with ASD who were between 4 and 7 years old and had a verbal IQ ≥ 60 were recruited and randomly assigned to the treatment group (TG, n = 8, 5.75 ± 0.89 years) or the control group (CG, n = 7, 6.32 ± 1.23 years). The therapeutic robot facilitated the intervention in the TG, and a human assistant facilitated the intervention in the CG; the intervention procedures were otherwise identical in both groups. The primary outcome measures included parent-completed questionnaires, the Autism Diagnostic Observation Schedule (ADOS), and the frequency of eye contact, which was measured with the partial interval recording method. After treatment, eye contact percentages were significantly increased in both groups. For facial emotion recognition, the percentages of correct answers increased in similar patterns in both groups relative to baseline, with no difference between the TG and CG (P > 0.05). The subjects' difficulties in play and their general behavioral and emotional symptoms were significantly diminished after treatment (P < 0.05). These results show that the robot-facilitated and human-facilitated behavioral interventions had similar positive effects on eye contact and facial emotion recognition, suggesting that robots are useful mediators of social skills training for children with ASD.
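
Partial interval recording, the scoring method named above, is well defined: the session is split into fixed intervals, an interval counts if the behavior occurred at any point within it, and the score is the percentage of positive intervals. A worked example follows; the interval length and event times are made up for illustration.

```python
# Worked example of partial interval recording for eye-contact scoring.
def partial_interval_score(event_times_s, session_len_s, interval_s):
    n_intervals = session_len_s // interval_s
    # An interval is "positive" if any event falls inside it.
    hit = set(int(t // interval_s) for t in event_times_s
              if t < session_len_s)
    return 100.0 * len(hit) / n_intervals

# Eye contact observed at 3 s, 4 s, and 41 s of a 60 s session,
# scored with 10 s intervals -> 2 of 6 intervals positive.
print(partial_interval_score([3, 4, 41], 60, 10))  # 33.3...
```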


Emerging Technologies and Factory Automation | 2014

Audio-visual integration for human-robot interaction in multi-person scenarios

Quang Nguyen; Sang-Seok Yun; JongSuk Choi

This paper presents the integration of audio-visual perception components for human-robot interaction in the Robot Operating System (ROS). The vision-based nodes consist of skeleton tracking and gesture recognition using a depth camera, and face recognition using an RGB camera. Auditory perception is based on sound source localization using a microphone array. We present an integration framework for these nodes using a top-down hierarchical messaging protocol. At the top of the integration, a message carries information about the number of persons and their corresponding states (who, what, where), which is updated from the low-level perception nodes. The top message is passed to a planning node that generates the robot's reaction according to its perception of the surrounding people. The paper demonstrates human-robot interaction in a multi-person scenario in which the robot pays attention to persons who are speaking or waving their hands. Moreover, the modular architecture enables modules to be reused in other applications. To validate the approach, two sound source localization algorithms are evaluated in real time, with ground-truth localization provided by the face recognition module.
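
The "3W" top message and the attention policy lend themselves to a compact sketch. Plain dataclasses stand in here for the actual ROS message definitions, which the abstract does not give; the field names and priority ordering are assumptions.

```python
# Sketch of the top-level "who, what, where" message and the attention
# policy: prefer a speaking person, then a waving one.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PersonState:
    who: str                     # identity from face recognition
    what: str                    # activity: "speaking", "waving", "idle"
    where: tuple                 # (x, y) from skeleton tracking / SSL

def pick_attention_target(people: List[PersonState]) -> Optional[PersonState]:
    """Planning-node logic: scan activities in priority order."""
    for activity in ("speaking", "waving"):
        for p in people:
            if p.what == activity:
                return p
    return None

scene = [PersonState("unknown", "idle", (1.0, 0.5)),
         PersonState("Alice", "waving", (2.0, -0.3))]
target = pick_attention_target(scene)
print("attend to:", target.who if target else "nobody")
```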


International Conference on Ubiquitous Robots and Ambient Intelligence | 2016

Distributed sensor networks for multiple human recognition in indoor environments

Sang-Seok Yun; Quang Nguyen; JongSuk Choi

In this paper, we propose distributed sensor networks (DSNs) capable of reliably recognizing multiple humans in indoor environments. The DSNs combine perception sensor units, each consisting of an RGB-D sensor and a pan-tilt-zoom camera, with a control board to acquire "3W" results (who, where, and what) from audio-visual perception modules. In addition, fusion methods are used to associate multiple-human detection and tracking, face identification, and daily activity recognition. An evaluation of the DSNs in a classroom setting confirmed that the proposed system can support tasks serving various purposes.
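
The abstract does not specify the fusion method, but a common baseline for associating detections from two sensor units is nearest-neighbor matching in space, so that one person seen by both units yields a single track. The sketch below takes that reading; the distance threshold and data layout are assumptions.

```python
# Minimal sketch: fuse detections from two sensor units by greedy
# nearest-neighbor association (assumed baseline, not the paper's method).
import math

def fuse(detections_a, detections_b, max_dist=0.5):
    """Each detection is (name_or_None, (x, y))."""
    fused, used_b = [], set()
    for name_a, pos_a in detections_a:
        best, best_d = None, max_dist
        for j, (name_b, pos_b) in enumerate(detections_b):
            d = math.dist(pos_a, pos_b)
            if j not in used_b and d < best_d:
                best, best_d = j, d
        if best is not None:
            used_b.add(best)
            # Prefer an identified name from either unit.
            fused.append((name_a or detections_b[best][0], pos_a))
        else:
            fused.append((name_a, pos_a))
    # Keep detections seen only by unit B.
    fused += [d for j, d in enumerate(detections_b) if j not in used_b]
    return fused

print(fuse([("Alice", (1.0, 1.0)), (None, (3.0, 0.0))],
           [(None, (1.2, 0.9)), ("Bob", (5.0, 2.0))]))
```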


Journal of Intelligent and Robotic Systems | 2012

Proactive Human Search for the Designated Person with Prior Context Information in an Undiscovered Environment

Sang-Seok Yun; Bongjin Jun; Daijin Kim; Jaewoong Kim; Sukhan Lee; Mun-Taek Choi; Munsang Kim; Joong Tae Park; Jae Bok Song

This paper describes a scheme for proactively searching for a designated person in an undiscovered indoor environment without human operation or intervention. For human identification with prior information, we propose a new approach that is robust to the illumination and distance variations found in indoor environments. In addition, we employ an exploration method based on an octree structure, suitable for path planning in an office configuration. All of these functionalities are integrated in a message- and component-based architecture for efficient integration and control of the system. The approach is demonstrated by a successful human search in the challenging robot mission of the 2009 Robot Grand Challenge Contest.
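
To give a feel for the octree idea behind the exploration step, the toy sketch below recursively subdivides a cubic region and keeps only cells that still need to be visited. The cell sizes, recursion depth, and "unexplored" test are invented; the paper's actual exploration criteria are not stated in the abstract.

```python
# Toy octree subdivision: collect leaf-cell centers of unexplored space.
def subdivide(center, half, depth, unexplored, out):
    if not unexplored(center, half):
        return                      # prune fully explored cells
    if depth == 0:
        out.append(center)          # leaf cell: candidate goal
        return
    cx, cy, cz = center
    q = half / 2
    for dx in (-q, q):              # visit all 8 octants
        for dy in (-q, q):
            for dz in (-q, q):
                subdivide((cx + dx, cy + dy, cz + dz), q,
                          depth - 1, unexplored, out)

# Example: everything left of the x = 0 plane is still unexplored.
goals = []
subdivide((0.0, 0.0, 0.0), 4.0, 2, lambda c, h: c[0] - h < 0, goals)
print(len(goals), "candidate cells")
```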


Robot and Human Interactive Communication | 2007

The Development of Easy Interaction Room

Sang-Seok Yun; Changho Kim; Jonghoon Kim; Mun-Taek Choi; Munsang Kim

This paper introduces the development of a ubiquitous computing environment, called the Easy Interaction Room (EIR), in which humans can receive intelligent robotic services through easy interaction. The room was built to study the division of roles between the ubicomp environment and the service robot within it. The EIR system also collaborates with a monolithic robot platform on a dedicated home service. In this development, cutting-edge technologies are employed in the design and implementation of the hardware infrastructure for the automated environment. The Intelligent Robot Software Architecture, developed by CIR to be reusable and extensible, is used to build the EIR's software architecture for rapid development. Preliminary collaborations between the EIR and a service robot have been designed and implemented on these architectures.


International Conference on Ubiquitous Robots and Ambient Intelligence | 2017

Reliable multi-person identification using DCNN-based face recognition algorithm and scale-ratio method

Junghoon Kim; Sang-Seok Yun; Bong-Nam Kang; Daijin Kim; JongSuk Choi

Recently, deep convolutional neural networks (DCNNs) have set a new trend in the computer vision community by improving state-of-the-art performance in almost all applications. We propose a DCNN-based face recognition algorithm and analyze and verify the considerations that arise when the proposed method is implemented in a real environment. First, multiple images of the same scene are processed. This method is also combined with a technique for reducing the total recognition time from the perspective of the integrated system. We analyzed the experimental data and evaluated the performance using a sensor fusion network set up in an actual classroom.
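
The abstract does not define the scale-ratio method in detail. One plausible reading, sketched below, is to run the recognizer on several frames, discard detections whose face-box to frame-size ratio is implausible, and take a majority vote over the surviving predictions; the ratio bounds, data layout, and voting rule are all assumptions.

```python
# Hedged sketch: multi-frame identification with a scale-ratio filter
# and majority voting (one reading of the method, not the paper's).
from collections import Counter

def identify(frames, min_ratio=0.01, max_ratio=0.5):
    """Each frame is (predicted_name, face_box_area, frame_area)."""
    votes = [name for name, face_a, frame_a in frames
             if min_ratio <= face_a / frame_a <= max_ratio]
    if not votes:
        return None
    return Counter(votes).most_common(1)[0][0]

frames = [("Alice", 5000, 307200), ("Alice", 4800, 307200),
          ("Bob",     90, 307200)]   # tiny box: filtered out
print(identify(frames))               # -> "Alice"
```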

Collaboration


Dive into Sang-Seok Yun's collaborations.

Top Co-Authors

JongSuk Choi (Korea Institute of Science and Technology)
Sung-Kee Park (Korea Institute of Science and Technology)
Munsang Kim (Korea Institute of Science and Technology)
Quang Nguyen (Korea Institute of Science and Technology)
Daijin Kim (Pohang University of Science and Technology)
Geunjae Lee (Korea Institute of Science and Technology)
Kijin An (Korea Institute of Science and Technology)
Changho Kim (Korea Institute of Science and Technology)
Hyuksoo Kim (Korea Institute of Science and Technology)