Chung Hyuk Park
George Washington University
Publication
Featured research published by Chung Hyuk Park.
intelligent robots and systems | 2007
Chung Hyuk Park; Ayanna M. Howard
In this paper, we discuss a methodology that employs vision-based force guidance techniques for improving human performance with a teleoperated manipulation system. The primary focus of the approach is to study the effectiveness of guidance forces in a haptic system in enabling ease of use for human operators performing the common manipulation activities required by everyday tasks. By designing force feedback signals constructed solely from visual imagery and delivering them through a haptic device, we show their impact on human performance during the teleoperation sequence. The methodology is explained in detail, and results of implementing our divided force guidance approach on object-centering and object-approaching tasks are presented.
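The abstract does not give the force-guidance equations, so the following is only a minimal sketch of one plausible form of vision-based guidance: it assumes a binary mask of the target object and a simple proportional mapping from image-plane centroid error to a guidance force, with illustrative (not the authors') gain and clamp values.

```python
import numpy as np

def guidance_force(mask: np.ndarray, gain: float = 0.05, f_max: float = 3.0) -> np.ndarray:
    """Map the offset between the object's image centroid and the image center
    to a 2D guidance force that nudges the operator toward centering the object.

    mask: binary image (H x W) where nonzero pixels belong to the target object.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:                      # object not visible: no guidance
        return np.zeros(2)
    centroid = np.array([xs.mean(), ys.mean()])
    center = np.array([mask.shape[1] / 2.0, mask.shape[0] / 2.0])
    error_px = centroid - center          # image-plane error in pixels
    force = gain * error_px               # proportional guidance force
    norm = np.linalg.norm(force)
    if norm > f_max:                      # clamp for operator comfort
        force *= f_max / norm
    return force

# Example: an object blob offset toward the upper-right corner of a 640x480 image
mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:150, 400:460] = 1
print(guidance_force(mask))               # force pulls the view toward the object
```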
IEEE Transactions on Haptics | 2015
Chung Hyuk Park; Eun-Seok Ryu; Ayanna M. Howard
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual content, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of this data as a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, offering an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.
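As a rough illustration of the point-cloud-to-haptics pipeline described above, the sketch below back-projects a depth image with assumed pinhole intrinsics (fx, fy, cx, cy) and renders a simple penalty force against the nearest point; the radius and stiffness values are illustrative, not the system's actual parameters.

```python
import numpy as np

def depth_to_points(depth_m: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (meters) into an N x 3 point cloud
    using the standard pinhole camera model."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

def contact_force(points: np.ndarray, hip: np.ndarray,
                  radius: float = 0.02, k: float = 200.0) -> np.ndarray:
    """Push the haptic interaction point (HIP) away from the nearest surface
    point with a penalty (spring) force when it comes within `radius` meters."""
    d = np.linalg.norm(points - hip, axis=1)
    i = np.argmin(d)
    if d[i] >= radius:
        return np.zeros(3)                  # free space: no force
    direction = (hip - points[i]) / (d[i] + 1e-9)
    return k * (radius - d[i]) * direction  # penetration depth times stiffness
```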
human-robot interaction | 2012
Chung Hyuk Park; Ayanna M. Howard
Robotic assistance through telepresence technology is an emerging area in aiding the visually impaired. By integrating a robot's perception of a remote environment and transferring it to a human user through haptic environmental feedback, a user with visual impairments can increase their capability to interact with remote environments through the telepresence robot. This paper presents a framework that integrates visual perception from heterogeneous vision sensors and enables real-time interactive haptic representation of the real world through a mobile manipulation robotic system. Specifically, a set of multi-disciplinary algorithms, such as stereovision processing, three-dimensional map building, and virtual-proxy haptic rendering, are integrated into a unified framework to accomplish the goal of real-world haptic exploration. Results of our framework in an indoor environment are presented, and its performance is analyzed. Quantitative results are provided along with qualitative results from a set of human subject tests. Our future work includes real-time haptic fusion of multi-modal environmental perception and more extensive human subject testing in a prolonged experimental design.
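The paper's map-building algorithm is not detailed in the abstract; as a hedged sketch of how heterogeneous depth scans might be fused into a single spatial model, the code below quantizes world-frame points into a sparse voxel set, assuming each scan comes with a known 4 x 4 sensor-to-world pose.

```python
import numpy as np

def points_to_voxels(points_world: np.ndarray, voxel_size: float = 0.05) -> set:
    """Quantize world-frame 3D points into a sparse set of occupied voxel indices."""
    idx = np.floor(points_world / voxel_size).astype(int)
    return {tuple(v) for v in idx}

def fuse_scans(scans, poses, voxel_size: float = 0.05) -> set:
    """Fuse several sensor scans (each an N x 3 array in the sensor frame)
    into one occupancy set, given 4 x 4 sensor-to-world poses."""
    occupied = set()
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # N x 4 homogeneous points
        world = (T @ homo.T).T[:, :3]                         # transform to world frame
        occupied |= points_to_voxels(world, voxel_size)
    return occupied
```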
world haptics conference | 2013
Chung Hyuk Park; Ayanna M. Howard
This paper presents a robotic system that provides telepresence to the visually impaired by combining real-time haptic rendering with multi-modal interaction. A virtual-proxy-based haptic rendering process using an RGB-D sensor is developed and integrated into a unified framework for control of, and feedback from, the telepresence robot. We discuss the challenging problem of presenting environmental perception to a user with visual impairments and our solution for multi-modal interaction. We also describe the experimental design and protocols, and report results with human subjects with and without visual impairments. A discussion of the system's performance and our future goals concludes the paper.
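Virtual-proxy rendering is a standard haptics technique; the single-constraint sketch below illustrates its core idea under a strong simplification (one planar surface), which is not the paper's actual RGB-D-based implementation: the proxy is clamped to the surface when the haptic interaction point penetrates it, and a spring force pulls the device back toward the proxy.

```python
import numpy as np

def virtual_proxy_step(hip: np.ndarray, plane_point: np.ndarray,
                       plane_normal: np.ndarray, k: float = 300.0):
    """Single-constraint sketch of virtual-proxy haptic rendering.

    The proxy follows the haptic interaction point (HIP) freely in free space,
    but is clamped onto the surface plane when the HIP penetrates it; the
    rendered force is a spring pulling the HIP back toward the proxy.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = np.dot(hip - plane_point, n)
    if signed_dist >= 0.0:                       # HIP is in free space
        proxy = hip.copy()
    else:                                        # HIP penetrated: project onto plane
        proxy = hip - signed_dist * n
    force = k * (proxy - hip)                    # zero in free space
    return proxy, force

# Example: floor plane at z = 0, HIP slightly below the surface
proxy, f = virtual_proxy_step(np.array([0.1, 0.0, -0.01]),
                              np.array([0.0, 0.0, 0.0]),
                              np.array([0.0, 0.0, 1.0]))
print(f)   # force pushes the HIP back up out of the surface
```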
computer based medical systems | 2013
Chung Hyuk Park; Kenneth Wilson; Ayanna M. Howard
Virtual reality (VR) surgical training is a potentially useful method for practicing and improving surgical skills. However, the current literature on VR training has not examined the efficacy of VR systems that can be used outside of the training facility. As such, the goal of this study is to evaluate the benefits of using a low-cost VR simulation system as a means of increasing the learning of surgical skills. Our pilot case focuses on laparoscopic cholecystectomy, which is one of the most common surgeries currently performed in the United States and is often used as the training case for laparoscopy due to its high frequency and perceived low risk. The specific aim of this study is to examine the efficacy of a low-cost, haptic-based VR surgical simulator in improving surgical skills, measured by the change in students' learning performance.
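The abstract does not specify how the learning effect was analyzed; purely as an illustration of one common way to quantify a pre/post training effect, the sketch below compares hypothetical task scores before and after simulator practice with a paired t-test.

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-training task scores for the same students
pre  = np.array([62.0, 55.0, 70.0, 48.0, 66.0, 59.0])
post = np.array([71.0, 63.0, 74.0, 60.0, 72.0, 65.0])

improvement = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)     # paired-samples t-test

print(f"mean improvement: {improvement.mean():.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```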
ieee haptics symposium | 2010
Chung Hyuk Park; Ayanna M. Howard
In this paper, we propose a new concept of haptic exploration using a mobile manipulation system, which combines the mobility and manipulability of the robot with haptic feedback for user interaction. The system utilizes and integrates heterogeneous robotic sensor readings to create a real-time spatial model of the environment, which in turn is conveyed to the user so that they can explore the haptically represented environment and spatially perceive the world without direct contact. Sensor readings of the real world are transformed into an environmental model (an internal map), and the environmental model is used to generate feedback on the haptic device as the user interacts with the haptically augmented space. Through this multi-scale convergence of dynamic sensor data and haptic interaction, our goal is to enable real-time exploration of the world through remote interfaces without the use of predefined world models. In this paper, the system algorithms and platform are discussed, along with preliminary results showing the capabilities of the system.
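The internal-map representation is not specified in the abstract; one simple possibility, sketched below with hypothetical parameters, is to project range-sensor hits into a 2D occupancy grid that a haptic rendering loop could then query.

```python
import numpy as np

def scan_to_grid(ranges: np.ndarray, angles: np.ndarray, robot_pose: tuple,
                 resolution: float = 0.1, size: int = 200) -> np.ndarray:
    """Mark range-sensor hits in a 2D occupancy grid centered on the map origin.

    ranges/angles: polar readings in the robot frame (meters, radians).
    robot_pose:    (x, y, theta) of the robot in the map frame.
    """
    grid = np.zeros((size, size), dtype=np.uint8)
    x_r, y_r, th = robot_pose
    # Transform each hit point from the robot frame to the map frame
    xs = x_r + ranges * np.cos(angles + th)
    ys = y_r + ranges * np.sin(angles + th)
    cols = (xs / resolution + size / 2).astype(int)
    rows = (ys / resolution + size / 2).astype(int)
    valid = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    grid[rows[valid], cols[valid]] = 1       # occupied cells
    return grid
```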
joint ieee international conference on development and learning and epigenetic robotics | 2015
Chung Hyuk Park; Myounghoon Jeon; Ayanna M. Howard
The rapid increase in the population of children with autism spectrum disorder (ASD) in the United States has revealed urgent needs for therapeutic accessibility for children with ASD in the domain of emotion and social interaction. There have been a number of robotic therapeutic systems [1,2] with intriguing approaches and results. However, the spectral diversity of ASD is so wide that research must still push forward to provide parameterized therapeutic tools and frameworks. We build on recent studies that reveal strong overlap in the premotor cortex among the neural domains for music, emotion, and motor behavior [3,4]. We hypothesize that musical interaction and physical activities can provide a new therapeutic domain for effective development of children's emotion and social interaction. To investigate this challenging problem, we propose to develop autonomous interaction methods for robots to effectively stimulate the emotional and social interactivity of children. The objectives of this collaborative effort are to promote emotional and social engagement with children with autism as well as to provide parameterized metrics for clinicians and parents for prolonged and quantifiable clinical settings.
human-robot interaction | 2016
Rachael Bevill; Chung Hyuk Park; Hyung Jung Kim; Jong Won Lee; Ariana Rennie; Myounghoon Jeon; Ayanna M. Howard
We present an interactive robotic framework that delivers emotional and social behaviors for multi-sensory therapy for children with autism spectrum disorders. Our framework includes emotion-based robotic gestures and facial expressions, as well as a vision- and audio-based monitoring system for quantitative measurement of the interaction. We also discuss the special aspects of interacting with children with autism using multi-sensory stimuli and the potential of our approach for personalized therapies for social and behavioral learning.
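The framework's behavior repertoire is not enumerated in the abstract; the snippet below is only a toy illustration of how a detected emotion label might be dispatched to gesture and facial-expression parameters, with entirely hypothetical behavior names and values.

```python
# Hypothetical mapping from a detected emotion label to robot behavior
# parameters (gesture name, head tilt in degrees, face/LED color).
BEHAVIORS = {
    "happy":   {"gesture": "wave_arms", "head_tilt": 10,  "face_color": "yellow"},
    "sad":     {"gesture": "slow_nod",  "head_tilt": -15, "face_color": "blue"},
    "neutral": {"gesture": "idle_sway", "head_tilt": 0,   "face_color": "white"},
}

def select_behavior(emotion: str) -> dict:
    """Fall back to neutral behavior when the detected emotion is unknown."""
    return BEHAVIORS.get(emotion, BEHAVIORS["neutral"])

print(select_behavior("happy"))
```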
international conference on robotics and automation | 2011
Chung Hyuk Park; Sekou Remy; Ayanna M. Howard
In this paper, we discuss an approach for enabling students with a visual impairment (VI) to validate the program sequence of a robotic system operating in the real world. We introduce a method that enables a person with VI to feel their robot's movement as well as the environment in which the robot is traveling. The design includes a human-robot interaction framework that utilizes multi-modal feedback to transfer environmental perception to a human user with VI. Haptic feedback and auditory feedback are selected as the primary methods for user interaction. Using this multi-modal sensory feedback approach, participants are taught to program their own robot to accomplish varying navigation tasks. We discuss and analyze the implementation of the method as deployed during two summer camps for middle-school students with visual impairments.
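The exact haptic and auditory mappings are not given in the abstract; as a hedged sketch of the multi-modal feedback idea, the function below maps the robot's distance to the nearest obstacle onto a vibration intensity and an audio tone frequency, with illustrative ranges.

```python
import numpy as np

def multimodal_feedback(obstacle_dist_m: float,
                        d_max: float = 2.0,
                        f_low: float = 220.0,
                        f_high: float = 880.0) -> dict:
    """Map distance to the nearest obstacle onto two feedback channels:
    vibration intensity (0..1, stronger when closer) and an audio tone
    whose pitch rises as the robot approaches the obstacle."""
    proximity = 1.0 - np.clip(obstacle_dist_m / d_max, 0.0, 1.0)
    return {
        "vibration": proximity,                           # haptic channel
        "tone_hz": f_low + proximity * (f_high - f_low),  # auditory channel
    }

for d in (2.5, 1.0, 0.2):
    print(d, multimodal_feedback(d))
```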
international conference on ubiquitous robots and ambient intelligence | 2017
Jonathan C. Kim; Paul Azzi; Myounghoon Jeon; Ayanna M. Howard; Chung Hyuk Park
Recently, efforts in the development of speech recognition systems and robots have come to fruition, with an abundance of applications in our daily lives. However, we are still far from achieving natural interaction between humans and robots, given that robots do not take into account the emotional state of speakers. The objective of this research is to create an automatic emotion classifier integrated with a robot, such that the robot can understand the emotional state of a human user by analyzing the user's speech signals. This becomes particularly relevant in the realm of using assistive robotics to tailor therapeutic techniques toward assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children being diagnosed with ASD has been rapidly increasing, yet clinical and societal support has not kept pace with the need. Therefore, finding alternative, affordable, and accessible means of therapy and assistance has become a growing concern. Improving audio-based emotion prediction for children with ASD will allow the robotic system to properly assess the engagement level of the child, modify its responses to maximize the quality of interaction between the robot and the child, and sustain an interactive learning environment.
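The paper's classifier design is not described in the abstract; the sketch below shows one conventional baseline for speech emotion recognition (MFCC statistics fed to an SVM, using librosa and scikit-learn), where the wav_paths and labels are assumed to come from some labeled emotional-speech corpus.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """Summarize an utterance as the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_classifier(wav_paths, labels):
    """Train an RBF-kernel SVM on per-utterance MFCC statistics."""
    X = np.vstack([mfcc_features(p) for p in wav_paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, labels)
    return clf

# At run time, the robot could call
#   clf.predict(mfcc_features("utterance.wav")[None, :])
# and adapt its behavior to the predicted emotion label.
```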