Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daehyung Park is active.

Publication


Featured research published by Daehyung Park.


International Conference on Robotics and Automation (ICRA) | 2016

Multimodal execution monitoring for anomaly detection during robot manipulation

Daehyung Park; Zackory M. Erickson; Tapomayukh Bhattacharjee; Charles C. Kemp

Online detection of anomalous execution can be valuable for robot manipulation, enabling robots to operate more safely, determine when a behavior is inappropriate, and otherwise exhibit more common sense. By using multiple complementary sensory modalities, robots could potentially detect a wider variety of anomalies, such as anomalous contact or a loud utterance by a human. However, task variability and the potential for false positives make online anomaly detection challenging, especially for long-duration manipulation behaviors. In this paper, we provide evidence for the value of multimodal execution monitoring and the use of a detection threshold that varies based on the progress of execution. Using a data-driven approach, we train an execution monitor that runs in parallel to a manipulation behavior. Like previous methods for anomaly detection, our method trains a hidden Markov model (HMM) using multimodal observations from non-anomalous executions. In contrast to prior work, our system also uses a detection threshold that changes based on the execution progress. We evaluated our approach with haptic, visual, auditory, and kinematic sensing during a variety of manipulation tasks performed by a PR2 robot. The tasks included pushing doors closed, operating switches, and assisting able-bodied participants with eating yogurt. In our evaluations, our anomaly detection method performed substantially better with multimodal monitoring than single modality monitoring. It also resulted in more desirable ROC curves when compared with other detection threshold methods from the literature, obtaining higher true positive rates for comparable false positive rates.
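
The execution-progress-varying threshold is the paper's key departure from a fixed-threshold monitor. Below is a minimal sketch of the idea, assuming the hmmlearn library for the HMM; the feature layout, number of hidden states, bin count, and threshold scaling constant are illustrative choices, not the authors' settings.

```python
# Sketch: HMM-based multimodal execution monitoring with a detection
# threshold that varies with execution progress. Illustrative only.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_monitor(train_seqs, n_states=10):
    """Fit an HMM to multimodal observations from non-anomalous executions.

    train_seqs: list of arrays, each (T_i, n_features), stacking haptic,
    visual, auditory, and kinematic features per time step."""
    X = np.vstack(train_seqs)
    lengths = [len(s) for s in train_seqs]
    hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    hmm.fit(X, lengths)
    return hmm

def progress_thresholds(hmm, train_seqs, n_bins=20, c=2.0):
    """Per-progress-bin threshold: mean per-step log-likelihood of the
    non-anomalous training prefixes minus c standard deviations.
    Assumes every sequence has at least n_bins frames; prefix scoring
    is done naively for clarity."""
    bins = [[] for _ in range(n_bins)]
    for seq in train_seqs:
        for t in range(1, len(seq) + 1):
            b = min(int(n_bins * t / len(seq)), n_bins - 1)
            bins[b].append(hmm.score(seq[:t]) / t)
    return np.array([np.mean(b) - c * np.std(b) for b in bins])

def is_anomalous(hmm, thresholds, partial_seq, expected_len):
    """Flag an anomaly when the running per-step log-likelihood drops
    below the threshold for the current progress bin."""
    progress = min(len(partial_seq) / expected_len, 1.0)
    b = min(int(len(thresholds) * progress), len(thresholds) - 1)
    return hmm.score(partial_seq) / len(partial_seq) < thresholds[b]
```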


International Conference on Intelligent Robots and Systems (IROS) | 2015

Combining tactile sensing and vision for rapid haptic mapping

Tapomayukh Bhattacharjee; Ashwin A. Shenoi; Daehyung Park; James M. Rehg; Charles C. Kemp

We consider the problem of enabling a robot to efficiently obtain a dense haptic map of its visible surroundings using the complementary properties of vision and tactile sensing. Our approach assumes that visible surfaces that look similar to one another are likely to have similar haptic properties. We present an iterative algorithm that enables a robot to infer dense haptic labels across visible surfaces when given a color-plus-depth (RGB-D) image along with a sequence of sparse haptic labels representative of what could be obtained via tactile sensing. Our method uses a color-based similarity measure and connected components on color and depth data. We evaluated our method using several publicly available RGB-D image datasets with indoor cluttered scenes pertinent to robot manipulation. We analyzed the effects of algorithm parameters and environment variation, specifically the level of clutter and the type of setting, such as a shelf, table top, or sink area. In these trials, the visible surface for each object consisted of an average of 8602 pixels, and we provided the algorithm with a sequence of haptically-labeled pixels up to a maximum of 40 times the number of objects in the image. On average, our algorithm correctly assigned haptic labels to 76.02% of all of the object pixels in the image given this full sequence of labels. We also performed experiments with the humanoid robot DARCI reaching in a cluttered foliage environment while using our algorithm to create a haptic map. Doing so enabled the robot to reach goal locations using a single plan after a single greedy reach, while our previous tactile-only mapping method required 5 or more plans to reach each goal.
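
As a rough illustration of propagating sparse haptic labels by color similarity and connected components, here is a simplified sketch that uses color alone (the paper's method also uses depth); the function name, the Euclidean color threshold, and the seed format are illustrative assumptions, not the authors' implementation.

```python
# Sketch: propagate sparse haptic labels across an image using color
# similarity and connected components. Simplified, color-only version.
import numpy as np
from scipy import ndimage

def propagate_haptic_labels(rgb, seeds, color_tol=30.0):
    """rgb: (H, W, 3) float image; seeds: list of ((row, col), haptic_label).
    Returns an (H, W) integer map of haptic labels (0 = unlabeled)."""
    H, W, _ = rgb.shape
    haptic_map = np.zeros((H, W), dtype=int)
    for (r, c), label in seeds:
        # Pixels whose color is close to the seed's color (Euclidean in RGB).
        similar = np.linalg.norm(rgb - rgb[r, c], axis=2) < color_tol
        # Keep only the connected component containing the seed, so the
        # label cannot jump across spatially separate surfaces.
        components, _ = ndimage.label(similar)
        haptic_map[components == components[r, c]] = label
    return haptic_map
```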


International Conference on Intelligent Robots and Systems (IROS) | 2014

Learning to Reach into the Unknown: Selecting Initial Conditions When Reaching in Clutter

Daehyung Park; Ariel Kapusta; You Keun Kim; James M. Rehg; Charles C. Kemp

Often in highly-cluttered environments, a robot can observe the exterior of the environment with ease, but cannot directly view nor easily infer its detailed internal structure (e.g., dense foliage or a full refrigerator shelf). We present a data-driven approach that greatly improves a robot's success at reaching to a goal location in the unknown interior of an environment based on observable external properties, such as the category of the clutter and the locations of openings into the clutter (i.e., apertures). We focus on the problem of selecting a good initial configuration for a manipulator when reaching with a greedy controller. We use density estimation to model the probability of a successful reach given an initial condition and then perform constrained optimization to find an initial condition with the highest estimated probability of success. We evaluate our approach with two simulated robots reaching in clutter, and provide a demonstration with a real PR2 robot reaching to locations through random apertures. In our evaluations, our approach significantly outperformed two alternative approaches when making two consecutive reach attempts to goals in distinct categories of unknown clutter. Our approach uses only sparse, readily-apparent features.
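
The two-stage recipe (density estimation, then constrained optimization over initial conditions) can be sketched as follows, assuming kernel density estimates of the success and failure distributions and a bounded, multi-start local optimization; the bandwidth, bounds handling, and restart count are illustrative assumptions.

```python
# Sketch: estimate P(success | initial condition) from labeled reach
# trials, then maximize it over a bounded configuration space.
import numpy as np
from sklearn.neighbors import KernelDensity
from scipy.optimize import minimize

def fit_success_model(X_success, X_failure, bandwidth=0.2):
    """X_success / X_failure: (N, d) initial-condition features from
    successful and failed reaches. Returns P(success | x)."""
    kde_s = KernelDensity(bandwidth=bandwidth).fit(X_success)
    kde_f = KernelDensity(bandwidth=bandwidth).fit(X_failure)
    prior_s = len(X_success) / (len(X_success) + len(X_failure))

    def p_success(x):
        # Posterior from the two class-conditional densities (Bayes' rule).
        x = np.atleast_2d(x)
        ps = np.exp(kde_s.score_samples(x)) * prior_s
        pf = np.exp(kde_f.score_samples(x)) * (1.0 - prior_s)
        return float(ps / (ps + pf + 1e-12))

    return p_success

def best_initial_condition(p_success, bounds, n_restarts=20, rng=None):
    """Constrained optimization: maximize estimated success probability
    over the bounded space of initial configurations, with restarts to
    avoid poor local optima."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(lo, hi)
        res = minimize(lambda x: -p_success(x), x0, bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best.x
```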


IEEE-RAS International Conference on Humanoid Robots (Humanoids) | 2014

Interleaving planning and control for efficient haptically-guided reaching in unknown environments

Daehyung Park; Ariel Kapusta; Jeffrey Hawke; Charles C. Kemp

We present a new method for reaching in an initially unknown environment with only haptic sensing. In this paper, we propose a haptically-guided interleaving planning and control (HIPC) method with a haptic mapping framework. HIPC runs two planning methods, interleaving a task-space and a joint-space planner, to provide fast reaching performance. It continually replans a valid trajectory, alternating between the planners and quickly incorporating tactile information collected from the unknown environment. One key idea is that tactile sensing can be used to directly map an immediate cause of interference when reaching. The mapping framework efficiently assigns raw tactile information from whole-arm tactile sensors into a 3D voxel-based collision map. Our method uses a previously published contact-regulating controller based on model predictive control (MPC). In our evaluation with a physics simulation of a humanoid robot, interleaving was superior at reaching in the 9 types of environments we used.
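
A schematic sketch of the interleaving loop follows. The planner, controller, and voxel-map interfaces (plan, move_to, mark_occupied) are hypothetical placeholders standing in for the components the paper describes, not a real API.

```python
# Sketch: interleave a task-space and a joint-space planner, executing
# with a contact-regulating controller and folding newly sensed contacts
# into a voxel collision map before replanning. Interfaces are placeholders.
def hipc_reach(task_planner, joint_planner, controller, voxel_map, goal,
               max_replans=50):
    planners = [task_planner, joint_planner]
    for i in range(max_replans):
        planner = planners[i % 2]          # alternate between the two planners
        trajectory = planner.plan(goal, voxel_map)
        if trajectory is None:
            continue                       # let the other planner try next
        for waypoint in trajectory:
            contacts = controller.move_to(waypoint)  # MPC regulates contact force
            if contacts:
                # Map each sensed contact into the 3D voxel collision map,
                # then break out to replan against the updated environment.
                for point in contacts:
                    voxel_map.mark_occupied(point)
                break
        else:
            return True                    # finished the trajectory: goal reached
    return False
```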


International Conference on Intelligent Robots and Systems (IROS) | 2015

Task-centric selection of robot and environment initial configurations for assistive tasks

Ariel Kapusta; Daehyung Park; Charles C. Kemp

When a mobile manipulator functions as an assistive device, the robot's initial configuration and the configuration of the environment can impact the robot's ability to provide effective assistance. Selecting initial configurations for assistive tasks can be challenging due to the high number of degrees of freedom of the robot, the environment, and the person, as well as the complexity of the task. In addition, rapid selection of initial conditions can be important, so that the system will be responsive to the user and will not require the user to wait a long time while the robot makes a decision. To address these challenges, we present Task-centric initial Configuration Selection (TCS), which unlike previous work uses a measure of task-centric manipulability to accommodate state estimation error, considers various environmental degrees of freedom, and can find a set of configurations from which a robot can perform a task. TCS performs substantial offline computation so that it can rapidly provide solutions at run time. At run time, the system performs an optimization over candidate initial configurations using a utility function that can include factors such as movement costs for the robot's mobile base. To evaluate TCS, we created models of 11 activities of daily living (ADLs) and evaluated TCS's performance with these 11 assistive tasks in a computer simulation of a PR2, a robotic bed, and a model of a human body. TCS performed as well as or better than a baseline algorithm in all of our tests against state estimation error.
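
The offline/online split can be sketched as follows; the scoring function, candidate set, and cost weight are illustrative assumptions rather than the paper's exact utility formulation.

```python
# Sketch: precompute task-centric scores for candidate configurations
# offline, then select rapidly at run time with a utility that trades
# task score against mobile-base movement cost. Illustrative only.
import numpy as np

def precompute_scores(candidates, task_model, score_fn):
    """Offline: score each candidate (robot base + environment)
    configuration by a task-centric manipulability measure."""
    return np.array([score_fn(c, task_model) for c in candidates])

def select_configuration(candidates, scores, current_base_pose,
                         movement_cost_fn, w_cost=0.5):
    """Run time: maximize utility = task score minus weighted cost of
    moving the robot's mobile base from its current pose."""
    utilities = [s - w_cost * movement_cost_fn(current_base_pose, c)
                 for s, c in zip(scores, candidates)]
    return candidates[int(np.argmax(utilities))]
```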


Archive | 2014

A Robotic System for Reaching in Dense Clutter that Integrates Model Predictive Control, Learning, Haptic Mapping, and Planning

Tapomayukh Bhattacharjee; Phillip M. Grice; Ariel Kapusta; Marc D. Killpack; Daehyung Park; Charles C. Kemp


arXiv: Robotics | 2016

Towards Assistive Feeding with a General-Purpose Mobile Manipulator

Daehyung Park; You Keun Kim; Zackory M. Erickson; Charles C. Kemp


Autonomous Robots | 2018

Multimodal anomaly detection for assistive robots

Daehyung Park; Hokeun Kim; Charles C. Kemp


International Conference on Robotics and Automation (ICRA) | 2018

A Multimodal Anomaly Detector for Robot-Assisted Feeding Using an LSTM-Based Variational Autoencoder

Daehyung Park; Yuuna Hoshi; Charles C. Kemp


arXiv: Robotics | 2018

3D Human Pose Estimation on a Configurable Bed from a Pressure Image

Henry M. Clever; Ariel Kapusta; Daehyung Park; Zackory M. Erickson; Yash Chitalia; Charles C. Kemp

Collaboration


Dive into Daehyung Park's collaborations.

Top Co-Authors

Charles C. Kemp, Georgia Institute of Technology
Ariel Kapusta, Georgia Institute of Technology
Zackory M. Erickson, Georgia Institute of Technology
Tapomayukh Bhattacharjee, Georgia Institute of Technology
Yash Chitalia, Georgia Institute of Technology
Henry M. Clever, Georgia Institute of Technology
Hokeun Kim, Georgia Institute of Technology
James M. Rehg, Georgia Institute of Technology
You Keun Kim, Georgia Institute of Technology
Yuuna Hoshi, Georgia Institute of Technology