Ariel Kapusta
Georgia Institute of Technology
Publications
Featured research published by Ariel Kapusta.
ieee-ras international conference on humanoid robots | 2013
Tapomayukh Bhattacharjee; Ariel Kapusta; James M. Rehg; Charles C. Kemp
We demonstrate that data-driven methods can be used to rapidly categorize objects encountered through incidental contact on a robot arm. Allowing incidental contact with surrounding objects has benefits during manipulation, such as increasing the workspace during reaching tasks. The information obtained from such contact, if available online, can potentially be used to map the environment and help in manipulation tasks. In this paper, we address this problem of online categorization using incidental contact during goal-oriented motion. In cluttered environments, the detailed internal structure of clutter can be difficult to infer, but the environment type is often apparent. In a randomized cluttered environment of known object types and “outliers”, our approach uses Hidden Markov Models to capture the dynamic robot-environment interactions and to categorize objects based on the interactions. We combined leaf and trunk objects to create artificial foliage as a test environment. We collected data using a skin sensor on the robot's forearm while it reached into clutter. Our algorithm classifies the objects rapidly with low computation time and few data samples. Using a taxel-by-taxel classification approach, we can successfully categorize simultaneous contacts with multiple objects and can also identify outlier objects in the environment based on the prior associated with an object's likelihood in the given environment.
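The taxel-by-taxel HMM categorization described above can be illustrated with a minimal sketch: one discrete HMM per object category, scored with the forward algorithm, with the highest log-likelihood category winning. The two-state "leaf"/"trunk" models and quantized force symbols below are hypothetical stand-ins for the paper's learned models, not the authors' actual parameters.

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence obs under an HMM with initial
    distribution pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

def classify(obs, models):
    """Pick the category whose HMM assigns the highest likelihood."""
    return max(models, key=lambda k: hmm_loglik(obs, *models[k]))

# Hypothetical two-state models over quantized contact-force symbols
# (0 = low force, 1 = high force): "leaf" contacts stay soft,
# "trunk" contacts stay stiff.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
models = {
    "leaf":  (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]])),
    "trunk": (pi, A, np.array([[0.1, 0.9], [0.2, 0.8]])),
}
```

A mostly low-force taxel sequence such as `[0, 0, 0, 1, 0]` would then be labeled "leaf"; running one classifier per taxel is what allows simultaneous contacts with different objects to receive different labels.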
Autonomous Robots | 2016
Marc D. Killpack; Ariel Kapusta; Charles C. Kemp
A key challenge for haptically reaching in dense clutter is the frequent contact that can occur between the robot’s arm and the environment. We have previously used single-time-step model predictive control (MPC) to enable a robot to slowly reach into dense clutter using a quasistatic mechanical model. Rapid reaching in clutter would be desirable, but entails additional challenges due to dynamic phenomena that can lead to higher forces from impacts and other types of contact. In this paper, we present a multi-time-step MPC formulation that enables a robot to rapidly reach a target position in dense clutter, while regulating whole-body contact forces to be below a given threshold. Our controller models the dynamics of the arm in contact with the environment in order to predict how contact forces will change and how the robot’s end effector will move. It also models how joint velocities will influence potential impact forces. At each time step, our controller uses linear models to generate a convex optimization problem that it can solve efficiently. Through tens of thousands of trials in simulation, we show that with our dynamic MPC a simulated robot can, on average, reach goals 1.4 to 2 times faster than our previous controller, while attaining comparable success rates and fewer occurrences of high forces. We also conducted trials using a real 7 degree-of-freedom (DoF) humanoid robot arm with whole-arm tactile sensing. Our controller enabled the robot to rapidly reach target positions in dense artificial foliage while keeping contact forces low.
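As a rough illustration of the multi-time-step idea — not the paper's controller, which additionally linearizes contact dynamics and bounds whole-body forces — a batch linear MPC step over a horizon can be posed as a small regularized least-squares problem. The double-integrator plant here is a hypothetical stand-in for the robot arm model.

```python
import numpy as np

def linear_mpc(A, B, x0, goal, horizon, r=1e-3):
    """Batch linear MPC sketch: choose controls u_0..u_{N-1} for
    x_{k+1} = A x_k + B u_k to minimize
    ||x_N - goal||^2 + r ||U||^2 via regularized least squares."""
    n, m = B.shape
    # x_N = A^N x0 + sum_k A^(N-1-k) B u_k  =>  x_N = free + G @ U
    G = np.hstack([np.linalg.matrix_power(A, horizon - 1 - k) @ B
                   for k in range(horizon)])
    free = np.linalg.matrix_power(A, horizon) @ x0
    U = np.linalg.solve(G.T @ G + r * np.eye(m * horizon),
                        G.T @ (goal - free))
    return U.reshape(horizon, m)
```

The actual controller adds linear inequality constraints (e.g. predicted contact forces below a threshold), which turns this least-squares problem into the convex QP solved at each time step.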
intelligent robots and systems | 2014
Daehyung Park; Ariel Kapusta; You Keun Kim; James M. Rehg; Charles C. Kemp
Often in highly-cluttered environments, a robot can observe the exterior of the environment with ease, but cannot directly view nor easily infer its detailed internal structure (e.g., dense foliage or a full refrigerator shelf). We present a data-driven approach that greatly improves a robot's success at reaching to a goal location in the unknown interior of an environment based on observable external properties, such as the category of the clutter and the locations of openings into the clutter (i.e., apertures). We focus on the problem of selecting a good initial configuration for a manipulator when reaching with a greedy controller. We use density estimation to model the probability of a successful reach given an initial condition and then perform constrained optimization to find an initial condition with the highest estimated probability of success. We evaluate our approach with two simulated robots reaching in clutter, and provide a demonstration with a real PR2 robot reaching to locations through random apertures. In our evaluations, our approach significantly outperformed two alternative approaches when making two consecutive reach attempts to goals in distinct categories of unknown clutter. Our approach only uses sparse, readily-apparent features.
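The density-estimation step can be sketched as follows: estimate separate kernel densities over past successful and failed initial conditions, convert them to a success probability, and pick the best feasible candidate. The isotropic Gaussian kernel, bandwidth, and 2D feature vectors are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def kde(x, samples, h=0.2):
    """Isotropic Gaussian kernel density estimate at x (up to a
    constant factor, which cancels in the ratio below)."""
    d2 = np.sum((samples - x) ** 2, axis=1)
    return np.mean(np.exp(-0.5 * d2 / h ** 2))

def p_success(x, successes, failures, h=0.2):
    """Estimated probability that a reach from initial condition x
    succeeds, from densities of past successful and failed trials."""
    ps, pf = kde(x, successes, h), kde(x, failures, h)
    return ps / (ps + pf + 1e-12)

def best_initial_condition(candidates, successes, failures):
    """Stand-in for the constrained optimization: pick the feasible
    candidate with the highest estimated success probability."""
    return max(candidates, key=lambda x: p_success(x, successes, failures))
```

In the paper the candidates encode observable external features (clutter category, aperture locations) rather than raw coordinates, and the optimization runs over a continuous constrained space rather than a discrete list.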
international conference on robotics and automation | 2017
Wenhao Yu; Ariel Kapusta; Jie Tan; Charles C. Kemp; Greg Turk; C. Karen Liu
There is a considerable need for assistive dressing among people with disabilities, and robots have the potential to fulfill this need. However, training such a robot would require extensive trials in order to learn the skills of assistive dressing. Such training would be time-consuming and require considerable effort to recruit participants and conduct trials. In addition, for some cases that might cause injury to the person being dressed, it is impractical and unethical to perform such trials. In this work, we focus on a representative dressing task of pulling the sleeve of a hospital gown onto a person's arm. We present a system that learns a haptic classifier for the outcome of the task given few (2–3) real-world trials with one person. Our system first optimizes the parameters of a physics simulator using real-world data. Using the optimized simulator, the system then simulates more haptic sensory data with noise models that account for randomness in the experiment. We then train hidden Markov models (HMMs) on the simulated haptic data. The trained HMMs can then be used to classify and predict the outcome of the assistive dressing task based on haptic signals measured by a real robot's end effector. This system achieves 92.83% accuracy in classifying the outcome of the robot-assisted dressing task with people not included in simulation optimization. We compare our classifiers to those trained on real-world data. We show that the classifiers from our system can categorize the dressing task outcomes more accurately than classifiers trained on ten times more real data.
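The first stage — fitting simulator parameters to the few available real trials — can be sketched as a search that minimizes the discrepancy between simulated and measured haptic traces. The grid search and mean-squared-error metric below are simplifying assumptions; the paper optimizes a physics simulator, not the toy `simulate` callable shown here.

```python
import numpy as np

def fit_sim_params(real_trials, simulate, param_grid):
    """Pick the simulator parameter setting whose simulated haptic
    trace best matches the available real-world trials, measured by
    mean squared discrepancy. simulate(p) returns one trace as an
    array; real_trials is a list of equally shaped real traces."""
    def discrepancy(p):
        sim = simulate(p)
        return np.mean([np.mean((sim - r) ** 2) for r in real_trials])
    return min(param_grid, key=discrepancy)
```

With the fitted parameters, the simulator (plus noise models) generates the large haptic dataset on which the outcome HMMs are trained, in the same spirit as the per-category HMM scoring sketched earlier.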
ieee-ras international conference on humanoid robots | 2014
Daehyung Park; Ariel Kapusta; Jeffrey Hawke; Charles C. Kemp
We present a new method for reaching in an initially unknown environment with only haptic sensing. In this paper, we propose a haptically-guided interleaving planning and control (HIPC) method with a haptic mapping framework. HIPC runs two planning methods, interleaving a task-space and a joint-space planner, to provide fast reaching performance. It continually replans a valid trajectory, alternating between planners and quickly reflecting collected tactile information from an unknown environment. One key idea is that tactile sensing can be used to directly map an immediate cause of interference when reaching. The mapping framework efficiently assigns raw tactile information from whole-arm tactile sensors into a 3D voxel-based collision map. Our method uses a previously published contact-regulating controller based on model predictive control (MPC). In our evaluation with a physics simulation of a humanoid robot, interleaving was superior at reaching in the 9 types of environments we used.
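The mapping framework's core idea — binning raw whole-arm contact locations into a sparse 3D voxel occupancy map that the planners can query — can be sketched as below. The class name, sparse-set representation, and 5 cm resolution are illustrative choices, not the paper's implementation.

```python
import numpy as np

class VoxelCollisionMap:
    """Sparse 3D occupancy map built from tactile contact points."""

    def __init__(self, resolution=0.05):
        self.resolution = resolution   # voxel edge length in meters
        self.occupied = set()          # set of integer voxel indices

    def _voxel(self, point):
        """Map a 3D point to its integer voxel index."""
        return tuple(np.floor(np.asarray(point) / self.resolution)
                     .astype(int))

    def add_contact(self, point):
        """Record a contact sensed by the whole-arm tactile skin."""
        self.occupied.add(self._voxel(point))

    def is_occupied(self, point):
        """Query used by the planners to avoid known obstacles."""
        return self._voxel(point) in self.occupied
```

Both the task-space and joint-space planners can then replan against `is_occupied` queries as new contacts accumulate during the reach.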
robot and human interactive communication | 2016
Ariel Kapusta; Patrick Beeson
Many existing person tracking systems are challenged by non-laboratory scenarios, including variable lighting conditions, rain, smoke, tracking distance, and tracking speed. We provide evidence that by using a 3D thermal sensor, a person can be tracked in three dimensions with high success using very simple tracking methods, in many of the challenging lighting conditions and other weather conditions that confound other systems. In support of our claim, we present the PROWL (Perception for Robotic Operation over Widespread Lighting) sensor system, which uses thermal stereo image processing and on-board sensor processing to perform person tracking and gesture recognition. PROWL, using only ICP-based point matching algorithms, obtains 100% person tracking success at 20 frames per second out to 13 meters and zero false-positive/false-negative gesture recognition within 7 meters in all tested scenarios, which includes a sunny outdoor environment, a nighttime outdoor environment, a blackout indoor environment, and a whiteout smoke-filled indoor environment.
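A single ICP-style point-matching iteration of the kind PROWL relies on — nearest-neighbor correspondence followed by a best-fit rigid transform (Kabsch) — looks roughly like this in 2D. This is a generic textbook sketch, not PROWL's on-board implementation, which works on 3D thermal point clouds.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each src point to its nearest dst
    point, then return the rigid transform (R, t) that best aligns
    src to its matches in the least-squares sense (Kabsch)."""
    # Nearest-neighbor correspondences (brute force for clarity).
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[d.argmin(axis=1)]
    # Kabsch: rotation from the SVD of the cross-covariance.
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Iterating this step (re-matching after each transform) until convergence tracks the rigid motion of a point cloud between frames; a person tracker applies it to the segmented thermal points frame over frame.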
intelligent robots and systems | 2015
Ariel Kapusta; Daehyung Park; Charles C. Kemp
When a mobile manipulator functions as an assistive device, the robot's initial configuration and the configuration of the environment can impact the robot's ability to provide effective assistance. Selecting initial configurations for assistive tasks can be challenging due to the high number of degrees of freedom of the robot, the environment, and the person, as well as the complexity of the task. In addition, rapid selection of initial conditions can be important, so that the system will be responsive to the user and will not require the user to wait a long time while the robot makes a decision. To address these challenges, we present Task-centric initial Configuration Selection (TCS), which, unlike previous work, uses a measure of task-centric manipulability to accommodate state estimation error, considers various environmental degrees of freedom, and can find a set of configurations from which a robot can perform a task. TCS performs substantial offline computation so that it can rapidly provide solutions at run time. At run time, the system performs an optimization over candidate initial configurations using a utility function that can include factors such as movement costs for the robot's mobile base. To evaluate TCS, we created models of 11 activities of daily living (ADLs) and evaluated TCS's performance with these 11 assistive tasks in a computer simulation of a PR2, a robotic bed, and a model of a human body. TCS performed as well as or better than a baseline algorithm in all of our tests against state estimation error.
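The run-time step — maximizing a utility that trades a precomputed task-centric score against the cost of moving the mobile base — might be sketched as follows. The score table, cost weight, and candidate format are hypothetical; TCS's actual utility and offline computation are richer than this.

```python
import numpy as np

def select_configuration(candidates, task_score, base_pose, w_move=0.5):
    """Pick the candidate initial configuration maximizing
    utility = precomputed task-centric score
              - w_move * distance the mobile base must travel.
    candidates: dicts with an 'id' key (into task_score) and a
    'base' key giving the candidate base position."""
    def utility(c):
        travel = np.linalg.norm(np.asarray(c["base"])
                                - np.asarray(base_pose))
        return task_score[c["id"]] - w_move * travel
    return max(candidates, key=utility)
```

Because `task_score` is filled in offline, the run-time optimization reduces to evaluating this cheap utility over the candidate set, which is what keeps the system responsive to the user.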
robot and human interactive communication | 2016
Ariel Kapusta; Wenhao Yu; Tapomayukh Bhattacharjee; C. Karen Liu; Greg Turk; Charles C. Kemp
Archive | 2014
Tapomayukh Bhattacharjee; Phillip M. Grice; Ariel Kapusta; Marc D. Killpack; Daehyung Park; Charles C. Kemp
international conference on robotics and automation | 2018
Zackory M. Erickson; Maggie Collier; Ariel Kapusta; Charles C. Kemp