Dylan P. Losey
Rice University
Publications
Featured research published by Dylan P. Losey.
IEEE Transactions on Robotics | 2016
Ali Utku Pehlivan; Dylan P. Losey; Marcia K. O'Malley
Robotic rehabilitation of the upper limb following neurological injury is most successful when subjects are engaged in the rehabilitation protocol. Developing assistive control strategies that maximize subject participation is accordingly an active area of research, with aims to promote neural plasticity and, in turn, increase the potential for recovery of motor coordination. Unfortunately, state-of-the-art control strategies either ignore more complex subject capabilities or assume underlying patterns govern subject behavior and may therefore intervene suboptimally. In this paper, we present a minimal assist-as-needed (mAAN) controller for upper limb rehabilitation robots. The controller employs sensorless force estimation to dynamically determine subject inputs without any underlying assumptions as to the nature of subject capabilities and computes a corresponding assistance torque with adjustable ultimate bounds on position error. Our adaptive input estimation scheme is shown to yield fast, stable, and accurate measurements regardless of subject interaction and exceeds the performance of current approaches that estimate only position-dependent force inputs from the user. Two additional algorithms are introduced in this paper to further promote active participation of subjects with varying degrees of impairment. First, a bound modification algorithm is described, which alters allowable error. Second, a decayed disturbance rejection algorithm is presented, which encourages subjects who are capable of leading the reference trajectory. The mAAN controller and accompanying algorithms are demonstrated experimentally with healthy subjects in the RiceWrist-S exoskeleton.
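The assist-as-needed idea above can be illustrated with a toy control law (a minimal sketch, not the paper's mAAN controller: the gain, deadband, and scaling rule here are hypothetical). Assistance is zero inside an allowable error bound, and shrinks as the estimated human input already pushes toward the reference:

```python
import numpy as np

def assist_torque(q, q_des, human_force_est, k_assist=5.0, deadband=0.05):
    """Toy assist-as-needed law: assist only when position error exceeds
    an allowable bound (the deadband), and reduce assistance when the
    estimated human input already acts toward the reference."""
    error = q_des - q
    if abs(error) <= deadband:          # inside allowable error: no assistance
        return 0.0
    # portion of the estimated human input already directed toward the goal
    helping = max(0.0, np.sign(error) * human_force_est)
    tau = k_assist * (abs(error) - deadband) * np.sign(error)
    # scale assistance down as the human contributes more
    return tau / (1.0 + helping)
```

The adjustable deadband plays a role analogous to the paper's ultimate bound on position error: widening it leaves more of the task to the subject.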
IEEE-ASME Transactions on Mechatronics | 2016
Dylan P. Losey; Andrew Erwin; Craig G. McDonald; Fabrizio Sergi; Marcia K. O'Malley
Robots are increasingly designed to physically interact with humans in unstructured environments, and as such must operate both accurately and safely. Leveraging compliant actuation, typically in the form of series elastic actuators (SEAs), can guarantee this required level of safety. To date, a number of frequency-domain techniques have been proposed, which yield effective SEA torque and impedance control; however, these methods are accompanied by undesirable stability constraints. In this paper, we instead focus on a time-domain approach to the control of SEAs, and adapt two existing control techniques for SEA platforms. First, a model reference adaptive controller is developed, which requires no prior knowledge of system parameters and can specify desired closed-loop torque characteristics. Second, the time-domain passivity approach is modified to control desired impedances in a manner that temporarily allows the SEA to passively render impedances greater than the actuator's intrinsic stiffness. This approach also provides conditions for passivity when augmenting any stable SEA torque controller with an arbitrary impedance. The resultant techniques are experimentally validated on a custom prototype SEA.
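As a rough illustration of model reference adaptive control in the time domain (a sketch only, assuming a hypothetical first-order torque plant rather than the paper's SEA model; all gains are made up), an MIT-rule update adapts a feedforward gain so the plant torque tracks a reference model's response:

```python
import numpy as np

# Hypothetical first-order torque plant: tau' = -a*tau + b*u,
# reference model: tau_m' = -a*tau_m + b_m*r (same pole, different gain).
a, b, b_m = 4.0, 2.0, 2.0
dt, gamma = 0.001, 1.0
theta, tau, tau_m = 0.0, 0.0, 0.0

for k in range(20000):
    r = np.sin(0.5 * k * dt) + 1.0        # persistently exciting command
    u = theta * r                         # adaptive feedforward control
    tau   += dt * (-a * tau + b * u)      # plant step (Euler integration)
    tau_m += dt * (-a * tau_m + b_m * r)  # reference-model step
    e = tau - tau_m                       # torque tracking error
    theta += dt * (-gamma * e * r)        # MIT-rule gradient adaptation

# theta should approach the matching gain b_m / b = 1.0
```

As in the paper's approach, no prior knowledge of the plant gain is needed; adaptation is driven only by the tracking error against the reference model.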
WSOM | 2016
Benjamin Kramer; Dylan P. Losey; Marcia K. O’Malley
An increase in the prevalence of endovascular surgery requires a growing number of proficient surgeons. Current endovascular surgeon evaluation techniques are subjective and time-consuming; as a result, there is a demand for an objective and automated evaluation procedure. Leveraging reliable movement metrics and tool-tip data acquisition, we here use neural network techniques such as learning vector quantization (LVQ) and self-organizing maps (SOMs) to identify the mapping between surgeons’ motion data and imposed rating scales. Using LVQs, only 50% testing accuracy was achieved. SOM visualization of this inadequate generalization, however, highlights limitations of the present rating scale and sheds light upon the differences between traditional skill groupings and neural network clusters. In particular, our SOM clustering both exhibits more truthful segmentation and demonstrates which metrics are most indicative of surgeon ability, providing an outline for more rigorous evaluation strategies.
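A minimal LVQ1 implementation conveys the classification side of this pipeline (a sketch with made-up data, not the study's motion metrics or rating scale): each prototype is attracted toward same-class samples and repelled from other-class samples.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
    """LVQ1: move the nearest prototype toward samples of its own class
    and away from samples of other classes."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))  # nearest prototype
            step = lr * (x - P[i])
            P[i] += step if proto_labels[i] == label else -step
    return P

def lvq1_predict(X, P, proto_labels):
    """Label each sample with the class of its nearest prototype."""
    return [proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))] for x in X]
```

In this setting each row of `X` would hold one surgeon's movement metrics and `y` the imposed skill rating; the SOM analysis in the paper addresses the case where those ratings themselves are suspect.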
IEEE Transactions on Robotics | 2018
Dylan P. Losey; Marcia K. O’Malley
Robots are finding new applications where physical interaction with a human is necessary, such as manufacturing, healthcare, and social tasks. Accordingly, the field of physical human–robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that—despite provisions for the human to modify the robot's current trajectory—the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation, and perform simulations to test the arbitration parameters. Experimental results demonstrate reduction in the human's effort and improvement in the movement quality when compared to pHRI with impedance control alone.
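The core idea, reshaping the future desired trajectory in response to an applied force, can be sketched in a few lines (hedged: the paper derives the deformation shape from a constrained optimization, whereas the raised-cosine weighting and gain `mu` below are simple stand-ins):

```python
import numpy as np

def deform_trajectory(traj, u_h, mu=0.1):
    """Toy trajectory deformation: the human force u_h, applied at the
    current waypoint, reshapes the remaining desired waypoints with a
    smoothly decaying weight, so the deformation is largest now and
    vanishes at the end of the horizon."""
    traj = np.asarray(traj, dtype=float)
    n = len(traj)
    # raised-cosine bump: weight 1 at the current waypoint, 0 at the end
    w = 0.5 * (1 + np.cos(np.linspace(0, np.pi, n)))
    return traj + mu * np.outer(w, u_h)   # each waypoint shifts along u_h
```

The arbitration parameter `mu` controls how strongly a push reshapes the desired trajectory, separate from the impedance controller that governs the actual trajectory.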
Applied Mechanics Reviews | 2018
Dylan P. Losey; Craig G. McDonald; Edoardo Battaglia; Marcia K. O'Malley
As robotic devices are applied to problems beyond traditional manufacturing and industrial settings, we find that interaction between robots and humans, especially physical interaction, has become a fast developing field. Consider the application of robotics in healthcare, where we find telerobotic devices in the operating room facilitating dexterous surgical procedures, exoskeletons in the rehabilitation domain as walking aids and upper-limb movement assist devices, and even robotic limbs that are physically integrated with amputees who seek to restore their independence and mobility. In each of these scenarios, the physical coupling between human and robot, often termed physical human–robot interaction (pHRI), facilitates new human performance capabilities and creates an opportunity to explore the sharing of task execution and control between humans and robots. In this review, we provide a unifying view of human and robot sharing task execution in scenarios where collaboration and cooperation between the two entities are necessary, and where the physical coupling of human and robot is a vital aspect. We define three key themes that emerge in these shared control scenarios, namely, intent detection, arbitration, and feedback. First, we explore methods for how the coupled pHRI system can detect what the human is trying to do, and how the physical coupling itself can be leveraged to detect intent. Second, once the human intent is known, we explore techniques for sharing and modulating control of the coupled system between robot and human operator. Finally, we survey methods for informing the human operator of the state of the coupled system, or the characteristics of the environment with which the pHRI system is interacting.
At the conclusion of the survey, we present two case studies that exemplify shared control in pHRI systems, and specifically highlight the approaches used for the three key themes of intent detection, arbitration, and feedback for applications of upper limb robotic rehabilitation and haptic feedback from a robotic prosthesis for the upper limb. [DOI: 10.1115/1.4039145]
Human-Robot Interaction | 2018
Andrea Bajcsy; Dylan P. Losey; Marcia K. O'Malley; Anca D. Dragan
We focus on learning robot objective functions from human guidance: specifically, from physical corrections provided by the person while the robot is acting. Objective functions are typically parametrized in terms of features, which capture aspects of the task that might be important. When the person intervenes to correct the robot's behavior, the robot should update its understanding of which features matter, how much, and in what way. Unfortunately, real users do not provide optimal corrections that isolate exactly what the robot was doing wrong. Thus, when receiving a correction, it is difficult for the robot to determine which features the person meant to correct, and which features were changed unintentionally. In this paper, we propose to improve the efficiency of robot learning during physical interactions by reducing unintended learning. Our approach allows the human-robot team to focus on learning one feature at a time, unlike state-of-the-art techniques that update all features at once. We derive an online method for identifying the single feature which the human is trying to change during physical interaction, and experimentally compare this one-at-a-time approach to the all-at-once baseline in a user study. Our results suggest that users teaching one-at-a-time perform better, especially in tasks that require changing multiple features.
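The one-feature-at-a-time update can be caricatured in a few lines (a sketch, not the paper's online estimator: the largest-change heuristic, step size, and cost-style sign convention are all assumptions made here for illustration):

```python
import numpy as np

def one_at_a_time_update(theta, phi_before, phi_after, alpha=0.5):
    """Toy one-feature-at-a-time update: when a physical correction
    changes the trajectory's feature counts, attribute the correction
    only to the single feature that changed the most, rather than
    updating every feature weight at once."""
    delta = np.asarray(phi_after) - np.asarray(phi_before)
    i = np.argmax(np.abs(delta))        # the feature the human likely meant
    theta = np.array(theta, dtype=float)
    theta[i] -= alpha * delta[i]        # cost-style gradient update on it alone
    return theta
```

An all-at-once baseline would instead subtract `alpha * delta` from every weight, which is exactly the unintended learning the paper seeks to reduce.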
Applied Mechanics Reviews | 2018
Dylan P. Losey; Craig G. McDonald; Edoardo Battaglia; Marcia K. O'Malley
In their discussion article [1] on our review paper [2], Professors James Schmiedeler and Patrick Wensing have provided an insightful and informative perspective of the roles of intent detection, arbitration, and communication as three pillars of a framework for the implementation of shared control in physical human–robot interaction (pHRI). The authors both have significant expertise and experience in robotics, bipedal walking, and robotic rehabilitation. Their commentary introduces commonalities between the themes of the review paper and issues in locomotion with the aid of an exoskeleton or lower-limb prostheses, and presents several important topics that warrant further exploration. These include mechanical design as it pertains to the physical coupling between human and robot, modeling the human to improve intent detection and the arbitration of control, and finite-state machines as an approach for implementation. In this closure, we provide additional thoughts and discussion of these topics as they relate to pHRI. We agree that mechanical design is an important consideration when developing robots for tasks that involve shared control and physical human–robot interaction. In particular, robots that are working in close proximity to humans should be lightweight and compliant, so that—if an unexpected collision occurs—the human is not injured. Series elastic actuators (SEAs) have accordingly emerged as a desirable design element for pHRI. SEAs incorporate a compliant element between the actuator and the load, which beneficially reduces the robot’s output impedance across the frequency spectrum. Interestingly, the mechanical design of SEAs contributes to all three aspects of shared control. By measuring the displacement of the compliant element, we can use SEAs to determine and control how much force the human is applying, which lends itself to both intent detection and communication. 
Moreover, by changing the position of the actuator as the human interacts, we can adjust the perceived stiffness of the SEA: this allows us to adjust the arbitration between human and robot. Recently, our research group has focused on determining the range of stiffnesses that an SEA can safely render to a human user, as well as developing control strategies to augment this range for both feedback and arbitration [4,5]. In summary, SEAs are an example of effective mechanical design for shared control applications, and—by their nature—SEAs enable intent detection, arbitration, and communication for pHRI. Schmiedeler and Wensing bring up the importance of modeling, particularly as it is useful to intent detection and arbitration. We strongly agree and would like to emphasize here how physical modeling of the human, robot, and the interface between them can bring about improvements in the state-of-the-art for almost all of the areas discussed. Model-based control of rigid robotic manipulators is foundational to the field and has been extended nicely to more flexible robots intended to work alongside humans [6]. As we experience a shift in wearable robotics from rigid to more soft and flexible designs, the challenges of physical modeling seem to expand at the pace of design innovation. Whether the application involves flexible cable-based actuation or soft pneumatic actuation, accurate and robust physical modeling of the actuation and mechanical design is a necessary step in our development of controlled physical interactions between human and robot that are safe, reliable, and effective. On the human side of modeling, new tools such as OPENSIM [7] facilitate modeling of the biomechanics of the musculoskeletal structure, opening up the black box that connects externally measured kinematic and kinetic data with internal muscle and joint loading and even individual muscle excitations as could be measured through electromyography.
It is the authors’ hope that the field will find such models of the human neuromusculoskeletal system increasingly useful in detecting difficult-to-measure variables of human intent, experimentally validating existing model-based approaches to wearable robot design, and increasing the specificity of the regulation of human effort during arbitration for applications such as rehabilitation. Schmiedeler and Wensing have also pointed out that finite-state machines can be leveraged to detect the human’s intent or to change the arbitration during shared control. This is especially true when the human’s intent—or more generally, the human’s objective—belongs to a discrete set of possible objectives. In work by Javdani et al. [8], the human’s objective is a goal position, and the robot has a belief over the space of possible goals. As the human takes actions toward their desired goal, the robot updates its belief, and takes actions to maximize the robot’s expected reward. Later works considered the effects that robot actions can have on the human’s objective: if the human is willing to adapt, the robot can take actions to convince the human that their current goal is suboptimal, and then cause the human to switch to the optimal goal. Knowing that the human has a discrete set of possible intents makes these problems tractable and allows us to implement finite-state machines to learn the human’s intent in real time. Finite-state machines can also be used to switch between different levels of autonomy—but we must be careful to ensure that these changes do not destabilize the system. Although finite-state machines are a reasonable starting point, moving forward we expect that shared control systems for pHRI will increasingly work in continuous intent and arbitration spaces.
For example, the human may be happy with the robot’s goal position, but unhappy with the robot’s trajectory.
Manuscript received January 23, 2018; final manuscript received January 23, 2018; published online February 20, 2018. Editor: Harry Dankowicz.
International Conference on Robotics and Automation | 2017
Dylan P. Losey; Marcia K. O'Malley
Rigid haptic devices enable humans to physically interact with virtual environments, and the range of impedances that can be safely rendered using these rigid devices is quantified by the Z-Width metric. Series elastic actuators (SEAs) similarly modulate the impedance felt by the human operator when interacting with a robotic device, and, in particular, the robot's perceived stiffness can be controlled by changing the elastic element's equilibrium position. In this paper, we explore the K-Width of SEAs, while specifically focusing on how discretization inherent in the computer-control architecture affects the system's passivity. We first propose a hybrid model for a single degree-of-freedom (DoF) SEA based on prior hybrid models for rigid haptic systems. Next, we derive a closed-form bound on the K-Width of SEAs that is a generalization of known constraints for both rigid haptic systems and continuous time SEA models. This bound is first derived under a continuous time approximation, and is then numerically supported with discrete time analysis. Finally, experimental results validate our finding that large pure masses are the most destabilizing operator in human-SEA interactions, and demonstrate the accuracy of our theoretical K-Width bound.
IEEE International Conference on Biomedical Robotics and Biomechatronics | 2016
Dylan P. Losey; Laura H. Blumenschein; Marcia K. O'Malley
There has been significant research aimed at leveraging programmable robotic devices to provide haptic assistance or augmentation to a human user so that new motor skills can be trained efficiently and retained long after training has concluded. The success of these approaches has been varied, and retention of skill is typically not significantly better for groups exposed to these controllers during training. These findings point to a need to incorporate a more complete understanding of human motor learning principles when designing haptic interactions with the trainee. Reward-based reinforcement has been studied for its role in improving retention of skills. Haptic guidance, which assists a user to complete a task, and error augmentation, which exaggerates error in order to enhance feedback to the user, have been shown to be beneficial for training depending on the task difficulty, subject ability, and task type. In this paper, we combine the presentation of reward-based reinforcement with these robotic controllers to evaluate their impact on retention of motor skill in a visual rotation task with tunable difficulty using either fixed or moving targets. We found that with the reward-based feedback paradigm, both haptic guidance and error augmentation led to better retention of the desired visuomotor offset during a simple task, while during a more complex task, only subjects trained with haptic guidance demonstrated performance superior to those trained without a controller.
IEEE International Conference on Biomedical Robotics and Biomechatronics | 2016
Dylan P. Losey; Craig G. McDonald; Marcia K. O'Malley
Many robots are composed of interchangeable modular components, each of which can be independently controlled, and collectively can be disassembled and reassembled into new configurations. When assembling these modules into an open kinematic chain, there are some discrete choices dictated by the module geometry; for example, the order in which the modules are placed, the axis of rotation of each module with respect to the previous module, and/or the overall shape of the assembled robot. Although it might be straightforward for a human user to provide this information, there is also a practical benefit in the robot autonomously identifying these unknown, discrete forward kinematics. To date, a variety of techniques have been proposed to identify unknown kinematics; however, these methods cannot be directly applied during situations where we seek to identify the correct model amid a discrete set of options. In this paper, we introduce a method specifically for finding discrete robot kinematics, which relies on collision detection, and is inspired by the biological concepts of body schema and evolutionary algorithms. Under the proposed method, the robot maintains a population of possible models, stochastically identifies a motion which best distinguishes those models, and then performs that motion while checking for a collision. Models which correctly predicted whether a collision would occur produce candidate models for the next iteration. Using this algorithm during simulations with a Baxter robot, we were able to correctly determine the order of the links in 84% of trials while exploring around 0.01% of all possible models, and we were able to correctly determine the axes of rotation in 94% of trials while exploring less than 0.1% of all possible models.
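The identification loop can be sketched abstractly (the `predict` and `observe` interfaces below are hypothetical stand-ins, not the paper's implementation): maintain a population of candidate models, execute the sampled motion on which the candidates most disagree about collision, and keep only the models whose prediction matched the observation.

```python
import random

def identify_model(candidates, predict, observe, motions, rng=random.Random(0)):
    """Toy collision-based model identification: predict(model, motion)
    returns the model's predicted collision outcome, observe(motion)
    executes the motion and returns the true outcome."""
    population = list(candidates)
    while len(population) > 1:
        # stochastically search for a motion that distinguishes the models
        motion = max(rng.sample(motions, k=min(5, len(motions))),
                     key=lambda m: len({predict(c, m) for c in population}))
        if len({predict(c, motion) for c in population}) == 1:
            break                      # no sampled motion tells the models apart
        outcome = observe(motion)      # perform the motion, check for collision
        population = [c for c in population
                      if predict(c, motion) == outcome]   # survivors only
    return population
```

Because each informative motion eliminates every model that predicted the wrong outcome, only a small fraction of the discrete model space ever needs to be considered, consistent with the small search fractions reported above.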