Publication


Featured research published by Simon Leonard.


Science Translational Medicine | 2016

Supervised autonomous robotic soft tissue surgery

Azad Shademan; Ryan Decker; Justin Opfermann; Simon Leonard; Axel Krieger; Peter C.W. Kim

Supervised autonomous in vivo robotic surgery is possible on soft tissues and outperforms standard clinical techniques in a dynamic preclinical environment.

Hands-free: The operating room may someday be run by robots, with surgeons overseeing their moves. Shademan et al. designed a “Smart Tissue Autonomous Robot,” or STAR, which consists of tools for suturing as well as fluorescent and 3D imaging, force sensing, and submillimeter positioning. With all of these components, the authors were able to use STAR for soft tissue surgery—a difficult task for a robot given tissue deformity and mobility. Surgeons tested STAR against manual surgery, laparoscopy, and robot-assisted surgery for porcine intestinal anastomosis, and found that the supervised autonomous surgery offered by the STAR system was superior.

The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon’s manual capability. Autonomous robotic surgery—removing the surgeon’s hands—promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis.
We compared metrics of anastomosis—including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses—between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques.
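One of the consistency metrics above is based on suture spacing. A minimal sketch of how such a metric could be computed (function and variable names are hypothetical, not from the paper's code):

```python
import math

def suture_spacing_stats(stitches):
    """Mean and standard deviation of distances between consecutive
    stitches. `stitches` is a list of (x, y) positions in millimetres,
    ordered along the suture line; a smaller spread means more
    consistent spacing."""
    gaps = [math.dist(a, b) for a, b in zip(stitches, stitches[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return mean, math.sqrt(var)

# Perfectly even 3 mm stitches give zero spread.
mean, sd = suture_spacing_stats([(0, 0), (3, 0), (6, 0), (9, 0)])
```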


IEEE Transactions on Robotics | 2010

Path Planning for Improved Visibility Using a Probabilistic Road Map

Matthew A. Baumann; Simon Leonard; Elizabeth A. Croft; James J. Little

This paper focuses on the challenges of vision-based motion planning for industrial manipulators. Our approach is aimed at planning paths that are within the sensing and actuation limits of industrial hardware and software. Building on recent advances in path planning, our planner augments probabilistic road maps with vision-based constraints. The resulting planner finds collision-free paths that simultaneously avoid occlusions of an image target and keep the target within the field of view of the camera. The planner can be applied to eye-in-hand visual-target-tracking tasks for manipulators that use point-to-point commands with interpolated joint motion.


IEEE Transactions on Biomedical Engineering | 2014

Smart Tissue Anastomosis Robot (STAR): A Vision-Guided Robotics System for Laparoscopic Suturing

Simon Leonard; Kyle Wu; Yonjae Kim; Axel Krieger; Peter C.W. Kim

This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof-of-concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool that is attached to a custom-made motor stage and the STAR supervisory control architecture that enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that automatically computes equally-spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent, and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using a manual Endo360®, and nine times faster than surgeons using manual laparoscopic tools.
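The automatic mode's equally-spaced stitch computation amounts to resampling the incision contour at equal arc-length intervals. A minimal sketch of that idea, assuming a 2D polyline contour (names are hypothetical, not the authors' code):

```python
import math

def equally_spaced_stitches(contour, n):
    """Place n >= 2 stitch targets at equal arc-length intervals along a
    polyline `contour` (list of (x, y) points tracing the incision).
    The first and last stitches land on the contour endpoints."""
    # Cumulative arc length at each contour vertex.
    cum = [0.0]
    for a, b in zip(contour, contour[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1]
    stitches = []
    for i in range(n):
        s = total * i / (n - 1)          # target arc length for stitch i
        # Find the segment containing s and interpolate within it.
        j = max(k for k in range(len(cum)) if cum[k] <= s)
        j = min(j, len(contour) - 2)
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0 else (s - cum[j]) / seg
        ax, ay = contour[j]
        bx, by = contour[j + 1]
        stitches.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return stitches
```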


Intelligent Robots and Systems | 2012

Augmented reality environment with virtual fixtures for robotic telemanipulation in space

Tian Xia; Simon Leonard; Anton Deguet; Louis L. Whitcomb; Peter Kazanzides

This paper presents an augmented reality framework, implemented on the master console of a modified da Vinci® surgical robot, that enables the operator to design and implement assistive virtual fixtures during teleoperation. Our specific goal is to facilitate teleoperation with large time delays, such as the delay of several seconds that occurs with ground-based control of robotic systems in earth orbit. The virtual fixtures give immediate visual feedback and motion guidance to the operator, while the remote slave performs motions consistent with those constraints. This approach is suitable for tasks in unstructured environments, such as servicing of existing on-orbit spacecraft that were not designed for servicing. We conducted a pilot study by teleoperating a remote slave robot for a thermal barrier blanket cutting task using virtual fixtures with and without time delay. The results show that virtual fixtures reduce the time required to complete the task while also eliminating significant manipulation errors, such as tearing the blanket. The improvement in performance is especially dramatic when a simulated time delay (4 seconds) is introduced.
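One common form of guidance virtual fixture attenuates operator motion orthogonal to a preferred direction (for example, a cut line). A minimal sketch of that idea, not the paper's implementation (names and the compliance parameter are hypothetical):

```python
import numpy as np

def guidance_fixture(v_cmd, direction, compliance=0.1):
    """Filter an operator velocity command through a guidance virtual
    fixture. Motion along the preferred `direction` passes through;
    motion orthogonal to it is scaled by `compliance`
    (0 = hard fixture, 1 = no fixture)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(v_cmd, dtype=float)
    v_along = np.dot(v, d) * d          # component along the fixture
    v_ortho = v - v_along               # component fighting the fixture
    return v_along + compliance * v_ortho
```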


International Conference on Robotics and Automation | 2008

Dynamic visibility checking for vision-based motion planning

Simon Leonard; Elizabeth A. Croft; James J. Little

An important problem in position-based visual servoing (PBVS) is to guarantee that a target will remain within the field of view for the duration of the task. In this paper, we propose a dynamic visibility checking algorithm that, given a parametrized trajectory of the camera, determines if an arbitrary 3D target will remain within the field of view. We reformulate this problem as the problem of determining if the 3D coordinates of the target collide with the frustum formed by the camera field of view during the camera trajectory. To solve this problem, our algorithm computes and compares the shortest distance between the target and the frustum with the length of the trajectory described by the target in the camera's coordinate frame. Furthermore, we demonstrate that our algorithm can be combined with path planning algorithms and, in particular, probabilistic roadmaps (PRM). Results suggest that our algorithm is computationally efficient even when the target moves in the vicinity of image borders. In simulations, we use our dynamic visibility checking algorithm in conjunction with a PRM to plan collision-free paths while providing the guarantee that a specific target will not leave the field of view.
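The per-pose containment test underlying the frustum formulation can be sketched as below. Note that the paper's algorithm goes further, bounding the target's travel against its shortest distance to the frustum to obtain a guarantee over the whole trajectory rather than testing sampled poses; this sketch only shows the single-pose check (field-of-view values are illustrative):

```python
import math

def in_frustum(p_cam, hfov_deg, vfov_deg, near=0.01, far=10.0):
    """True if point `p_cam` (x right, y down, z forward, expressed in
    the camera frame) lies inside the field-of-view frustum bounded by
    the near and far planes."""
    x, y, z = p_cam
    if not (near <= z <= far):
        return False
    # Frustum half-extents grow linearly with depth.
    half_w = z * math.tan(math.radians(hfov_deg) / 2.0)
    half_h = z * math.tan(math.radians(vfov_deg) / 2.0)
    return abs(x) <= half_w and abs(y) <= half_h
```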


International Conference on Robotics and Automation | 2013

Model-based telerobotic control with virtual fixtures for satellite servicing tasks

Tian Xia; Simon Leonard; Isha Kandaswamy; Amy A. Blank; Louis L. Whitcomb; Peter Kazanzides

Our goal is to develop new methods for telerobotic on-orbit servicing of spacecraft under ground-based supervisory control of human operators to perform tasks in the presence of uncertainty and telemetry time delay of several seconds. We propose a new delay tolerant control methodology, using virtual fixtures, hybrid position/force control, task frame formalism, and environment modeling, that is robust to modeling and registration errors. The task model is represented by graphical primitives and virtual fixtures on the teleoperation master and by a hybrid position/force controller on the slave robot. The virtual fixtures guide the operator through a model-based simulation of the task, and the goal of the slave controller is to reproduce this action (after a few seconds of delay) or, if measurements are not consistent with the models, to stop motion and alert the operator. This approach is suitable for tasks in unstructured environments, such as servicing of existing on-orbit spacecraft that were not designed for servicing. We introduce the overall control concept, its main components, and an example application in which the remote slave robot cuts the tape that secures a flap of multi-layer insulation over the access panel of a satellite mockup.
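Hybrid position/force control in the task frame is typically expressed with a diagonal selection matrix that assigns each axis to either position or force control. A minimal sketch of that idea (gains and names are hypothetical, not the paper's controller):

```python
import numpy as np

def hybrid_control(x_err, f_err, S, kp=100.0, kf=0.5):
    """One step of task-frame hybrid position/force control.

    `S` is the diagonal selection matrix: 1 selects position control
    for that task-frame axis, 0 selects force control. Returns a
    commanded velocity for each axis."""
    S = np.diag(S)
    I = np.eye(len(x_err))
    # Position-controlled axes track x_err; force-controlled axes track f_err.
    return S @ (kp * np.asarray(x_err)) + (I - S) @ (kf * np.asarray(f_err))
```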


Intelligent Robots and Systems | 2008

Occlusion-free path planning with a probabilistic roadmap

Matthew A. Baumann; Donna C. Dupuis; Simon Leonard; Elizabeth A. Croft; James J. Little

We present a novel algorithm for path planning that avoids occlusions of a visual target for an “eye-in-hand” sensor on an articulated robot arm. We compute paths using a probabilistic roadmap to avoid collisions between the robot and obstacles, while penalizing trajectories that do not maintain line-of-sight. The system determines the space from which line-of-sight is unimpeded to the target (the visible region). We assign penalties to trajectories within the roadmap proportional to the distance the camera travels while outside the visible region. Using Dijkstra's algorithm, we compute paths of minimal occlusion (maximal visibility) through the roadmap. In our experiments, we compare a shortest-distance path to the minimal-occlusion path and discuss the impact of the improved visibility.
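The minimal-occlusion search can be sketched as Dijkstra's algorithm over a roadmap whose edge weights are the occlusion penalties described above. A self-contained illustration (not the authors' code; graph representation is assumed):

```python
import heapq

def min_occlusion_path(graph, start, goal):
    """Dijkstra over a roadmap with occlusion penalties as edge weights.

    `graph[u]` is a list of (v, penalty) pairs, where `penalty` is the
    distance the camera travels outside the visible region on edge u-v
    (0 for a fully visible edge). Returns (path, total_penalty)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[goal]
```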


Proceedings of SPIE | 2013

Feasibility of near-infrared markers for guiding surgical robots

Azad Shademan; Matthieu F. Dumont; Simon Leonard; Axel Krieger; Peter C.W. Kim

Automating surgery using robots requires robust visual tracking. The surgical environment often has poor light conditions where several organs have similar visual appearances. In addition, the field of view might be occluded by blood or tissue. In this paper, the feasibility of near-infrared (NIR) fluorescent marking and imaging for vision-based robot control is studied. The NIR region of the spectrum has several useful properties including deep tissue penetration. We study the optical properties of a clinically-approved NIR fluorescent dye, indocyanine green (ICG), with different concentrations and quantify image positioning error of ICG marker when obstructed by tissue.


IEEE International Workshop on Haptic Audio Visual Environments and Games | 2009

Measuring intent in human-robot cooperative manipulation

Davide De Carli; Evan Hohert; Chris A. C. Parker; Susana Zoghbi; Simon Leonard; Elizabeth A. Croft; Antonio Bicchi

To effectively interact with people in a physically assistive role, robots will need to be able to cooperatively manipulate objects with a human partner. For example, it can be very difficult for an individual to manipulate a long or heavy object. An assistant can help to share the load, and improve the maneuverability of the object. Each partner can communicate objectives (e.g., move around an obstacle or put the object down) via non-verbal cues (e.g., moving the end of the object in a particular direction, changing speed, or tugging). Herein, non-verbal communication in a human-robot coordinated manipulation task is addressed using a small articulated robot arm equipped with a 6-axis wrist mounted force/torque sensor and joint angle encoders. The robot controller uses a Jacobian Transpose velocity PD control scheme with gravity compensation. To aid collaborative manipulation we implement a uniform impedance controller at the robot end-effector with an attractive force to a virtual path in the style of a cobot. Unlike a cobot, this path is recomputed online as a function of user input. In our present research, we utilize force/torque sensor measurements to identify intentional user communications specifying a change in the task direction. We consider the impact of path recomputation and the resulting robot haptic feedback on user physiological response.
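The attractive force toward the recomputed virtual path can be sketched as a spring-damper pulling the end-effector toward the nearest point on a path segment. A minimal illustration of that idea (gains and names are hypothetical, not the paper's controller):

```python
import numpy as np

def path_attraction_force(x, path_a, path_b, k=50.0, b=5.0, v=(0, 0, 0)):
    """Impedance-style attractive force pulling the end-effector at `x`
    toward the virtual path (the segment path_a -> path_b), with
    damping on the end-effector velocity `v`."""
    x, a, p_b, v = (np.asarray(q, dtype=float) for q in (x, path_a, path_b, v))
    ab = p_b - a
    t = np.clip(np.dot(x - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    nearest = a + t * ab                # closest point on the path segment
    return k * (nearest - x) - b * v    # spring toward path, damp velocity
```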


Canadian Conference on Computer and Robot Vision | 2004

Approximating the visuomotor function for visual servoing

Simon Leonard; Martin Jagersand

This paper introduces a new approach to visual servoing by learning to perform tasks such as centering. The system uses function approximation from reinforcement learning to learn the visuomotor function of a task, which relates actions to perceptual variations. The function model is linear and tile coding is used for generalization. The gradient-descent SARSA algorithm is used to determine the parameters. Experiments show that the system learns to center targets at different depths with stereo vision and fully reconfigures itself in the monocular case.
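A gradient-descent SARSA step with tile-coded features for a linear value model can be sketched as below. This is a generic one-dimensional illustration of the technique, not the paper's configuration (tiling sizes, gains, and names are hypothetical):

```python
def tile_features(s, n_tilings=4, n_tiles=8, lo=0.0, hi=1.0):
    """Indices of the active tile in each tiling for scalar state s.
    Each tiling is offset slightly, so nearby states share some tiles."""
    idx = []
    width = (hi - lo) / n_tiles
    for t in range(n_tilings):
        offset = t * width / n_tilings       # shift each tiling
        i = int((s - lo + offset) / width)
        idx.append(t * (n_tiles + 1) + min(i, n_tiles))
    return idx

def sarsa_update(w, s, a, r, s2, a2, alpha=0.1, gamma=0.9):
    """One gradient-descent SARSA step for a linear Q with tile-coded
    features. `w` maps (action, tile_index) -> weight; Q(s, a) is the
    sum of weights over the active tiles."""
    q = sum(w.get((a, i), 0.0) for i in tile_features(s))
    q2 = sum(w.get((a2, i), 0.0) for i in tile_features(s2))
    delta = r + gamma * q2 - q               # TD error
    for i in tile_features(s):
        w[(a, i)] = w.get((a, i), 0.0) + alpha * delta
    return w
```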

Collaboration

Dive into Simon Leonard's collaborations.

Top Co-Authors

Peter C.W. Kim (Children's National Medical Center)
Justin Opfermann (Children's National Medical Center)
Austin Reiter (Johns Hopkins University)
Masaru Ishii (Johns Hopkins University)
Ayushi Sinha (Johns Hopkins University)
Elizabeth A. Croft (University of British Columbia)