
Publication


Featured research published by Rodney Dockter.


International Conference on Robotics and Automation | 2015

Practical, stretchable smart skin sensors for contact-aware robots in safe and collaborative interactions

John J. O'Neill; Jason Lu; Rodney Dockter; Timothy M. Kowalewski

Safe, intuitive human-robot interaction requires that robots intelligently interface with their environments, ideally sensing and localizing physical contact across their link surfaces. We introduce a stretchable smart skin sensor that provides this function. Stretchability allows it to conform to arbitrary robotic link surfaces. It senses contact over nearly the entire surface, localizes the contact position of a typical finger touch continuously over its entire surface (RMSE = 7.02 mm for a 14.7 cm × 14.7 cm area), and provides an estimate of the contact force. Our approach exclusively employs stretchable, flexible materials, resulting in skin strains of up to 150%. We exploit novel carbon nanotube elastomers to create a two-dimensional potentiometer surface. Finite element simulations validate a simplified polynomial surface model to enable real-time processing on a basic microcontroller with no supporting electronics. Using only five electrodes, the skin can be scaled up to arbitrary sizes without needing additional electrodes. We designed, implemented, calibrated, and tested a prototype smart skin as a tactile sensor on a custom medical robot for sensing unexpected physical interactions. We experimentally demonstrate its utility in collaborative robotic applications by showing its potential to enable safer, more intuitive human-robot interaction.
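
The real-time localization rests on evaluating a simplified polynomial surface model on a basic microcontroller. As a rough illustration of that idea (not the authors' actual model), the sketch below fits a least-squares polynomial map from five electrode voltages to an (x, y) contact position; the five-electrode count and the 14.7 cm × 14.7 cm area come from the abstract, while the quadratic feature map and synthetic calibration data are assumptions.

```python
import numpy as np

# Illustrative sketch only (not the authors' model): fit a polynomial surface
# that maps five electrode voltages to an (x, y) contact position, then
# localize a touch with one cheap matrix-vector product per sample, which is
# what makes microcontroller-only processing plausible.

def poly_features(v):
    """Bias + linear + pairwise-quadratic features of an electrode vector."""
    v = np.asarray(v, dtype=float)
    quad = np.outer(v, v)[np.triu_indices(len(v))]   # v_i * v_j terms, i <= j
    return np.concatenate(([1.0], v, quad))

rng = np.random.default_rng(0)
voltages = rng.uniform(0.0, 3.3, size=(400, 5))      # hypothetical ADC readings
Phi = np.array([poly_features(v) for v in voltages]) # (400, 21) design matrix

# Stand-in ground truth: touch positions (mm) as an unknown smooth function
# of the voltages, plus measurement noise. Real calibration would press the
# skin at known grid positions and record the electrode readings.
hidden = rng.normal(size=(Phi.shape[1], 2))
positions = Phi @ hidden + rng.normal(scale=0.5, size=(400, 2))

W, *_ = np.linalg.lstsq(Phi, positions, rcond=None)  # calibration fit
xy_hat = poly_features(voltages[0]) @ W              # runtime: one touch estimate
rmse = np.sqrt(np.mean((Phi @ W - positions) ** 2))
print(f"first estimate {xy_hat}, calibration RMSE {rmse:.2f} mm")
```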


Intelligent Robots and Systems | 2014

A fast, low-cost, computer vision approach for tracking surgical tools

Rodney Dockter; Robert M. Sweet; Timothy M. Kowalewski

Given the rise in surgeries performed with surgical robots and associated robotics research efforts, tool tracking methods have the potential to provide quantitative feedback on surgical performance and to establish absolute tool tracking that helps advance surgical robotics research. We have created a platform-agnostic method for low-cost tracking of surgical tool shafts in Cartesian space in near real time. We employ a joint Hough Transform-Geometric Constraint approach to locate the tool tips in the stereo camera channels independently. Cartesian coordinates are registered using a custom polynomial depth-disparity model. The algorithm was developed using a low-cost experimental webcam setup, evaluated using a da Vinci surgical endoscope, and benchmarked for 3D tracking accuracy and computational speed. The system locates the tool tip in 3D space with an average accuracy of 3.05 mm at 25.86 frames per second using the webcam setup. For the endoscope setup, the algorithm has an average tracking accuracy of 8.68 mm in 3D and 1.88 mm in 2D with an average frame rate of 26.9 FPS. The algorithm also demonstrated successful tracking of tools in captured video from a real surgical procedure.
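
As a rough illustration of the pipeline described above (an assumed reconstruction, not the authors' implementation): a probabilistic Hough transform proposes candidate shaft lines in each stereo channel, a simplified geometric constraint picks the most shaft-like line, and a fitted polynomial maps horizontal disparity to depth. The constraint scoring and the depth-polynomial coefficients below are placeholders.

```python
import cv2
import numpy as np

# Hypothetical sketch of the described pipeline, not the authors' code.
DEPTH_POLY = np.array([0.002, -0.9, 120.0])  # placeholder depth(disparity) fit

def find_tool_tip(gray):
    """Return the lower endpoint of the strongest shaft-like line, or None."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=10)
    if lines is None:
        return None
    # Simplified geometric constraint: prefer long, steep lines, since a
    # tool shaft typically enters the view roughly top-to-bottom.
    def score(l):
        x1, y1, x2, y2 = l[0]
        length = np.hypot(x2 - x1, y2 - y1)
        steepness = abs(y2 - y1) / (abs(x2 - x1) + 1e-6)
        return length * min(steepness, 5.0)
    x1, y1, x2, y2 = max(lines, key=score)[0]
    return (x2, y2) if y2 > y1 else (x1, y1)   # tip = lower endpoint

def tip_3d(left_gray, right_gray):
    """Locate the tip in both channels and estimate depth from disparity."""
    tl, tr = find_tool_tip(left_gray), find_tool_tip(right_gray)
    if tl is None or tr is None:
        return None
    disparity = tl[0] - tr[0]                  # horizontal pixel offset
    z = np.polyval(DEPTH_POLY, disparity)      # polynomial depth-disparity model
    return (tl[0], tl[1], z)
```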


Journal of Medical Devices, Transactions of the ASME | 2013

A Low-Cost Computer Vision Based Approach for Tracking Surgical Robotic Tools

Rodney Dockter; Timothy M. Kowalewski

The number of Robot-Assisted Minimally Invasive Surgery (RMIS) procedures has grown immensely in recent years. The number of surgeries performed worldwide with the da Vinci (Intuitive Surgical, Sunnyvale, CA) was under 50,000 in 2005 and grew to more than 350,000 by 2011 [1]. RMIS procedures provide improved patient recovery time and reduced trauma due to smaller incisions relative to traditional open procedures. Given the rise in RMIS procedures, several organizations and companies have made efforts to develop training and certification criteria for the da Vinci robot. Mimic Technologies (Mimic, Seattle, WA) and Intuitive Surgical both produce virtual reality (VR) training simulators based on the da Vinci system. The Fundamentals of Robotic Surgery (FRS) consortium was created with the goal of developing a standardized, high-stakes certification exam for robotic surgery [1]. While still in development, this exam will consist of seven tasks carried out on a small physical module with an actual RMIS system. Each task is evaluated against criteria including completion time, total tool path length, and economy of motion, a measurement of deviation from an 'ideal' path. All of these metrics can benefit greatly from an accurate, inexpensive, and modular tool tracking system that requires no modification to the existing robot. While the da Vinci uses joint kinematics to calculate the tool tip position and movement internally, this data is not openly available to users. Even if it were open to researchers, the accuracy of kinematic calculations of end-effector position suffers from compliance in the joints and links of the robot as well as finite uncertainties in the sensors. To find an accurate, available, and low-cost alternative for tool tip localization, we have developed a computer vision based design for surgical tool tracking. Vision systems have the added benefit of being low cost, with typical high-resolution webcams costing around $50; the stereo setup for this design cost around $120. Chmarra et al. reviewed the available non-robotic laparoscopic tracking devices in 2007 and discussed four main technologies for tracking: mechanical, visual, ultrasonic, and electromagnetic [2]. From this work, it became apparent that only visual or ultrasound-based methods would be feasible for tracking robotic tools. The goal of our research was to develop a tracking system that could accompany the FRS module or be used separately during real procedures to gauge performance post-procedure using the da Vinci camera feed. The system was designed to use only a camera setup and a computer loaded with the computer vision software. Given that the bandwidth of surgical tool motion has been experimentally determined to fall below 8 Hz [3], we adopt 16 Hz, i.e., 16 frames per second (FPS), as the minimal frame rate required to accurately capture surgical tool motion data.
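
The 16 FPS requirement is a direct Nyquist-rate argument: sampling a signal band-limited to 8 Hz requires at least 2 × 8 = 16 samples per second. A trivial sketch of that check against the frame rates reported for the IROS 2014 tracking paper above:

```python
# Nyquist-rate check for the frame-rate requirement discussed above.
TOOL_BANDWIDTH_HZ = 8.0                      # tool motion bandwidth, from [3]
required_fps = 2.0 * TOOL_BANDWIDTH_HZ       # sampling theorem: f_s >= 2 * f_max

# Measured frame rates from the companion tracking paper (webcam, endoscope).
for setup, fps in [("webcam", 25.86), ("endoscope", 26.9)]:
    margin = fps / required_fps
    print(f"{setup}: {fps:.2f} FPS, {margin:.2f}x the 16 FPS minimum")
```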


Computer Assisted Radiology and Surgery | 2018

Blended shared control utilizing online identification

Trevor K. Stephens; Nathan J. Kong; Rodney Dockter; John J. O’Neill; Robert M. Sweet; Timothy M. Kowalewski

Purpose: Surgical robots are increasingly common, yet routine tasks such as tissue grasping remain potentially harmful, with high occurrences of tissue crush injury due to the lack of force feedback from the grasper. This work investigates whether a blended shared control framework that uses real-time identification of the grasped object as part of the feedback may help address the prevalence of tissue crush injury in robotic surgery. Methods: This work tests the proposed shared control framework and tissue identification algorithm on a custom surrogate surgical robotic grasping setup. The scheme uses identification of the grasped object as part of the feedback to regulate to a desired force. The blended shared control is arbitrated between the human and an implicit force controller based on a computed confidence in the identification of the grasped object. The online identification is performed using least squares based on a nonlinear tissue model. Testing was performed on five silicone tissue surrogates. Twenty grasps were conducted, half under manual control and half with the proposed blended shared control, to test the efficacy of the control scheme. Results: The identification method achieved an average of 95% accuracy across all time samples of all tissue grasps using full leave-grasp-out cross-validation, with an average convergence time of 8.1 ± 6.3 ms across all training grasps for all tissue surrogates. Additionally, blended shared control reduced the peak forces induced during grasping for all tissue surrogates. Conclusion: The blended shared control using online identification regulated grasping forces to the desired target force more successfully than manual control. This preliminary work on a surrogate setup for surgical grasping merits further investigation with real surgical tools and real human tissues.
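
The arbitration step above blends the surgeon's command with an implicit force controller according to confidence in the grasped-object identification. Below is a minimal sketch of that idea, assuming a hypothetical quadratic tissue model, a recursive least-squares identifier, and a residual-based confidence; the paper's actual nonlinear tissue model and confidence computation are not reproduced here.

```python
import numpy as np

# Minimal sketch of blended shared control with online identification.
# The quadratic tissue model, RLS update, and residual-based confidence
# are illustrative assumptions, not the paper's formulation.

class GraspController:
    def __init__(self, target_force):
        self.target = target_force
        self.theta = np.zeros(2)            # tissue model parameters
        self.P = np.eye(2) * 1e3            # RLS covariance
        self.residual = np.inf              # no confidence before any data

    def identify(self, displacement, force):
        """Recursive least squares on force = theta . [x, x^2]."""
        phi = np.array([displacement, displacement ** 2])
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)
        self.residual = force - phi @ self.theta
        self.theta += k * self.residual
        self.P -= np.outer(k, phi @ self.P)

    def command(self, u_human, measured_force):
        """Blend human input with an implicit force controller by confidence."""
        alpha = np.exp(-abs(self.residual))        # confidence in [0, 1]
        u_auto = 0.5 * (self.target - measured_force)  # simple force regulation
        return (1.0 - alpha) * u_human + alpha * u_auto

# Each control cycle: identify from the latest (displacement, force) sample,
# then blend the surgeon's command with the force controller.
ctrl = GraspController(target_force=2.0)    # N, hypothetical target
ctrl.identify(displacement=0.01, force=0.5)
u = ctrl.command(u_human=0.3, measured_force=0.5)
```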


Intelligent Robots and Systems | 2017

3D bioprinting directly onto moving human anatomy

John J. O'Neill; Reed A. Johnson; Rodney Dockter; Timothy M. Kowalewski



Computer Assisted Radiology and Surgery | 2017

The minimally acceptable classification criterion for surgical skill: intent vectors and separability of raw motion data

Rodney Dockter; Thomas S. Lendvay; Robert M. Sweet; Timothy M. Kowalewski



Frontiers in Biomedical Devices, 2017 Design of Medical Devices Conference (DMD 2017) | 2017

Toward Inkjet Additive Manufacturing Directly Onto Human Anatomy

Reed A. Johnson; John J. O’Neill; Rodney Dockter; Timothy M. Kowalewski


Sensors | 2018

Stretchable, Flexible, Scalable Smart Skin Sensors for Robotic Position and Force Estimation

John J. O’Neill; Jason Lu; Rodney Dockter; Timothy M. Kowalewski



EuroVis 2017 - Short Papers | 2017

Trajectory Mapper: Interactive Widgets and Artist-Designed Encodings for Visualizing Multivariate Trajectory Data

Devin Lange; Francesca Samsel; Ioannis Karamouzas; Stephen J. Guy; Rodney Dockter; Timothy M. Kowalewski; Daniel F. Keefe


Journal of Medical Devices, Transactions of the ASME | 2014

A Framework for Calibrating and Benchmarking Computer Vision Algorithms in Surgical Robotics

Rodney Dockter; Timothy M. Kowalewski


Collaboration


Dive into Rodney Dockter's collaboration.

Top Co-Authors

Jason Lu

University of Minnesota
