A Recurrent Neural Network Approach to Roll Estimation for Needle Steering
Maxwell Emerson, James M. Ferguson, Tayfun Efe Ertop, Margaret Rox, Josephine Granna, Michael Lester, Fabien Maldonado, Erin A. Gillaspie, Ron Alterovitz, Robert J. Webster III, Alan Kuntz
Department of Mechanical Engineering, Vanderbilt University; Department of Medicine and Thoracic Surgery, Vanderbilt University Medical Center; Department of Computer Science, University of North Carolina at Chapel Hill; Robotics Center and School of Computing, University of Utah
Steerable needles are a promising technology for delivering targeted therapies in the body in a minimally invasive fashion via controlled, actively steered insertions. These robotically actuated needles usually leverage an asymmetric tip [1] to take curved paths through the body, avoiding anatomical obstacles and homing in on a target (see Fig. 1, left). Methods such as duty cycling and sliding mode control enable safe, accurate, and automatic controlled steering of these needles to anatomical targets or along predetermined trajectories in the body [2, 3]. These controllers require knowledge of the full 6-degree-of-freedom (DOF) pose of the steerable needle's tip as it is steered through the body. To acquire this information for feedback during control, electromagnetic sensors can be embedded in the tip of the needle. However, these sensors typically fill the internal working channel of the needle, precluding the use of the needle for therapy delivery. External sensors, such as ultrasound and bi-plane fluoroscopy (a type of continuous X-ray), can sense the needle tip's position and heading (5 DOF) but are not able to sense its axial orientation (roll angle) [4]. Without full 6-DOF state measurement, an alternative method is needed to estimate the full state of the needle tip for effective steering.

Model-based observer methods have been developed to estimate the orientation of the needle during steering [5-8]. These methods work well when the system behaves similarly to the model they rely on. However, when unmodeled effects dominate the system dynamics, these methods can perform poorly. For flexible needles, effects such as unpredictable friction forces due to tissue interactions and long needle lengths create nonlinear torsional dynamics that are difficult to model accurately [4-6].

By contrast, model-free, data-driven approaches have recently attracted great research interest for state estimation during control in other domains [9].
However, the investigation of such methods has so far been relatively limited in needle steering, e.g., to predicting needle deflection behavior for set insertion depths [10]. In this work, we overcome the limitations present in model-based observers for needle steering by presenting an estimator that learns the behavior of the
Fig. 1. Left: Steerable needle inserted into a gelatin tissue simulant. Left inset: Kinematic diagram of the steerable needle model with unmodeled torsional compliance, which results in rotational lag between the base and the tip of the needle. Right: Block diagram showing our LSTM-based estimator in the control loop.

needle from a set of training insertions in a tissue phantom. Our method is capable of accurately estimating the needle's roll angle during new insertions and in tissues that differ from those it was trained on. Using a model-free representation that learns the nonlinear effects from the training data, we achieve accurate state estimation that generalizes to multiple types of tissue for a system subject to significant modeling errors.

Here, we propose a Long Short-Term Memory (LSTM)-based recurrent neural network (RNN) [11] to estimate the needle's roll angle during steering. Our LSTM-based network takes as input the sensed, partial state of the needle tip and recurrently estimates the needle's unsensed roll angle at each time step. We use the network to estimate the needle state in a sliding mode controller and demonstrate highly accurate steering in multiple mediums, including ex vivo ovine brain and ex vivo inflated porcine lung, significantly outperforming a traditional Extended Kalman Filter (EKF) estimation method that relies on a kinematic model which does not account for torsional effects in the system.

The key contributions of this paper are (i) an accurate, model-free, learning-based method for steerable needle roll-angle estimation and (ii) the integration of this method into sliding mode control for highly accurate steering in multiple mediums, including ex vivo tissue that the model was not trained on. As such, our learned method can be trained in advance in gelatin using an internal sensor, which can then be removed from the needle prior to the needle's clinical deployment in a patient. The needle can then be controlled using external partial sensing together with our learned method. In this way, our learning-based method overcomes a key limitation in needle steering, namely the requirement for bulky 6-DOF sensors embedded in the needle itself during clinical deployment. This opens the door for external needle state sensing, enabling accurate tip orientation estimation, and subsequently safe and accurate needle steering, in a way that does not interfere with the needle's ability to deliver therapy to the patient.
We use a kinematic, non-holonomic needle model [1] to define the state of a bevel-tip steerable needle as it moves through tissue. In this model, the needle's behavior is parameterized by its forward motion, transmitted from actuation at its base; the plane in which its bevel lies and in which it curves, changed by rotating the needle at its base; and the curvature κ of the arc the needle takes as it is inserted. The control inputs of this model are u_ℓ and u_θ, the needle-tip insertion velocity and angular velocity, respectively, as shown in Fig. 1, left inset. Most models assume the needle is infinitely rigid in torsion, such that u_α, the angular control velocity applied at the needle's base, is perfectly and immediately applied to the needle's tip. In reality, there is a lag in transmission of the rotational velocity applied at the actuator to the needle tip, i.e., u_α ≠ u_θ, an effect that is particularly pronounced for long and/or highly flexible needles. Our method overcomes this limitation by learning to estimate the tip angle θ so that the controller can accurately steer the needle.

To do so, we propose a learning-based recurrent neural network with the following layers: (i) input sequence layer (5 units), (ii) LSTM layer (30 units), (iii) fully-connected layer (30 units), and (iv) output regression layer (2 units). Our network takes as input the vector X = [p̂ η̂ sin α cos α]^T, where p̂ = (x̂ ŷ ẑ) is the 3-DOF sensed position, isotropically scaled by a predefined maximum workspace component (e.g., the needle's maximum insertion length in tissue) using min-max feature scaling. The 2-DOF sensed axis of the needle tip is given by η̂ = (η_x η_y η_z), representing the needle's heading (i.e., its orientation without the roll angle). The rotational actuator position at the base of the needle is given by α = ∫ u_α dt. The network then outputs the vector Y = [sin θ̃ cos θ̃]^T, where θ̃ is the estimated roll angle.
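As a concrete illustration, the recurrent data flow of such an estimator can be sketched in a few lines of numpy. This is not the authors' implementation: the weights are random (untrained), only the forward pass is shown, and, while the LSTM (30 units), fully-connected (30 units), and output (2 units) sizes follow the architecture described above, the flat 8-dimensional input layout (3 scaled position components, 3 heading components, sin α, cos α) is an assumption made for a self-contained sketch.

```python
import numpy as np

class RollEstimatorSketch:
    """Minimal numpy sketch of an LSTM roll-angle estimator.

    Random (untrained) weights: this illustrates the recurrent data
    flow and the sin/cos output parameterization, not a trained model.
    """

    def __init__(self, n_in=8, n_hidden=30, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        # Stacked weights for the input, forget, cell, and output gates,
        # acting on the concatenated [input, previous hidden] vector.
        self.W = rng.normal(0.0, s, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.Wfc = rng.normal(0.0, s, (n_hidden, n_hidden))   # FC layer
        self.bfc = np.zeros(n_hidden)
        self.Wout = rng.normal(0.0, s, (2, n_hidden))         # regression head
        self.bout = np.zeros(2)
        self.n_hidden = n_hidden

    def forward(self, X):
        """X: (T, n_in) input sequence -> (T,) estimated roll angles (rad)."""
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))
        thetas = []
        for x_t in X:
            gates = self.W @ np.concatenate([x_t, h]) + self.b
            i, f, g, o = np.split(gates, 4)
            c = sig(f) * c + sig(i) * np.tanh(g)   # cell-state update
            h = sig(o) * np.tanh(c)                # hidden-state update
            fc = np.tanh(self.Wfc @ h + self.bfc)
            y = self.Wout @ fc + self.bout         # [~sin(theta), ~cos(theta)]
            thetas.append(np.arctan2(y[0], y[1]))  # recover the roll angle
        return np.array(thetas)
```

Recovering the angle as atan2(sin θ̃, cos θ̃) keeps the estimate continuous across the ±180° wrap, which is precisely the motivation for the two-unit sin/cos output parameterization.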
We parameterize the input and output roll angles via sin and cos, as these continuous representations nicely bound the variables to [−1, 1]. To use the estimated roll angle θ̃ during steering, we integrate the network into a sliding mode controller [2], as shown in Fig. 1, right. θ̃ is applied to the needle's sensed heading η̂ at each time step of the control loop, which, in combination with the sensed position p̂, enables the controller to accurately steer the needle via the estimated knowledge of the needle tip's full orientation.

Data Collection and Network Training:
We implemented and evaluated our learning-based method on a robotic needle steering system previously presented in [12], designed to perform lung tumor biopsy through a bronchoscope. The steerable needle used was manufactured from superelastic Nitinol and measures 1.24 mm OD [13]. To generate the training data X and Y, we performed targeting insertions using a sliding mode controller [2] with a 6-DOF electromagnetic (EM) tracker embedded in the needle tip (Aurora, NDI Inc.). We collected a dataset of 270 insertions in a tissue simulant of 10% gelatin (a tissue phantom frequently used in the needle steering literature) [14]. Each insertion targeted a point sampled uniformly at random within the needle's reachable workspace, defined by a cone with a bounding curvature radius of 200 mm, along an insertion interval of 40-75 mm (Fig. 2, left). Every insertion used controller parameters of 5 mm/s (insertion velocity) and 2π rad/s (angular velocity), a 40 Hz controller rate, and achieved less than 1 mm targeting error (Fig. 2, right).

We then normalized and partitioned the data into two subsets: training (240 insertions) and validation (30 insertions). The network was trained using a root-mean-squared error loss defined over all time steps of all trajectories in the training dataset. It was trained on an Intel i9-7900X 3.3 GHz 10-core CPU with an NVIDIA Quadro P4000 GPU using ADAM optimization [15] and dropout regularization [16] to prevent over-fitting. The trained network achieved a final RMSE of 0.0457 on the validation set.

Fig. 2. Experimental dataset of 270 insertions in gelatin for network training. Left: The target points. Right: The trajectories taken to the target points. Data is isotropically scaled based on z_max, which we choose to be an insertion length of 75 mm.

Online Estimation and Control in Gelatin and Ovine Brain:
After the network was trained, we integrated it into the sliding mode controller to evaluate the system's ability to leverage our method to steer accurately to targets. We performed 30 new insertions in 10% gelatin, to evaluate performance in the tissue phantom in which the network was trained, and 10 new insertions in ovine brain preserved in 4% formalin (Carolina Biological Supply, Inc.), to evaluate performance in biological tissue, a different medium and one on which the network was not trained. The target points in both mediums were sampled uniformly at random from a volume similar to that of the training set. At each time step our method estimated the roll angle θ, which was combined with the sensed needle axis and position to form the full state vector used for sliding mode control.

For comparison to a state-of-the-art method, we implemented an Extended Kalman Filter [17], a model-based observer relying on the non-holonomic kinematic needle model with no model of the torsional dynamics. The EKF had knowledge of the same measured 5-DOF position and axis of the sensorized needle tip and of the control input velocities applied at the actuators. The EKF used this sensed information and the kinematic needle model to estimate the state of the needle throughout the insertion, and its estimated state was incorporated into the same sliding mode controller as our method.

In Fig. 3, we show histograms of the angular error relative to the ground truth measured by the sensor in the needle's tip, for each time step over all insertions. The angular error is defined as Ω = arccos((tr(R) − 1)/2), where R is the difference rotation relating the estimated orientation to the ground-truth orientation. Our method has error values distributed much closer to zero than the EKF method, indicating superior estimation throughout each of the insertions.

To demonstrate our method's ability to control the needle to its intended target, in Fig. 4 we show targeting errors (the Euclidean distance between the final needle tip position and the intended target) for each method in each medium. Our method achieved mean targeting errors of 0.43 mm in gelatin and 0.40 mm in brain tissue, while the EKF method achieved mean targeting errors of 11.34 mm in gelatin and 6.12 mm in brain tissue.

Fig. 3. Histograms of angular error over all insertions in gelatin and brain.

Fig. 4. Targeting errors in gelatin and ovine brain. Left: Targeting errors from 10 insertions in gelatin and 5 insertions in brain for our method (green) and the EKF (blue), with the straight lines depicting the average error for each. Right: An expanded view of the 0 to 2 mm range of the left figure.
Online Steering in Ex Vivo Porcine Lung: In addition to evaluating the estimator's performance in gelatin and ovine brain, we performed a sequence of online steers in statically inflated ex vivo porcine lung (Animal Technologies, Inc.). The ex vivo lung was inflated and accessed using an 8.0 mm endotracheal tube (Smiths Medical ASD, Inc.). We placed custom 3D-printed (Formlabs, Inc.) pre-calibrated fiducials on the lung surface with cyanoacrylate glue and used them for point-based registration of the EM tracker frame to the CT frame [18]. A preoperative CT scan was taken using a mobile ENT cone-beam CT scanner (xCAT, Xoran Technologies). We loaded the scan into 3D Slicer [19] and manually segmented the fiducial points (sphere centers) in the CT frame. We then manually thresholded the reconstructed CT data (0.4 mm isotropic voxel size) to yield a segmentation of the lung anatomy (see Fig. 5). We then registered the segmented anatomy in the CT frame to the EM tracker frame using the EM-tracked fiducials mounted on the lung, and used this registered segmentation to inform the piercing sites and the intended target points.
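Point-based rigid registration of this kind is commonly solved in closed form with the SVD-based Kabsch/Horn method. The sketch below is one standard formulation of that solution (not necessarily the implementation used here), mapping segmented CT-frame fiducial centers onto their EM-tracked counterparts.

```python
import numpy as np

def register_points(ct_pts, em_pts):
    """Least-squares rigid transform (R, t) mapping CT-frame fiducial
    centers onto their EM-frame measurements, via the Kabsch/SVD method.
    ct_pts, em_pts: (N, 3) arrays of corresponding points, N >= 3."""
    ct_c = ct_pts.mean(axis=0)
    em_c = em_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (ct_pts - ct_c).T @ (em_pts - em_c)
    U, _, Vt = np.linalg.svd(H)
    # Correction term that rules out a reflection (det(R) must be +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = em_c - R @ ct_c
    return R, t
```

The recovered (R, t) then transforms any CT-frame target point into the EM tracker frame as `p_em = R @ p_ct + t`.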
Fig. 5. Rendered CT scan volume of the post-steered needle system using the learned estimation method. Left: The fiducials are shown in the scan; each contains a 6-DOF EM sensor, is glued to the surface of the lung, and is used for point-based registration of the CT frame to the EM tracker frame. Right: The same thresholded scan showing the segmented needle deployed post-steer.

Using a clinical bronchoscope, we navigated down to several airways in the left lower lobe and, in each trial, pierced through the airway wall using a piercing stylet, a 0.9 mm OD superelastic Nitinol tube sharpened to a needle point. We inserted a Nitinol tube, 1.5 mm OD and 1.3 mm ID, over the stylet to hold the opening in the airway wall. The piercing stylet was then removed and exchanged for the steerable needle.

After loading the needle into the piercing site, we visualized the needle's trumpet-shaped reachable workspace with respect to the segmentation to identify feasible target points for each steer that were collision-free with respect to blood vessels and other surrounding airways. Each target point, specified in the CT scanner's RAS (right, anterior, superior) coordinates, was transformed into the EM tracker frame using the registration transform acquired from the fiducials. We fed the target point to the sliding mode controller, and the needle was steered to the target with the estimated roll angle as input to the controller.

A total of 8 trials were executed, with collision-free steers in 2 of 8, one with each method. We determined that a steer was collision-free by inspection of a CT scan of the post-steered needle, taken prior to retracting the needle back to its starting pose. It is important to note that while we attempted to pick target points in the CT scan that were not visibly in collision with other parts of the airway and large vessels, we do not consider obstacle avoidance in this work, and the steers were performed without consideration of obstacles en route to the target point. As such, we limit this evaluation to the collision-free trials. In the collision-free trial of the learned estimator, the steer was accurate, achieving a targeting error of 0.46 mm and a 19.0° average angular error of the estimate, as measured in the EM tracker frame, not accounting for registration error (see Fig. 6).
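The trumpet-shaped reachable-workspace check can be approximated with simple geometry if one assumes the needle follows a constant-curvature arc from the piercing pose: the unique such arc that leaves the tip frame's origin tangent to the needle axis (+z) and reaches a point p has curvature κ = 2r/(r² + z²), with r = √(x² + y²). The sketch below rests on that assumption; the function names and the bound values in the test are illustrative, not the authors' planner.

```python
import numpy as np

def arc_curvature(p):
    """Curvature of the unique constant-curvature arc that leaves the
    needle-tip frame's origin tangent to +z and passes through point p."""
    r = np.hypot(p[0], p[1])          # off-axis distance
    if r == 0.0:
        return 0.0                    # point lies on the needle axis
    return 2.0 * r / (r * r + p[2] ** 2)

def reachable(p, kappa_max, l_min, l_max):
    """Rough 'trumpet workspace' membership test: the arc to p must bend
    no more than kappa_max and its length must lie in [l_min, l_max].
    Assumes p is ahead of the tip (p[2] > 0); an illustrative sketch only."""
    if p[2] <= 0.0:
        return False
    kappa = arc_curvature(p)
    if kappa > kappa_max:
        return False
    if kappa == 0.0:
        length = p[2]                 # straight insertion along the axis
    else:
        r = np.hypot(p[0], p[1])
        radius = 1.0 / kappa
        phi = np.arctan2(p[2], radius - r)   # angle subtended by the arc
        length = radius * phi
    return l_min <= length <= l_max
```

With bounds matching the training workspace's 40-75 mm insertion interval, a point 50 mm straight ahead passes the test, while a point 30 mm ahead and 30 mm off-axis would require a 30 mm radius of curvature and fails for any moderate curvature bound.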
Conversely, the EKF performed poorly, consistent with our prior experimental results in the other tissues, with a targeting error of 17.6 mm and a 156.7° average angular error. A rendered CT scan of the post-steered needle is shown for the learned-estimator trial in Fig. 5, right.
Fig. 6. Time series of the estimate tracking the needle roll angle θ, with angular error Ω, from a steering trial in ex vivo porcine lung using the learned estimator. The dashed line shows the mean angular error over the steer.

In this work, we leveraged and validated the performance of a learned estimator for needle steering, paving the way for other learned methods in this application. Our results show that a learning method can accurately estimate the roll angle of a long, flexible steerable needle in the presence of torsional compliance and unmodeled frictional effects. Additionally, we show that a network trained on a tissue simulant extrapolates to actual tissue in the form of ex vivo ovine brain and porcine lung. It will be a subject of future research to investigate the application to other tissues relevant to steerable needles, such as liver and kidney.

In the ex vivo lung experiments, the learned estimator performed well, though there were steers that collided with anatomy. This further elucidates the need for better registration methods, improved segmentation algorithms, and planned trajectories that account for these obstacles instead of point targets. Future research will leverage our method when executing trajectories generated via motion planning in these environments.

Overall, we show that our learning-based method outperforms a model-based observer whose model does not capture the significant torsional dynamics. We demonstrate that neural-network-based estimation can result in effective tracking of unsensed state variables, enabling accurate steering to point targets in gelatin, ovine brain, and porcine lung.
Acknowledgments
This research was supported in part by the National Institutes of Health under award R01EB024864.
References
1. R. J. Webster III, J. S. Kim, N. J. Cowan, G. S. Chirikjian, and A. M. Okamura, "Nonholonomic Modeling of Needle Steering," IJRR, vol. 25, no. 5-6, pp. 509-525, 2006.
2. D. C. Rucker, J. Das, H. B. Gilbert, P. J. Swaney, M. I. Miga, N. Sarkar, and R. J. Webster III, "Sliding Mode Control of Steerable Needles," IEEE TRO, vol. 29, no. 5, pp. 1289-1299, 2013.
3. D. S. Minhas, J. A. Engh, M. M. Fenske, and C. N. Riviere, "Modeling of needle steering via duty-cycled spinning," EMBS Proceedings, pp. 2756-2759, 2007.
4. J. P. Swensen, M. Lin, A. M. Okamura, and N. J. Cowan, "Torsional dynamics of steerable needles: Modeling and fluoroscopic guidance," IEEE TBME, vol. 61, no. 11, pp. 2707-2717, 2014.
5. K. B. Reed, A. M. Okamura, and N. J. Cowan, "Modeling and control of needles with torsional friction," IEEE TBME, vol. 56, no. 12, pp. 2905-2916, 2009.
6. V. Kallem and N. J. Cowan, "Image guidance of flexible tip-steerable needles," IEEE TRO, vol. 25, no. 1, pp. 191-196, 2009.
7. B. Fallahi, C. Rossa, R. Sloboda, N. Usmani, and M. Tavakoli, "Partial estimation of needle tip orientation in generalized coordinates in ultrasound image-guided needle insertion," in IEEE AIM, 2016, pp. 1604-1609.
8. N. A. Wood, K. Shahrour, M. C. Ost, and C. N. Riviere, "Needle steering system using duty-cycled rotation for percutaneous kidney access," EMBS Proceedings, pp. 5432-5435, 2010.
9. S. Kuutti, R. Bowden, Y. Jin, P. Barber, and S. Fallah, "A survey of deep learning applications to autonomous vehicle control," arXiv, pp. 1-23, 2019.
10. C. Avila-Carrasco, M. Ruppel, R. Persad, A. Bahl, and S. Dogramadzi, "Analytical vs Data-driven Approach of Modelling Brachytherapy Needle Deflection," IEEE TMRB, vol. 2, no. 4, pp. 519-528, 2020.
11. K. Greff, R. K. Srivastava, J. Koutnik, B. R. Steunebrink, and J. Schmidhuber, "LSTM: A Search Space Odyssey," IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 10, pp. 2222-2232, 2017.
12. S. Amack, M. Rox, J. Mitchell, T. Ertop, M. Emerson, A. Kuntz, F. Maldonado, J. Akulian, J. Gafford, R. Alterovitz, and R. Webster III, "Design and control of a compact modular robot for transbronchial lung biopsy," in SPIE Medical Imaging: Image-Guided Procedures, Robotic Interventions, and Modeling, 2019.
13. M. Rox, M. Emerson, T. E. Ertop, I. Fried, M. Fu, J. Hoelscher, A. Kuntz, J. Granna, J. Mitchell, M. Lester, F. Maldonado, E. A. Gillaspie, J. A. Akulian, R. Alterovitz, and R. J. Webster, "Decoupling Steerability from Diameter: Helical Dovetail Laser Patterning for Steerable Needles," IEEE Access, 2020.
14. P. J. Swaney, J. Burgner, H. B. Gilbert, and R. J. Webster III, "A flexure-based steerable needle: high curvature with reduced tissue damage," IEEE TBME, vol. 60, no. 4, pp. 906-909, 2013.
15. D. P. Kingma and J. L. Ba, "Adam: A method for stochastic optimization," in ICLR, 2015, pp. 1-15.
16. Y. Gal and Z. Ghahramani, "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks," in NIPS Proceedings, 2016.
17. H. M. Choset, S. Hutchinson, K. M. Lynch, G. Kantor, W. Burgard, L. E. Kavraki, and S. Thrun, Principles of Robot Motion: Theory, Algorithms, and Implementation. MIT Press, 2005.
18. J. M. Fitzpatrick, J. B. West, and C. R. Maurer, "Predicting Error in Rigid-Body Point-Based Registration," IEEE Trans. on Medical Imaging, vol. 17, no. 5, 1998.
19. R. Kikinis, S. D. Pieper, and K. G. Vosburgh, "3D Slicer: A Platform for Subject-Specific Image Analysis, Visualization, and Clinical Support," in Intraoperative Imaging and Image-Guided Therapy, Springer, 2014.