Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Narges Ahmidi is active.

Publication


Featured research published by Narges Ahmidi.


Medical Image Computing and Computer-Assisted Intervention | 2013

String Motif-Based Description of Tool Motion for Detecting Skill and Gestures in Robotic Surgery

Narges Ahmidi; Yixin Gao; Benjamín Béjar; S. Swaroop Vedula; Sanjeev Khudanpur; René Vidal; Gregory D. Hager

The growing availability of data from robotic and laparoscopic surgery has created new opportunities to investigate the modeling and assessment of surgical technical performance and skill. However, previously published methods for modeling and assessment have not proven to scale well to large and diverse data sets. In this paper, we describe a new approach for simultaneous detection of gestures and skill that can be generalized to different surgical tasks. It consists of two parts: (1) descriptive curve coding (DCC), which transforms the surgical tool motion trajectory into a coded string using accumulated Frenet frames, and (2) the common string model (CSM), a classification model using a similarity metric computed from longest common string motifs. We apply the DCC-CSM method to detect surgical gestures and skill levels in two kinematic datasets (collected from the da Vinci surgical robot). The DCC-CSM method classifies gestures and skill with 87.81% and 91.12% accuracy, respectively.
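As a rough, self-contained illustration of the DCC-CSM idea, the Python sketch below quantizes the turning angle between successive tool-motion directions into a symbol string and scores two strings by their longest common substring. The eight-symbol alphabet, the turn-angle quantization, and the toy trajectories are assumptions made for the example; the paper's coding is built on accumulated Frenet frames.

```python
import numpy as np

def dcc_encode(traj, n_symbols=8):
    """Toy descriptive-curve coding: quantize the turning angle between
    successive motion directions into one of n_symbols characters.
    (The paper uses accumulated Frenet frames; this simplification
    captures only the turn magnitude.)"""
    vecs = np.diff(traj, axis=0)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12
    cosang = np.clip((vecs[:-1] * vecs[1:]).sum(axis=1), -1.0, 1.0)
    angles = np.arccos(cosang)                       # turn angles in [0, pi]
    bins = np.minimum((angles / np.pi * n_symbols).astype(int), n_symbols - 1)
    return "".join(chr(ord("a") + b) for b in bins)

def longest_common_substring(s, t):
    """Classic O(len(s) * len(t)) dynamic program."""
    best, dp = 0, [0] * (len(t) + 1)
    for ch in s:
        prev = 0
        for j, cj in enumerate(t, start=1):
            cur = dp[j]
            dp[j] = prev + 1 if ch == cj else 0
            best = max(best, dp[j])
            prev = cur
    return best

def csm_similarity(s, t):
    """Similarity from the longest shared motif, normalized to [0, 1]."""
    return longest_common_substring(s, t) / max(len(s), len(t), 1)

# Toy motions: a polygonal arc with constant turning, a noisy repeat of it,
# and a straight push with no turning at all.
rng = np.random.default_rng(0)
k = np.arange(60, dtype=float)
arc  = np.c_[np.cos(0.3 * np.pi * k), np.sin(0.3 * np.pi * k), np.zeros(60)]
arc2 = arc + rng.normal(0.0, 0.02, arc.shape)
line = np.c_[k, k, k]
print(csm_similarity(dcc_encode(arc), dcc_encode(arc2)))  # similar motions: near 1
print(csm_similarity(dcc_encode(arc), dcc_encode(line)))  # dissimilar motions: 0
```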


IEEE Transactions on Biomedical Engineering | 2017

A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery

Narges Ahmidi; Lingling Tao; Shahin Sefati; Yixin Gao; Colin Lea; Benjamin Bejar Haro; Luca Zappella; Sanjeev Khudanpur; René Vidal; Gregory D. Hager

Objective: State-of-the-art techniques for surgical data analysis report promising results for automated skill assessment and action recognition. The contributions of many of these techniques, however, are limited to study-specific data and validation metrics, making assessment of progress across the field extremely challenging. Methods: In this paper, we address two major problems for surgical data analysis: first, the lack of uniform, shared datasets and benchmarks, and second, the lack of consistent validation processes. We address the former by presenting the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a public dataset that we have created to support comparative research benchmarking. JIGSAWS contains synchronized video and kinematic data from multiple performances of robotic surgical tasks by operators of varying skill. We address the latter by presenting a well-documented evaluation methodology and reporting results for six techniques for automated segmentation and classification of time-series data on JIGSAWS. These techniques comprise four temporal approaches for joint segmentation and classification: hidden Markov model (HMM), sparse HMM, Markov semi-Markov conditional random field (CRF), and skip-chain CRF; and two feature-based ones that aim to classify fixed segments: bag of spatiotemporal features and linear dynamical systems. Results: Most methods recognize gesture activities with approximately 80% overall accuracy under both leave-one-super-trial-out and leave-one-user-out cross-validation settings. Conclusion: Current methods show promising results on this shared dataset, but room for significant progress remains, particularly for consistent prediction of gesture activities across different surgeons. Significance: The results reported in this paper provide the first systematic and uniform evaluation of surgical activity recognition techniques on the benchmark database.
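The two validation settings named in the abstract are easy to pin down in code. The sketch below builds leave-one-user-out (LOUO) and leave-one-super-trial-out (LOSO) partitions over hypothetical trial records; the user letters and trial counts are illustrative stand-ins, not the exact JIGSAWS roster.

```python
from collections import defaultdict

# Hypothetical trial records in the spirit of JIGSAWS naming
# (user letter plus repetition number); fields are illustrative.
trials = [
    {"id": f"Suturing_{u}{r:03d}", "user": u, "supertrial": r}
    for u in "BCDEFGHI" for r in range(1, 6)
]

def leave_one_out_splits(trials, key):
    """Yield (held_out, train, test) partitions holding out one value of
    `key` at a time: key="user" gives LOUO, key="supertrial" gives LOSO."""
    groups = defaultdict(list)
    for t in trials:
        groups[t[key]].append(t)
    for held_out in sorted(groups):
        test = groups[held_out]
        train = [t for t in trials if t[key] != held_out]
        yield held_out, train, test

for user, train, test in leave_one_out_splits(trials, "user"):
    print(f"LOUO fold {user}: {len(train)} train / {len(test)} test trials")
```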


Medical Image Computing and Computer-Assisted Intervention | 2016

Recognizing Surgical Activities with Recurrent Neural Networks

Robert S. DiPietro; Colin Lea; Anand Malpani; Narges Ahmidi; S. Swaroop Vedula; Gyusung I. Lee; Mija R. Lee; Gregory D. Hager

We apply recurrent neural networks to the task of recognizing surgical activities from robot kinematics. Prior work in this area focuses on recognizing short, low-level activities, or gestures, and has been based on variants of hidden Markov models and conditional random fields. In contrast, we work on recognizing both gestures and longer, higher-level activities, or maneuvers, and we model the mapping from kinematics to gestures/maneuvers with recurrent neural networks. To our knowledge, we are the first to apply recurrent neural networks to this task. Using a single model and a single set of hyperparameters, we match state-of-the-art performance for gesture recognition and advance state-of-the-art performance for maneuver recognition, in terms of both accuracy and edit distance. Code is available at this https URL.
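A minimal sketch of the approach, mapping per-frame kinematic features to per-frame gesture labels with an LSTM (PyTorch is used here for illustration; the feature dimension, hidden size, and label count are assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class GestureRNN(nn.Module):
    """Per-frame sequence labeling from kinematic features."""
    def __init__(self, n_features=38, n_hidden=64, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_classes)

    def forward(self, x):            # x: (batch, time, features)
        h, _ = self.lstm(x)          # hidden state at every time step
        return self.head(h)          # (batch, time, classes) logits

model = GestureRNN()
x = torch.randn(4, 150, 38)         # 4 toy kinematic sequences
y = torch.randint(0, 10, (4, 150))  # toy per-frame gesture labels
logits = model(x)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10), y.reshape(-1))
loss.backward()
print(float(loss))
```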


International Forum of Allergy & Rhinology | 2012

An objective and automated method for assessing surgical skill in endoscopic sinus surgery using eye‐tracking and tool‐motion data

Narges Ahmidi; Masaru Ishii; Gabor Fichtinger; Gary L. Gallia; Gregory D. Hager

Assessment of surgical skill plays a crucial role in determining competency, monitoring educational programs, and providing trainee feedback. With the changing health care environment, it will likely play an important role in credentialing and maintenance of certification. The ideal skill assessment tool should be unbiased, objective, and accurate. We hypothesize that tool‐motion data—how a surgeon moves his/her instruments—and eye‐gaze data—what a surgeon looks at when he/she operates—contain sufficient information to quantitatively and objectively evaluate surgical skill. We investigate this hypothesis by developing a statistical model of surgery and testing the model experimentally in the context of endoscopic sinus surgery (ESS).


Medical Image Computing and Computer-Assisted Intervention | 2012

Robotic path planning for surgeon skill evaluation in minimally-invasive sinus surgery

Narges Ahmidi; Gregory D. Hager; Lisa E. Ishii; Gary L. Gallia; Masaru Ishii

We observe that expert surgeons performing MIS learn to minimize their tool path length and avoid collisions with vital structures. We thus conjecture that an expert surgeon's tool paths can be predicted by minimizing an appropriate energy function. We hypothesize that a surgeon's motion will lie closer to this reference path the greater his or her skill, as measured by an objective measurement instrument such as the objective structured assessment of technical skill (OSATS). To test this hypothesis, we have developed a surgical path planner (SPP) for functional endoscopic sinus surgery (FESS). We measure the similarity between an automatically generated reference path and the surgical motions of subjects. We also develop a complementary similarity metric by translating tool motion to a coordinate-independent coding of motion, which we call the descriptive curve coding (DCC) method. We evaluate our methods on surgical motions recorded from FESS training tasks. The results show that the SPP reference path predicts OSATS scores with 88% accuracy. We also show that motions coded with DCC predict OSATS scores with 90% accuracy. Finally, the combination of SPP and DCC identifies surgical skill with 93% accuracy.
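To make the planner-plus-similarity idea concrete, the sketch below finds a minimum-cost path on a toy 2-D cost grid with Dijkstra's algorithm (a simple stand-in for the paper's energy-minimizing SPP) and scores a recorded motion against the reference with a mean closest-point distance. The grid, the cost model, and the distance metric are assumptions for illustration.

```python
import heapq
import numpy as np

def plan_reference_path(cost, start, goal):
    """Dijkstra over a 2-D cost grid; returns the minimum-cost path."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                 # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return np.array(path[::-1], dtype=float)

def path_distance(p, q):
    """Mean closest-point distance: one plausible similarity measure
    between a recorded motion and the reference path."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy anatomy: high cost near a "vital structure" in the middle.
yy, xx = np.mgrid[0:40, 0:40]
cost = 1.0 + 50.0 * (np.hypot(yy - 20, xx - 20) < 6)
ref = plan_reference_path(cost, (0, 0), (39, 39))
recorded = ref + np.random.default_rng(1).normal(0, 0.8, ref.shape)
print(path_distance(recorded, ref))  # small deviation from the reference
```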


Journal of Surgical Education | 2016

Task-Level vs. Segment-Level Quantitative Metrics for Surgical Skill Assessment

S. Swaroop Vedula; Anand Malpani; Narges Ahmidi; Sanjeev Khudanpur; Gregory D. Hager; Chi Chiung Grace Chen

OBJECTIVE Task-level metrics of time and motion efficiency are valid measures of surgical technical skill. Metrics may be computed for segments (maneuvers and gestures) within a task after hierarchical task decomposition. Our objective was to compare task-level and segment (maneuver and gesture)-level metrics for surgical technical skill assessment. DESIGN Our analyses include predictive modeling using data from a prospective cohort study. We used a hierarchical semantic vocabulary to segment a simple surgical task of passing a needle across an incision and tying a surgeon's knot into maneuvers and gestures. We computed time, path length, and movements for the task, maneuvers, and gestures using tool motion data. We fit logistic regression models to predict experience-based skill using the quantitative metrics. We compared the area under the receiver operating characteristic curve (AUC) for task-level, maneuver-level, and gesture-level models. SETTING Robotic surgical skills training laboratory. PARTICIPANTS In total, 4 faculty surgeons with experience in robotic surgery and 14 trainee surgeons with no or minimal experience in robotic surgery. RESULTS Experts performed the task in shorter time (49.74 s; 95% CI = 43.27-56.21 vs. 81.97 s; 95% CI = 69.71-94.22), with shorter path length (1.63 m; 95% CI = 1.49-1.76 vs. 2.23 m; 95% CI = 1.91-2.56), and with fewer movements (429.25; 95% CI = 383.80-474.70 vs. 728.69; 95% CI = 631.84-825.54) than novices. Experts differed from novices on metrics for individual maneuvers and gestures. The AUCs were 0.79 (95% CI = 0.62-0.97) for task-level models, 0.78 (95% CI = 0.6-0.96) for maneuver-level models, and 0.7 (95% CI = 0.44-0.97) for gesture-level models. There was no statistically significant difference in AUC between task-level and maneuver-level (p = 0.7) or gesture-level models (p = 0.17). CONCLUSIONS Maneuver-level and gesture-level metrics are discriminative of surgical skill and can be used to provide targeted feedback to surgical trainees.
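A small sketch of the modeling step: fit a logistic regression on task-level metrics and compute an AUC with scikit-learn. The data below are simulated to be loosely shaped like the reported expert/novice means, and the sample sizes are arbitrary; note the study reports properly validated AUCs, whereas this toy computes an in-sample one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic task-level metrics (time in s, path length in m, movement
# count), loosely shaped like the reported group means; not study data.
n_expert, n_novice = 12, 42
experts = np.c_[rng.normal(50, 6, n_expert),
                rng.normal(1.6, 0.2, n_expert),
                rng.normal(430, 50, n_expert)]
novices = np.c_[rng.normal(82, 12, n_novice),
                rng.normal(2.2, 0.3, n_novice),
                rng.normal(730, 90, n_novice)]
X = np.vstack([experts, novices])
y = np.r_[np.ones(n_expert), np.zeros(n_novice)]   # 1 = expert

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.2f}")
```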


PLOS ONE | 2016

Analysis of the Structure of Surgical Activity for a Suturing and Knot-Tying Task

S. Swaroop Vedula; Anand Malpani; Lingling Tao; George Major Chen; Yixin Gao; Piyush Poddar; Narges Ahmidi; Christopher Paxton; René Vidal; Sanjeev Khudanpur; Gregory D. Hager; Chi Chiung Grace Chen

Background Surgical tasks are performed in a sequence of steps, and technical skill evaluation includes assessing task flow efficiency. Our objective was to describe differences in task flow for expert and novice surgeons for a basic surgical task. Methods We used a hierarchical semantic vocabulary to decompose and annotate maneuvers and gestures for 135 instances of a surgeon's knot performed by 18 surgeons. We compared counts of maneuvers and gestures, and analyzed task flow by skill level. Results Experts used fewer gestures to perform the task (26.29; 95% CI = 25.21 to 27.38 for experts vs. 31.30; 95% CI = 29.05 to 33.55 for novices) and made fewer errors in gestures than novices (1.00; 95% CI = 0.61 to 1.39 vs. 2.84; 95% CI = 2.3 to 3.37). Transitions among maneuvers, and among gestures within each maneuver, were more predictable in expert trials than in novice trials. Conclusions Activity segments and state flow transitions within a basic surgical task differ by surgical skill level and can be used to provide targeted feedback to surgical trainees.
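One common way to quantify how "predictable" task flow is: the conditional entropy of a first-order Markov transition model over gesture labels, where lower entropy means each gesture concentrates on fewer successors. The sketch below uses hypothetical gesture sequences; the paper's vocabulary and analysis are richer than this.

```python
import numpy as np

def transition_entropy(sequences, states):
    """Mean conditional entropy (bits) of first-order transitions:
    lower entropy = more predictable task flow."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[idx[a], idx[b]] += 1
    probs = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    plogp = np.zeros_like(probs)
    nz = probs > 0
    plogp[nz] = probs[nz] * np.log2(probs[nz])
    weights = counts.sum(axis=1) / max(counts.sum(), 1)
    return float((weights * -plogp.sum(axis=1)).sum())

# Hypothetical gesture-label sequences (A, B, C are stand-in gestures).
expert_trials = [list("ABCABCABC"), list("ABCABC")]
novice_trials = [list("ABACBCACB"), list("BCABACAB")]
states = list("ABC")
print("expert entropy:", transition_entropy(expert_trials, states))  # 0.0
print("novice entropy:", transition_entropy(novice_trials, states))  # > 0
```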


Proceedings of SPIE | 2009

Intraoperative localization of brachytherapy implants using intensity-based registration

Zahra Karimaghaloo; Purang Abolmaesumi; Narges Ahmidi; Thomas Kuiran Chen; David G. Gobbi; Gabor Fichtinger

In prostate brachytherapy, a transrectal ultrasound (TRUS) will show the prostate boundary but not all the implanted seeds, while fluoroscopy will show all the seeds clearly but not the boundary. We propose an intensity-based registration between TRUS images and the implant reconstructed from fluoroscopy as a means of achieving accurate intra-operative dosimetry. The TRUS images are first filtered and compounded, and then registered to the fluoroscopy model via mutual information. A training phantom was implanted with 48 seeds and imaged. Various ultrasound filtering techniques were analyzed, and the best results were achieved with the Bayesian combination of adaptive thresholding, phase congruency, and compensation for the non-uniform ultrasound beam profile in the elevation and lateral directions. The average registration error between corresponding seeds relative to the ground truth was 0.78 mm. The effects of false positives and false negatives in ultrasound were investigated by masking true seeds in the fluoroscopy volume or adding false seeds. The registration error remained below 1.01 mm when the false positive rate was 31%, and 0.96 mm when the false negative rate was 31%. This fully automated method delivers excellent registration accuracy and robustness in phantom studies, and promises to demonstrate clinically adequate performance on human data as well.
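The similarity metric at the heart of the method, mutual information between intensity images, is compact to sketch. Below, a toy fixed image with bright "seed" blobs is registered to a shifted copy by exhaustively searching integer translations for the MI maximum; the blob images, noise level, and search range are illustrative assumptions, not the paper's TRUS/fluoroscopy pipeline.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information from the joint intensity histogram of two
    equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

# Toy "seed" image: bright blobs on noise; the moving image is a shifted copy.
rng = np.random.default_rng(2)
fixed = rng.normal(0, 0.1, (64, 64))
for r, c in [(20, 20), (30, 45), (50, 15)]:
    fixed[r - 2:r + 3, c - 2:c + 3] += 1.0
moving = np.roll(fixed, shift=(3, -2), axis=(0, 1))

# Exhaustive search over integer translations (a stand-in for a real
# optimizer); the best shift should undo the (3, -2) displacement.
best = max((mutual_information(np.roll(moving, (dy, dx), axis=(0, 1)), fixed), dy, dx)
           for dy in range(-5, 6) for dx in range(-5, 6))
print("recovered shift:", best[1:], "MI:", round(best[0], 3))
```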


JAMA Facial Plastic Surgery | 2018

Association Between Surgical Trainee Daytime Sleepiness and Intraoperative Technical Skill When Performing Septoplasty

Ya Wei Tseng; S. Swaroop Vedula; Anand Malpani; Narges Ahmidi; Kofi Boahene; Ira D. Papel; Theda C. Kontis; Jessica Maxwell; John R. Wanamaker; Patrick J. Byrne; Sonya Malekzadeh; Gregory D. Hager; Lisa E. Ishii; Masaru Ishii

Importance Daytime sleepiness in surgical trainees can impair intraoperative technical skill and thus affect their learning and pose a risk to patient safety. Objective To determine the association between daytime sleepiness of surgeons in residency and fellowship training and their intraoperative technical skill during septoplasty. Design, Setting, and Participants This prospective cohort study included 19 surgical trainees in otolaryngology–head and neck surgery programs at 2 academic institutions (Johns Hopkins University School of Medicine and MedStar Georgetown University Hospital). The physicians were recruited from June 13, 2016, to April 20, 2018. The analysis includes data that were captured between June 27, 2016, and April 20, 2018. Main Outcomes and Measures Attending physician and surgical trainee self-rated intraoperative technical skill using the Septoplasty Global Assessment Tool (SGAT) and visual analog scales. Daytime sleepiness reported by surgical trainees was measured using the Epworth Sleepiness Scale (ESS). Results Of 19 surgical trainees, 17 resident physicians (9 female [53%]) and 2 facial plastic surgery fellowship physicians (1 female and 1 male) performed a median of 3.00 septoplasty procedures (range, 1-9 procedures) under supervision by an attending physician. Of the 19 surgical trainees, 10 (53%) were aged 25 to 30 years and 9 (47%) were 31 years or older. The mean ESS score overall was 6.74 (95% CI, 5.96-7.52), and this score did not differ between female and male trainees. The mean ESS score was 7.57 (95% CI, 6.58-8.56) in trainees aged 25 to 30 years and 5.44 (95% CI, 4.32-6.57) in trainees aged 31 years or older. In regression models adjusted for sex, age, postgraduate year, and technical complexity of the procedure, there was a statistically significant inverse association between ESS scores and attending physician–rated technical skill for both SGAT (−0.41; 95% CI, −0.55 to −0.27; P < .001) and the visual analog scale (−0.75; 95% CI, −1.40 to −0.07; P = .03). The association between ESS scores and technical skill was not statistically significant for trainee self-rated SGAT (0.04; 95% CI, −0.17 to 0.24; P = .73) and the self-rated visual analog scale (0.19; 95% CI, −0.79 to 1.2; P = .70). Conclusions and Relevance The findings suggest that daytime sleepiness of surgical trainees is inversely associated with attending physician–rated intraoperative technical skill when performing septoplasty. Thus, surgical trainees’ ability to learn technical skill in the operating room may be influenced by their daytime sleepiness. Level of Evidence NA.
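As a sketch of the covariate-adjusted modeling described above, the snippet below fits an ordinary least squares regression of a technical-skill score on ESS with adjustment terms, using statsmodels on simulated data. All values are synthetic, the simulated slope merely echoes the direction of the reported association, and the study's actual model family may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 60   # synthetic procedure-level observations, not the study's data

df = pd.DataFrame({
    "ess": rng.uniform(2, 12, n),          # Epworth Sleepiness Scale score
    "male": rng.integers(0, 2, n),
    "age31plus": rng.integers(0, 2, n),
    "pgy": rng.integers(1, 6, n),          # postgraduate year
    "complexity": rng.uniform(0, 1, n),    # technical complexity proxy
})
# Simulate an inverse ESS-skill association (roughly -0.4 points per
# ESS point, echoing the reported direction) plus noise.
df["sgat"] = 25 - 0.4 * df["ess"] + 0.5 * df["pgy"] + rng.normal(0, 1.5, n)

model = smf.ols("sgat ~ ess + male + age31plus + pgy + complexity", data=df).fit()
print("ESS coefficient:", round(model.params["ess"], 2))
print("95% CI:", model.conf_int().loc["ess"].round(2).tolist())
```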



Collaboration


Dive into Narges Ahmidi's collaborations.

Top Co-Authors

Lisa E. Ishii, Johns Hopkins University
Masaru Ishii, Johns Hopkins University School of Medicine
Anand Malpani, Johns Hopkins University
Gary L. Gallia, Johns Hopkins University
Piyush Poddar, Johns Hopkins University
René Vidal, Johns Hopkins University