Publications


Featured research published by Ayushi Sinha.


Proceedings of SPIE | 2016

Endoscopic-CT: learning-based photometric reconstruction for endoscopic sinus surgery

Austin Reiter; Simon Leonard; Ayushi Sinha; Masaru Ishii; Russell H. Taylor; Gregory D. Hager

In this work, we present a method for dense reconstruction of anatomical structures from white-light endoscopic imagery, based on a learning process that estimates a mapping between light reflectance and surface geometry. Our method is unique in that few unrealistic assumptions are made (i.e., we do not assume a Lambertian reflectance model or a point light source) and we learn a model on a per-patient basis, increasing accuracy and extensibility to different endoscopic sequences. The proposed method assumes accurate video-CT registration through a combination of Structure-from-Motion (SfM) and Trimmed-ICP, and then uses the registered 3D structure and motion to generate training data with which to learn a multivariate regression from observed pixel values to known 3D surface geometry. We demonstrate a non-linear regression technique using a neural network to estimate depth images and surface normal maps, resulting in high-resolution spatial 3D reconstructions with an average error of 0.53 mm (on the low side, when anatomy matches the CT precisely) to 1.12 mm (on the high side, when the presence of liquids creates scene geometry not present in the CT used for evaluation). Our results are exhibited on patient data and validated with associated CT scans. In total, we processed 206 endoscopic images from patient data, each yielding approximately 1 million reconstructed 3D points.
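
The regression from pixel appearance to depth and surface normals can be illustrated with a small per-pixel regressor. The following is a minimal sketch, assuming per-pixel RGB inputs and a tiny fully connected network in PyTorch; the architecture, features, and training data here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a per-pixel appearance-to-geometry regressor,
# loosely following the idea of mapping observed pixel values to
# depth and surface normals. All layer sizes and data are
# illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class PhotometricRegressor(nn.Module):
    def __init__(self, in_features=3, hidden=64):
        super().__init__()
        # input: per-pixel RGB features; output: depth (1) + normal (3)
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x):
        out = self.net(x)
        depth = out[:, :1]
        normal = nn.functional.normalize(out[:, 1:], dim=1)  # unit normals
        return depth, normal

# Toy training loop on synthetic data; in the paper, registered
# video-CT pairs would supply the real supervision.
model = PhotometricRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pixels = torch.rand(1024, 3)                       # stand-in RGB samples
gt_depth = torch.rand(1024, 1)                     # stand-in depths from CT
gt_normal = nn.functional.normalize(torch.randn(1024, 3), dim=1)
for _ in range(100):
    pred_depth, pred_normal = model(pixels)
    loss = nn.functional.mse_loss(pred_depth, gt_depth) \
         + (1 - (pred_normal * gt_normal).sum(dim=1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```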


Proceedings of SPIE | 2016

Automatic segmentation and statistical shape modeling of the paranasal sinuses to estimate natural variations

Ayushi Sinha; Simon Leonard; Austin Reiter; Masaru Ishii; Russell H. Taylor; Gregory D. Hager

We present an automatic segmentation and statistical shape modeling system for the paranasal sinuses, which allows us to locate structures in and around the sinuses and to observe the variability in these structures. The system deformably registers a given patient image to a manually segmented template image, and uses the resulting deformation field to transfer labels from the template to the patient image. We use 3D snake splines to correct errors in this initial segmentation. Once we have several accurately segmented images, we build statistical shape models to observe the population mean and variance for each structure. These shape models are useful in several ways. Standard registration methods are insufficient to accurately register pre-operative computed tomography (CT) images with intra-operative endoscopy video of the sinuses because of deformations that occur in structures containing erectile tissue. Our aim is to estimate these deformations using our shape models in order to improve video-CT registration, to distinguish normal variations in anatomy from abnormal ones, and to automatically detect and stage pathology. We can also compare the mean shapes and variances across populations, such as different genders or ethnicities, to observe differences and similarities, and across age groups to observe the developmental changes that occur in the sinuses.
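
Statistical shape models of this kind are commonly built with Principal Component Analysis over corresponded vertex sets. The following is a minimal generic PCA sketch, assuming shapes are given as an array of already-corresponded vertices (hypothetical data); it is not the authors' pipeline.

```python
# Minimal PCA statistical shape model over corresponded meshes.
# Assumes `shapes` is an (n_shapes, n_vertices, 3) array with
# vertex-wise correspondence already established (hypothetical data).
import numpy as np

def build_ssm(shapes, n_modes=5):
    n, v, _ = shapes.shape
    X = shapes.reshape(n, v * 3)            # flatten each shape
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data gives the principal modes of variation
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:n_modes]                     # (n_modes, 3 * n_vertices)
    variances = (S[:n_modes] ** 2) / (n - 1)
    return mean, modes, variances

def synthesize(mean, modes, variances, b):
    # Reconstruct a shape from mode weights b (in standard-deviation units).
    x = mean + (b * np.sqrt(variances)) @ modes
    return x.reshape(-1, 3)

shapes = np.random.rand(20, 500, 3)          # stand-in segmented shapes
mean, modes, variances = build_ssm(shapes)
new_shape = synthesize(mean, modes, variances, np.array([1.0, 0, 0, 0, 0]))
```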


Proceedings of SPIE | 2016

Image-based navigation for functional endoscopic sinus surgery using structure from motion

Simon Leonard; Austin Reiter; Ayushi Sinha; Masaru Ishii; Russell H. Taylor; Gregory D. Hager

Functional Endoscopic Sinus Surgery (FESS) is a challenging procedure for otolaryngologists and is the main surgical approach for treating chronic sinusitis, removing nasal polyps, and opening up passageways. To reach the source of the problem and ultimately remove it, surgeons must often remove several layers of cartilage and tissue. Often, the cartilage occludes, or is within a few millimeters of, critical anatomical structures such as nerves, arteries, and ducts. To make FESS safer, surgeons use navigation systems that register a patient to his/her CT scan and track the position of the tools inside the patient. Current navigation systems, however, suffer from tracking errors greater than 1 mm, which is large compared to the scale of the sinus cavities, and errors of this magnitude prevent virtual structures from being accurately overlaid on the endoscope images. In this paper, we present a method to facilitate this task by 1) registering endoscopic images to CT data and 2) overlaying areas of interest on endoscope images to improve the safety of the procedure. First, our system uses structure from motion (SfM) to generate a small cloud of 3D points from a short video sequence. Then, it uses the iterative closest point (ICP) algorithm to register the points to a 3D mesh that represents a section of a patient's sinuses. The scale of the point cloud is approximated by measuring the magnitude of the endoscope's motion during the sequence. We have recorded several video sequences from five patients and, given a reasonable initial registration estimate, our results demonstrate an average registration error of 1.21 mm when the endoscope is viewing erectile tissues and 0.91 mm when it is viewing non-erectile tissues. Our SfM + ICP implementation executes in less than 7 seconds and can use as few as 15 frames (0.5 seconds of video). Future work will involve clinical validation of our results and strengthening robustness to initial guesses and erectile tissues.
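
The SfM-to-CT alignment step can be illustrated with a basic point-to-point ICP. The sketch below assumes a reasonable initial pose and pre-recovered scale, as the paper does; it is a plain ICP with a Kabsch best-fit rotation, not the trimmed variant mentioned elsewhere in this work, and all data are stand-ins.

```python
# Minimal point-to-point ICP between an SfM point cloud and points
# sampled from a CT mesh. Illustrative sketch only: assumes a good
# initialization and known scale, and uses all correspondences
# (no trimming or outlier rejection).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=50):
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)            # nearest CT points
        matched = target[idx]
        # Best-fit rigid transform (Kabsch) from src to matched points
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

source = np.random.rand(200, 3)                   # stand-in SfM cloud
target = source + np.array([0.1, 0.0, 0.0])       # stand-in CT samples
R, t = icp(source, target)
```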


Proceedings of SPIE | 2017

Simultaneous segmentation and correspondence improvement using statistical modes

Ayushi Sinha; Austin Reiter; Simon Leonard; Masaru Ishii; Gregory D. Hager; Russell H. Taylor

With the increasing amount of patient information collected today, the idea of using this information to inform future patient care has gained momentum. In many cases, this information comes in the form of medical images. Several algorithms have been presented to automatically segment these images and to extract structures relevant to different diagnostic or surgical procedures. This allows us to obtain large datasets of shapes, in the form of triangular meshes, segmented from these images. Given correspondences between these shapes, statistical shape models (SSMs) can be built using methods like Principal Component Analysis (PCA). Often, the initial correspondences between the shapes need to be improved, and SSMs can be used to improve them. However, just as often, initial segmentations also need to be improved. Unlike many correspondence improvement algorithms, which do not affect segmentation, many segmentation improvement algorithms negatively affect correspondences between shapes. We present a method that iteratively improves both segmentation and correspondence by using SSMs not only to improve correspondence, but also to constrain the movement of vertices during segmentation improvement. We show that our method maintains correspondence while achieving segmentations as good as or better than those produced by methods that improve segmentation without maintaining correspondence. We additionally achieve segmentations with better triangle quality than those produced without correspondence improvement.
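
One common way a shape model can constrain vertex movement is to project proposed updates onto the model's leading modes and bound the mode weights. The sketch below illustrates that generic idea under hypothetical mode matrices and shape data; it is not the authors' exact constraint.

```python
# Minimal sketch of constraining a refined segmentation with a
# statistical shape model: a proposed shape is projected onto the
# span of the leading PCA modes and its mode weights are clipped
# to +/- 3 standard deviations, keeping the shape plausible while
# preserving the vertex correspondence encoded by the model.
# The mode matrix here is a hypothetical stand-in.
import numpy as np

def constrain_to_ssm(shape, mean, modes, variances, max_std=3.0):
    x = shape.reshape(-1) - mean
    b = modes @ x                            # project onto shape modes
    std = np.sqrt(variances)
    b = np.clip(b, -max_std * std, max_std * std)
    return (mean + modes.T @ b).reshape(-1, 3)

# Stand-in model: 5 orthonormal modes over a 500-vertex shape.
v = 500
mean = np.random.rand(v * 3)
Q, _ = np.linalg.qr(np.random.randn(v * 3, 5))
modes = Q.T                                   # (5, 3 * v)
variances = np.array([5.0, 3.0, 2.0, 1.0, 0.5])
proposed = (mean + 0.1 * np.random.randn(v * 3)).reshape(v, 3)
constrained = constrain_to_ssm(proposed, mean, modes, variances)
```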


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2016

Anatomically Constrained Video-CT Registration via the V-IMLOP Algorithm

Seth Billings; Ayushi Sinha; Austin Reiter; Simon Leonard; Masaru Ishii; Gregory D. Hager; Russell H. Taylor

Functional endoscopic sinus surgery (FESS) is a surgical procedure used to treat acute cases of sinusitis and other sinus diseases. FESS is fast becoming the preferred treatment due to its minimally invasive nature. However, due to the limited field of view of the endoscope, surgeons rely on navigation systems to guide them within the nasal cavity. State-of-the-art navigation systems report registration errors of over 1 mm, which is large compared to the size of the nasal airways. We present an anatomically constrained video-CT registration algorithm that incorporates multiple video features and is robust in the presence of outliers. We test our algorithm on simulated and in-vivo data, and evaluate its accuracy under degrading initializations.


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2018

Endoscopic Navigation in the Absence of CT Imaging

Ayushi Sinha; Xingtong Liu; Austin Reiter; Masaru Ishii; Gregory D. Hager; Russell H. Taylor

Clinical examinations that involve endoscopic exploration of the nasal cavity and sinuses often do not have a reference image to provide structural context to the clinician. In this paper, we present a system for navigation during clinical endoscopic exploration in the absence of computed tomography (CT) scans by making use of shape statistics from past CT scans. Using a deformable registration algorithm along with dense reconstructions from video, we show that we are able to achieve submillimeter registrations on in-vivo clinical data and to assign confidence to these registrations using criteria established on simulated data.


Applied Imagery Pattern Recognition Workshop (AIPR) | 2015

Autonomous on-board Near Earth Object detection

Purnima Rajan; Philippe Burlina; Min Chen; D. Edell; Bruno M. Jedynak; N. Mehta; Ayushi Sinha; Gregory D. Hager

Most large-asteroid discovery to date has been accomplished by Earth-based telescopes. It is speculated that most of the smaller Near Earth Objects (NEOs), those less than 100 meters in diameter, whose impact can cause substantial city-scale damage, have not yet been discovered. Many asteroids cannot be detected with an Earth-based telescope given their size and/or their location with respect to the Sun. We are investigating the feasibility of deploying asteroid detection algorithms on board a spacecraft, thereby minimizing the expense of and need for downlinking large collections of images. Having autonomous on-board image analysis algorithms enables the deployment of a spacecraft in approximately 0.7 AU heliocentric or Earth-Sun L1/L2 halo orbits, removing some of the challenges associated with detecting asteroids with Earth-based telescopes. We describe an image analysis pipeline developed and targeted for on-board asteroid detection and show that its performance is consistent with deployment on flight-qualified hardware.
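
A very small example of the kind of detection step such a pipeline automates is background subtraction over a co-registered image stack followed by thresholding of bright residuals. The sketch below is a generic illustration on synthetic data, not the algorithm described in the paper.

```python
# Toy moving-point detector over a short stack of co-registered
# star-field images: estimate the static background with a median,
# subtract it, and flag residuals above a k-sigma threshold.
# Generic sketch only; not the paper's on-board pipeline.
import numpy as np

def detect_moving_points(frames, k_sigma=5.0):
    """frames: (t, h, w) stack of co-registered images."""
    background = np.median(frames, axis=0)    # static stars remain
    detections = []
    for frame in frames:
        residual = frame - background
        thresh = residual.mean() + k_sigma * residual.std()
        ys, xs = np.nonzero(residual > thresh)
        detections.append(list(zip(xs.tolist(), ys.tolist())))
    return detections

# Synthetic test: a bright point moving across otherwise static noise.
rng = np.random.default_rng(0)
frames = rng.normal(0.0, 1.0, size=(5, 64, 64))
for t in range(5):
    frames[t, 32, 10 + 8 * t] += 20.0         # moving "asteroid"
dets = detect_moving_points(frames)
```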


IEEE Transactions on Medical Imaging | 2018

Evaluation and Stability Analysis of Video-Based Navigation System for Functional Endoscopic Sinus Surgery on In Vivo Clinical Data

Simon Leonard; Ayushi Sinha; Austin Reiter; Masaru Ishii; Gary L. Gallia; Russell H. Taylor; Gregory D. Hager


arXiv: Computer Vision and Pattern Recognition | 2014

Automatic Annotation of Axoplasmic Reticula in Pursuit of Connectomes using High-Resolution Neural EM Data

Ayushi Sinha; William Gray Roncal; Narayanan Kasthuri; Jeff W. Lichtman; Randal C. Burns; Michael M. Kazhdan


arXiv: Computer Vision and Pattern Recognition | 2018

Self-supervised Learning for Dense Depth Estimation in Monocular Endoscopy

Xingtong Liu; Ayushi Sinha; Mathias Unberath; Masaru Ishii; Gregory D. Hager; Russell H. Taylor; Austin Reiter

Collaboration


Dive into Ayushi Sinha's collaborations.

Top Co-Authors

Austin Reiter, Johns Hopkins University
Masaru Ishii, Johns Hopkins University
Simon Leonard, Johns Hopkins University
Seth Billings, Johns Hopkins University
Xingtong Liu, Johns Hopkins University