Ramkrishnan Narayanan
University of Michigan
Publication
Featured research published by Ramkrishnan Narayanan.
International Symposium on Biomedical Imaging | 2009
Ramkrishnan Narayanan; John Kurhanewicz; Katsuto Shinohara; E. D. Crawford; Anne R. Simoneau; Jasjit S. Suri
T2-weighted magnetic resonance imaging (MRI) using an endorectal coil combined with a pelvic phased-array coil has been shown to provide high-resolution images of the prostate. To integrate MRI analysis into standard prostate biopsy procedures, preoperative MRI must be accurately registered to 3-D transrectal ultrasound (TRUS) images. Shape changes due to patient motion or drugs can induce further differences in glandular shape between preoperative MRI and 3-D TRUS during biopsy. In the proposed work, we model the deformation relating MRI and TRUS so as to enable analysis of MRI in conjunction with ultrasound (color blended or side-by-side) for planning of biopsy targets. Registration of MRI volumes, acquired at various resolutions and endorectal balloon volumes, to ultrasound volumes yielded an average fiducial registration error of 3.06 mm using 6- and 12-bead phantoms.
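A minimal sketch of the fiducial registration error quoted above, assuming hypothetical arrays of corresponding bead centroids located in the MRI and TRUS volumes; a least-squares rigid (Kabsch) fit stands in for whatever alignment the registration pipeline produces:

```python
# Sketch: fiducial registration error (FRE) from corresponding bead locations.
# mri_beads and trus_beads are hypothetical (N, 3) arrays of bead centroids in mm.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def fiducial_registration_error(mri_beads, trus_beads):
    R, t = rigid_fit(mri_beads, trus_beads)
    residuals = trus_beads - (mri_beads @ R.T + t)
    return np.linalg.norm(residuals, axis=1).mean()   # mean residual in mm
```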
Information Processing in Medical Imaging | 2005
Ramkrishnan Narayanan; Jeffrey A. Fessler; Hyunjin Park; Charles R. Meyer
Many types of transformations are used to model deformations in medical image registration. While some focus on modeling local changes and others on continuity and invertibility, there is no closed-form nonlinear parametric approach that addresses all of these properties. This paper presents a class of nonlinear transformations that are local, continuous and invertible under certain conditions. They are straightforward to implement, fast to compute and can be used particularly in cases where locally affine deformations need to be recovered. We use our new transformation model to demonstrate results on synthetic images using a multi-scale approach to multi-modality mutual information based image registration. The original images were deformed using B-splines at three levels of scale. The results show that the proposed method can recover these deformations almost completely with very few iterations of a gradient-based optimizer.
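For orientation, a sketch of the mutual information similarity measure used in the registration experiments (not the paper's transformation model itself), estimated from a joint intensity histogram; the image arrays and bin count are placeholders:

```python
# Sketch: mutual information between a fixed and a moving image via a joint histogram.
import numpy as np

def mutual_information(fixed, moving, bins=64):
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)       # marginal over fixed intensities
    p_y = p_xy.sum(axis=0, keepdims=True)       # marginal over moving intensities
    nz = p_xy > 0                               # avoid log(0)
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))
```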
Journal of Ultrasound in Medicine | 2008
Feimo Shen; Katsuto Shinohara; Dinesh Kumar; Animesh Khemka; Anne R. Simoneau; Priya N. Werahera; Lu Li; Yujun Guo; Ramkrishnan Narayanan; Liyang Wei; Al Barqawi; E. David Crawford; Christos Davatzikos; Jasjit S. Suri
Objective. Image‐guided prostate biopsy has become routine in medical diagnosis. Although it improves biopsy outcome, it mostly operates in 2 dimensions, therefore lacking presentation of information in the complete 3‐dimensional (3D) space. Because prostatic carcinomas are nonuniformly distributed within the prostate gland, it is crucial to accurately guide the needles toward clinically important locations within the 3D volume for both diagnosis and treatment. Methods. We reviewed the uses of 3D image‐guided needle procedures in prostate cancer diagnosis and cancer therapy as well as their advantages, work flow, and future directions. Results. Guided procedures for the prostate rely on accurate 3D target identification and needle navigation. This 3D approach has potential for better disease diagnosis and therapy. Additionally, when fusing together different imaging modalities and cancer probability maps obtained from a population of interest, physicians can potentially place biopsy needles and other interventional devices more accurately and efficiently by better targeting regions that are likely to host cancerous tissue. Conclusions. With the information from anatomic, metabolic, functional, biochemical, and biomechanical statuses of different regions of the entire gland, prostate cancers will be better diagnosed and treated with improved work flow.
Journal of Ultrasound in Medicine | 2009
Yujun Guo; Priya N. Werahera; Ramkrishnan Narayanan; Lu Li; Dinesh Kumar; E. David Crawford; Jasjit S. Suri
Objective. For a follow‐up prostate biopsy procedure, it is useful to know the previous biopsy locations in anatomic relation to the current transrectal ultrasound (TRUS) scan. The goal of this study was to validate the performance of a 3‐dimensional TRUS‐guided prostate biopsy system that can accurately relocate previous biopsy sites. Methods. To correlate biopsy locations across a sequence of visits by a patient, the prostate surface data obtained from a previous visit need to be registered to those from the follow‐up visits. Two interpolation methods, thin‐plate spline (TPS) and elastic warping (EW), were tested for registration of the TRUS prostate image to follow‐up scans. We validated our biopsy system using a custom‐built phantom. Beads were embedded inside the phantom and were located in each TRUS scan. We recorded the locations of the beads before and after pressures were applied to the phantom and then compared them with computer‐estimated positions to measure performance. Results. In our experiments, before system processing, the mean target registration error (TRE) ± SD was 6.4 ± 4.5 mm (range, 3–13 mm). After registration and TPS interpolation, the TRE was 5.0 ± 1.03 mm (range, 2–8 mm). After registration and EW interpolation, the TRE was 2.7 ± 0.99 mm (range, 1–4 mm). Elastic warping was significantly better than the TPS in most cases (P < .0011). For clinical applications, EW can be implemented on a graphics processing unit with an execution time of less than 2.5 seconds. Conclusions. Elastic warping interpolation yields more accurate results than the TPS for registration of TRUS prostate images. Experimental results indicate potential for clinical application of this method.
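A minimal sketch of the TPS interpolation step and the per-target TRE, assuming hypothetical arrays of surface correspondences and bead locations; SciPy's RBFInterpolator with a thin-plate-spline kernel stands in for the TPS used in the study:

```python
# Sketch: interpolate surface displacements to interior targets with a TPS and
# compute the target registration error (TRE) against observed bead positions.
import numpy as np
from scipy.interpolate import RBFInterpolator

def tre_after_tps(surf_before, surf_after, targets_before, targets_after):
    """All inputs are hypothetical (N, 3) point arrays in mm."""
    tps = RBFInterpolator(surf_before, surf_after - surf_before,
                          kernel='thin_plate_spline')
    predicted = targets_before + tps(targets_before)          # warp interior targets
    return np.linalg.norm(predicted - targets_after, axis=1)  # per-target TRE in mm
```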
Information Processing in Medical Imaging | 2007
Bing Ma; Ramkrishnan Narayanan; Hyunjin Park; Alfred O. Hero; Peyton H. Bland; Charles R. Meyer
Interest in registering a set of images has risen quickly in the field of medical image analysis. Mutual information (MI) based methods are well established for pairwise registration, but their extension to higher dimensions (multiple images) has encountered practical implementation difficulties. We extend the use of alpha mutual information (alphaMI) as the similarity measure to simultaneously register multiple images. AlphaMI of a set of images can be estimated directly using entropic graphs spanning feature vectors extracted from the images, which we demonstrate to be practically feasible for joint registration. In this paper we are specifically interested in monitoring malignant tumor changes using simultaneous registration of multiple interval MR or CT scans. Tumor scans are typically a decorrelating sequence due to cycles of heterogeneous cell death and growth. The accuracy of joint and pairwise registration using entropic graph methods is evaluated by registering several sets of interval exams. We show that, for the parameters we investigated, the simultaneous joint registration method yields lower average registration errors than pairwise registration. Different degrees of decorrelation in the serial scans are studied, and registration performance suggests that an appropriate scanning interval can be determined for efficiently monitoring lesion changes. Different levels of observation noise are added to the image sequences, and the experimental results show that entropic graph based methods are robust and can be used reliably for multiple image registration.
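As a rough illustration, a sketch of the classical minimal-spanning-tree estimator of Renyi alpha-entropy on which entropic-graph alphaMI estimation builds; the feature matrix is hypothetical and the bias constant (log_beta) is left as an input rather than derived:

```python
# Sketch: MST-based estimate of Renyi alpha-entropy over feature vectors.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def renyi_entropy_mst(features, alpha=0.5, log_beta=0.0):
    """features: hypothetical (n, d) array of feature vectors, 0 < alpha < 1."""
    n, d = features.shape
    gamma = d * (1.0 - alpha)
    dist = squareform(pdist(features))          # pairwise Euclidean distances
    mst = minimum_spanning_tree(dist)           # sparse matrix holding MST edges
    L = np.power(mst.data, gamma).sum()         # power-weighted MST length
    return (np.log(L / n**alpha) - log_beta) / (1.0 - alpha)
```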
International Conference of the IEEE Engineering in Medicine and Biology Society | 2008
Feimo Shen; Ramkrishnan Narayanan; Jasjit S. Suri
Image-guided procedures have become routine in medicine. Due to the three-dimensional (3-D) structure of the target organs, two-dimensional (2-D) image acquisition is gradually being replaced by 3-D imaging. Specifically, in the diagnosis of prostate cancer, biopsy can be performed under 3-D transrectal ultrasound (TRUS) image guidance. Because prostatic cancers are multifocal, it is crucial to accurately guide biopsy needles towards planned targets. Further, the gland tends to move due to external physical disturbances, discomfort introduced by the procedure, or intrinsic peristalsis. As a result, the exact position of the gland must be rapidly updated so that it corresponds with the originally acquired 3-D TRUS volume prior to biopsy planning. A graphics processing unit (GPU) is used in this study to compute rapid updates, performing 3-D motion compensation via registration of the live 2-D image and the acquired 3-D TRUS volume. The parallel computational framework on the GPU is exploited, resulting in a mean compute time of 0.46 seconds for updating the position of a live 2-D buffer image containing 91,000 pixels. A 2x sub-sampling resulted in a further improvement to 0.19 seconds. With the increase in GPU multiprocessors and sub-sampling, we observe that real-time motion compensation can be achieved.
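A sketch of the per-pixel work that such a GPU implementation parallelizes: resample the stored 3-D TRUS volume along a candidate pose of the live 2-D plane and score the match. The pose representation (voxel coordinates of the plane) and the normalized-cross-correlation score are illustrative assumptions, not the paper's exact metric:

```python
# Sketch: score one candidate pose of the live 2-D frame against the 3-D volume.
import numpy as np
from scipy.ndimage import map_coordinates

def score_pose(volume, live2d, plane_coords):
    """plane_coords: (3, H, W) voxel coordinates of the live plane in the volume."""
    resliced = map_coordinates(volume, plane_coords.reshape(3, -1), order=1)
    resliced = resliced.reshape(live2d.shape)
    a = resliced - resliced.mean()
    b = live2d - live2d.mean()
    # normalized cross-correlation; each pixel's sample/score term is independent,
    # which is what the GPU exploits
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```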
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Saradwata Sarkar; Ramkrishnan Narayanan; Hyunjin Park; Bing Ma; Peyton H. Bland; Charles R. Meyer
Standard clinical radiological techniques for determining lesion volume changes in interval exams are, as far as we know, quantitatively non-descriptive or approximate at best. We investigate two new registration based methods that help sketch an improved quantitative picture of lesion volume changes in hepatic interval CT exams. The first method, Jacobian Integration, employs a constrained Thin Plate Spline warp to compute the deformation of the lesion of interest over the intervals. The resulting jacobian map of the deformation is integrated to yield the net lesion volume change. The technique is fast, accurate and requires no segmentation, but is sensitive to misregistration. The second scheme uses a Weighted Gray Value Difference image of two registered interval exams to estimate the change in lesion volume. A linear weighting and trimming curve is used to accurately account for the contribution of partial voxels. This technique is insensitive to slight misregistration and useful in analyzing simple lesions with uniform contrast or lesions with insufficient mutual information to allow the computation of an accurate warp. The methods are tested on both synthetic and in vivo liver lesions and results are evaluated against estimates obtained through careful manual segmentation of the lesions. Our findings so far have given us reason to believe that the estimators are reliable. Further experiments on numerous in vivo lesions will probably establish the improved efficacy of these methods in supporting earlier detection of new disease or conversion from stable to progressive disease in comparison to existing clinical estimation techniques.
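A minimal sketch of the Jacobian Integration idea, assuming a hypothetical dense displacement field sampled on the voxel grid (in practice the constrained TPS warp would supply it): integrate det J − 1 of the warp over the lesion mask to obtain the net volume change.

```python
# Sketch: net lesion volume change by integrating the Jacobian determinant of a warp.
import numpy as np

def volume_change_ml(disp, lesion_mask, voxel_size_mm):
    """disp: (3, Z, Y, X) displacement in voxel units; lesion_mask: (Z, Y, X)."""
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):                       # J = I + du/dx for phi(x) = x + u(x)
        for j in range(3):
            J[..., i, j] = np.gradient(disp[i], axis=j) + (1.0 if i == j else 0.0)
    det = np.linalg.det(J)                   # local volume scaling per voxel
    voxel_ml = np.prod(voxel_size_mm) / 1000.0
    return float(((det - 1.0) * lesion_mask).sum() * voxel_ml)   # change in mL
```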
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Liyang Wei; Ramkrishnan Narayanan; Dinesh Kumar; Aaron Fenster; Albaha Barqawi; Priya N. Werahera; E. David Crawford; Jasjit S. Suri
Prostate volume is an indirect indicator for several prostate diseases. Volume estimation is a desired requirement during prostate biopsy, therapy and clinical follow-up, so image segmentation is necessary. Previously, the discrete dynamic contour (DDC) was implemented in an orthogonal unidirectional manner on a slice-by-slice basis for prostate boundary estimation. This approach suffered from the drawback that it required stopping criteria as the segmentation propagated from slice to slice. To overcome this, an axial DDC was implemented, but it suffered from the fact that the central axis never remains fixed and wobbles as the segmentation propagates from slice to slice, producing a multi-fold reconstructed surface. This paper presents a bidirectional DDC approach that removes both drawbacks. Our bidirectional DDC protocol was tested on a clinical dataset of 28 3-D ultrasound image volumes acquired using a side-fire Philips transrectal ultrasound probe. We demonstrate that the orthogonal bidirectional DDC strategy achieved the most accurate volume estimation compared with the previously published orthogonal unidirectional DDC and axial DDC methods. Compared to the ground truth, we show that the mean volume estimation errors were 18.48%, 9.21% and 7.82% for the unidirectional, axial and bidirectional DDC methods, respectively. The segmentation architecture is implemented in Visual C++ in a Windows environment.
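A small sketch of one way the volume estimate and the percent error quoted above can be computed, assuming hypothetical per-slice boundary contours produced by the DDC segmentation:

```python
# Sketch: prostate volume from per-slice contours and percent error vs. ground truth.
import numpy as np

def polygon_area(xy):
    """Shoelace area of a closed contour given as an (N, 2) array in mm."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def prostate_volume_ml(contours_mm, slice_thickness_mm):
    """contours_mm: list of (N_i, 2) boundary contours, one per slice."""
    return sum(polygon_area(c) for c in contours_mm) * slice_thickness_mm / 1000.0

def percent_error(estimated_ml, ground_truth_ml):
    return 100.0 * abs(estimated_ml - ground_truth_ml) / ground_truth_ml
```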
Journal of the Acoustical Society of America | 2013
Jasjit S. Suri; Ramkrishnan Narayanan
A system and method (i.e., utility) are disclosed for positioning a needle in three dimensions, based on patient-related statistics, for extracting tissue during biopsy procedures. Aspects of the utility can be applied independently or serve as an aid to the urologist when regions of interest are hard to discern in an ultrasound image. Regions of interest that correspond to statistically high cancer risk regions are automatically superimposed on an ultrasound image of a patient in real time. Additionally, a statistical map based on one or more demographic parameters of a patient and containing cancer probability locations is also automatically mapped onto the ultrasound image in real time, displaying potential cancer locations. Aspects of the system are also capable of displaying optimal needle placement positions based on statistical priors and can accurately navigate the needle to that position for biopsy extraction and/or treatment.
Proceedings of SPIE | 2012
Xin Li; Dinesh Kumar; Saradwata Sarkar; Ramkrishnan Narayanan
Reconstructed 3D ultrasound of the prostate gland finds application in several medical areas such as image-guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices from which a 3D TRUS (transrectal ultrasound) image is reconstructed. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, the axis of the probe does not align exactly with the designed axis of rotation, resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid-registration-based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at an angular separation of 180 degrees, and registers corresponding image pairs to compute the deviation from the designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registrations between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions and different misalignment settings on two ultrasound machines. Screenshots from the 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences in translation and rotation between the automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).
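A short sketch of the pooled standard deviation quoted for the automatic-versus-manual comparison; the per-scan parameter arrays are placeholders:

```python
# Sketch: pooled standard deviation across groups of repeated measurements.
import numpy as np

def pooled_std(groups):
    """groups: list of 1-D arrays of a parameter (e.g., translation per scan)."""
    num = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return float(np.sqrt(num / den))
```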