
Publication


Featured research published by Ruida Cheng.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

Evan S. McCreedy; Ruida Cheng; Paul F. Hemler; Anand Viswanathan; Bradford J. Wood; Matthew J. McAuliffe

The radio frequency ablation segmentation tool (RFAST) is a software application developed using the National Institutes of Health's medical image processing analysis and visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize, and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented three-dimensional (3-D) surface models enables the physician to interactively position the ablation probe to simulate burns and to semimanually simulate sphere packing in an attempt to optimize probe placement. This paper describes software systems contained in RFAST to address the needs of clinicians in planning, evaluating, and simulating RFA treatments of malignant hepatic tissue.


Proceedings of SPIE | 2016

Active appearance model and deep learning for more accurate prostate segmentation on MRI

Ruida Cheng; Holger R. Roth; Le Lu; Shijun Wang; Baris Turkbey; William Gandler; Evan S. McCreedy; Harsh K. Agarwal; Peter L. Choyke; Ronald M. Summers; Matthew J. McAuliffe

Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and the lack of a clear prostate boundary, specifically at the apex and base levels. We propose a supervised machine learning model that combines an atlas-based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections.
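The Dice Similarity Coefficient reported above measures the overlap between a predicted mask and a reference mask: twice the intersection divided by the total foreground. A minimal sketch (not the paper's code; the toy 2D masks stand in for prostate segmentations):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy masks: two overlapping 4x4 squares on an 8x8 grid
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
print(dice_coefficient(pred, truth))  # 2*9 / (16+16) = 0.5625
```

A DSC of 1.0 means perfect overlap; the 0.925 reported above is averaged over the 20 test volumes.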


International Conference of the IEEE Engineering in Medicine and Biology Society | 2014

Atlas based AAM and SVM model for fully automatic MRI prostate segmentation

Ruida Cheng; Baris Turkbey; William Gandler; Harsh K. Agarwal; Vijay P. Shah; Alexandra Bokinsky; Evan S. McCreedy; Shijun Wang; Sandeep Sankineni; Marcelino Bernardo; Thomas J. Pohida; Peter L. Choyke; Matthew J. McAuliffe

Automatic prostate segmentation in MR images is a challenging task due to inter-patient prostate shape and texture variability and the lack of a clear prostate boundary. We propose a supervised learning framework that combines an atlas-based AAM with an SVM model to achieve accurate segmentation of the prostate boundary. The performance of the segmentation is evaluated with cross-validation on 40 MR image datasets, yielding an average segmentation accuracy near 90%.
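The cross-validation protocol used for evaluation repeatedly holds out a subset of the datasets for testing while training on the rest. A minimal index-splitting sketch (illustrative only, not the authors' pipeline; the 5-fold split over 40 datasets is an assumption for the example):

```python
def kfold_indices(n, k):
    """Partition n sample indices into k folds; yield (train, test) index lists."""
    folds = [list(range(i, n, k)) for i in range(k)]
    return [(sorted(set(range(n)) - set(f)), f) for f in folds]

splits = kfold_indices(40, 5)
print([len(test) for _, test in splits])  # -> [8, 8, 8, 8, 8]
```

Each dataset appears in exactly one held-out fold, so every case is tested once against a model that never saw it during training.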


arXiv: Computer Vision and Pattern Recognition | 2016

Gaze2Segment: A Pilot Study for Integrating Eye-Tracking Technology into Medical Image Segmentation

Naji Khosravan; Haydar Celik; Baris Turkbey; Ruida Cheng; Evan S. McCreedy; Matthew J. McAuliffe; Sandra Bednarova; Elizabeth Jones; Xinjian Chen; Peter L. Choyke; Bradford J. Wood; Ulas Bagci

In this study, we developed a novel system, called Gaze2Segment, integrating biological and computer vision techniques to support radiologists' reading experience with an automatic image segmentation task. During diagnostic assessment of lung CT scans, the radiologists' gaze information was used to create a visual attention map. Next, this map was combined with a computer-derived saliency map extracted from the gray-scale CT images. The visual attention map was used as an input for roughly indicating the location of a region of interest. With the computer-derived saliency information, on the other hand, we aimed at finding foreground and background cues for the object of interest found in the previous step. These cues are used to initiate a seed-based delineation process. The proposed Gaze2Segment achieved a Dice similarity coefficient of 86% and a Hausdorff distance of 1.45 mm. To the best of our knowledge, Gaze2Segment is the first true integration of eye-tracking technology into a medical image segmentation task without the need for any further user interaction.
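The Hausdorff distance quoted above as a boundary-accuracy metric is the largest of all nearest-neighbor distances between two contours, so a single badly placed boundary point dominates it. A small illustrative implementation over point sets (an assumed representation, not the authors' code):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between 2-D point sets of shape (N, 2) and (M, 2)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Pairwise Euclidean distances via broadcasting: d[i, j] = |a_i - b_j|
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy boundaries: the farthest mismatched point (0, 2) sets the distance
print(hausdorff_distance([(0, 0), (1, 0)], [(0, 0), (0, 2)]))  # -> 2.0
```

Unlike the Dice coefficient, which averages overlap over the whole region, this metric reports the worst-case boundary error in physical units (mm, given the scan's voxel spacing).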


Journal of Medical Imaging | 2017

Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks

Ruida Cheng; Holger R. Roth; Nathan Lay; Le Lu; Baris Turkbey; William Gandler; Evan S. McCreedy; Thomas J. Pohida; Peter A. Pinto; Peter L. Choyke; Matthew J. McAuliffe; Ronald M. Summers

Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and similar signal intensity of tissues around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that aims to refine the prostate contour from an initialization. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves (mean ± standard deviation) a Dice similarity coefficient (DSC) of 89.77% ± 3.29% and a Jaccard similarity coefficient (IoU) of 81.59% ± 5.18%, computed without trimming any end slices. The proposed holistic model significantly (p<0.001) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance as compared with other MRI prostate segmentation methods in the literature.
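For a single segmentation, the Dice and Jaccard coefficients reported here are algebraically linked: IoU = DSC / (2 - DSC). The identity holds per case, not for means over 250 patients, which is why the reported mean IoU (81.59%) differs slightly from what the mean DSC would imply. A quick check:

```python
def iou_from_dice(dsc):
    """Jaccard index implied per-case by a Dice coefficient: IoU = DSC / (2 - DSC)."""
    return dsc / (2.0 - dsc)

# The reported mean DSC of 0.8977 implies an IoU near the reported mean of 0.8159
print(round(iou_from_dice(0.8977), 4))
```

The derivation is one line: with intersection i and mask sizes p and t, DSC = 2i/(p+t) and IoU = i/(p+t-i), so substituting p+t = 2i/DSC gives IoU = DSC/(2-DSC).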


Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling | 2008

Java based volume rendering frameworks

Ruida Cheng; Alexandra Bokinsky; Paul F. Hemler; Evan S. McCreedy; Matthew J. McAuliffe

In recent years, the number and utility of 3-D rendering frameworks has grown substantially. A quantitative and qualitative evaluation of the capabilities of a subset of these systems is important to determine the applicability of these methods to typical medical visualization tasks. The libraries evaluated in this paper include the Java3D Application Programming Interface (API), Java OpenGL (Jogl) API, a multi-histogram software-based rendering method, and the WildMagic API. Volume renderer implementations using each of these frameworks were developed using the platform-independent Java programming language. Quantitative performance measurements (frames per second, memory usage) were used to evaluate the strengths and weaknesses of each implementation.
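Frames-per-second throughput of the kind compared above can be estimated with a timed render loop. A language-agnostic sketch in Python (the `render_frame` callback is hypothetical, standing in for a renderer's draw call; the sleep merely simulates per-frame work):

```python
import time

def measure_fps(render_frame, duration=2.0):
    """Estimate frames per second by calling render_frame in a timed loop."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        render_frame()
        frames += 1
    return frames / (time.perf_counter() - start)

# Simulate a draw call taking roughly 10 ms, so the estimate lands near 100 FPS
fps = measure_fps(lambda: time.sleep(0.01), duration=0.3)
print(round(fps))
```

Averaging over a fixed wall-clock window, rather than timing a single frame, smooths out per-frame jitter from the garbage collector and driver.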


Proceedings of SPIE | 2017

Automatic MR prostate segmentation by deep learning with holistically-nested networks

Ruida Cheng; Holger R. Roth; Nathan Lay; Le Lu; Baris Turkbey; William Gandler; Evan S. McCreedy; Peter L. Choyke; Ronald M. Summers; Matthew J. McAuliffe

Accurate automatic prostate magnetic resonance image (MRI) segmentation is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise and similar signal intensity tissues around the prostate boundary inhibit traditional segmentation methods from achieving high accuracy. The proposed method performs end-to-end segmentation by integrating holistically nested edge detection with fully convolutional neural networks. Holistically-nested networks (HNN) automatically learn the hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 247 patients in 5-fold cross-validation. We achieve a mean Dice Similarity Coefficient of 88.70% and a mean Jaccard Similarity Coefficient of 80.29% without trimming any erroneous contours at apex and base.


Proceedings of SPIE | 2013

2D registration guided models for semi-automatic MRI prostate segmentation

Ruida Cheng; Baris Turkbey; Justin Senseney; Marcelino Bernardo; Alexandra Bokinsky; William Gandler; Evan S. McCreedy; Thomas J. Pohida; Peter L. Choyke; Matthew J. McAuliffe

Accurate segmentation of prostate magnetic resonance images (MRI) is a challenging task due to the variable anatomical structure of the prostate. In this work, two semi-automatic techniques for segmentation of T2-weighted MRI images of the prostate are presented. Both models are based on 2D registration that changes shape to fit the prostate boundary between adjacent slices. The first model relies entirely on registration to segment the prostate. The second model applies Fuzzy-C means and morphology filters on top of the registration in order to refine the prostate boundary. Key to the success of the two models is the careful initialization of the prostate contours, which requires specifying three Volume of Interest (VOI) contours to each axial, sagittal and coronal image. Then, a fully automatic segmentation algorithm generates the final results with the three images. The algorithm performance is evaluated with 45 MR image datasets. VOI volume, 3D surface volume and VOI boundary masks are used to quantify the segmentation accuracy between the semi-automatic and expert manual segmentations. Both models achieve an average segmentation accuracy of 90%. The proposed registration guided segmentation model has been generalized to segment a wide range of T2-weighted MRI prostate images.


Computer-Based Medical Systems | 2012

A flexible Java GPU-enhanced visualization framework and its applications

Ruida Cheng; Justin Senseney; Nishith Pandya; Evan S. McCreedy; Matthew J. McAuliffe; Alexandra Bokinsky

A flexible biomedical visualization framework implemented with Java, OpenGL, and OpenCL performs efficient volume rendering with large, multi-modal datasets. The framework takes advantage of the parallel processing power of modern graphics hardware with novel OpenCL and GLSL shading language implementations. The Java and GPU environment provides portable, advanced biomedical image visualization applications. Several applications built on top of the GPU framework are also presented to show the extensibility of the framework. These include multi-surface rendering, stereoscopic rendering, image fusion, and diffusion tensor visualization.


Medical Image Computing and Computer-Assisted Intervention | 2018

A Decomposable Model for the Detection of Prostate Cancer in Multi-parametric MRI.

Nathan Lay; Yohannes Tsehay; Yohan Sumathipala; Ruida Cheng; Sonia Gaur; Clayton P. Smith; Adrian Barbu; Le Lu; Baris Turkbey; Peter L. Choyke; Peter A. Pinto; Ronald M. Summers

Institutions that specialize in prostate MRI acquire different MR sequences owing to variability in scanning procedure and scanner hardware. We propose a novel prostate cancer detector that can operate in the absence of MR imaging sequences. Our novel prostate cancer detector first trains a forest of random ferns on all MR sequences and then decomposes these random ferns into a sum of MR sequence-specific random ferns enabling predictions to be made in the absence of one or more of these MR sequences. To accomplish this, we first show that a sum of random ferns can be exactly represented by another random fern and then we propose a method to approximately decompose an arbitrary random fern into a sum of random ferns. We show that our decomposed detector can maintain good performance when some MR sequences are omitted.
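The key identity the paper relies on, that a sum of random ferns is itself a random fern, is easiest to see in the simplest case, where the ferns share the same binary feature tests: the leaf index is then identical for both ferns, so their lookup tables simply add elementwise. A toy illustration (hypothetical threshold features and random tables, not the paper's detector):

```python
import numpy as np

def fern_predict(table, features, x):
    """A random fern: binary feature tests build a leaf index into a score table."""
    idx = 0
    for f in features:
        idx = (idx << 1) | int(f(x))  # one bit per binary test
    return table[idx]

# Two ferns sharing the same (hypothetical) threshold tests on a 2-D input
features = [lambda x: x[0] > 0.5, lambda x: x[1] > 0.5]
rng = np.random.default_rng(0)
t1, t2 = rng.normal(size=4), rng.normal(size=4)

x = (0.7, 0.2)
summed = fern_predict(t1, features, x) + fern_predict(t2, features, x)
merged = fern_predict(t1 + t2, features, x)  # one fern with the summed table
print(np.isclose(summed, merged))
```

With different feature sets the same argument runs on the union of the features, which is what makes the decomposition into MR-sequence-specific ferns in the paper possible.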

Collaboration


Dive into Ruida Cheng's collaborations.

Top Co-Authors

Matthew J. McAuliffe (National Institutes of Health)
Evan S. McCreedy (National Institutes of Health)
Baris Turkbey (Science Applications International Corporation)
William Gandler (National Institutes of Health)
Peter L. Choyke (National Institutes of Health)
Alexandra Bokinsky (Center for Information Technology)
Bradford J. Wood (National Institutes of Health)
Le Lu (National Institutes of Health)
Ronald M. Summers (National Institutes of Health)