Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shunxing Bao is active.

Publication


Featured research published by Shunxing Bao.


Proceedings of SPIE | 2017

Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

Shunxing Bao; Frederick D. Weitendorf; Andrew J. Plassard; Yuankai Huo; Aniruddha S. Gokhale; Bennett A. Landman

The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., with “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a secure, shared university web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will and will not be relevant for medical imaging.
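The paper's actual wall-clock and resource-time models are not reproduced in the abstract. As an illustration of the breakeven reasoning it describes, here is a minimal sketch; all function names, parameters, and numbers are illustrative assumptions, not the paper's models.

```python
# Toy wall-clock comparison: an NFS/SGE cluster where every job pays a
# data-transfer cost over one shared link, versus a Hadoop-style cluster
# where storage is co-located with computation. Illustrative only.

def sge_wall_clock(n_jobs, cores, compute_s, data_mb, link_mbps):
    """Jobs run in parallel waves, but all data moves over a shared link."""
    waves = -(-n_jobs // cores)                    # ceiling division
    transfer_s = n_jobs * data_mb * 8 / link_mbps  # transfers serialize
    return waves * compute_s + transfer_s

def hadoop_wall_clock(n_jobs, cores, compute_s, overhead_s):
    """Data is local to each node; pay only per-wave scheduling overhead."""
    waves = -(-n_jobs // cores)
    return waves * (compute_s + overhead_s)

# 1000 short (10 s) jobs over 50 MB each on 100 cores: the shared link
# dominates SGE's wall clock, while co-location stays compute-bound.
t_sge = sge_wall_clock(1000, 100, 10, 50, 1000)
t_hadoop = hadoop_wall_clock(1000, 100, 10, 5)
```

Lengthening `compute_s` relative to the transfer term shrinks the gap, which mirrors the abstract's point that co-location matters most for short jobs on large datasets.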


arXiv: Computer Vision and Pattern Recognition | 2018

Splenomegaly segmentation using global convolutional kernels and conditional generative adversarial networks.

Yuankai Huo; Zhoubing Xu; Shunxing Bao; Camilo Bermudez; Andrew J. Plassard; Jiaqi Liu; Yuang Yao; Albert Assad; Richard G. Abramson; Bennett A. Landman

Spleen volume estimation using an automated image segmentation technique may be used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both size and shape of the spleen on MRI images may result in large false positive and false negative labeling when deploying DCNN based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1 weighted and T2 weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.


IEEE International Conference on Cloud Engineering | 2017

Algorithmic Enhancements to Big Data Computing Frameworks for Medical Image Processing

Shunxing Bao; Bennett A. Landman; Aniruddha S. Gokhale

Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance of laboratory-based approaches reveals that they are impeded by standard network switches, which can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. On the other hand, the grid may be costly to use due to the dedicated resources used to execute the tasks and the lack of elasticity. With the increasing availability of cloud-based Big Data frameworks, such as Apache Hadoop, cloud-based services for executing medical imaging studies have shown promise. Despite this promise, our preliminary studies have revealed that existing Big Data frameworks exhibit different performance limitations for medical imaging applications, which calls for new algorithms that optimize their performance and suitability for medical imaging. For instance, Apache HBase's load-distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). To address these challenges, this doctoral research is developing a range of performance optimization algorithms. This paper describes preliminary research we have conducted in this realm and presents a list of research tasks that will be undertaken as part of this doctoral research.
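To make the HBase objection concrete, the sketch below shows how a generic lexicographic region split can land mid-subject and scatter a hierarchical imaging archive. The row-key layout and split points are illustrative assumptions; they follow the project/subject/session/scan/slice hierarchy the abstract mentions, not HBase's actual internals.

```python
# Toy model of HBase-style regions: keys are sorted strings, and a region
# split point is just another string that partitions the key space.

def row_key(project, subject, session, scan, slice_idx):
    # Zero-padding keeps lexicographic order consistent with numeric order.
    return f"{project}/{subject:04d}/{session:02d}/{scan:02d}/{slice_idx:03d}"

def region_of(key, split_points):
    """Assign a key to the first region whose split point exceeds it."""
    for i, split in enumerate(split_points):
        if key < split:
            return i
    return len(split_points)

# A size-based split that lands in the middle of subject 2 puts that
# subject's slices into two regions, defeating data locality for any
# subject-level processing job.
splits = [row_key("projA", 2, 1, 1, 50)]
keys = [row_key("projA", 2, 1, 1, s) for s in (10, 90)]
regions = [region_of(k, splits) for k in keys]   # same subject, two regions
```

A hierarchy-aware splitting policy would instead place split points only at subject (or project) boundaries, which is the kind of algorithmic enhancement the paper argues for.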


arXiv: Computer Vision and Pattern Recognition | 2018

Improved stability of whole brain surface parcellation with multi-atlas segmentation.

Yuankai Huo; Shunxing Bao; Prasanna Parvathaneni; Bennett A. Landman

Whole brain segmentation and cortical surface parcellation are essential in understanding the brain’s anatomical-functional relationships. Multi-atlas segmentation has been regarded as one of the leading segmentation methods for whole brain segmentation. In our recent work, the multi-atlas technique has been adapted to surface reconstruction using a method called Multi-atlas CRUISE (MaCRUISE). The MaCRUISE method not only performed consistent volume-surface analyses but also showed advantages in robustness compared with the FreeSurfer method. However, a detailed surface parcellation was not provided by MaCRUISE, which hindered region of interest (ROI) based analyses on surfaces. Herein, the MaCRUISE surface parcellation (MaCRUISEsp) method is proposed to perform the surface parcellation upon the inner, central and outer surfaces that are reconstructed from MaCRUISE. MaCRUISEsp parcellates the inner, central and outer surfaces with 98 cortical labels respectively using a volume segmentation based surface parcellation (VSBSP), following a topological correction step. To validate the performance of MaCRUISEsp, 21 scan-rescan magnetic resonance imaging (MRI) T1 volume pairs from the Kirby21 dataset were used to perform a reproducibility analysis. MaCRUISEsp achieved a median Dice Similarity Coefficient (DSC) of 0.948 for central surfaces. Meanwhile, FreeSurfer achieved 0.905 DSC for inner surfaces and 0.881 DSC for outer surfaces, while the proposed method achieved 0.929 DSC for inner surfaces and 0.835 DSC for outer surfaces. Qualitatively, the results are encouraging, but are not directly comparable as the two approaches use different definitions of cortical labels.
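The Dice Similarity Coefficient (DSC) quoted throughout these abstracts measures overlap between two labelings. A minimal sketch on toy label sets (the sets and values are illustrative, not data from the paper):

```python
# DSC = 2|A ∩ B| / (|A| + |B|) for two sets of labeled elements
# (e.g., voxels or surface vertices assigned to the same ROI).

def dice(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0          # two empty labelings agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

scan = {1, 2, 3, 4}         # vertices in one ROI on the first scan
rescan = {2, 3, 4, 5}       # the same ROI on the rescan
score = dice(scan, rescan)  # 2*3 / (4+4) = 0.75
```

A DSC of 1.0 means identical labelings; the reproducibility numbers in the abstract (e.g., 0.948 for central surfaces) are this quantity computed per scan-rescan pair.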


Medical Imaging 2018: Image Processing | 2018

Fully convolutional neural networks improve abdominal organ segmentation.

Meg F. Bobo; Shunxing Bao; Yuankai Huo; Yuang Yao; John Virostko; Andrew J. Plassard; Ilwoo Lyu; Albert Assad; Richard G. Abramson; Melissa A. Hilmes; Bennett A. Landman

Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2 weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with the FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with the FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given the relatively limited training data and the absence of modality-specific training of the FCNN model, and illustrate the potential of deep learning approaches to transcend imaging modalities.
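The significance claim rests on the Wilcoxon rank-sum test. In practice `scipy.stats.ranksums` is the usual tool; below is a minimal stdlib-only sketch using the normal approximation, which assumes no tied values. The sample DSC lists are made up for illustration, not the paper's data.

```python
import math

def rank_sum_p(x, y):
    """Two-sided p-value for the rank-sum of sample x vs y (no ties)."""
    n, m = len(x), len(y)
    ranks = {v: i + 1 for i, v in enumerate(sorted(x + y))}
    w = sum(ranks[v] for v in x)                 # rank sum of sample x
    mu = n * (n + m + 1) / 2                     # mean of w under H0
    sigma = math.sqrt(n * m * (n + m + 1) / 12)  # std dev of w under H0
    z = (w - mu) / sigma
    # two-sided tail of the standard normal via erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical per-subject DSC values: one method clearly dominates.
fcnn = [0.91, 0.93, 0.90, 0.94, 0.92]
atlas = [0.80, 0.82, 0.78, 0.85, 0.79]
p = rank_sum_p(fcnn, atlas)   # well below 0.05
```

Because the test uses only ranks, it needs no normality assumption on the DSC values, which is why it is a common choice for comparing segmentation methods on small test sets.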


Computer Software and Applications Conference | 2016

Reasoning for CPS Education Using Surrogate Simulation Models

Shunxing Bao; Joseph Porter; Aniruddha S. Gokhale

To develop an affordable, easily accessible, and scalable online laboratory that promotes CPS education, a number of cyber-physical challenges must be addressed, including model design and simulation strategies. The authors present a complete process for simulating the behavior of a user-designed CPS conveyor system: the user's model is sent to the backend, processed offline, and the extracted simulation result is returned to the user as an animation. The solution approach has two main parts. On the modeling side, a complex domain-specific conveyor design is defined in the Generic Modeling Environment (GME) and then mapped and transformed onto a global grid, a second domain-specific model that contains only one kind of node replicated at large scale, so that every species of component in the complex model maps to a uniform grid node. This makes the grid easy to operate and simulate when multiple experiments are mapped onto it, although this work considers only the single-experiment scenario. The transformation and mapping process is implemented through graph rewriting and transformation. On the simulation side, Robocode code is automatically generated from the global grid by a GME interpreter and produces the path logic for transmitting packages according to the package type at each input port. Once the transmission speed and path are known, the Robocode simulation outputs coordinate and timing information used to generate a Java animation, which is returned to the user to show the package transmission flow.


Medical Image Computing and Computer-Assisted Intervention | 2018

Spatially Localized Atlas Network Tiles Enables 3D Whole Brain Segmentation from Limited Data.

Yuankai Huo; Zhoubing Xu; Katherine Aboud; Prasanna Parvathaneni; Shunxing Bao; Camilo Bermudez; Susan M. Resnick; Laurie E. Cutting; Bennett A. Landman

Whole brain segmentation on structural magnetic resonance imaging (MRI) is essential for non-invasive investigation of neuroanatomy. Historically, multi-atlas segmentation (MAS) has been regarded as the de facto standard method for whole brain segmentation. Recently, deep neural network approaches have been applied to whole brain segmentation by learning random patches or 2D slices. Yet, few previous efforts have addressed detailed whole brain segmentation using 3D networks, due to the following challenges: (1) fitting an entire whole brain volume into a 3D network is restricted by current GPU memory, and (2) the large number of target labels (e.g., >100 labels) must be learned from a limited number of training 3D volumes. To address these challenges, this work proposes the spatially localized atlas network tiles (SLANT) method, in which multiple independent 3D networks each cover a spatially localized sub-space of the whole brain. The segmentation time is reduced from >30 hours using MAS to ~15 minutes using the proposed method. The source code is available online: this https URL
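The core workaround for the GPU-memory challenge is covering a large volume with overlapping sub-volumes ("tiles"), each small enough for one network. A minimal sketch of that tiling arithmetic; the volume shape, tile size, and stride are illustrative assumptions, not the paper's configuration.

```python
# Compute start offsets so that fixed-size tiles cover a whole axis,
# then take the Cartesian product over three axes for a 3D volume.

def tile_starts(length, tile, stride):
    """Start offsets so tiles of size `tile` cover [0, length)."""
    starts = list(range(0, max(length - tile, 0) + 1, stride))
    if starts[-1] + tile < length:       # ensure the far end is covered
        starts.append(length - tile)
    return starts

def tiles_3d(shape, tile, stride):
    return [(x, y, z)
            for x in tile_starts(shape[0], tile, stride)
            for y in tile_starts(shape[1], tile, stride)
            for z in tile_starts(shape[2], tile, stride)]

# A 172-voxel axis with 96-voxel tiles at stride 76 needs 2 tiles per
# axis, i.e. 8 spatially localized tiles for the whole volume.
coords = tiles_3d((172, 172, 172), 96, 76)
```

Each tile's network only ever sees its own sub-space, so GPU memory scales with tile size rather than whole-brain size; overlapping regions are typically fused afterwards (e.g., by label fusion or averaging).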


Journal of Digital Imaging | 2018

Towards Portable Large-Scale Image Processing with High-Performance Computing

Yuankai Huo; Justin A. Blaber; Stephen M. Damon; Brian D. Boyd; Shunxing Bao; Prasanna Parvathaneni; Camilo Bermudez Noguera; Shikha Chaganti; Vishwesh Nath; Jasmine M. Greer; Ilwoo Lyu; William R. French; Allen T. Newton; Baxter P. Rogers; Bennett A. Landman

High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called “spiders.” The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was installed natively (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments.
The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.


Proceedings of SPIE | 2016

Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services

Shunxing Bao; Stephen M. Damon; Bennett A. Landman; Aniruddha S. Gokhale

Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provides reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant, allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the number of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and amounts of memory. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances come with all necessary packages pre-installed to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
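The abstract does not reproduce its cost/benefit formulae; as an illustration of the kind of decision they support, here is a toy comparison between serial local execution and parallel cloud execution. All prices, rates, and function names are illustrative assumptions, not the paper's formulae or actual AWS pricing.

```python
# Toy cost/benefit model: local workstation runs jobs serially for free;
# cloud nodes run them in parallel waves but cost money per node-hour.

def local_hours(n_jobs, hours_per_job):
    return n_jobs * hours_per_job                  # strictly serial

def cloud_hours(n_jobs, hours_per_job, n_nodes, startup_h=0.1):
    waves = -(-n_jobs // n_nodes)                  # ceiling division
    return startup_h + waves * hours_per_job       # parallel waves

def cloud_cost(hours, n_nodes, usd_per_node_hour=0.10):
    return hours * n_nodes * usd_per_node_hour

# 200 one-hour jobs on 10 rented nodes: roughly a 10x wall-clock win
# for a modest rental cost.
t_local = local_hours(200, 1.0)                    # 200 h
t_cloud = cloud_hours(200, 1.0, 10)                # 0.1 + 20 = 20.1 h
dollars = cloud_cost(t_cloud, 10)
```

The interesting regime is where the curves cross: for small studies the startup overhead and rental cost outweigh the speedup, which is exactly the trade-off the paper's formulae are meant to quantify for a real diffusion tensor imaging pipeline.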


International Symposium on Biomedical Imaging | 2018

Adversarial synthesis learning enables segmentation without target modality ground truth

Yuankai Huo; Zhoubing Xu; Shunxing Bao; Albert Assad; Richard G. Abramson; Bennett A. Landman

Collaboration


Dive into Shunxing Bao's collaboration.

Top Co-Authors


Ilwoo Lyu

Vanderbilt University
