Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Shun Miao is active.

Publication


Featured research published by Shun Miao.


Medical Image Computing and Computer-Assisted Intervention | 2013

System and Method for 3-D/3-D Registration between Non-contrast-enhanced CBCT and Contrast-Enhanced CT for Abdominal Aortic Aneurysm Stenting

Shun Miao; Rui Liao; Marcus Pfister; Li Zhang; Vincent Ordy

In this paper, we present an image guidance system for abdominal aortic aneurysm stenting, which brings pre-operative 3-D computed tomography (CT) into the operating room by registering it against intra-operative non-contrast-enhanced cone-beam CT (CBCT). Registration between CT and CBCT volumes is a challenging task due to two factors: the relatively low signal-to-noise ratio of the abdominal aorta in CBCT without contrast enhancement, and the drastically different field of view between the two image modalities. The proposed automatic registration method handles the first issue through a fast quasi-global search utilizing surrogate 2-D images, and solves the second problem by relying on neighboring dominant structures of the abdominal aorta (i.e. the spine) for initial coarse alignment, and using a confined and image-processed volume of interest around the abdominal aorta for fine registration. The proposed method is validated offline using 17 clinical datasets, and achieves 1.48 mm target registration error and 100% success rate in 2.83 s. The prototype system has been installed in hospitals for clinical trial and applied in around 30 clinical cases, with 100% success rate reported qualitatively.
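The 1.48 mm figure above is a target registration error (TRE): the mean Euclidean distance between corresponding landmarks after registration. A minimal sketch of that metric, using hypothetical landmark coordinates rather than the paper's actual evaluation code:

```python
import numpy as np

def target_registration_error(registered_pts, reference_pts):
    """Mean Euclidean distance (mm) between corresponding 3-D landmarks."""
    diffs = np.asarray(registered_pts) - np.asarray(reference_pts)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Hypothetical landmarks on the abdominal aorta (mm).
moved = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
truth = np.array([[10.0, 21.0, 30.0], [15.0, 25.0, 36.0]])
print(target_registration_error(moved, truth))  # 1.0
```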


Medical Image Computing and Computer-Assisted Intervention | 2017

Robust non-rigid registration through agent-based action learning

Julian Krebs; Tommaso Mansi; Hervé Delingette; Li Zhang; Florin C. Ghesu; Shun Miao; Andreas K. Maier; Nicholas Ayache; Rui Liao; Ali Kamen

Robust image registration in medical imaging is essential for comparison or fusion of images acquired from various perspectives, modalities or at different times. Typically, an objective function needs to be minimized assuming specific a priori deformation models and predefined or learned similarity measures. However, these approaches have difficulty coping with large deformations or a large variability in appearance. Using modern deep learning (DL) methods with automated feature design, these limitations could be resolved by learning the intrinsic mapping solely from experience. We investigate in this paper how DL could help organ-specific (ROI-specific) deformable registration, to solve motion compensation or atlas-based segmentation problems, for instance in prostate diagnosis. An artificial agent is trained to solve the task of non-rigid registration by exploring the parametric space of a statistical deformation model built from training data. Since it is difficult to extract trustworthy ground-truth deformation fields, we present a training scheme with a large number of synthetically deformed image pairs requiring only a small number of real inter-subject pairs. Our approach was tested on inter-subject registration of prostate MR data and reached a median Dice score of 0.88 in 2-D and 0.76 in 3-D, showing improved results compared to state-of-the-art registration algorithms.
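The Dice scores reported above measure overlap between two binary segmentation masks: twice the intersection divided by the sum of the two mask sizes. A minimal sketch on made-up toy masks:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

# Hypothetical 2x3 segmentation masks.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(a, b))  # 2*2 / (3+3) ≈ 0.667
```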


AE-CAI | 2013

Intensity-Based 3D-2D Mesh-to-Image Registration Using Mesh-Based Digitally Reconstructed Radiography

Shun Miao; Tri Huynh; Cyprien Adnet; Marcus Pfister; Rui Liao

Intensity-based 3D-2D registration is a well-established technique shown to be effective for many clinical applications. However, it is valid mainly for 3D Computed Tomography (CT) volume to 2D X-ray image registration because the computation of volume-based Digitally Reconstructed Radiography (DRR) relies on the linear relationship between CT’s intensity and the attenuation coefficient of the underlying structure for X-ray. This paper introduces a mesh-based DRR renderer that simulates realistic-looking X-ray images from 3D meshes, which can be used to replace conventional volume-based DRR in intensity-based 3D-2D registration for 3D volumes from various image modalities. The proposed renderer calculates the travel distance of a given ray within the mesh, and computes X-ray attenuation based on the travel distance and the object’s attenuation property. The proposed method also uses a novel ray-casting strategy that takes GPU architecture into consideration for high computational efficiency. Validation results show that the proposed mesh-based DRR simulates X-ray images with a high fidelity, and intensity-based 3D-2D registration using the resulting mesh-based DRR achieves satisfactory results on clinical data.
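The attenuation computation described above follows the Beer–Lambert law: detected ray intensity decays exponentially with the attenuation coefficient times the distance the ray travels inside the object. A hedged sketch for a single homogeneous mesh, with a made-up coefficient (the paper's renderer also handles the ray–mesh intersection on the GPU, which is omitted here):

```python
import math

def drr_pixel_intensity(i0, mu, path_length_mm):
    """Beer-Lambert attenuation: intensity after a ray traverses
    path_length_mm of material with attenuation coefficient mu (1/mm)."""
    return i0 * math.exp(-mu * path_length_mm)

# Hypothetical: a ray crossing 20 mm of tissue with mu = 0.02 / mm.
print(drr_pixel_intensity(1.0, 0.02, 20.0))  # exp(-0.4) ≈ 0.670
```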


Medical Image Computing and Computer-Assisted Intervention | 2018

Task Driven Generative Modeling for Unsupervised Domain Adaptation: Application to X-ray Image Segmentation

Yue Zhang; Shun Miao; Tommaso Mansi; Rui Liao

Automatic parsing of anatomical objects in X-ray images is critical to many clinical applications, in particular image-guided intervention and workflow automation. Existing deep network models require a large amount of labeled data. However, obtaining accurate pixel-wise labeling in X-ray images relies heavily on skilled clinicians due to the large overlaps of anatomy and the complex texture patterns. On the other hand, organs in 3D CT scans preserve clearer structures as well as sharper boundaries and thus can be easily delineated. In this paper, we propose a novel model framework for learning automatic X-ray image parsing from labeled CT scans. Specifically, a Dense Image-to-Image network (DI2I) for multi-organ segmentation is first trained on X-ray-like Digitally Reconstructed Radiographs (DRRs) rendered from 3D CT volumes. Then we introduce a Task Driven Generative Adversarial Network (TD-GAN) architecture to achieve simultaneous style transfer and parsing for unseen real X-ray images. TD-GAN consists of a modified cycle-GAN substructure for pixel-to-pixel translation between DRRs and X-ray images, and an added module leveraging the pre-trained DI2I to enforce segmentation consistency. The TD-GAN framework is general and can be easily adapted to other learning tasks. In the numerical experiments, we validate the proposed model on 815 DRRs and 153 topograms. While the vanilla DI2I without any adaptation fails completely at segmenting the topograms, the proposed model does not require any topogram labels and is able to provide a promising average Dice of 85%, approaching the accuracy of supervised training (88%).


Medical Image Computing and Computer-Assisted Intervention | 2016

Towards Automated Ultrasound Transesophageal Echocardiography and X-Ray Fluoroscopy Fusion Using an Image-Based Co-registration Method

Shanhui Sun; Shun Miao; Tobias Heimann; Terrence Chen; Markus Kaiser; Matthias John; Erin Girard; Rui Liao

Transesophageal Echocardiography (TEE) and X-ray fluoroscopy are two routinely used real-time image guidance modalities for interventional procedures, and co-registering them into the same coordinate system enables advanced hybrid image guidance by providing augmented and complementary information. In this paper, we present an image-based system for co-registering these two modalities through real-time tracking of the 3D position and orientation of a moving TEE probe from 2D fluoroscopy images. The 3D pose of the TEE probe is estimated fully automatically using a detection-based visual tracking algorithm, followed by intensity-based 3D-to-2D registration refinement. In addition, to provide high reliability for clinical use, the proposed system can automatically recover from tracking failures. The system is validated on over 1900 fluoroscopic images from clinical trial studies, and achieves a success rate of 93.4% at a 2D target registration error (TRE) of less than 2.5 mm and an average TRE of 0.86 mm, demonstrating high accuracy and robustness when dealing with poor image quality caused by low radiation dose and pose ambiguity caused by probe self-symmetry.
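The 93.4% figure above is a success rate: the fraction of frames whose 2D TRE falls below a threshold (2.5 mm here). A minimal sketch with hypothetical per-frame errors:

```python
import numpy as np

def success_rate(tre_mm, threshold_mm=2.5):
    """Fraction of frames whose target registration error is below threshold."""
    return float(np.mean(np.asarray(tre_mm) < threshold_mm))

errors = [0.5, 0.9, 1.2, 3.1, 0.7]  # hypothetical per-frame 2D TRE in mm
print(success_rate(errors))  # 4/5 = 0.8
```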


computer assisted radiology and surgery | 2018

3D/2D model-to-image registration by imitation learning for cardiac procedures

Daniel Toth; Shun Miao; Tanja Kurzendorfer; Christopher Aldo Rinaldi; Rui Liao; Tommaso Mansi; Kawaldeep Singh Rhode; Peter Mountney

Purpose: In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, by introducing constraints, or by identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application.

Methods: This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images.

Results: Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was 2.92 ± 2.22 mm on 1000 test cases, superior to that of manual registration.


Journal of Medical Imaging | 2018

Pairwise domain adaptation module for CNN-based 2-D/3-D registration

Jiannan Zheng; Shun Miao; Z. Jane Wang; Rui Liao


Medical Image Computing and Computer-Assisted Intervention | 2017

Learning CNNs with Pairwise Domain Adaption for Real-Time 6DoF Ultrasound Transducer Detection and Tracking from X-Ray Images

Jiannan Zheng; Shun Miao; Rui Liao



National Conference on Artificial Intelligence | 2016

An Artificial Agent for Robust Image Registration

Rui Liao; Shun Miao; Pierre de Tournemire; Sasa Grbic; Ali Kamen; Tommaso Mansi; Dorin Comaniciu


National Conference on Artificial Intelligence | 2018

Dilated FCN for Multi-Agent 2D/3D Medical Image Registration

Shun Miao; Sebastien Piat; Peter Walter Fischer; Ahmet Tuysuzoglu; Philip Mewes; Tommaso Mansi; Rui Liao


Collaboration


Dive into Shun Miao's collaborations.
