Publication


Featured research published by Bing Jian.


Archive | 2015

Cross-Modality Vertebrae Localization and Labeling Using Learning-Based Approaches

Yiqiang Zhan; Bing Jian; Dewan Maneesh; Xiang Sean Zhou

The spine is one of the major organs in the human body, consisting of multiple vertebrae and inter-vertebral discs. As the locations and labels of vertebrae provide a vertical reference frame for the other organs of the torso, they play an important role in various neurological, orthopaedic, and oncological studies. Manual localization and labeling of vertebrae, however, is time consuming, so automatic vertebrae localization and labeling has drawn significant attention in the medical image analysis community. While some pioneering studies localize and label vertebrae using domain knowledge, more recent studies tackle the problem with machine learning. In a data-driven spirit, learning-based approaches extract the appearance and geometric characteristics of vertebrae more efficiently and effectively than hand-crafted algorithms. More importantly, they facilitate cross-modality vertebrae localization, i.e., a generic algorithm that works on different imaging modalities. In this chapter, we start with a review of several representative learning-based vertebrae localization and labeling methods and revisit their key ideas. To achieve a solution that is robust to severe diseases (e.g., scoliosis) and imaging artifacts (e.g., metal artifacts), we propose a learning-based method with two novel components. First, instead of treating vertebrae/discs as either repetitive components or completely independent entities, we emulate a radiologist and use a hierarchical strategy to learn detectors dedicated to anchor (distinctive) vertebrae, bundle (non-distinctive) vertebrae, and inter-vertebral discs, respectively. At run time, anchor vertebrae are detected concurrently to provide redundant and distributed appearance cues that are robust to local imaging artifacts. Bundle vertebrae detectors provide candidates for vertebrae with subtle appearance differences, whose labels are mutually determined by the anchor vertebrae for additional robustness. Disc locations are derived from a cloud of responses from disc detectors, which is robust to sporadic voxel-level errors. Second, owing to the non-rigidity of spine anatomy, we employ a local articulated model to capture the spatial relations across vertebrae and discs. The local articulated model fuses appearance cues from the different detectors in a way that is robust to abnormal spine geometry caused by severe diseases. Our method is validated on a large set of CT (189) and MR (300) spine scans and exhibits robust performance, especially on cases with severe diseases and imaging artifacts.
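The anchor/bundle hierarchy described in the abstract can be summarized in a short sketch. The Python snippet below is a minimal illustration only: the detector functions, the Detection type, and the geometric prior are hypothetical stand-ins rather than the trained models or the exact articulated model from the chapter, and disc handling is omitted for brevity.

```python
# Hypothetical sketch of a hierarchical anchor/bundle vertebra labeling scheme.
# Detector outputs are stubbed; a real system would use trained classifiers.
from dataclasses import dataclass
from typing import Dict, List
import numpy as np

@dataclass
class Detection:
    label: str            # e.g. "C2", "T6", "S1"
    position: np.ndarray  # (z, y, x) voxel coordinates
    score: float          # detector confidence

def detect_anchors(volume: np.ndarray) -> List[Detection]:
    """Stub for detectors of distinctive (anchor) vertebrae, e.g. C2 and S1."""
    return [Detection("C2", np.array([30.0, 128.0, 128.0]), 0.95),
            Detection("S1", np.array([420.0, 140.0, 130.0]), 0.90)]

def detect_bundle_candidates(volume: np.ndarray) -> Dict[str, List[Detection]]:
    """Stub: candidate positions for non-distinctive (bundle) vertebrae."""
    return {"T6": [Detection("T6", np.array([200.0, 130.0, 129.0]), 0.6),
                   Detection("T6", np.array([215.0, 131.0, 129.0]), 0.4)]}

def resolve_bundle_labels(anchors: List[Detection],
                          candidates: Dict[str, List[Detection]]) -> List[Detection]:
    """Pick, for each bundle vertebra, the candidate most consistent with the
    geometry implied by the anchors (an illustrative fractional prior, not a
    published value)."""
    top, bottom = anchors[0].position, anchors[-1].position
    expected_fraction = {"T6": 0.45}
    resolved = []
    for name, cands in candidates.items():
        expected = top + expected_fraction[name] * (bottom - top)
        best = min(cands, key=lambda d: np.linalg.norm(d.position - expected) / max(d.score, 1e-6))
        resolved.append(best)
    return resolved

def fuse_with_articulated_model(detections: List[Detection]) -> List[Detection]:
    """Placeholder for the local articulated model: smooth neighbouring
    positions so the labelled chain stays geometrically plausible."""
    ordered = sorted(detections, key=lambda d: d.position[0])
    for prev, cur, nxt in zip(ordered, ordered[1:], ordered[2:]):
        cur.position = 0.5 * cur.position + 0.25 * (prev.position + nxt.position)
    return ordered

volume = np.zeros((512, 256, 256), dtype=np.float32)   # stand-in for a CT/MR scan
anchors = detect_anchors(volume)
bundles = resolve_bundle_labels(anchors, detect_bundle_candidates(volume))
for det in fuse_with_articulated_model(anchors + bundles):
    print(det.label, det.position.round(1), f"{det.score:.2f}")
```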


MCBR-CDS'09 Proceedings of the First MICCAI international conference on Medical Content-Based Retrieval for Clinical Decision Support | 2009

Robust learning-based annotation of medical radiographs

Yimo Tao; Zhigang Peng; Bing Jian; Jianhua Xuan; Arun Krishnan; Xiang Sean Zhou

In this paper, we propose a learning-based algorithm for automatic medical image annotation based on sparse aggregation of learned local appearance cues, achieving high accuracy and robustness against severe diseases, imaging artifacts, occlusion, and missing data. The algorithm starts with a number of landmark detectors that collect local appearance cues throughout the image, which are subsequently verified by a group of learned sparse spatial configuration models. In most cases, a decision can already be made at this stage by simply aggregating the verified detections. For the remaining cases, an additional global appearance filtering step is employed to provide complementary information for the final decision. The approach is evaluated on a large-scale chest radiograph view identification task, demonstrating near-perfect performance of 99.98% for posteroanterior/anteroposterior (PA-AP) versus lateral view position identification, compared with the recently reported large-scale result of only 98.2% [1]. Our approach also achieved the best accuracies on a three-class and a multi-class radiograph annotation task when compared with other state-of-the-art algorithms. The algorithm has been integrated into an advanced image visualization workstation, enabling content-sensitive hanging protocols and auto-invocation of a computer-aided detection algorithm for PA-AP chest images.
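As a rough illustration of the sparse-aggregation idea, the sketch below runs stubbed landmark detectors, verifies them against simple pairwise spatial constraints, aggregates the surviving confidences into a view decision, and falls back to a global appearance cue for ambiguous cases. All function names, thresholds, and landmark choices are assumptions for illustration, not the learned models or parameters reported in the paper.

```python
# Illustrative sketch: sparse aggregation of local landmark cues with a
# spatial-consistency check and a global-appearance fallback.
from typing import List, Tuple
import numpy as np

Landmark = Tuple[str, np.ndarray, float]   # (name, (row, col), confidence)

def run_landmark_detectors(image: np.ndarray) -> List[Landmark]:
    """Stub: a real system runs learned detectors, e.g. for lung apices."""
    return [("left_lung_apex",  np.array([60.0, 80.0]),  0.90),
            ("right_lung_apex", np.array([62.0, 180.0]), 0.85),
            ("spine_center",    np.array([150.0, 128.0]), 0.80)]

def verify_spatial_configuration(landmarks: List[Landmark]) -> List[Landmark]:
    """Keep only landmarks consistent with a sparse spatial model
    (here: crude left-of / below checks between landmark pairs)."""
    pos = {name: p for name, p, _ in landmarks}
    verified = []
    for name, p, conf in landmarks:
        ok = True
        if name == "left_lung_apex" and "right_lung_apex" in pos:
            ok = p[1] < pos["right_lung_apex"][1]        # left apex left of right apex
        if name == "spine_center" and "left_lung_apex" in pos:
            ok = ok and p[0] > pos["left_lung_apex"][0]  # spine centre below the apices
        if ok:
            verified.append((name, p, conf))
    return verified

def global_appearance_filter(image: np.ndarray) -> str:
    """Stub for the global appearance step used on the remaining hard cases."""
    return "PA-AP" if image.mean() > 0.5 else "LATERAL"

def classify_view(image: np.ndarray) -> str:
    landmarks = verify_spatial_configuration(run_landmark_detectors(image))
    vote = sum(conf for _, _, conf in landmarks)   # aggregate verified detections
    if vote > 2.0:
        return "PA-AP"
    if vote < 1.0:
        return "LATERAL"
    return global_appearance_filter(image)         # complementary global cue

print(classify_view(np.random.rand(256, 256)))
```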


international workshop on pattern recognition in neuroimaging | 2013

Brain PET Attenuation Correction without CT: An Investigation

Maneesh Dewan; Yiqiang Zhan; Gerardo Hermosillo; Bing Jian; Xiang Sean Zhou

In the last decade, brain PET imaging has made great strides toward becoming an effective diagnostic tool for dementia and epilepsy disorders, particularly Alzheimer's disease. CT is often used to provide information for PET attenuation correction. However, for dementia patients, who often require multiple follow-ups, eliminating the CT scan is desirable to reduce radiation dose. In this paper, we present a robust algorithm for PET attenuation correction without CT. The algorithm builds a database of non-attenuation-corrected (NAC) PET and CT pairs (model scans). Given a new patient's NAC PET, a learning-based algorithm detects key landmarks, which are used to select the most similar model scans. Deformable registration is then employed to warp the model CTs to the subject space, followed by a fusion step to obtain a virtual CT for attenuation correction. Besides comparing the normalized AC values with ground truth, we also use a diagnostic tool to evaluate the solution. In addition, a diagnostic evaluation is conducted by a trained nuclear medicine physician, all with promising results.
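The pipeline above lends itself to a compact sketch: detect landmarks on the subject's NAC PET, select the most similar NAC-PET/CT model pairs from the database, warp the model CTs to the subject, and fuse them into a virtual CT. Everything below is a hypothetical stand-in (toy volumes, identity "registration", median fusion); it mirrors only the structure of the method, not its actual components.

```python
# Sketch of a CT-less attenuation-correction pipeline with stubbed components.
from typing import List, Tuple
import numpy as np

ModelPair = Tuple[np.ndarray, np.ndarray, np.ndarray]   # (nac_pet, ct, landmarks)

def detect_landmarks(nac_pet: np.ndarray) -> np.ndarray:
    """Stub: a learning-based detector would return anatomical key points."""
    return np.array([[20.0, 30.0, 30.0], [60.0, 32.0, 31.0]])

def select_similar_models(subject_lm: np.ndarray,
                          database: List[ModelPair], k: int = 2) -> List[ModelPair]:
    """Rank model scans by landmark-configuration distance and keep the top k."""
    dists = [np.linalg.norm(subject_lm - lm) for _, _, lm in database]
    return [database[i] for i in np.argsort(dists)[:k]]

def register_deformable(moving_ct: np.ndarray, fixed_pet: np.ndarray) -> np.ndarray:
    """Stub for deformable registration to the subject space (identity warp here)."""
    return moving_ct

def fuse(warped_cts: List[np.ndarray]) -> np.ndarray:
    """Voxel-wise fusion of the warped model CTs into a virtual CT."""
    return np.median(np.stack(warped_cts, axis=0), axis=0)

# Toy data standing in for real NAC-PET/CT model pairs.
shape = (80, 64, 64)
database = [(np.random.rand(*shape), np.random.rand(*shape),
             np.array([[20.0, 30.0, 30.0], [61.0, 33.0, 30.0]]) + i)
            for i in range(5)]
subject_pet = np.random.rand(*shape)

models = select_similar_models(detect_landmarks(subject_pet), database)
virtual_ct = fuse([register_deformable(ct, subject_pet) for _, ct, _ in models])
print("virtual CT shape:", virtual_ct.shape)
```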


multimedia information retrieval | 2010

Redundancy, redundancy, redundancy: the three keys to highly robust anatomical parsing in medical images

Xiang Sean Zhou; Zhigang Peng; Yiqiang Zhan; Maneesh Dewan; Bing Jian; Arun Krishnan; Yimo Tao; Martin Harder; Stefan Grosskopf; Ute Feuerlein


Archive | 2009

Quotient Appearance Manifold Mapping For Image Classification

Yoshihisa Shinagawa; Yuping Lin; Gerardo Hermosillo Valadez; Bing Jian


Archive | 2009

Iterative Segmentation of Images for Computer-Aided Detection

Yoshihisa Shinagawa; Gerardo Hermosillo Valadez; Bing Jian


Archive | 2009

System and method for automatically classifying regions-of-interest

Gerardo Hermosillo Valadez; Bing Jian; Yoshihisa Shinagawa


Archive | 2009

Efficient Estimator Of Pharmacokinetic Parameters in Breast MRI

Yoshihisa Shinagawa; Vandana Mohan; Gerardo Hermosillo Valadez; Bing Jian


medical image computing and computer-assisted intervention | 2009

Robust Learning-Based Annotation of Medical Radiographs

Yimo Tao; Zhigang Peng; Bing Jian; Jianhua Xuan; Arun Krishnan; Xiang Sean Zhou


Archive | 2009

Efficient Features for Computer-Aided Detection

Yoshihisa Shinagawa; Gerardo Hermosillo Valadez; Bing Jian
