Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Jingdan Zhang is active.

Publication


Featured research published by Jingdan Zhang.


International Conference on Management of Data | 2005

A system for analyzing and indexing human-motion databases

Guodong Liu; Jingdan Zhang; Wei Wang; Leonard McMillan

We demonstrate a data-driven approach for representing, compressing, and indexing human-motion databases. Our modeling approach is based on piecewise-linear components that are determined via a divisive clustering method. Selection of the appropriate linear model is determined automatically via a classifier using a subspace of the most significant, or principal, features (markers). We show that, after offline training, our model can accurately estimate and classify human motions. We can also construct indexing structures for motion sequences according to their transition trajectories through these linear components. Our method not only provides indices for whole and/or partial motion sequences, but also serves as a compressed representation for the entire motion database. Our method also tends to be immune to temporal variations, and thus avoids the expense of time-warping.
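The divisive clustering into piecewise-linear components can be illustrated with a toy sketch: recursively split samples until each cluster is well approximated by a low-dimensional PCA model. This is a hedged illustration, not the authors' implementation; the split rule (median cut along the first principal direction) and the error tolerance are assumptions.

```python
import numpy as np

def pca_error(X, k=1):
    """Mean reconstruction error of X under its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    recon = Xc @ Vt[:k].T @ Vt[:k]
    return np.mean(np.linalg.norm(Xc - recon, axis=1))

def divisive_cluster(X, tol=0.1, k=1, depth=0, max_depth=8):
    """Recursively split X until each leaf is well modeled by a k-dim linear model."""
    if depth >= max_depth or len(X) < 4 or pca_error(X, k) <= tol:
        return [X]
    # Simple deterministic split: median cut along the first principal direction
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]
    med = np.median(proj)
    left, right = X[proj <= med], X[proj > med]
    if len(left) == 0 or len(right) == 0:
        return [X]
    return (divisive_cluster(left, tol, k, depth + 1, max_depth) +
            divisive_cluster(right, tol, k, depth + 1, max_depth))
```

Each returned leaf is one piecewise-linear component; an index for a motion sequence would then record its trajectory through these components.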


Medical Image Computing and Computer-Assisted Intervention | 2011

Automatic multi-organ segmentation using learning-based segmentation and level set optimization

Timo Kohlberger; Michal Sofka; Jingdan Zhang; Neil Birkbeck; Jens Wetzl; Jens N. Kaftan; Jerome Declerck; S. Kevin Zhou

We present a novel generic segmentation system for the fully automatic multi-organ segmentation from CT medical images. We combine the advantages of learning-based approaches on point cloud-based shape representations, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. Here the level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.
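The "straightforward prevention of segment overlaps" that level sets allow can be sketched with signed-distance maps (negative inside). The simple post-hoc resolution heuristic below is an assumption for illustration, not the paper's coupled level set formulation:

```python
import numpy as np

def remove_overlap(phi_a, phi_b):
    """Resolve overlap between two signed-distance maps (negative = inside).
    Where both are negative, keep each voxel with the organ whose boundary
    is farther away (more negative phi) and push the other just outside."""
    overlap = (phi_a < 0) & (phi_b < 0)
    keep_a = overlap & (phi_a <= phi_b)
    phi_a2 = phi_a.copy()
    phi_b2 = phi_b.copy()
    phi_b2[keep_a] = -phi_a[keep_a]                   # b pushed outside a
    phi_a2[overlap & ~keep_a] = -phi_b[overlap & ~keep_a]
    return phi_a2, phi_b2
```

After this step no voxel lies inside both organs, while voxels claimed by only one organ are untouched.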


Computer Vision and Pattern Recognition | 2007

Joint Real-time Object Detection and Pose Estimation Using Probabilistic Boosting Network

Jingdan Zhang; Shaohua Kevin Zhou; Leonard McMillan; Dorin Comaniciu

In this paper, we present a learning procedure called probabilistic boosting network (PBN) for joint real-time object detection and pose estimation. Grounded on the law of total probability, PBN integrates evidence from two building blocks, namely a multiclass boosting classifier for pose estimation and a boosted detection cascade for object detection. By inferring the pose parameter, we avoid the exhaustive scanning over poses, which hampers real-time performance. In addition, we only need one integral image/volume, with no need for image/volume rotation. We implement PBN using a graph-structured network that alternates the two tasks of foreground/background discrimination and pose estimation for rejecting negatives as quickly as possible. Compared with previous approaches, we gain accuracy in object localization and pose estimation while noticeably reducing the computation. We invoke PBN to detect the left ventricle from a 3D ultrasound volume, processing about 10 volumes per second, and the left atrium from 2D images in real time.
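The law-of-total-probability combination at the heart of PBN can be sketched as marginalizing pose-conditioned detector scores over a pose classifier's output. The function names, thresholds, and pruning rule below are illustrative assumptions, not the paper's exact network:

```python
def pbn_detect(patch, pose_probs, detectors, threshold=0.5, min_pose_prob=1e-3):
    """Combine a pose classifier and pose-conditioned detectors via the law
    of total probability: P(obj|x) = sum_theta P(obj|x, theta) * P(theta|x).
    Unlikely poses are skipped, avoiding exhaustive pose scanning."""
    total = 0.0
    for theta, p_theta in enumerate(pose_probs):
        if p_theta < min_pose_prob:
            continue  # cheap rejection of improbable poses
        total += detectors[theta](patch) * p_theta
    return total >= threshold, total
```

In the actual system the detectors are boosted cascades evaluated on one shared integral image, which is why no image rotation is needed.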


Interactive 3D Graphics and Games | 2006

Human motion estimation from a reduced marker set

Guodong Liu; Jingdan Zhang; Wei Wang; Leonard McMillan

Motion capture data from human subjects exhibits considerable redundancy. In this paper, we propose novel methods for exploiting this redundancy. In particular, we set out to find a subset of motion-capture markers that are able to provide fast and high-quality predictions of the remaining markers. We then develop a model that uses this reduced marker set to predict the others. We demonstrate that this subset of original markers is sufficient to capture subtle variations in human motion. We take a data-driven modeling approach to learn piecewise local linear models from a marker-based training set. We first divide motion sequences into segments of low dimensionality. We then retrieve a feature vector from each of the motion segments and use these feature vectors as modeling primitives to cluster the segments into a hierarchy of local linear models via a divisive clustering method. The selection of an appropriate linear model for reconstruction of a full-body pose is determined automatically via a classifier driven by a reduced marker set. After offline training, our method can quickly reconstruct full-body human motion using a reduced marker set without storing and searching the large database. We also demonstrate our method's ability to generalize over a variety of motions from multiple subjects.
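A minimal sketch of predicting a full marker set from a reduced subset with a single affine least-squares model; the paper uses a hierarchy of such local models selected by a classifier, and the marker indices here are arbitrary assumptions:

```python
import numpy as np

def fit_marker_model(full_poses, subset_idx):
    """Least-squares affine map from reduced-marker coordinates to the full
    marker set: one local linear model. Rows are poses, columns coordinates."""
    X = full_poses[:, subset_idx]
    X1 = np.hstack([X, np.ones((len(X), 1))])          # affine (bias) term
    W, *_ = np.linalg.lstsq(X1, full_poses, rcond=None)
    return W

def reconstruct(reduced, W):
    """Predict all marker coordinates from the reduced subset."""
    return np.hstack([reduced, np.ones((len(reduced), 1))]) @ W
```

When the motion in a segment truly is low-dimensional, a handful of markers determines the rest almost exactly, which is the redundancy the paper exploits.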


Information Processing in Medical Imaging | 2013

Rapid multi-organ segmentation using context integration and discriminative models

Nathan Lay; Neil Birkbeck; Jingdan Zhang; S. Kevin Zhou

We propose a novel framework for rapid and accurate segmentation of a cohort of organs. First, it integrates local and global image context through a product rule to simultaneously detect multiple landmarks on the target organs. The global posterior integrates evidence over all volume patches, while the local image context is modeled with a local discriminative classifier. Through non-parametric modeling of the global posterior, it exploits sparsity in the global context for efficient detection. The complete surface of the target organs is then inferred by robust alignment of a shape model to the resulting landmarks and finally deformed using discriminative boundary detectors. Using our approach, we demonstrate efficient detection and accurate segmentation of liver, kidneys, heart, and lungs in challenging low-resolution MR data in less than one second, and of prostate, bladder, rectum, and femoral heads in CT scans, in roughly one to three seconds and in both cases with accuracy fairly close to inter-user variability.
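The product-rule fusion and the exploitation of sparsity can be sketched as: evaluate the expensive local classifier only at the strongest global candidates, then fuse the two posteriors multiplicatively. This is a toy 1-D version with assumed names, not the paper's non-parametric model:

```python
import numpy as np

def detect_landmark(global_post, local_classifier, top_k=5):
    """Product-rule landmark detection: score = P_global(x) * P_local(x),
    with the local classifier evaluated only at the top-k global candidates
    (exploiting sparsity of the global posterior for efficiency)."""
    candidates = np.argsort(global_post)[-top_k:]
    scores = np.zeros_like(global_post)
    for i in candidates:
        scores[i] = global_post[i] * local_classifier(i)
    return int(np.argmax(scores))
```

The full pipeline would then align a shape model to the detected landmarks and refine its surface with boundary detectors.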


Medical Image Computing and Computer-Assisted Intervention | 2011

Multi-stage learning for robust lung segmentation in challenging CT volumes

Michal Sofka; Jens Wetzl; Neil Birkbeck; Jingdan Zhang; Timo Kohlberger; Jens N. Kaftan; Jerome Declerck; S. Kevin Zhou

Simple algorithms for segmenting healthy lung parenchyma in CT are unable to deal with high density tissue common in pulmonary diseases. To overcome this problem, we propose a multi-stage learning-based approach that combines anatomical information to predict an initialization of a statistical shape model of the lungs. The initialization first detects the carina of the trachea, and uses this to detect a set of automatically selected stable landmarks on regions near the lung (e.g., ribs, spine). These landmarks are used to align the shape model, which is then refined through boundary detection to obtain fine-grained segmentation. Robustness is obtained through hierarchical use of discriminative classifiers that are trained on a range of manually annotated data of diseased and healthy lungs. We demonstrate fast detection (35s per volume on average) and segmentation of 2 mm accuracy on challenging data.
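Aligning the statistical shape model to the detected stable landmarks is, at its core, a similarity Procrustes fit. A standalone sketch of that alignment step (Umeyama solution, row-vector convention), offered as an illustration of the general technique rather than the paper's exact procedure:

```python
import numpy as np

def align_shape(model_pts, landmark_pts):
    """Similarity transform (scale s, rotation R, translation t) that best
    maps model landmarks onto detected landmarks in the least-squares sense."""
    mu_m, mu_l = model_pts.mean(0), landmark_pts.mean(0)
    A, B = model_pts - mu_m, landmark_pts - mu_l
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_l - s * mu_m @ R
    return s, R, t

def apply_transform(pts, s, R, t):
    """Apply the recovered similarity transform to a point set."""
    return s * pts @ R + t
```

The aligned mean shape then serves as the initialization that the boundary detectors refine.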


Computer Vision and Pattern Recognition | 2006

Robust Tracking and Stereo Matching under Variable Illumination

Jingdan Zhang; Leonard McMillan; Jingyi Yu

Illumination inconsistencies cause serious problems for classical computer vision applications such as tracking and stereo matching. We present a new approach to model illumination variations using an Illumination Ratio Map (IRM). An IRM computes the intensity ratio of corresponding points in an image pair. We formulate IRM recovery as a Markov network, which assumes spatially varying illumination changes can be modeled as a locally smooth function with boundaries. We show that the IRM Markov network can be easily incorporated into low-level vision problems, such as tracking and stereo matching, by integrating IRM estimation with the optical flow field/disparity map solution process. This leads to a unified Markov network. We develop an iterative optimization algorithm based on Belief Propagation to efficiently recover the illumination ratio map and the optical field/disparity map at the same time. Experiments demonstrate that our methods are robust and reliable.
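A toy version of the IRM idea: estimate a spatially smooth ratio map r with r·I_a ≈ I_b by minimizing a data term plus a pairwise smoothness term over 4-neighbours. Here the MRF is solved with plain Jacobi updates (a stand-in assumption for the paper's belief propagation), with periodic boundaries for brevity:

```python
import numpy as np

def estimate_irm(img_a, img_b, lam=1.0, iters=200, eps=1e-6):
    """Estimate an illumination ratio map r minimizing
    sum_i (r_i * a_i - b_i)^2 + lam * sum_{i~j} (r_i - r_j)^2
    by Jacobi iteration on the per-pixel optimality condition."""
    r = img_b / (img_a + eps)                  # pointwise ratio as initializer
    for _ in range(iters):
        nb = (np.roll(r, 1, 0) + np.roll(r, -1, 0) +
              np.roll(r, 1, 1) + np.roll(r, -1, 1))
        r = (img_a * img_b + lam * nb) / (img_a ** 2 + 4 * lam)
    return r
```

In the paper the analogous smoothness-with-boundaries model is solved jointly with the optical flow field/disparity map inside a unified Markov network.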


Medical Imaging 2006: Physics of Medical Imaging | 2006

A multi-beam x-ray imaging system based on carbon nanotube field emitters

Jingdan Zhang; Guang Yang; Yueh Z. Lee; Y. Cheng; B. Gao; Q. Qiu; Jian Ping Lu; Otto Zhou

In this study, we report a multi-beam x-ray imaging system that can generate a scanning x-ray beam to image an object from multiple projection angles without mechanical motion. The key part of this imaging system is a multi-beam field emission x-ray (MBFEX) source which comprises a linear array of gated electron emitting pixels. The pixels are individually addressable via a MOSFET (metal-oxide-semiconductor field effect transistor) based electronic circuit. The device can provide a tube current of 0.1-1 mA at 40 kVp with less than 300 μm focal spot size from each of the emitting pixels. Multilayer images of different phantoms were reconstructed to demonstrate its potential applications in tomographic imaging. Since no mechanical motion is needed and the electronic switching time is generally negligible, the MBFEX system has the potential to simplify the system design and lead to fast data acquisition for tomographic imaging.
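Multilayer reconstruction from such stationary multi-beam projections can be illustrated with classic shift-and-add tomosynthesis: undo each source's parallax for a chosen plane height and average. The 1-D pixel-unit geometry and parameter names are simplifying assumptions, not the paper's reconstruction method:

```python
import numpy as np

def shift_and_add(projections, source_offsets, plane_z, src_z=1.0):
    """Shift-and-add tomosynthesis: structures in the plane at height plane_z
    add coherently across projections, while other depths blur out."""
    recon = np.zeros_like(projections[0], dtype=float)
    for proj, x_s in zip(projections, source_offsets):
        shift = int(round(x_s * plane_z / (src_z - plane_z)))
        recon += np.roll(proj, shift)
    return recon / len(projections)
```

Because the MBFEX pixels fire electronically rather than by moving a gantry, all projections for such a reconstruction can be acquired without mechanical motion.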


Medical Image Computing and Computer-Assisted Intervention | 2012

Precise segmentation of multiple organs in CT volumes using learning-based approach and information theory

Chao Lu; Yefeng Zheng; Neil Birkbeck; Jingdan Zhang; Timo Kohlberger; Christian Tietjen; Thomas Boettger; James S. Duncan; S. Kevin Zhou

In this paper, we present a novel method by incorporating information theory into the learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without implant in the prostate, various resolutions and positions), and the volumes come from largely diverse sources (e.g., diseased in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques with steerable features are applied for robust boundary detection. This enables handling of highly heterogeneous texture patterns. Third, a novel information theoretic scheme is incorporated into the boundary inference process. The incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving the segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning and image-guided radiotherapy to treat cancers in the pelvic region.
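The Jensen-Shannon divergence used to drive the mesh refinement is symmetric and bounded by ln 2. A small standalone implementation for discrete distributions (natural log); how the paper builds the compared histograms is not reproduced here:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (nats):
    JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike the plain KL divergence, JS is symmetric and always finite, which makes it a stable objective for comparing intensity statistics on either side of an evolving boundary.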


Computer Vision and Pattern Recognition | 2010

Multiple object detection by sequential Monte Carlo and Hierarchical Detection Network

Michal Sofka; Jingdan Zhang; S. Kevin Zhou; Dorin Comaniciu

In this paper, we propose a novel framework for detecting multiple objects in 2D and 3D images. Since a joint multi-object model is difficult to obtain in most practical situations, we focus here on detecting the objects sequentially, one-by-one. The interdependence of object poses and strong prior information embedded in our domain of medical images results in better performance than detecting the objects individually. Our approach is based on Sequential Estimation techniques, frequently applied to visual tracking. Unlike in tracking, where the sequential order is naturally determined by the time sequence, the order of detection of multiple objects must be selected, leading to a Hierarchical Detection Network (HDN). We present an algorithm that optimally selects the order based on the probability of states (object poses) within the ground truth region. The posterior distribution of the object pose is approximated at each step by sequential Monte Carlo. The samples are propagated within the sequence across multiple objects and hierarchical levels. We show, on 2D ultrasound images of the left atrium, that the automatically selected sequential order yields low mean detection error. We also quantitatively evaluate the hierarchical detection of fetal faces and three fetal brain structures in 3D ultrasound images.
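The particle propagation across the detection sequence can be sketched with a generic sequential Monte Carlo step: propagate particles through a transition model, reweight by the next object's detector likelihood, then resample. The 1-D state and the Gaussian models in the test are toy assumptions, not the HDN's pose models:

```python
import numpy as np

def smc_step(particles, weights, likelihood, transition, rng):
    """One sequential Monte Carlo step: propagate each particle to the next
    object in the detection order, reweight by that object's detector
    likelihood, and resample to concentrate on probable poses."""
    particles = np.array([transition(p, rng) for p in particles])
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

In the HDN the transition model encodes the learned spatial relation between consecutive objects, which is what lets an earlier detection constrain the search for the next one.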

Collaboration


Dive into Jingdan Zhang's collaborations.

Top Co-Authors

Otto Zhou
University of North Carolina at Chapel Hill

S Chang
University of North Carolina at Chapel Hill

Jian Ping Lu
University of North Carolina at Chapel Hill

Guang Yang
University of North Carolina at Chapel Hill

R Peng
University of North Carolina at Chapel Hill