Publication


Featured research published by Dijia Wu.


Computer Vision and Pattern Recognition | 2010

Stratified learning of local anatomical context for lung nodules in CT images

Dijia Wu; Le Lu; Jinbo Bi; Yoshihisa Shinagawa; Kim L. Boyer; Arun Krishnan; Marcos Salganicoff

The automatic detection of lung nodules attached to other pulmonary structures is a useful yet challenging task in lung CAD systems. In this paper, we propose a stratified statistical learning approach to recognize whether a candidate nodule detected in CT images connects to any of three other major lung anatomies, namely vessel, fissure and lung wall, or is solitary within the background parenchyma. First, we develop a fully automated voxel-by-voxel labeling/segmentation method for nodule, vessel, fissure, lung wall and parenchyma in a 3D lung image, via a unified feature set and classifier under a conditional random field. Second, the Class Probability Response Maps (PRM) generated by the voxel-level classifiers are used to form pairwise Probability Co-occurrence Maps (PCM), which encode the spatial contextual correlations of the candidate nodule in relation to the other anatomical landmarks. Based on the PCMs, higher-level classifiers are trained to recognize whether the nodule touches other pulmonary structures, as a multi-label problem. We also present a new iterative fissure structure enhancement filter with superior performance. For experimental validation, we create an annotated database of 784 subvolumes, from 239 patients, with nodules of various sizes, shapes, densities and contextual anatomies. High multi-class voxel labeling accuracy of 89.3% ∼ 91.2% is achieved. The Area under the ROC Curve (AUC) of vessel, fissure and lung wall connectivity classification reaches 0.8676, 0.8692 and 0.9275, respectively.
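A minimal numpy sketch of the co-occurrence idea behind the PCMs: pool the product of two class-probability maps over radial distance bins around a candidate. This is illustrative only; the function name, radial binning, and parameters are assumptions, not the paper's actual construction.

```python
import numpy as np

def probability_cooccurrence_map(prm_a, prm_b, centre, n_bins=4, max_dist=8.0):
    """Toy pairwise co-occurrence statistic: within each radial distance bin
    around `centre`, take the mean product of two class-probability maps
    (how strongly class A and class B co-occur at that range)."""
    zz, yy, xx = np.indices(prm_a.shape)
    dist = np.sqrt((zz - centre[0]) ** 2 + (yy - centre[1]) ** 2 + (xx - centre[2]) ** 2)
    edges = np.linspace(0.0, max_dist, n_bins + 1)
    pcm = np.zeros(n_bins)
    for i in range(n_bins):
        mask = (dist >= edges[i]) & (dist < edges[i + 1])
        if mask.any():
            pcm[i] = (prm_a[mask] * prm_b[mask]).mean()
    return pcm
```

Feeding such bin vectors (one per class pair) to a higher-level classifier mirrors the two-stage design of the paper.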


Computer Vision and Pattern Recognition | 2011

AdaBoost on low-rank PSD matrices for metric learning

Jinbo Bi; Dijia Wu; Le Lu; Meizhu Liu; Yimo Tao; Matthias Wolf

The problem of learning a proper distance or similarity metric arises in many applications such as content-based image retrieval. In this work, we propose a boosting algorithm, MetricBoost, to learn a distance metric that preserves the proximity relationships among object triplets: object i is more similar to object j than to object k. MetricBoost constructs a positive semi-definite (PSD) matrix that parameterizes the distance metric by combining rank-one PSD matrices. Different options for the weak models and combination coefficients are derived. Unlike existing proximity-preserving metric learning methods, which are generally not scalable, MetricBoost employs a bipartite strategy to dramatically reduce computation cost by decomposing proximity relationships over triplets into pair-wise constraints. MetricBoost outperforms the state-of-the-art on two real-world medical problems: 1. identifying and quantifying diffuse lung diseases; 2. colorectal polyp matching between different views; as well as on other benchmark datasets.
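A toy sketch of the core idea, building a PSD metric as a sum of rank-one matrices so that triplet constraints d(i,j) < d(i,k) are satisfied. This is a simplified greedy variant for illustration; `metricboost_sketch`, its weak-model choice (top eigenvector of the violated-constraint matrix), and the fixed step size are assumptions, not the paper's boosting scheme.

```python
import numpy as np

def metricboost_sketch(X, triplets, rounds=10, eta=0.5):
    """Build M = sum_t eta * v_t v_t^T (PSD by construction) so that
    d_M(i, j) < d_M(i, k) for each triplet (i, j, k)."""
    d = X.shape[1]
    M = np.zeros((d, d))
    for _ in range(rounds):
        # Sum of outer-product differences over currently violated triplets.
        C = np.zeros((d, d))
        for i, j, k in triplets:
            dij = X[i] - X[j]
            dik = X[i] - X[k]
            if dij @ M @ dij >= dik @ M @ dik:   # triplet violated (or tied)
                C += np.outer(dik, dik) - np.outer(dij, dij)
        # Rank-one weak model: top eigenvector of C, kept only if it helps.
        w, V = np.linalg.eigh(C)
        if w[-1] <= 0:
            break
        v = V[:, -1]
        M += eta * np.outer(v, v)
    return M
```

Each update adds a rank-one PSD term, so the learned metric stays PSD without any projection step.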


Computer Vision and Pattern Recognition | 2009

A min-max framework of cascaded classifier with multiple instance learning for computer aided diagnosis

Dijia Wu; Jinbo Bi; Kim L. Boyer

The computer aided diagnosis (CAD) problems of detecting potentially diseased structures from medical images are typically distinguished by the following challenging characteristics: extremely unbalanced data between negative and positive classes; stringent real-time requirement of online execution; multiple positive candidates generated for the same malignant structure that are highly correlated and spatially close to each other. To address all these problems, we propose a novel learning formulation to combine cascade classification and multiple instance learning (MIL) in a unified min-max framework, leading to a joint optimization problem which can be converted to a tractable quadratically constrained quadratic program and efficiently solved by block-coordinate optimization algorithms. We apply the proposed approach to the CAD problems of detecting pulmonary embolism and colon cancer from computed tomography images. Experimental results show that our approach significantly reduces the computational cost while yielding comparable detection accuracy to the current state-of-the-art MIL or cascaded classifiers. Although not specifically designed for balanced MIL problems, the proposed method achieves superior performance on balanced MIL benchmark data such as MUSK and image data sets.
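The inference side of the cascade-plus-MIL combination can be sketched in a few lines: an instance survives only if it passes every cascade stage, and a candidate group (bag) is positive if any of its instances survives. This is an illustrative decision rule only; `cascade_mil_predict` and the per-stage thresholds are assumptions, and the paper's actual contribution is the joint QCQP training, which is not reproduced here.

```python
import numpy as np

def cascade_mil_predict(stage_scores, thresholds, bag_ids):
    """stage_scores: (n_instances, n_stages) classifier scores.
    An instance survives iff it clears every stage threshold; a bag is
    positive iff at least one of its instances survives the cascade."""
    survives = np.all(np.asarray(stage_scores) >= np.asarray(thresholds), axis=1)
    return {b for b, ok in zip(bag_ids, survives) if ok}
```

The bag-level "any instance" rule is what lets spatially correlated candidates for the same lesion count as a single detection.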


Computer Vision and Pattern Recognition | 2012

A learning based deformable template matching method for automatic rib centerline extraction and labeling in CT images

Dijia Wu; David Liu; Zoltan Puskas; Chao Lu; Andreas Wimmer; Christian Tietjen; Grzegorz Soza; S. Kevin Zhou

The automatic extraction and labeling of rib centerlines is a useful yet challenging task in many clinical applications. In this paper, we propose a new approach integrating rib seed point detection and template matching to detect and identify each rib in chest CT scans. The bottom-up, learning-based detection exploits local image cues, while top-down deformable template matching imposes global shape constraints. To adapt to the shape deformation of different rib cages while maintaining high computational efficiency, we employ a Markov Random Field (MRF) based articulated rigid transformation method followed by Active Contour Model (ACM) deformation. Compared with traditional methods in which each rib is individually detected, traced and labeled, the new approach is not only much more robust, owing to prior shape constraints on the whole rib cage, but also removes tedious post-processing such as rib pairing and ordering, because each rib is automatically labeled during template matching. For experimental validation, we create an annotated database of 112 challenging volumes with ribs of various sizes, shapes, and pathologies such as metastases and fractures. The proposed approach shows orders of magnitude higher detection and labeling accuracy than state-of-the-art solutions and runs in about 40 seconds on average for a complete rib cage.


Machine Vision Applications | 2010

Texture based prelens tear film segmentation in interferometry images

Dijia Wu; Kim L. Boyer; Jason J. Nichols; Peter Ewen King-Smith

Interferometric imaging has been identified as a novel approach to evaluating prelens tear film (PLTF) thickness in contact lens patients. In this paper, we present a texture-based segmentation approach for detecting tear film breakup regions in interferometry images. First, textural information is extracted from the images using a bank of Gabor filters. A novel classifier, EM-MDA, which integrates traditional Expectation-Maximization with Multiple Discriminant Analysis, is then trained to recognize breakup regions of the PLTF. Experimental results show a correct classification rate of 91.0%, significantly higher than that of traditional EM or the well-known Linear Discriminant Analysis.
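The Gabor filter bank step can be sketched directly in numpy: a Gaussian envelope modulated by an oriented cosine grating, whose responses at several orientations and scales form the per-pixel texture features. The kernel parameters and function names below are illustrative assumptions, not the filter bank used in the paper.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a 2D Gabor kernel: a Gaussian envelope modulated by a
    cosine grating at orientation `theta` with wavelength `lam`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_response(patch, kernel):
    """Filter response at the patch centre (plain correlation); a bank of
    such responses over orientations/scales is the texture descriptor."""
    return float(np.sum(patch * kernel))
```

A grating aligned with the kernel's orientation yields a strong response, while an orthogonal kernel responds weakly, which is what makes the bank discriminative for oriented fringe textures.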


Computer Vision and Pattern Recognition | 2010

Sign ambiguity resolution for phase demodulation in interferometry with application to prelens tear film analysis

Dijia Wu; Kim L. Boyer

We present a novel method to resolve the sign ambiguity in phase demodulation from a single interferometric image that possibly contains closed fringes. The problem is formulated in a binary pairwise energy minimization framework based on phase gradient orientation continuity. The objective function is non-submodular, and its minimization is therefore an NP-hard problem, for which we devise a multigrid hierarchy of quadratic pseudo-boolean optimization problems that can be improved iteratively to approximate the optimal solution. Compared with traditional path-following phase demodulation methods, the new approach does not require any heuristic scanning strategy, is not subject to error propagation, and extends straightforwardly to three-dimensional fringe patterns. A set of experiments with synthetic data and real prelens tear film interferometric images of the human eye demonstrates the effectiveness and robustness of the proposed algorithm in comparison with existing state-of-the-art phase demodulation methods.


arXiv: Computer Vision and Pattern Recognition | 2013

Semantic Context Forests for Learning-Based Knee Cartilage Segmentation in 3D MR Images

Quan Wang; Dijia Wu; Le Lu; Meizhu Liu; Kim L. Boyer; Shaohua Kevin Zhou

The automatic segmentation of human knee cartilage from 3D MR images is a useful yet challenging task due to the thin sheet structure of the cartilage, with diffuse boundaries and inhomogeneous intensities. In this paper, we present an iterative multi-class learning method to segment the femoral, tibial and patellar cartilage simultaneously, which effectively exploits the spatial contextual constraints between bone and cartilage, and also between the different cartilages. First, based on the fact that cartilage grows only in certain areas of the corresponding bone surface, we extract distance features not only to the bone surface but, more informatively, to densely registered anatomical landmarks on the bone surface. Second, we introduce a set of iterative discriminative classifiers in which, at each iteration, probability comparison features are constructed from the class confidence maps derived by the previously learned classifiers. These features automatically embed the semantic context information between the different cartilages of interest. Validated on a total of 176 volumes from the Osteoarthritis Initiative (OAI) dataset, the proposed approach demonstrates high robustness and accuracy of segmentation in comparison with existing state-of-the-art MR cartilage segmentation methods.


International MICCAI Workshop on Medical Computer Vision | 2013

Computer Aided Diagnosis Using Multilevel Image Features on Large-Scale Evaluation

Le Lu; Pandu R. Devarakota; Siddharth Vikal; Dijia Wu; Yefeng Zheng; Matthias Wolf

Computer aided diagnosis (CAD) of cancerous anatomical structures via 3D medical images has emerged as an intensively studied research area. In this paper, we present a principled three-tiered image feature learning approach to capture task-specific and data-driven class-discriminative statistics from an annotated image database. It integrates voxel-, instance-, and database-level feature learning, aggregation and parsing. The initial segmentation is performed as robust voxel labeling and thresholding. After instance-level spatial aggregation, the extracted features can also be flexibly tuned for classifying lesions, or for discriminating between different subcategories of lesions. We demonstrate the effectiveness of the approach on the lung nodule detection task, which handles all types of solid, partial-solid, and ground-glass nodules using the same set of learned features. Our hierarchical feature learning framework, extensively trained and validated on large-scale multi-site datasets of 879 CT volumes (510 training and 369 validation), achieves superior performance to other state-of-the-art CAD systems. The proposed method is also shown to be applicable to colonic polyp detection, including all polyp morphological subcategories, via 770 tagged-prep CT scans from multiple medical sites (358 training and 412 validation).
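Instance-level spatial aggregation can be sketched as pooling the voxel-level class probabilities inside one candidate into a fixed-length descriptor. The particular statistics chosen here (mean, extrema, quartiles) and the function name are assumptions for illustration, not the paper's feature set.

```python
import numpy as np

def instance_features(voxel_probs):
    """Pool a candidate's voxel-level probabilities into a fixed-length
    instance descriptor: mean, max, min, and the two quartiles."""
    p = np.asarray(voxel_probs, dtype=float)
    return np.array([p.mean(), p.max(), p.min(),
                     np.percentile(p, 25), np.percentile(p, 75)])
```

Such pooled descriptors give every candidate the same dimensionality regardless of its voxel count, which is what makes database-level learning over instances possible.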


Medical Image Computing and Computer Assisted Intervention | 2014

Segmentation of Multiple Knee Bones from CT for Orthopedic Knee Surgery Planning

Dijia Wu; Michal Sofka; Neil Birkbeck; S. Kevin Zhou

Patient-specific orthopedic knee surgery planning requires precisely segmenting multiple knee bones, namely the femur, tibia, fibula, and patella, from 3D CT images of knee joints with severe pathologies. In this work, we propose a fully automated, highly precise, and computationally efficient segmentation approach for multiple bones. First, each bone is initially segmented using a model-based marginal space learning framework for pose estimation, followed by non-rigid boundary deformation. To recover shape details, we then refine the bone segmentation using a graph cut that incorporates shape priors derived from the initial segmentation. Finally, we remove overlap between neighboring bones using multi-layer graph partition. In experiments, we achieve simultaneous segmentation of the femur, tibia, patella, and fibula with an overall accuracy of less than 1 mm surface-to-surface error, in less than 90 s, on hundreds of 3D CT scans with pathological knee joints.


International Conference on Computer Vision | 2009

Resilient Subclass Discriminant Analysis

Dijia Wu; Kim L. Boyer

We propose a dimension reduction technique named Resilient Subclass Discriminant Analysis (RSDA) for high-dimensional classification problems. The technique iteratively estimates the subclass division by embedding Fisher Discriminant Analysis (FDA) within Expectation-Maximization (EM) for Gaussian Mixture Models (GMM). The new method maintains the adaptability of SDA to a wide range of data distributions by approximating the distribution of each class as a mixture of Gaussians, and provides superior feature selection performance to SDA through a modified EM clustering that estimates the posterior probabilities of the latent variables in the lower-dimensional Fisher discriminant space, which also improves robustness on small training datasets compared with the conventional EM algorithm. Extensive experiments and comparisons against other well-known Discriminant Analysis (DA) methods are presented using synthetic data, benchmark datasets, and a real computer vision problem.
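The subclass-discriminant core can be sketched as: split each class into subclasses, then maximize between-subclass over within-subclass scatter. The crude two-means split below stands in for RSDA's EM/GMM step, and all names and parameters are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def two_means(X, n_iter=10):
    """Toy subclass split (stand-in for the EM/GMM step of RSDA)."""
    c = X[[0, -1]].astype(float)              # crude init: first and last point
    for _ in range(n_iter):
        lab = np.argmin(((X[:, None] - c[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (lab == k).any():
                c[k] = X[lab == k].mean(axis=0)
    return lab

def subclass_fda(X, y):
    """Fisher discriminant on subclass labels: split each class into two
    subclasses, then maximize between- over within-subclass scatter."""
    sub = np.empty(len(y), dtype=int)
    for i, c in enumerate(np.unique(y)):
        m = (y == c)
        sub[m] = 2 * i + two_means(X[m])
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1],) * 2)
    Sb = np.zeros_like(Sw)
    for s in np.unique(sub):
        Xs = X[sub == s]
        mu = Xs.mean(axis=0)
        Sw += (Xs - mu).T @ (Xs - mu)
        Sb += len(Xs) * np.outer(mu - mean, mu - mean)
    # Leading generalized eigenvector of (Sw + eps*I)^-1 Sb.
    w, V = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(len(Sw)), Sb))
    return np.real(V[:, np.argmax(np.real(w))])
```

Replacing the hard two-means split with soft EM posteriors, estimated in the projected Fisher space, is the refinement that RSDA iterates.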

Collaboration


Dive into Dijia Wu's collaborations.

Top Co-Authors

Le Lu
National Institutes of Health

Kim L. Boyer
Rensselaer Polytechnic Institute

Jinbo Bi
University of Connecticut