

Publication


Featured research published by Dongfeng Han.


IEEE Transactions on Medical Imaging | 2013

Optimal Co-Segmentation of Tumor in PET-CT Images With Context Information

Qi Song; Junjie Bai; Dongfeng Han; Sudershan K. Bhatia; Wenqing Sun; William M. Rockey; John E. Bayouth; John M. Buatti; Xiaodong Wu

Positron emission tomography (PET)-computed tomography (CT) images have been widely used in clinical practice for radiotherapy treatment planning. Many existing segmentation approaches work on only a single imaging modality and therefore suffer from the low spatial resolution of PET or the low contrast of CT. In this work, we propose a novel method for co-segmentation of the tumor in both PET and CT images that exploits the advantages of each modality: the functional information from PET and the anatomical structure information from CT. The approach formulates segmentation as the minimization of a Markov random field model that encodes information from both modalities. The optimization is solved with a graph-cut based method. Two sub-graphs are constructed for the segmentation of the PET and CT images, respectively. To achieve consistent results across the two modalities, an adaptive context cost is enforced by adding context arcs between the two sub-graphs. An optimal solution is obtained by solving a single maximum flow problem, which yields simultaneous segmentation of the tumor volume in both modalities. The proposed algorithm was validated on robust delineation of lung tumors in 23 PET-CT datasets and on two head-and-neck cancer subjects. Both qualitative and quantitative results show significant improvement over graph-cut methods using PET or CT alone.
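The single-max-flow construction described above can be illustrated with a toy sketch. This is not the paper's implementation: the 1-D signals, the unary costs, and the `lam`/`ctx` weights below are illustrative assumptions; only the overall structure (two sub-graphs joined by context arcs, cut by one max-flow computation) follows the abstract.

```python
from collections import defaultdict, deque

def _source_side_of_min_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns the set of nodes on the source side
    of the minimum cut."""
    flow = defaultdict(float)
    adj = defaultdict(set)
    for u, v in cap:
        adj[u].add(v)
        adj[v].add(u)
    def residual(u, v):
        return cap.get((u, v), 0.0) - flow[(u, v)] + flow[(v, u)]
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual(u, v) > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:            # no augmenting path left: cut found
            return set(parent)
        path, v = [], t
        while parent[v] is not None:   # recover the augmenting path
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual(u, v) for u, v in path)
        for u, v in path:
            flow[(u, v)] += bottleneck

def cosegment(pet, ct, lam=0.1, ctx=0.1):
    """Toy 1-D co-segmentation: one max-flow over two coupled sub-graphs.
    pet, ct: tumor-likeness values in [0, 1]; lam: smoothness weight within
    each modality; ctx: context-arc capacity penalizing label disagreement."""
    n = len(pet)
    cap = {}
    for mod, sig in (("p", pet), ("c", ct)):
        for i, v in enumerate(sig):
            node = (mod, i)
            cap[("s", node)] = v          # paid if voxel ends up background
            cap[(node, "t")] = 1.0 - v    # paid if voxel ends up tumor
            if i:                         # pairwise smoothness arcs
                cap[((mod, i - 1), node)] = lam
                cap[(node, (mod, i - 1))] = lam
    for i in range(n):                    # context arcs couple the sub-graphs
        cap[(("p", i), ("c", i))] = ctx
        cap[(("c", i), ("p", i))] = ctx
    src = _source_side_of_min_cut(cap, "s", "t")
    return ([int(("p", i) in src) for i in range(n)],
            [int(("c", i) in src) for i in range(n)])

pet = [0.1, 0.2, 0.9, 0.8, 0.1]
ct = [0.2, 0.1, 0.7, 0.9, 0.2]
print(cosegment(pet, ct))
```

Because the two sub-graphs share one flow network, any disagreement between the PET and CT labelings is charged `ctx` per voxel, which is what drives the consistent joint result.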


Information Processing in Medical Imaging | 2011

Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method

Dongfeng Han; John E. Bayouth; Qi Song; Aakant Taurani; Milan Sonka; John M. Buatti; Xiaodong Wu

Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution of PET and the low contrast of CT. In this paper, we propose a general framework that uses PET and CT images simultaneously for tumor segmentation, exploiting the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate the problem as a Markov random field (MRF) based segmentation of the image pair with a regularization term that penalizes segmentation differences between PET and CT. Our method simulates the clinical practice of delineating tumor using both PET and CT together, and concurrently segments the tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results show that our method effectively uses information from both PET and CT, yielding a Dice similarity coefficient of 0.85 and an average median Hausdorff distance (HD) of 6.4 mm, a 10% (resp., 16%) improvement over the graph cuts method using only the PET (resp., CT) images.
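The Dice similarity coefficient quoted above is a standard overlap metric; here is a minimal reference implementation (the masks are made-up examples, not the paper's data):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks of equal length:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

# Two toy masks overlapping in two of their three positive voxels.
print(dice([0, 1, 1, 1, 0], [0, 0, 1, 1, 1]))
```

A score of 1.0 means identical masks; 0.0 means no overlap at all.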


International Conference on Computer Vision | 2009

Optimal multiple surfaces searching for video/image resizing - a graph-theoretic approach

Dongfeng Han; Xiaodong Wu; Milan Sonka

Content-aware video/image resizing is increasingly relevant for displaying high-quality images and video on devices with different resolutions. In this paper, we present a novel algorithm that finds multiple 3-D surfaces simultaneously with a globally optimal solution for video/image resizing. Our algorithm is based on graph theory: it first analyzes the video/image data to define an energy value for each voxel; a 4-D graph is then constructed with costs assigned according to these energy values; finally, multiple 3-D surfaces are detected by a global optimization process solved via s-t graph cuts. By removing or inserting these multiple 3-D surfaces, content-aware video/image resizing is achieved. We also prove that our algorithm finds the globally optimal solution for the crossing-surfaces problem, in which several surfaces may cross each other. The proposed method is demonstrated on a variety of video/image data and compared to the state of the art in video/image resizing.


The Visual Computer | 2010

Optimal multiple-seams search for image resizing with smoothness and shape prior

Dongfeng Han; Milan Sonka; John E. Bayouth; Xiaodong Wu

Content-aware image resizing is increasingly relevant for displaying high-quality images and video on devices with different resolutions. We present a novel method that finds multiple seams simultaneously with global optimality for image resizing, incorporating both region smoothness and a seam shape prior through a 3-D graph-theoretic approach. The globally optimal seams are obtained simultaneously by solving a maximum flow problem on an arc-weighted graph representation. Representing the resizing problem on an arc-weighted graph lets us incorporate a wide spectrum of constraints into the formulation, improving resizing results. By removing or inserting these multiple seams, content-aware image resizing is achieved. Owing to the simultaneous detection of multiple seams, our algorithm has several desirable features: it handles both crossing and non-crossing seam cases, incorporates various feasible geometric constraints, and accounts for seam importance, region smoothness, and shape prior information. The proposed method was implemented and evaluated on a variety of image data and compared with the state of the art in image resizing.
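For contrast with the simultaneous multi-seam formulation above, here is the classic single-seam dynamic program (Avidan-Shamir seam carving) that such graph-based methods generalize; the `energy` grid is an illustrative stand-in for a real energy map.

```python
def min_vertical_seam(energy):
    """Single minimum-energy vertical seam by dynamic programming.
    energy: 2-D list of non-negative pixel costs.
    Returns one column index per row; consecutive indices differ by <= 1."""
    rows, cols = len(energy), len(energy[0])
    cost = [list(energy[0])]              # cumulative cost table, top row first
    for r in range(1, rows):
        prev = cost[-1]
        cost.append([energy[r][c] + min(prev[max(c - 1, 0):c + 2])
                     for c in range(cols)])
    # Backtrack from the cheapest cell in the bottom row.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam.append(min(range(lo, hi), key=lambda cc: cost[r][cc]))
    seam.reverse()
    return seam

# A diagonal band of low energy: the seam follows it.
print(min_vertical_seam([[1, 9, 9], [9, 1, 9], [9, 9, 1]]))
```

The multi-seam graph formulation replaces this per-seam greedy repetition with one maximum-flow problem that extracts all seams at once, which is what enables the global optimality claims above.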


Computer Vision and Pattern Recognition | 2011

Feature guided motion artifact reduction with structure-awareness in 4D CT images

Dongfeng Han; John E. Bayouth; Qi Song; Sudershan K. Bhatia; Milan Sonka; Xiaodong Wu

In this paper, we propose a novel method to reduce the magnitude of 4D CT artifacts by stitching two images with a data-driven regularization constraint that helps preserve local anatomical structures. Our method first computes an interface seam for the stitching in the overlapping region of the first image; the seam passes through the "smoothest" region to reduce structural complexity along the stitching interface. Then we compute the displacements of the seam by matching the corresponding interface seam in the second image. We use sparse 3D features as structure cues to guide the seam matching, incorporating a regularization term to maintain structural consistency. The energy function is minimized by solving a multiple-label problem in a Markov random field with an anatomical-structure-preserving regularization term. The displacements are propagated to the rest of the second image, and the two images are stitched along the interface seams according to the computed displacement field. The method was tested on both simulated data and clinical 4D CT images. The experiments on simulated data demonstrated that the proposed method reduced the average landmark distance error from 2.9 mm to 1.3 mm, outperforming a registration-based method by about 55%. For clinical 4D CT data, image quality was evaluated by three medical experts, and all of them identified far fewer artifacts in the images produced by our method than in those produced by the compared methods.


Medical Image Computing and Computer-Assisted Intervention | 2010

Motion artifact reduction in 4D helical CT: graph-based structure alignment

Dongfeng Han; John E. Bayouth; Sudershan K. Bhatia; Milan Sonka; Xiaodong Wu

Four-dimensional CT (4D CT) provides a way to reduce positional uncertainties caused by respiratory motion. Due to the inconsistency of patient breathing, images from different respiratory periods may be misaligned, so the acquired 3D data may not accurately represent the anatomy. In this paper, we propose a method based on graph algorithms to reduce the magnitude of artifacts present in helical 4D CT images. The method operates directly on the reconstructed images. Experiments on simulated data showed that the proposed method reduced landmark distance errors from 2.7 mm to 1.5 mm, outperforming registration methods by about 42%. For clinical 4D CT data, image quality was evaluated by three medical experts, all of whom identified far fewer artifacts in the images produced by our method than in those produced by commercial 4D CT software.


Medical Physics | 2010

SU‐GG‐I‐108: Reduce Artifacts for Helical 4D CT Image

Dongfeng Han; John E. Bayouth; Sudershan K. Bhatia; Milan Sonka; Xiaodong Wu

Purpose: Four-dimensional CT (4D CT) provides a way to reduce positional uncertainties caused by respiratory motion. Due to the inconsistency of patient breathing, images from different periods may be misaligned, and the acquired 3D data may not represent the true anatomy. We propose a method to reduce the artifacts present in helical 4D CT images.

Method and Materials: 4D CT data are formed by temporal concatenation of 3D phase-specific datasets, with artifacts occurring between adjacent stacks acquired from successive respiratory periods. Our general approach to removing these artifacts is to find a surface in one stack, compute the corresponding surface in the next stack, and then align the two surfaces by deformation. The proposed method comprises five steps: (1) initial non-rigid registration using a B-spline registration technique; (2) searching for a surface in the first stack with the graph search method; (3) finding the surface flow in the second stack with the graph cut method; (4) propagating the flow to the rest of the second stack by solving a Laplace equation; (5) warping the stacks with linear interpolation to produce the artifact-reduced image. Ground truth was established and the method tested on five 3D CT image datasets with seven landmarks each. The results on clinical 4D CT images were compared to commercial software and judged by medical observers.

Results: Landmark distance errors from ground truth were reduced by 42%, from 2.7 mm to 1.5 mm, by the proposed method. All observers identified fewer artifacts in the images created with the proposed method.

Conclusion: The proposed method provides a way to reduce the magnitude of artifacts directly from the reconstructed images.
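Step (4) of the pipeline, propagating the seam displacements by solving a Laplace equation, can be sketched with simple Jacobi relaxation. The grid size, the boundary conditions, and the `propagate_flow` helper below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def propagate_flow(shape, seam_rows, seam_disp, iters=500):
    """Diffuse per-column seam displacements into the rest of a 2-D stack by
    Jacobi iteration of the discrete Laplace equation. The seam values and
    the top/bottom rows are Dirichlet boundaries; in this sketch the
    left/right border columns are also held at zero."""
    h, w = shape
    u = np.zeros((h, w))
    fixed = np.zeros((h, w), dtype=bool)
    for col in range(w):
        u[seam_rows[col], col] = seam_disp[col]   # pin seam displacement
        fixed[seam_rows[col], col] = True
    fixed[0, :] = fixed[-1, :] = True             # zero far from the seam
    for _ in range(iters):
        nb = u.copy()
        # Each interior cell becomes the average of its four neighbors.
        nb[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                          u[1:-1, :-2] + u[1:-1, 2:]) / 4.0
        u = np.where(fixed, u, nb)                # keep boundaries pinned
    return u

# A flat seam across the middle row, displaced by 4 everywhere: the
# displacement field decays smoothly toward the zero boundaries.
field = propagate_flow((5, 5), [2] * 5, [4.0] * 5)
print(field.round(3))
```

Solving the Laplace equation gives the smoothest interpolation of the seam displacements, so the warp in step (5) does not introduce new discontinuities away from the stitching interface.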


Medical Physics | 2009

SU‐FF‐I‐117: Evaluation of Measure Discontinuity Metrics for 4‐D CT Reconstruction Data

Dongfeng Han; John E. Bayouth; Xiaodong Wu; Milan Sonka

Introduction: Measuring discontinuity plays a key role in successful treatment planning for 4-D radiotherapy, which is increasingly used for lung cancer treatment. An important issue is to define a metric that can determine which breathing periods are well sampled, so that the acquired image data can be used to reconstruct the 4-D volume. We evaluate several such metrics on 4-D CT data; our results give practical guidance for assessing the quality of acquired data.

Method and Materials: Suppose stack s1 lies directly above stack s2; the difference between the last slice of s1 and the first slice of s2 is computed (denoted d). d should follow some distribution p. In the absence of prior knowledge, a Gaussian distribution is a reasonable choice, i.e., d ∼ Gaussian(μ, σ). The parameters of the distribution are learned from the sets of differences computed from s1 and s2, and the metrics are computed from these parameters.

Results: (1) One of the metrics is effective, detecting about 83.3% of stacks with large artifacts. By ROC analysis, the error at the equal error rate (intersection with the curve diagonal) is 0.833, the threshold at equal error is 0.0320, and the area under the curve is 0.8747. (2) Pre-processing is important. (3) Not all metrics work well.

Conclusion: Measuring discontinuity is a fundamental issue in 4-D reconstruction that is not well addressed in the state of the art. Our conclusions are: (1) ground-truth data for large-scale 4-D CT should be created; to our knowledge, no work has been done in this field; (2) the distribution of d should be analyzed further, as distributions more complex than the Gaussian may perform better.
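The Gaussian model of the slice difference d can be sketched as a z-score test. The training values below are made up, and the `discontinuity_score` helper is an illustrative assumption; the paper's actual metrics and its 0.0320 equal-error threshold are not reproduced here.

```python
import statistics

def discontinuity_score(train_diffs, d):
    """Fit d ~ Gaussian(mu, sigma) to training slice differences and score a
    candidate stack boundary by its z-score; a large score flags a likely
    misalignment artifact."""
    mu = statistics.mean(train_diffs)
    sigma = statistics.stdev(train_diffs)
    return abs(d - mu) / sigma

# Differences from well-aligned boundaries cluster near 1.0 (made-up data).
train = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]
print(discontinuity_score(train, 3.0))   # far outside the learned spread
print(discontinuity_score(train, 1.1))   # consistent with the training data
```

In practice one would pick the decision threshold from an ROC curve on labeled data, as the abstract's equal-error analysis does.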


Medical Physics | 2011

Characterization and identification of spatial artifacts during 4D-CT imaging

Dongfeng Han; John E. Bayouth; Sudershan K. Bhatia; Milan Sonka; Xiaodong Wu


International Symposium on Algorithms and Computation | 2011

Maximum weight digital regions decomposable into digital star-shaped regions

Matt Gibson; Dongfeng Han; Milan Sonka; Xiaodong Wu

Collaboration


Dive into Dongfeng Han's collaborations.

Top Co-Authors


John E. Bayouth

University of Wisconsin-Madison

Wenqing Sun

University of Texas at El Paso
