Publications

Featured research published by Tianye Niu.


Medical Physics | 2011

Scatter correction for full‐fan volumetric CT using a stationary beam blocker in a single full scan

Tianye Niu; L Zhu

PURPOSE Applications of volumetric CT (VCT) are hampered by shading and streaking artifacts in the reconstructed images. These artifacts are mainly due to the strong x-ray scatter signals that accompany the large illumination area within one projection, which lead to CT number inaccuracy, image contrast loss, and spatial nonuniformity. Although different scatter correction algorithms have been proposed in the literature, a standard solution still remains unclear. Measurement-based methods use a beam blocker to acquire scatter samples. These techniques have unrivaled advantages over other existing algorithms in that they are simple and efficient, and achieve high scatter estimation accuracy without prior knowledge of the imaged object. Nevertheless, primary signal loss is inevitable in the scatter measurement, and multiple scans or moving the beam blocker during data acquisition are typically employed to compensate for the missing primary data. In this paper, we propose a new measurement-based scatter correction algorithm without primary compensation for full-fan VCT. An accurate reconstruction is obtained with a single scan and a stationary x-ray beam blocker, two seemingly incompatible features that enable simple and efficient scatter correction without increasing scan time or patient dose. METHODS Based on CT reconstruction theory, we distribute the blocked data over the projection area where primary signals are considered approximately redundant in a full scan, such that the CT image quality is not degraded even with primary loss. Scatter is then accurately estimated by interpolation, and scatter-corrected CT images are obtained using an FDK-based reconstruction algorithm. RESULTS The proposed method is evaluated using two phantom studies on a tabletop CBCT system.
On the Catphan©600 phantom, our approach reduces the reconstruction error from 207 Hounsfield unit (HU) to 9 HU in the selected region of interest, and improves the image contrast by a factor of 2.0 in the high-contrast regions. On an anthropomorphic head phantom, the reconstruction error is reduced from 97 HU to 6 HU in the soft tissue region and image spatial nonuniformity decreases from 27% to 5% after correction. CONCLUSIONS Our method inherits the main advantages of measurement-based methods while avoiding their shortcomings. It has the potential to become a practical scatter correction solution widely implementable on different VCT systems.
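The core of the measurement-based approach, estimating smooth scatter by interpolating samples taken behind the blocker, can be sketched in one dimension. Everything below (signal profiles, blocker positions, noise-free readings) is hypothetical; the actual method works on 2-D projections of a full CBCT scan and redistributes the blocked primary data across redundant views before FDK reconstruction.

```python
import numpy as np

# Hypothetical 1-D detector signals: a primary profile plus a smooth,
# slowly varying scatter profile.
detector = np.arange(100)                                   # detector channel index
primary = 1000.0 * np.exp(-0.02 * np.abs(detector - 50))    # assumed primary profile
scatter_true = 200.0 + 0.5 * detector                       # assumed smooth scatter
measured = primary + scatter_true

# Behind the stationary blocker strips the primary is absorbed, so those
# channels read (approximately) scatter only.
blocked = np.arange(5, 100, 15)                 # hypothetical blocker positions
scatter_samples = scatter_true[blocked]         # what the blocked channels record

# Scatter is low-frequency, so interpolating the sparse samples recovers it
# well (np.interp holds the endpoint values outside the sampled range).
scatter_est = np.interp(detector, blocked, scatter_samples)
primary_corrected = measured - scatter_est

err = np.max(np.abs(primary_corrected - primary))
```

Because the assumed scatter profile is smooth, subtracting the interpolated estimate leaves only a small residual error in the recovered primary signal.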


Medical Physics | 2012

Accelerated barrier optimization compressed sensing (ABOCS) reconstruction for cone-beam CT: phantom studies

Tianye Niu; L Zhu

PURPOSE Recent advances in compressed sensing (CS) enable accurate CT image reconstruction from highly undersampled and noisy projection measurements, due to the sparsifiable feature of most CT images using total variation (TV). These novel reconstruction methods have demonstrated advantages in clinical applications where radiation dose reduction is critical, such as onboard cone-beam CT (CBCT) imaging in radiation therapy. The image reconstruction using CS is formulated as either a constrained problem to minimize the TV objective within a small and fixed data fidelity error, or an unconstrained problem to minimize the data fidelity error with TV regularization. However, the conventional solutions to the above two formulations are either computationally inefficient or involved with inconsistent regularization parameter tuning, which significantly limit the clinical use of CS-based iterative reconstruction. In this paper, we propose an optimization algorithm for CS reconstruction which overcomes the above two drawbacks. METHODS The data fidelity tolerance of CS reconstruction can be well estimated based on the measured data, as most of the projection errors are from Poisson noise after effective data correction for scatter and beam-hardening effects. We therefore adopt the TV optimization framework with a data fidelity constraint. To accelerate the convergence, we first convert such a constrained optimization using a logarithmic barrier method into a form similar to that of the conventional TV regularization based reconstruction but with an automatically adjusted penalty weight. The problem is then solved efficiently by gradient projection with an adaptive Barzilai-Borwein step-size selection scheme. The proposed algorithm is referred to as accelerated barrier optimization for CS (ABOCS), and evaluated using both digital and physical phantom studies. RESULTS ABOCS directly estimates the data fidelity tolerance from the raw projection data. 
Therefore, as demonstrated in both digital Shepp-Logan and physical head phantom studies, consistent reconstruction performances are achieved using the same algorithm parameters on scans with different noise levels and/or on different objects. On the contrary, the penalty weight in a TV regularization based method needs to be fine-tuned in a large range (up to seven times) to maintain the reconstructed image quality. The improvement of ABOCS on computational efficiency is demonstrated in the comparisons with adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS), an existing CS reconstruction algorithm also using constrained optimization. ASD-POCS alternately minimizes the TV objective using adaptive steepest descent (ASD) and the data fidelity error using projection onto convex sets (POCS). For similar image qualities of the Shepp-Logan phantom, ABOCS requires less computation time than ASD-POCS in MATLAB by more than one order of magnitude. CONCLUSIONS We propose ABOCS for CBCT reconstruction. As compared to other published CS-based algorithms, our method has attractive features of fast convergence and consistent parameter settings for different datasets. These advantages have been demonstrated on phantom studies.
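The solver inside ABOCS is gradient projection with a Barzilai-Borwein (BB) step size. A minimal sketch of that step-size rule, applied to a toy nonnegative least-squares problem rather than the full barrier/TV formulation, looks as follows; the problem size, data, and iteration count are all made up for illustration.

```python
import numpy as np

# Toy problem: min_{x >= 0} ||A x - b||^2, solved by gradient projection
# with the Barzilai-Borwein step-size rule (a sketch of the scheme only,
# not the ABOCS barrier/TV objective).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.abs(rng.standard_normal(5))     # nonnegative ground truth
b = A @ x_true

grad = lambda x: 2.0 * A.T @ (A @ x - b)

x = np.zeros(5)
g = grad(x)
step = 1e-3                                  # small initial step size
for _ in range(300):
    x_new = np.maximum(x - step * g, 0.0)    # gradient step + projection onto x >= 0
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if abs(s @ y) > 1e-12:
        step = (s @ s) / (s @ y)             # BB rule: mimics a quasi-Newton scaling
    x, g = x_new, g_new

residual = np.linalg.norm(A @ x - b)
```

The BB step adapts automatically to the local curvature, which is why it converges in far fewer iterations than a fixed small step, at the cost of the non-monotonic behavior the follow-up paper below addresses.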


Medical Physics | 2012

Quantitative cone-beam CT imaging in radiation therapy using planning CT as a prior: First patient studies

Tianye Niu; A. Al-Basheer; Lei Zhu

PURPOSE Quantitative cone-beam CT (CBCT) imaging is in increasing demand for high-performance image guided radiation therapy (IGRT). However, current CBCT has poor image quality, mainly due to scatter contamination. Its clinical application is therefore limited to patient setup based only on bony structures. To improve CBCT imaging for quantitative use, we recently proposed a correction method using the planning CT (pCT) as prior knowledge. Promising phantom results have been obtained on a tabletop CBCT system, using a correction scheme with rigid registration and without iterations. More challenges arise in clinical implementations of our method, especially because patients have large organ deformation between different scans. In this paper, we propose an improved framework to extend our method from bench to bedside by including several new components. METHODS The basic principle of our correction algorithm is to estimate the primary signals of the CBCT projections via forward projection on the pCT image, and then to obtain the low-frequency errors in the CBCT raw projections by subtracting the estimated primary signals and low-pass filtering. We improve the algorithm by using deformable registration to minimize the geometric difference between the pCT and CBCT images. Since the registration performance relies on the accuracy of the CBCT image, we design an optional iterative scheme to update the CBCT image used in the registration. Large correction errors result from mismatched objects in the pCT and CBCT scans. Another optional step of gas pocket and couch matching is added to the framework to reduce these effects. RESULTS The proposed method is evaluated on four prostate patients, of which two cases are presented in detail to investigate the method's performance over the large variety of patient geometry seen in clinical practice. The first patient has small anatomical changes from the planning to the treatment room. Our algorithm works well even without the optional iterations and the gas pocket and couch matching. The image correction on the second patient is more challenging due to the effects of gas pockets and the attenuating couch. The improved framework with all new components is used to fully evaluate the correction performance. The enhanced image quality has been evaluated using the mean CT number and spatial nonuniformity (SNU) errors as well as the contrast improvement factor. If the pCT image is considered the ground truth, on the four patients the overall mean CT number error is reduced from over 300 HU to below 16 HU in the selected regions of interest (ROIs), and the SNU error is suppressed from over 18% to below 2%. The soft-tissue contrast is improved by an average factor of 2.6. CONCLUSIONS We further improve our pCT-based CBCT correction algorithm for clinical use. Superior correction performance has been demonstrated in four patient studies. By providing quantitative CBCT images, our approach significantly increases the accuracy of advanced CBCT-based clinical applications for IGRT.
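The central signal model, raw projection = primary + low-frequency error, with the primary estimated from the prior and the error recovered by low-pass filtering the difference, can be sketched in one dimension. The signal shapes, the moving-average filter, and the assumption of a perfectly matched prior are all hypothetical simplifications; the real method forward-projects a deformably registered pCT volume.

```python
import numpy as np

# Hypothetical 1-D projection: high-frequency anatomy (primary) plus a
# smooth, scatter-like low-frequency error.
n = 256
t = np.linspace(0, 1, n)
primary_true = 1.0 + 0.3 * np.sin(12 * np.pi * t)   # high-frequency anatomy signal
error_lf = 0.5 + 0.2 * np.sin(2 * np.pi * t)        # smooth low-frequency error
raw = primary_true + error_lf

primary_est = primary_true          # ideal prior: pCT exactly matches the patient

def low_pass(v, width=31):
    # simple moving-average low-pass filter
    kernel = np.ones(width) / width
    return np.convolve(v, kernel, mode="same")

# Subtracting the prior-based primary estimate leaves the error plus any
# high-frequency mismatch; low-pass filtering keeps only the smooth error.
error_est = low_pass(raw - primary_est)
corrected = raw - error_est
```

With a matched prior, the corrected signal agrees closely with the true primary away from the filter's edge region; registration errors would show up as high-frequency content that the low-pass step is designed to reject.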


Medical Physics | 2014

Iterative image‐domain decomposition for dual‐energy CT

Tianye Niu; Xue Dong; Michael Petrongolo; Lei Zhu

PURPOSE Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. METHODS The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. 
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. RESULTS On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. CONCLUSIONS The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
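The baseline this paper improves on, direct per-pixel material decomposition by matrix inversion, and the noise amplification it causes can be shown in a few lines. The attenuation coefficients, noise level, and material pair below are made-up numbers for illustration, not values from the paper.

```python
import numpy as np

# Two-material decomposition model: measurement m = A x, with x the
# (bone, water) fractions and A the (hypothetical) attenuation matrix for
# the high- and low-kVp scans.
A = np.array([[0.30, 0.20],    # high-kVp row: [mu_bone, mu_water]
              [0.50, 0.25]])   # low-kVp row:  [mu_bone, mu_water]
A_inv = np.linalg.inv(A)

rng = np.random.default_rng(1)
x_true = np.array([0.4, 0.6])                 # true composition of one pixel
clean = A @ x_true
# Independent noise on the two CT images, simulated over many realizations.
noisy = clean + rng.normal(0.0, 0.01, size=(10000, 2))

decomposed = noisy @ A_inv.T                  # direct matrix inversion, per pixel

# The decomposed noise is amplified and anti-correlated; its covariance is
# A_inv @ Sigma @ A_inv.T with Sigma = sigma^2 I. This variance-covariance
# matrix is exactly what the iterative method uses as a penalty weight.
cov_pred = 0.01**2 * A_inv @ A_inv.T
cov_emp = np.cov(decomposed.T)
```

The empirical covariance matches the prediction, and its diagonal is far larger than the input noise variance, which is the signal-to-noise degradation the iterative decomposition suppresses.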


Physics in Medicine and Biology | 2014

Accelerated barrier optimization compressed sensing (ABOCS) for CT reconstruction with improved convergence

Tianye Niu; Xiaojing Ye; Quentin Fruhauf; Michael Petrongolo; Lei Zhu

Recently, we proposed a new algorithm of accelerated barrier optimization compressed sensing (ABOCS) for iterative CT reconstruction. The previous implementation of ABOCS uses gradient projection (GP) with a Barzilai-Borwein (BB) step-size selection scheme (GP-BB) to search for the optimal solution. The algorithm does not converge stably due to its non-monotonic behavior. In this paper, we further improve the convergence of ABOCS using the unknown-parameter Nesterov (UPN) method and investigate the ABOCS reconstruction performance on clinical patient data. Comparison studies are carried out on reconstructions of computer simulation, a physical phantom and a head-and-neck patient. In all of these studies, the ABOCS results using UPN show more stable and faster convergence than those of the GP-BB method and a state-of-the-art Bregman-type method. As shown in the simulation study of the Shepp-Logan phantom, UPN achieves the same image quality as those of GP-BB and the Bregman-type methods, but reduces the iteration numbers by up to 50% and 90%, respectively. In the Catphan©600 phantom study, a high-quality image with relative reconstruction error (RRE) less than 3% compared to the full-view result is obtained using UPN with 17% projections (60 views). In the conventional filtered-backprojection reconstruction, the corresponding RRE is more than 15% on the same projection data. The superior performance of ABOCS with the UPN implementation is further demonstrated on the head-and-neck patient. Using 25% projections (91 views), the proposed method reduces the RRE from 21% as in the filtered backprojection (FBP) results to 7.3%. In conclusion, we propose UPN for ABOCS implementation. As compared to GP-BB and the Bregman-type methods, the new method significantly improves the convergence with higher stability and fewer iterations.
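The convergence gap the paper exploits, Nesterov-type acceleration versus plain gradient descent, can be demonstrated on a toy ill-conditioned quadratic. This is a sketch of the standard accelerated scheme only; the actual UPN method additionally estimates the unknown smoothness parameters on the fly, and the problem below is hypothetical.

```python
import numpy as np

# f(x) = 0.5 x^T Q x, an ill-conditioned quadratic (condition number 100).
Q = np.diag([1.0, 100.0])
L = 100.0                      # Lipschitz constant of the gradient
grad = lambda x: Q @ x

def gd(x0, iters):
    # plain gradient descent with step 1/L
    x = x0.copy()
    for _ in range(iters):
        x = x - grad(x) / L
    return x

def nesterov(x0, iters):
    # standard Nesterov/FISTA-style acceleration with step 1/L
    x, y = x0.copy(), x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum/extrapolation step
        x, t = x_new, t_new
    return x

x0 = np.array([1.0, 1.0])
err_gd = np.linalg.norm(gd(x0, 100))        # distance to the minimizer (origin)
err_nes = np.linalg.norm(nesterov(x0, 100))
```

After the same number of iterations, the accelerated iterate is much closer to the minimizer, consistent with its O(1/k^2) objective guarantee versus O(1/k) for gradient descent.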


Medical Physics | 2014

Combined iterative reconstruction and image‐domain decomposition for dual energy CT using total‐variation regularization

Xue Dong; Tianye Niu; Lei Zhu

PURPOSE Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of the decomposed signals is reduced by signal cancellation, while the image noise accumulates from the two CT images of independent scans. Direct image decomposition therefore leads to severe degradation of the signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, and thus do not exploit the statistical properties of the decomposed images during reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. METHODS The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and the total variation of the decomposed images in one framework, and the decomposition step is carried out iteratively together with the reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent between the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method's performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization.
RESULTS On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains better edge-preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. CONCLUSIONS The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.


Medical Physics | 2012

Relationship between x-ray illumination field size and flat field intensity and its impacts on x-ray imaging

Xue Dong; Tianye Niu; Xun Jia; Lei Zhu

PURPOSE X-ray cone-beam CT (CBCT) is being increasingly used for various clinical applications, while its performance is still hindered by image artifacts. This work investigates a new source of reconstruction error, which is often overlooked in the current CBCT imaging. The authors find that the x-ray flat field intensity (I(0)) varies significantly as the illumination volume size changes at different collimator settings. A wrong I(0) value leads to inaccurate CT numbers of reconstructed images as well as wrong scatter measurements in the CBCT research. METHODS The authors argue that the finite size of x-ray focal spot together with the detector glare effect cause the I(0) variation at different illumination sizes. Although the focal spot of commercial x-ray tubes typically has a nominal size of less than 1 mm, the off-focal-spot radiation covers an area of several millimeters on the tungsten target. Due to the large magnification factor from the field collimator to the detector, the penumbra effects of the collimator blades result in different I(0) values for different illumination field sizes. Detector glare further increases the variation, since one pencil beam of incident x-ray is scattered into an area of several centimeters on the detector. In this paper, the authors study these two effects by measuring the focal spot distribution with a pinhole assembly and the detector point spread function (PSF) with an edge-spread function method. The authors then derive a formula to estimate the I(0) value for different illumination field sizes, using the measured focal spot distribution and the detector PSF. Phantom studies are carried out to investigate the accuracy of scatter measurements and CT images with and without considering the I(0) variation effects. 
RESULTS On our tabletop system with a Varian Paxscan 4030CB flat-panel detector and a Varian RAD-94 x-ray tube as used on a clinical CBCT system, the focal spot distribution has a measured full-width-at-half-maximum (FWHM) of around 0.4 mm, while non-negligible off-focal-spot radiation is observed at a distance of over 2 mm from the center. The measured detector PSF has an FWHM of 0.510 mm, with a shape close to Gaussian. From these two distributions, the authors calculate the estimated I(0) values at different collimator settings. The I(0) variation mainly comes from the focal spot effect. The estimation matches well with the measurements at different collimator widths in both horizontal and vertical directions, with an average error of less than 3%. Our method improves the accuracy of conventional scatter measurements, where the scatter is measured as the difference between fan-beam and cone-beam projections. On a uniform water cylinder phantom, more accurate I(0) suppresses the unfaithful high-frequency signals at the object boundaries of the measured scatter, and the scatter-to-primary ratio (SPR) estimation error is reduced from 0.158 to 0.014. The proposed I(0) estimation also reduces the reconstruction error from about 20 HU to less than 4 HU in the selected regions of interest on the Catphan©600 phantom. CONCLUSIONS The I(0) variation is identified as one additional error source in x-ray imaging. By measuring the focal-spot distribution and detector PSF, the authors propose an accurate method of estimating the I(0) value for different illumination field sizes. The method obtains more accurate scatter measurements and therefore facilitates scatter correction algorithm designs. As correction methods for other CBCT artifacts become more successful, our research is significant in further improving the CBCT imaging accuracy.
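The mechanism behind the I(0) variation, an ideal top-hat field blurred by the focal-spot penumbra and detector glare, so that narrow fields lose intensity from their center, can be sketched in one dimension. The kernel widths and field sizes below are hypothetical, not the measured distributions from the paper.

```python
import numpy as np

# Detector coordinate in mm (hypothetical geometry).
x = np.linspace(-40.0, 40.0, 801)
dx = x[1] - x[0]

def gaussian(x, sigma):
    # normalized Gaussian density on the grid
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / (g.sum() * dx)

# Combined blur kernel: focal-spot penumbra convolved with detector glare
# (both modeled as Gaussians with made-up widths).
blur = np.convolve(gaussian(x, 2.0), gaussian(x, 1.0), mode="same") * dx
blur /= blur.sum() * dx

def central_I0(half_width):
    # ideal flat field is a top-hat set by the collimator; the detector sees
    # its convolution with the blur kernel. Return the central value.
    field = (np.abs(x) <= half_width).astype(float)
    return (np.convolve(field, blur, mode="same") * dx)[len(x) // 2]

I0_narrow = central_I0(2.0)    # narrow collimation: blurred tails leak out
I0_wide = central_I0(30.0)     # wide-open collimation: full intensity reached
```

For a wide field the central value saturates at the full flat-field level, while a narrow field reads noticeably less, which is exactly why using a single I(0) for all collimator settings biases both CT numbers and fan-beam/cone-beam scatter differences.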


Computational and Mathematical Methods in Medicine | 2013

Low-dose and scatter-free cone-beam CT imaging using a stationary beam blocker in a single scan: phantom studies

Xue Dong; Michael Petrongolo; Tianye Niu; Lei Zhu

Excessive imaging dose from repeated scans and poor image quality, mainly due to scatter contamination, are the two bottlenecks of cone-beam CT (CBCT) imaging. Compressed sensing (CS) reconstruction algorithms show promise in recovering faithful signals from low-dose projection data, but do not serve the needs of accurate CBCT imaging well if effective scatter correction is not in place. Scatter can be accurately measured and removed using measurement-based methods. However, these approaches are considered impractical in conventional FDK reconstruction, due to the inevitable primary loss for scatter measurement. We combine measurement-based scatter correction and CS-based iterative reconstruction to generate scatter-free images from low-dose projections. We distribute blocked areas on the detector where primary signals are considered redundant in a full scan. The scatter distribution is estimated by interpolating/extrapolating measured scatter samples inside the blocked areas. CS-based iterative reconstruction is finally carried out on the undersampled data to obtain scatter-free and low-dose CBCT images. With only 25% of the conventional full-scan dose, our method reduces the average CT number error from 250 HU to 24 HU and increases the contrast by a factor of 2.1 on the Catphan©600 phantom. On an anthropomorphic head phantom, the average CT number error is reduced from 224 HU to 10 HU in the central uniform area.


Advances in Radiation Oncology | 2016

Viability of Noncoplanar VMAT for liver SBRT compared with coplanar VMAT and beam orientation optimized 4π IMRT

K Woods; Dan Nguyen; A Tran; V Yu; Minsong Cao; Tianye Niu; Percy Lee; Ke Sheng

Purpose The 4π static noncoplanar radiation therapy delivery technique has demonstrated better normal tissue sparing and dose conformity than the clinically used volumetric modulated arc therapy (VMAT). It is unclear whether this is a fundamental limitation of VMAT delivery or of the coplanar nature of its typical clinical plans. The dosimetry and the limits of normal-tissue-toxicity-constrained dose escalation of coplanar VMAT, noncoplanar VMAT, and 4π radiation therapy are quantified in this study. Methods and materials Clinical stereotactic body radiation therapy plans for 20 liver patients receiving 30 to 60 Gy using coplanar VMAT (cVMAT) were replanned using 3 to 4 partial noncoplanar arcs (nVMAT) and 4π with 20 intensity modulated noncoplanar fields. The conformity number, homogeneity index, 50% dose spillage volume, normal liver volume receiving >15 Gy, dose to organs at risk (OARs), and tumor control probability were compared for all 3 treatment plans. The maximum tolerable dose yielding a normal liver normal tissue complication probability (NTCP) of <1%, 5%, and 10% was calculated with the Lyman-Kutcher-Burman model for each plan, as well as the resulting survival fractions at 1, 2, 3, and 4 years. Results Compared with cVMAT, the nVMAT and 4π plans reduced liver volume receiving >15 Gy by an average of 5 cm3 and 80 cm3, respectively. 4π reduced the 50% dose spillage volume by ∼23% compared with both VMAT plans, and either significantly decreased or maintained OAR doses. The 4π maximum tolerable doses and survival fractions were significantly higher than both cVMAT and nVMAT (P < .05) for all normal liver NTCP limits used in this study. Conclusions The 4π technique provides significantly better OAR sparing than both cVMAT and nVMAT and enables more clinically relevant dose escalation for tumor local control. Therefore, despite the current accessibility of nVMAT, it is not a viable alternative to 4π for liver SBRT.


Physics in Medicine and Biology | 2015

Iterative CT shading correction with no prior information

Pengwei Wu; Xiaonan Sun; Hongjie Hu; Tingyu Mao; Wei Zhao; Ke Sheng; Alice A Cheung; Tianye Niu

Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect, and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge that the CT number distribution within one tissue component is relatively uniform. The CT image is first segmented to construct a template image in which each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical and attractive as a general solution to CT shading correction.
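The template-and-iterate loop can be sketched in one dimension: segment the image into tissue classes, fill each class with its nominal value, low-pass filter the residual to estimate the smooth shading field, subtract, and repeat. The phantom, tissue values, threshold, and image-domain Gaussian filter below are hypothetical; the actual method filters the residual in the projection (line-integral) domain and reconstructs a compensation map with FDK.

```python
import numpy as np

# Hypothetical 1-D "image": a two-tissue phantom plus a smooth shading field.
n = 512
x = np.linspace(0, 1, n)
ideal = np.where((x > 0.3) & (x < 0.7), 1.0, 0.2)   # tissue = 1.0, background = 0.2
shading = 0.3 * np.sin(np.pi * x)                   # smooth shading artifact
image = ideal + shading

def low_pass(v, sigma=40.0):
    # truncated Gaussian smoothing kernel (all-positive frequency gains)
    t = np.arange(-160, 161)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    return np.convolve(v, k, mode="same")

corrected = image.copy()
for _ in range(5):
    # threshold segmentation -> template filled with nominal tissue values
    template = np.where(corrected > 0.6, 1.0, 0.2)
    # smooth part of (image - template) approximates the shading field
    shading_est = low_pass(corrected - template)
    corrected = corrected - shading_est

err_before = np.max(np.abs(image - ideal))
err_after = np.max(np.abs(corrected - ideal))
```

Each pass refines the segmentation on a less-shaded image, so the residual shading shrinks iteration by iteration, mirroring the convergence criterion on the residual image described above.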

Collaboration

Top co-authors of Tianye Niu:

L Zhu, Georgia Institute of Technology
Lei Zhu, Georgia Institute of Technology
Xue Dong, Georgia Institute of Technology
Michael Petrongolo, Georgia Institute of Technology
Wei Zhao, Huazhong University of Science and Technology
Ke Sheng, University of California