
Publication


Featured research published by Wentao Zhu.


Nuclear Science Symposium and Medical Imaging Conference | 2015

High quality image reconstruction for short frame in dynamic PET

Wentao Zhu; Mu Chen; Yun Dong; Jun Bao; Hongdi Li

Low-count PET studies usually suffer from high image noise and quantitative bias in the reconstructed image. This is particularly significant for short frames (temporal ROI, or TROI) in dynamic PET studies, which in some cases may be as short as 30 seconds or less. In this paper, we proposed a method to improve the quality of the reconstructed short frame by utilizing information from a longer acquisition that contains the short frame. The data of the longer acquisition excluding the short frame are first sorted and reconstructed. The reconstructed image is then forward projected to obtain its contribution in the data space. A second reconstruction is then executed with data from the entire long acquisition (including the short frame), together with the previous contribution in the data space, to estimate the activity of the short frame. Results show that image quality and CRC-noise performance are both improved with our proposed method compared with the standard reconstruction using counts from the target frame only, as well as with the complementary reconstruction method published earlier.
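A minimal sketch of this two-pass scheme, using a toy MLEM loop with an invented 1D system matrix; the dimensions, count levels, and the `mlem` helper are all illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: 8 image pixels, 12 sinogram bins (hypothetical random geometry).
A = rng.uniform(0.1, 1.0, size=(12, 8))

def mlem(y, A, background=0.0, n_iter=50):
    """Basic MLEM with an optional fixed additive term in data space."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                    # sensitivity image
    for _ in range(n_iter):
        proj = A @ x + background           # expected counts per bin
        x = x * (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

# Ground truth: activity outside the short frame (TROI) and inside it.
x_rest = np.array([5, 5, 5, 5, 1, 1, 1, 1], float)
x_troi = np.array([1, 1, 4, 4, 1, 1, 1, 1], float)

y_rest = rng.poisson(A @ x_rest * 10) / 10.0            # long acquisition minus TROI
y_full = y_rest + rng.poisson(A @ x_troi * 10) / 10.0   # entire long acquisition

# Pass 1: reconstruct the acquisition excluding the short frame.
x_rest_hat = mlem(y_rest, A)
# Its forward projection is that activity's contribution in data space.
b = A @ x_rest_hat
# Pass 2: reconstruct the short frame from the full data, with b held fixed.
x_troi_hat = mlem(y_full, A, background=b)
```

The fixed term `b` plays the role the abstract describes: the long-acquisition activity is accounted for in data space so that the second reconstruction attributes the remaining counts to the short frame.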


IEEE Transactions on Medical Imaging | 2018

Self-Gating: An Adaptive Center-of-Mass Approach for Respiratory Gating in PET

Tao Feng; Jizhe Wang; Youjun Sun; Wentao Zhu; Yun Dong; Hongdi Li

The goal is to develop an adaptive center-of-mass (COM)-based approach for device-less respiratory gating of list-mode positron emission tomography (PET) data. Our method contains two steps. The first is to automatically extract an optimized respiratory motion signal from the list-mode data during acquisition: the signal is calculated by tracking the location of the COM within a volume of interest (VOI), the signal prominence (SP) is calculated based on Fourier analysis of the signal, and the VOI is adaptively optimized to maximize SP. The second step is to automatically correct signal-flipping effects: the sign of the signal is determined based on the assumption that the average patient spends more time in expiration than in inspiration. To validate our method, thirty-one 18F-FDG patient scans were included in this study. An external device-based signal was used as the gold standard, and the correlation coefficient of the data-driven signal with the device-based signal was measured. Our method successfully extracted a respiratory signal from 30 out of 31 datasets; the failure case was due to lack of uptake in the field of view. Moreover, our sign-determination method obtained correct results for all scans excluding the failure case. Quantitatively, the proposed signal-extraction approach achieved a median correlation of 0.85 with the device-based signal. Gated images using the optimized data-driven signal showed improved lesion contrast over the static image and were comparable to those using the device-based signal. We presented a new data-driven method to automatically extract the respiratory motion signal from list-mode PET data by optimizing the VOI for COM calculation, as well as to determine the motion direction from signal asymmetry. Successful application of the proposed method to most clinical datasets and comparison with the device-based signal suggest its potential as an alternative to external respiratory monitors.
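The COM-trace extraction and a band-power prominence metric can be sketched as follows. The time binning, frequency band, and the exact prominence formula are assumptions (the paper's SP definition may differ), the events are synthetic, and the sign-correction step is omitted:

```python
import numpy as np

def com_signal(z, t, t_total, dt=0.5):
    """Axial center-of-mass trace of events inside a VOI, binned in time."""
    nbins = int(t_total / dt)
    idx = np.minimum((t / dt).astype(int), nbins - 1)
    counts = np.bincount(idx, minlength=nbins)
    sums = np.bincount(idx, weights=z, minlength=nbins)
    return sums / np.maximum(counts, 1)

def signal_prominence(sig, dt=0.5, band=(0.1, 0.5)):
    """Fraction of (DC-removed) spectral power inside the respiratory band."""
    sig = sig - sig.mean()
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spec[in_band].sum() / spec.sum()

# Simulated list-mode events: axial positions modulated by 0.25 Hz breathing.
rng = np.random.default_rng(1)
t = rng.uniform(0, 60, 20000)
z = 2.0 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 5.0, t.size)
sig = com_signal(z, t, t_total=60)
sp = signal_prominence(sig)
```

Maximizing `sp` over candidate VOIs is the adaptive step the abstract describes; here a single fixed VOI is shown for brevity.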


Nuclear Science Symposium and Medical Imaging Conference | 2016

Ultra efficient and robust estimation of the attenuation map in PET imaging

Wentao Zhu; Tao Feng; Mu Chen; Yun Dong; Jun Bao; Hongdi Li

Time-of-flight (TOF) PET data determine the attenuation map up to a constant, and MLAA (maximum likelihood activity and attenuation estimation) was proposed for this purpose. However, in real systems the estimated attenuation map usually contains bias and artifacts due to various factors such as non-uniform timing resolution, detector timing drift, and biased scatter estimation. Moreover, MLAA has a much higher computational cost than conventional PET reconstruction, so improving its practical performance is important. We proposed an efficient and robust emission-attenuation joint estimation framework, based on the condition that regions with nearly uniform attenuation coefficients are segmented. Following the derivation, the update of the attenuation map only requires a few weighted additions in sinogram space, which reduces the overall computational cost significantly compared with conventional MLAA, as the latter demands at least two backward projections in each iteration. Furthermore, because the parameter space for the attenuation map is reduced and the mathematical model encourages an averaging effect in each region, the bias and artifacts due to the factors above can be reduced. We used clinical TOF PET data to evaluate the performance of our proposed method. In each iteration, the computation time for updating the attenuation map was less than 20% of that for conventional MLAA, leading to a significant improvement in overall computational efficiency (twice as fast as conventional MLAA). More importantly, the method achieved high quantitative accuracy: in a population study, SUVs computed with the estimated attenuation map and with the CT-based attenuation map had a maximum relative error of less than 5.7% in multiple VOIs including the spine, liver, kidney, and heart.
Our method can be used as a robust and efficient solution for estimating the attenuation map for quantitative PET image reconstruction, based on the single assumption that the attenuation coefficients are similar within each segmented region. The numerical error caused by treating the attenuation coefficients as uniform within each segmented region is acceptable.
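The core dimensionality-reduction idea, one unknown attenuation coefficient per segmented region, can be illustrated with a toy linear problem. The geometry, region values, and the least-squares solver below are invented for illustration; the paper's actual update runs inside an MLAA-style iteration in sinogram space:

```python
import numpy as np

rng = np.random.default_rng(2)

n_lors, n_regions = 200, 3
# L[i, r]: intersection length (cm) of LOR i with segmented region r
# (hypothetical geometry drawn at random).
L = rng.uniform(0.0, 5.0, size=(n_lors, n_regions))

# One attenuation coefficient per region, e.g. soft tissue, lung, bone (1/cm).
mu_true = np.array([0.096, 0.025, 0.150])
atten = np.exp(-L @ mu_true) * np.exp(rng.normal(0, 0.01, n_lors))  # noisy factors

# With uniform coefficients per region, -log(atten_i) = sum_r mu_r * L[i, r],
# so estimating the whole attenuation map collapses to a 3-unknown problem.
mu_hat, *_ = np.linalg.lstsq(L, -np.log(atten), rcond=None)
```

Shrinking the parameter space from one value per voxel to one per region is what makes the update cheap and averages out noise-driven artifacts.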


Nuclear Science Symposium and Medical Imaging Conference | 2016

Real-time data-driven rigid motion detection and correction for brain scan with listmode PET

Tao Feng; Defu Yang; Wentao Zhu; Yun Dong; Hongdi Li

Patient motion can cause image-blurring effects and mismatch of attenuation maps during a brain scan. To detect and correct patient head movement during a PET scan, external devices are usually used. The goal of this study is to present a method that automatically detects and corrects rigid patient motion in real time, using only the listmode data and no additional external device. Within each time interval (1 second), the listmode data were first rebinned into a 3D sinogram, and the first and second moments of the data (expectation and variance) were calculated for each angle. The mathematical relationship between the expected values and covariance matrix of the three dimensions in image space and the first and second moments calculated from the listmode data was then derived. The expected values of the three dimensions in image space represent the shift information, and the eigenvector matrix of the covariance matrix represents the rotation information. The acquired motion information was then applied directly to the listmode data to reduce motion effects. Several brain scans were carried out with the FDG tracer to evaluate the method, and a conventional method using dynamic frames was also implemented for comparison. The translation and rotation information measured over time with the new listmode-based approach and the conventional dynamic-frame approach matched very well, and the motion-corrected images produced by the new method showed much-reduced blurring effects. This result demonstrates the possibility of applying a data-driven approach to detect and correct patient head movement during PET scans without additional hardware, procedures, or post-processing time.
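The moment-based pose recovery can be sketched directly on points in image space. This is a simplification: the paper derives these moments from the rebinned sinogram rather than from reconstructed coordinates, and all data here are synthetic:

```python
import numpy as np

def pose_from_moments(points):
    """First moment -> translation; eigenvectors of second moment -> rotation."""
    mean = points.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(points.T))  # ascending eigenvalues
    return mean, evecs

rng = np.random.default_rng(3)
# Anisotropic point cloud standing in for a brain activity distribution.
base = rng.normal(0.0, 1.0, (5000, 3)) * np.array([4.0, 2.0, 1.0])

# Apply a known rigid motion: 10 degrees about z plus a shift.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
shift_true = np.array([1.0, -2.0, 0.5])
moved = base @ R.T + shift_true

m0, e0 = pose_from_moments(base)
m1, e1 = pose_from_moments(moved)
shift_est = m1 - m0          # recovered translation
```

Because the covariance of the moved cloud is R C R^T, its eigenvectors are the rotated eigenvectors of the original (up to sign), which is how the rotation is read off.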


Nuclear Science Symposium and Medical Imaging Conference | 2016

Joint direct dynamic analysis in dual-tracer PET imaging

Wentao Zhu; Tao Feng; Mu Chen; Yun Dong; Jun Bao; Hongdi Li

Dual-tracer PET imaging may improve overall lesion detectability due to different tracer kinetics. However, separating two tracers in the mixed acquired data is difficult because of the unknown individual activity change over time. We proposed an effective and robust method to separate two tracers dynamically by introducing joint Patlak and Logan analysis in dual-tracer PET imaging. Several patients underwent dual-tracer brain PET/CT scans. The entire scan time was 100 min; 13N-ammonia was injected at t=0 min and 18F-FDG at t=20 min. The dual blood input functions were separated and estimated non-invasively from static frame reconstructions, assisted by an exponential model to fit the blood input function after a certain elapsed time. Direct Logan analysis was performed on the 0–20 min data to generate Logan parametric images for 13N. For the 20–100 min data, direct Patlak estimation from raw data was performed to generate Patlak parametric images for FDG, with a modified iterative algorithm including the contribution of the 13N activity. Results showed that the proposed method is robust and applicable to dual-tracer PET imaging for separating the two tracers. In addition, dynamic images were generated to assist lesion detection. Specifically, in simulation the estimated Logan slope and Patlak slope yielded a relative error of 3% for all rods in the NEMA-like phantom. In the application to clinical data, the estimated Logan and Patlak parametric images revealed distinct contrasts. The Patlak parametric images estimated with our method also contained significantly less noise than those from the conventional image-based method, while preserving quantitative accuracy, with <5% difference from the images estimated with the conventional image-based estimation method.
The proposed joint Logan and Patlak estimation method can be used in dual-tracer imaging to separate the two tracers dynamically and to robustly obtain parametric images with higher SNR than conventional methods. It may also be used to improve lesion detectability.
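Standard Patlak analysis, one building block of the joint estimation, fits a line whose slope is the influx rate Ki; the plasma input function, time grid, and noise level below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.linspace(0.5, 80.0, 40)                 # minutes post injection
Cp = 10.0 * np.exp(-0.1 * t) + 1.0             # assumed plasma input function
Ki_true, V_true = 0.05, 0.6                    # influx rate, distribution volume

# Cumulative integral of Cp (trapezoid rule) and an irreversible-tracer TAC.
cum = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t) * (Cp[1:] + Cp[:-1]))))
Ct = (Ki_true * cum + V_true * Cp) * (1 + rng.normal(0, 0.01, t.size))

# Patlak coordinates: y = Ct/Cp versus x = (integral of Cp)/Cp.
x = cum / Cp
y = Ct / Cp
late = t > 20                                  # linear regime after equilibration
Ki_hat, V_hat = np.polyfit(x[late], y[late], 1)
```

In the dual-tracer setting, the 13N contribution would additionally be modeled (via Logan analysis of the early data) before fitting the FDG Patlak line; that coupling is what the paper's modified iterative algorithm handles.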


Nuclear Science Symposium and Medical Imaging Conference | 2016

Real-time data-driven respiratory gating with optimized automatic VOI selection

Tao Feng; Wentao Zhu; Zilin Deng; Gang Yang; Youjun Sun; Yun Dong; Jun Bao; Hongdi Li

Data-driven respiratory gating was previously developed to extract respiratory information directly from PET listmode data. It was also shown that different regions may contribute differently to the accuracy of the motion signal, affecting the success rate of this method. The goal of this study is to develop and evaluate methods that automatically determine the optimal regions from which to acquire the respiratory motion signal. The most likely annihilation point was used to map each listmode event to 3D volume space, and a parametric volume of interest (VOI) was used to select events. For a fixed VOI, the center of mass (COM) within each fixed time interval was calculated to generate a motion signal, and the signal-to-noise ratio (SNR) of the acquired motion signal was calculated based on Fourier analysis. The optimal VOI was determined as the one whose motion signal has the maximum SNR. Twenty patients with 18F-FDG injection were included in this study to evaluate the method. A conventional method that calculates the motion signal using the COM without VOI selection was also implemented for comparison, and an external device was used for several patients to validate our method. Fourier analysis of the acquired signal showed that while the respiratory signal peaks were clearly visible in only 50% of the patients with the conventional method, more than 90% of the patients showed clear respiratory signal peaks with the proposed method. High correlation was achieved with the signal measured using the external device. Gated reconstructed images using the acquired motion signal also showed much-reduced motion blurring and attenuation-activity mismatch artifacts. The results suggest that the success rate of data-driven respiratory gating improves dramatically with the new method, which requires no user intervention and can run in real time during the scan, making it viable for clinical application.
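The VOI optimization can be sketched as an exhaustive search that maximizes an in-band/out-of-band spectral power ratio (a stand-in for the paper's SNR metric); the phantom geometry, frequency band, and search grid are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic listmode events (z = axial position, t = time): only events in an
# assumed "lung" band, z in (20, 40), carry 0.25 Hz respiratory modulation.
n = 50000
t = rng.uniform(0, 120, n)
z = rng.uniform(0, 100, n)
breath = 3.0 * np.sin(2 * np.pi * 0.25 * t)
z = z + np.where((z > 20) & (z < 40), breath, 0.0)

def spectral_snr(z, t, z_lo, z_hi, dt=0.5, band=(0.1, 0.5)):
    """In-band vs. out-of-band power of the COM trace inside a candidate VOI."""
    sel = (z >= z_lo) & (z <= z_hi)
    nbins = int(120 / dt)
    idx = np.minimum((t[sel] / dt).astype(int), nbins - 1)
    counts = np.bincount(idx, minlength=nbins)
    com = np.bincount(idx, weights=z[sel], minlength=nbins) / np.maximum(counts, 1)
    com = com - com.mean()
    spec = np.abs(np.fft.rfft(com)) ** 2
    freqs = np.fft.rfftfreq(nbins, d=dt)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spec[in_band].sum() / max(spec[~in_band].sum(), 1e-12)

# Coarse grid search over candidate axial VOIs; the winner straddles the lung.
candidates = [(lo, lo + 20) for lo in range(0, 90, 10)]
best = max(candidates, key=lambda c: spectral_snr(z, t, *c))
```

VOIs dominated by static activity yield a flat COM trace whose spectrum is noise, so the ratio singles out the region that actually breathes; the same idea drives the automatic selection described above.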


Archive | 2018

SYSTEM AND METHOD FOR DETECTING ORGAN MOTION

Tao Feng; Wentao Zhu; Hongdi Li


Molecular Imaging and Biology | 2018

Zero-Extra-Dose PET Delayed Imaging with Data-Driven Attenuation Correction Estimation

Lifang Pang; Wentao Zhu; Yun Dong; Yang Lv; Hongcheng Shi


The Journal of Nuclear Medicine | 2016

Accurate quantification for delayed FDG PET/CT imaging without secondary CT exposure

Wentao Zhu; Mu Chen; Zilin Deng; Yun Dong; Hongdi Li; Hongcheng Shi


The Journal of Nuclear Medicine | 2016

Dual-tracer joint dynamic analysis in brain PET studies for quantitative tissue characterization

Wentao Zhu; Yusheng Su; Yun Dong; Defu Yang; Hongdi Li; Mu Chen; Zhigang Liang

Collaboration


Dive into Wentao Zhu's collaborations.

Top Co-Authors

Yusheng Su

Capital Medical University


Zhigang Liang

Capital Medical University
