Publication


Featured research published by Ruijiang Li.


Medical Physics | 2010

Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

Ruijiang Li; Xun Jia; John H. Lewis; Xuejun Gu; M Folkerts; Chunhua Men; S Jiang

PURPOSE: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

METHODS: Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied to the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor is derived by applying the inverted DVF to its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone-beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude.

RESULTS: The average relative image intensity error of the reconstructed volumetric images was 6.9% ± 2.4%. The average 3D tumor localization error was 0.8 ± 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection was 0.24 s (range: 0.17-0.35 s).

CONCLUSIONS: The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real time from a single x-ray image.
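As a rough illustration of the approach (not the authors' GPU implementation), the sketch below builds a PCA model of the training DVFs and then searches coefficient space until the projection of the warped reference volume matches the measurement. The axis-sum projector standing in for the cone-beam geometry, and the SciPy warp and Powell optimizer, are illustrative assumptions.

```python
# Sketch of PCA-based single-projection reconstruction; all numerics assumed.
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def pca_motion_model(dvfs, k=3):
    """dvfs: (N-1, 3, Z, Y, X) training DVFs -> mean DVF and k principal modes."""
    flat = dvfs.reshape(len(dvfs), -1)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, vt[:k]

def warp(ref, dvf):
    """Deform the reference volume by a DVF of shape (3,) + ref.shape."""
    grid = np.indices(ref.shape).astype(float)
    return map_coordinates(ref, grid + dvf, order=1)

def reconstruct_from_projection(ref, mean, modes, measured, axis=0):
    """Optimize PCA coefficients so the projection of the warped reference
    volume matches the measured projection (axis-sum = parallel-beam stand-in)."""
    shape = (3,) + ref.shape
    def cost(w):
        vol = warp(ref, (mean + w @ modes).reshape(shape))
        return np.sum((vol.sum(axis=axis) - measured) ** 2)
    w = minimize(cost, np.zeros(len(modes)), method="Powell").x
    return warp(ref, (mean + w @ modes).reshape(shape)), w
```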


IEEE Transactions on Biomedical Engineering | 2012

Accurate Respiration Measurement Using DC-Coupled Continuous-Wave Radar Sensor for Motion-Adaptive Cancer Radiotherapy

Changzhan Gu; Ruijiang Li; Hualiang Zhang; Albert Y. C. Fung; Carlos Torres; S Jiang; Changzhi Li

Accurate respiration measurement is crucial in motion-adaptive cancer radiotherapy. Conventional methods for respiration measurement are undesirable because they are either invasive to the patient or lack sufficient accuracy. In addition, conventional measurement of the external respiration signal requires close contact between the patient and a physical device, which often causes patient discomfort and undesirable motion during radiation dose delivery. In this paper, a dc-coupled continuous-wave radar sensor is presented that provides a noncontact and noninvasive approach to respiration measurement. The radar sensor was designed with a dc-coupled adaptive tuning architecture, including RF coarse tuning and baseband fine tuning, which allows it to precisely measure movement with a stationary moment and to always operate at maximum dynamic range. The accuracy of respiration measurement with the proposed radar sensor was experimentally evaluated using a physical phantom, a human subject, and a moving plate in a radiotherapy environment. Respiration measurement with the radar sensor while the radiation beam is on was shown to be feasible, with submillimeter accuracy compared with a commercial respiration monitoring system that requires patient contact. The proposed radar sensor provides accurate, noninvasive, and noncontact respiration measurement and therefore has great potential in motion-adaptive radiotherapy.
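For context, chest displacement can be recovered from a quadrature CW radar's I/Q baseband via standard arctangent demodulation; the sketch below shows only that textbook step. The mean-subtraction DC-offset handling is a crude software stand-in for the hardware tuning described in the paper.

```python
# Arctangent demodulation of quadrature CW radar I/Q signals (textbook form).
import numpy as np

def displacement_from_iq(i_ch, q_ch, wavelength):
    """Recover target displacement (same units as wavelength) from I/Q samples."""
    i0, q0 = i_ch - i_ch.mean(), q_ch - q_ch.mean()  # crude DC-offset removal
    phase = np.unwrap(np.arctan2(q0, i0))            # arctangent demodulation
    return phase * wavelength / (4 * np.pi)          # phi = 4*pi*x / lambda
```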


Physics in Medicine and Biology | 2009

4D CT sorting based on patient internal anatomy

Ruijiang Li; John H. Lewis; L Cervino; S Jiang

Respiratory motion during free-breathing computed tomography (CT) scans may cause significant errors in target definition for tumors in the thorax and upper abdomen. Four-dimensional (4D) CT has been widely used for treatment simulation in thoracic and abdominal cancer radiotherapy. Current 4D CT techniques require retrospective sorting of reconstructed CT slices oversampled at the same couch position. Most sorting methods depend on external surrogates of respiratory motion recorded by extra instruments. However, respiratory signals obtained from these external surrogates may not always accurately represent the internal target motion, especially when irregular breathing patterns occur. We propose a new sorting method based on multiple internal anatomical features for multi-slice CT scans acquired in cine mode. Four features are analyzed in this study: air content, lung area, lung density, and body area. We use a measure called spatial coherence to select the optimal internal feature at each couch position and to generate the respiratory signals for 4D CT sorting. The proposed method was evaluated on ten cancer patients (eight with thoracic cancer and two with abdominal cancer). For nine patients, the respiratory signals generated from the combined internal features correlate well with those from external surrogates recorded by the real-time position management (RPM) system (average correlation: 0.95 ± 0.02), better than any individual internal feature at the 95% confidence level. For these nine patients, the 4D CT images sorted by the combined internal features are almost identical to those sorted by the RPM signal. For one patient with an irregular breathing pattern, the respiratory signals given by the combined internal features do not correlate well with those from RPM (correlation: 0.68 ± 0.42); in this case, the 4D CT image sorted by our method presents fewer artifacts than that from the RPM signal. Our internal sorting method eliminates the need for externally recorded surrogates of respiratory motion. It is automatic, accurate, robust, cost-efficient, and simple, and can therefore be readily implemented in clinical settings.
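A toy version of one internal feature (air content) and the per-couch-position respiratory trace it yields might look like the following; the HU-based air-fraction formula and the normalization are illustrative assumptions, not the paper's exact definitions.

```python
# Toy internal respiratory signal from the air-content feature; thresholds assumed.
import numpy as np

def air_content(ct_hu):
    """Approximate air volume from Hounsfield units: air is about -1000 HU and
    tissue about 0 HU, so -HU/1000 (clipped to [0, 1]) is a crude air fraction."""
    return np.clip(-ct_hu / 1000.0, 0.0, 1.0).sum()

def respiratory_signal(cine_images):
    """cine_images: time-ordered HU arrays at one couch position.
    Returns a [0, 1]-normalized breathing trace usable for phase sorting."""
    sig = np.array([air_content(im) for im in cine_images])
    return (sig - sig.min()) / (np.ptp(sig) + 1e-12)
```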


Physics in Medicine and Biology | 2010

Markerless lung tumor tracking and trajectory reconstruction using rotational cone-beam projections: a feasibility study

John H. Lewis; Ruijiang Li; W. Tyler Watkins; Joshua D. Lawson; W. Paul Segars; L Cervino; W Song; S Jiang

Algorithms for direct tumor tracking in rotational cone-beam projections and for reconstruction of phase-binned 3D tumor trajectories were developed. Their feasibility was demonstrated on a digital phantom, a physical phantom, and two patients. Tracking results were obtained by comparing reference templates generated from 4DCT to rotational cone-beam projections. The 95th percentile absolute error (e(95)) in the phantom tracking results did not exceed 1.7 mm in either imager dimension, while e(95) in the patients was 3.3 mm or less. Accurate phase-binned trajectories were reconstructed in each case, with 3D maximum errors of no more than 1.0 mm in the phantoms and 2.0 mm in the patients. This work shows the feasibility of a direct tumor tracking technique for rotational images and demonstrates that an accurate 3D tumor trajectory can be reconstructed from relatively less accurate tracking results. The ability to reconstruct the tumor's average trajectory from a 3D cone-beam CT scan on the day of treatment could allow for better patient setup and quality assurance, while direct tumor tracking in rotational projections could be clinically useful for rotational therapy such as volumetric modulated arc therapy (VMAT).
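Template-based tracking of the kind described can be sketched with off-the-shelf normalized cross-correlation; scikit-image's match_template stands in here for the paper's matcher, and the template is assumed to be a DRR-like patch generated from 4DCT.

```python
# Normalized cross-correlation tracking in a single rotational projection.
import numpy as np
from skimage.feature import match_template

def track_tumor(projection, template):
    """Return the (row, col) of the best template match in a CBCT projection."""
    ncc = match_template(projection, template, pad_input=True)
    return np.unravel_index(np.argmax(ncc), ncc.shape)
```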


Physics in Medicine and Biology | 2012

Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints

Ho Lee; Lei Xing; Ran Davidi; Ruijiang Li; Jianguo Qian; Rena Lee

Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy, and a natural question is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Images reconstructed from full projections on the first day of radiation therapy treatment are used as prior images; subsequent scans are acquired using a sparse-projection protocol. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function of the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior and reconstructed images are classified into three anatomical regions: air, soft tissue, and bone. Mismatched regions are identified by local differences between the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert this information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned smaller weight values, which translate into less influence on the CS iterative reconstruction process; the mismatched regions (changed anatomy) are associated with larger values and are updated more by the new projection data, thus avoiding any possible adverse effects of the prior images. The APICCS approach was systematically assessed using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method effectively enhances image quality in the matched regions between the prior and current images compared with the existing PICCS algorithm. Compared with current CBCT imaging protocols, the APICCS algorithm allows an imaging dose reduction of 10-40 times, owing to the greatly reduced number of projections and the lower x-ray tube current of the low-dose protocol.
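A minimal sketch of the adaptive relaxation idea, under assumed HU thresholds and a generic linear projector A: build a voxel-wise map from classification mismatches and a distance transform, then use it to weight the data-fidelity update. The paper's actual CS/TV solver is considerably more involved than this single gradient step.

```python
# Illustrative relaxation map and voxel-weighted update; not the APICCS solver.
import numpy as np
from scipy.ndimage import distance_transform_edt

def relaxation_map(prior, current, dist_scale=5.0):
    """Classify both volumes into air / soft tissue / bone (HU thresholds
    assumed), flag label mismatches, then soften with a distance transform."""
    def classify(v):
        return np.digitize(v, [-500.0, 300.0])  # 0: air, 1: soft tissue, 2: bone
    mismatch = classify(prior) != classify(current)
    dist = distance_transform_edt(~mismatch)    # distance to nearest mismatch
    return np.exp(-dist / dist_scale)           # ~1 where anatomy changed, ->0 far away

def weighted_fidelity_step(x, A, b, lam, step=1e-3):
    """One data-fidelity step with x initialized to the prior: small lam keeps a
    voxel near the prior; large lam lets the new projection data drive it."""
    grad = A.T @ (A @ x.ravel() - b)
    return x - step * lam * grad.reshape(x.shape)
```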


Journal of X-ray Science and Technology | 2011

GPU-based fast low-dose cone beam CT reconstruction via total variation

Xun Jia; Yifei Lou; John H. Lewis; Ruijiang Li; Xuejun Gu; Chunhua Men; W Song; S Jiang

X-ray imaging dose from serial cone-beam CT (CBCT) scans raises a clinical concern in most image guided radiation therapy procedures. The goal of this paper is to develop a fast GPU-based algorithm to reconstruct high quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose. The CBCT image is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. We develop a GPU-friendly version of a forward-backward splitting algorithm to solve this problem, and a multigrid technique is also employed. We test our CBCT reconstruction algorithm on a digital phantom and a head-and-neck patient case. The performance under low mAs is also validated using physical phantoms. We find that 40 x-ray projections are sufficient to reconstruct CBCT images with satisfactory quality for clinical purposes. Phantom experiments indicate that CBCT images can be successfully reconstructed at 0.1 mAs/projection. Compared with the widely used head-and-neck scanning protocol of about 360 projections at 0.4 mAs/projection, an overall 36-fold dose reduction is achieved. The reconstruction time is about 130 s on an NVIDIA Tesla C1060 GPU card, estimated to be about 100 times faster than similar regularized iterative reconstruction approaches.
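The forward-backward splitting scheme alternates a gradient step on the data fidelity with a total variation proximal step. In the sketch below, scikit-image's Chambolle denoiser stands in for the proximal operator; the paper's GPU kernels and multigrid acceleration are omitted.

```python
# Forward-backward splitting for TV-regularized reconstruction (CPU sketch).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def fbs_tv_reconstruct(A, b, shape, n_iter=50, step=1e-3, tv_weight=0.05):
    """Minimize 0.5*||Ax - b||^2 + TV(x). A: linear projector (e.g., a sparse
    matrix acting on the flattened volume), b: measured projection data."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad = (A.T @ (A @ x.ravel() - b)).reshape(shape)            # forward step
        x = denoise_tv_chambolle(x - step * grad, weight=tv_weight)  # prox step
    return x
```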


Medical Physics | 2010

Patient-specific motion artifacts in 4DCT

W. Tyler Watkins; Ruijiang Li; John H. Lewis; Justin C. Park; Ajay Sandhu; S Jiang; W Song

PURPOSE: Four-dimensional computed tomography (4DCT) has improved imaging of the thorax and upper abdomen during respiration, but intraphase residual motion artifacts persist in cine-mode scanning. In this study, the source and magnitude of projection artifacts due to intraphase target motion are investigated.

METHODS: A theoretical model of geometric uncertainty due to partial projection artifacts in cine-mode 4DCT was derived based on ideal periodic motion. Predicted artifacts were compared to measured errors with a rigid lung phantom attached to a programmable motion platform. Ideal periodic motion and actual patient breathing patterns were used as input for phantom motion. Reconstructed target dimensions were measured along the direction of motion and compared to the actual, known dimensions.

RESULTS: Artifacts due to intraphase residual motion in cine-mode 4DCT range from a few millimeters up to a few centimeters on a given scanner and can be predicted from the target motion and CT gantry rotation time. Errors in ITV and GTV dimensions were accurately characterized by the theoretical uncertainty at all phases when sinusoidal motion was considered, and in 96% of 300 measurements when patient breathing patterns were used as motion input. When peak-to-peak motion of 1.5 cm is combined with a breathing period of 4 s and a gantry rotation time of 1 s, errors due to partial projection artifacts can be greater than 1 cm near midventilation and are a few millimeters in the inhale and exhale phases. Incorporation of this uncertainty into margin design should be considered in addition to other uncertainties.

CONCLUSIONS: Artifacts due to intraphase residual motion exist in 4DCT, even for ideal breathing motions (e.g., sine waves). These motion artifacts depend on patient-specific tumor motion and CT gantry rotation speed. Thus, if the patient-specific motion parameters (i.e., amplitude and period) are known, a patient-specific margin can and should be designed to compensate for this uncertainty.
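The quoted numbers can be sanity-checked with a one-line model: assuming sinusoidal motion x(t) = (D/2)sin(2πt/T), the peak velocity πD/T times the gantry rotation time bounds the blur accrued during one rotation.

```python
# Back-of-envelope check of the worked example above (sinusoidal motion assumed).
import numpy as np

D, T, t_rot = 1.5, 4.0, 1.0   # cm peak-to-peak, s breathing period, s rotation
v_max = np.pi * D / T         # peak velocity of the sinusoid, ~1.18 cm/s
blur_mid = v_max * t_rot      # upper bound on midventilation blur: > 1 cm
# Near inhale/exhale the velocity passes through zero, so the same rotation
# window sweeps only a few mm, matching the phase dependence reported above.
print(f"midventilation blur bound ~ {blur_mid:.2f} cm")
```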


Medical Physics | 2012

4D cone beam CT via spatiotemporal tensor framelet

Hao Gao; Ruijiang Li; Yuting Lin; Lei Xing

PURPOSE: On-board 4D cone beam CT (4DCBCT) offers respiratory phase-resolved volumetric imaging and improves the accuracy of target localization in image guided radiation therapy. However, the clinical utility of this technique has been greatly impeded by its degraded image quality, prolonged imaging time, and increased imaging dose. The purpose of this letter is to develop a novel iterative 4DCBCT reconstruction method for improved image quality, increased imaging speed, and reduced imaging dose.

METHODS: The essence of this work is to introduce the spatiotemporal tensor framelet (STF), a high-dimensional tensor generalization of the 1D framelet, for 4DCBCT. STF effectively accounts for the highly correlated and redundant features of the patient anatomy during respiration, in a multilevel fashion with a multibasis sparsifying transform. The STF-based algorithm is implemented on a GPU platform for improved computational efficiency. To evaluate the method, 4DCBCT full-fan scans were acquired within 30 s, with a gantry rotation of 200°; STF was also compared with a state-of-the-art reconstruction method based on spatiotemporal total variation regularization.

RESULTS: Both the simulation and experimental results demonstrate that STF-based reconstruction achieves superior image quality. The reconstruction of 20 respiratory phases took less than 10 min on an NVIDIA Tesla C2070 GPU card. The STF codes are available at https://sites.google.com/site/spatiotemporaltensorframelet.

CONCLUSIONS: By effectively utilizing the spatiotemporal coherence of the patient anatomy among different respiratory phases, in a multilevel fashion with a multibasis sparsifying transform, the proposed STF method potentially enables fast and low-dose 4DCBCT with improved image quality.
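As a loose stand-in for the tensor sparsifying idea (the real STF is a multilevel framelet, which this is not), the sketch below applies a one-level Haar pair along every spatial and temporal axis of a 4D volume and soft-thresholds the high-pass coefficients, illustrating why breathing-correlated volumes compress well under such transforms.

```python
# One-level, multi-axis Haar thresholding of a (Z, Y, X, phase) volume.
import numpy as np

def haar_axis(v, axis):
    """Orthonormal low/high-pass Haar pair along one axis (circular shift)."""
    s = np.roll(v, -1, axis=axis)
    return (v + s) / np.sqrt(2), (v - s) / np.sqrt(2)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def spatiotemporal_denoise(vol4d, thresh=0.01):
    """Threshold high-pass coefficients along each axis, invert, and average."""
    out = np.zeros_like(vol4d)
    for ax in range(vol4d.ndim):
        lo, hi = haar_axis(vol4d, ax)
        hi = soft(hi, thresh)
        rec = (lo + hi) / np.sqrt(2)       # recover each voxel from its own slot
        rec = 0.5 * (rec + np.roll((lo - hi) / np.sqrt(2), 1, axis=ax))
        out += rec
    return out / vol4d.ndim
```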


Physics in Medicine and Biology | 2011

On a PCA-based lung motion model

Ruijiang Li; John H. Lewis; Xun Jia; T Zhao; Weifeng Liu; Sara Wuenschel; J Lamb; Deshan Yang; Daniel A. Low; S Jiang

Respiration-induced organ motion is one of the major uncertainties in lung cancer radiotherapy, and it is crucial to be able to model lung motion accurately. Most work so far has focused on the motion of a single point (usually the tumor center of mass), and much less work has been done to model the motion of the entire lung. Inspired by the work of Zhang et al (2007 Med. Phys. 34 4772-81), we believe that the spatiotemporal relationship of the entire lung motion can be accurately modeled based on principal component analysis (PCA), so that a sparse subset of the entire lung, such as an implanted marker, can be used to drive the motion of the entire lung (including the tumor). The goal of this work is twofold. First, we aim to understand the underlying reason why PCA is effective for modeling lung motion and to find the optimal number of PCA coefficients for accurate lung motion modeling. We address these problems both in a theoretical framework and in the context of real clinical data. Second, we propose a new method to derive the entire lung motion from a single internal marker based on the PCA model. The main results of this work are as follows. We derived an important property that reveals the implicit regularization imposed by the PCA model. We then studied the model using two mathematical respiratory phantoms and 11 clinical 4DCT scans from eight lung cancer patients. For the mathematical phantoms with cosine and an even power (2n) of cosine motion, we proved that 2 and 2n PCA coefficients and eigenvectors, respectively, completely represent the lung motion. Moreover, for the cosine phantom, we derived the equivalence conditions between the PCA motion model and the physiological 5D lung motion model (Low et al 2005 Int. J. Radiat. Oncol. Biol. Phys. 63 921-9). For the clinical 4DCT data, we demonstrated the modeling power and generalization performance of the PCA model. The average 3D modeling error using PCA was within 1 mm (0.7 ± 0.1 mm). When a single artificial internal marker was used to derive the lung motion, the average 3D error was found to be within 2 mm (1.8 ± 0.3 mm) through comprehensive statistical analysis. The optimal number of PCA coefficients needs to be determined on a patient-by-patient basis, and two PCA coefficients appear to be sufficient for accurate modeling of lung motion in most patients. In conclusion, we have presented a thorough theoretical analysis and clinical validation of the PCA lung motion model. The feasibility of deriving the entire lung motion from a single marker has also been demonstrated on clinical data using a simulation approach.
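Driving the whole-lung model from one marker reduces to a small least-squares problem, sketched below under the assumed trained model dvf = mean + w @ modes: the marker's three displacement components pin down the two or so PCA coefficients the paper finds sufficient.

```python
# Estimate full-lung motion from a single internal marker via least squares.
import numpy as np

def lung_motion_from_marker(mean, modes, marker_idx, marker_disp):
    """mean: (3*n_vox,) mean DVF; modes: (k, 3*n_vox) PCA eigenvectors;
    marker_idx: the 3 flattened-DVF indices of the marker's x/y/z entries;
    marker_disp: its measured 3D displacement. Returns the full-lung DVF."""
    A = modes[:, marker_idx].T                 # (3, k) marker rows of each mode
    w, *_ = np.linalg.lstsq(A, marker_disp - mean[marker_idx], rcond=None)
    return mean + w @ modes
```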


Medical Physics | 2011

3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy

Ruijiang Li; John H. Lewis; Xun Jia; Xuejun Gu; M Folkerts; Chunhua Men; W Song; S Jiang

PURPOSE: To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy.

METHODS: Recently, we developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that differ from the training dataset.

RESULTS: For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm and does not appear to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphics processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s for both regular and irregular breathing, about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved, with an average computation time of 0.13 and 0.16 s on the same GPU card for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions, and the average computation time on the same GPU card ranges between 0.26 and 0.34 s.

CONCLUSIONS: Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D tumor localization to be on the order of 1 mm on average and 2 mm at the 95th percentile for both digital and physical phantoms, and within 2 mm on average and 4 mm at the 95th percentile for lung cancer patients. The results also indicate that the accuracy is not affected by the breathing pattern, be it regular or irregular. High computational efficiency can be achieved on a GPU, requiring 0.1-0.3 s for each x-ray projection.
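The motion-prediction ingredient can be as simple as extrapolating each PCA coefficient a short system latency ahead; the sliding-window linear fit below is one plausible choice, though the paper's exact predictor may differ.

```python
# Latency compensation: linear extrapolation of PCA coefficient histories.
import numpy as np

def predict_coeffs(history, dt, latency, window=5):
    """history: (n_samples, k) past PCA coefficients sampled every dt seconds.
    Returns the k coefficients extrapolated `latency` seconds ahead."""
    t = np.arange(len(history))[-window:] * dt
    recent = history[-window:]
    pred = np.empty(history.shape[1])
    for j in range(history.shape[1]):
        slope, intercept = np.polyfit(t, recent[:, j], 1)
        pred[j] = slope * (t[-1] + latency) + intercept
    return pred
```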

Collaboration


Ruijiang Li's top co-authors and their affiliations.

S Jiang, University of Texas Southwestern Medical Center

Yi Cui, Stanford University

W Song, University of California

Xun Jia, University of Texas Southwestern Medical Center

John H. Lewis, University of California

Jia Wu, Stanford University

Xuejun Gu, University of Texas Southwestern Medical Center