Publication


Featured research published by Gerhard Kleinszig.


Medical Physics | 2011

Mobile C-arm cone-beam CT for guidance of spine surgery: Image quality, radiation dose, and integration with interventional guidance

Sebastian Schafer; Sajendra Nithiananthan; Daniel J. Mirota; Ali Uneri; J. W. Stayman; Wojciech Zbijewski; C Schmidgunst; Gerhard Kleinszig; A. J. Khanna; Jeffrey H. Siewerdsen

PURPOSE A flat-panel detector based mobile isocentric C-arm for cone-beam CT (CBCT) has been developed to allow intraoperative 3D imaging with sub-millimeter spatial resolution and soft-tissue visibility. Image quality and radiation dose were evaluated in spinal surgery, commonly relying on lower-performance image intensifier based mobile C-arms. Scan protocols were developed for task-specific imaging at minimum dose, in-room exposure was evaluated, and integration of the imaging system with a surgical guidance system was demonstrated in preclinical studies of minimally invasive spine surgery. METHODS Radiation dose was assessed as a function of kilovolt (peak) (80-120 kVp) and milliampere second using thoracic and lumbar spine dosimetry phantoms. In-room radiation exposure was measured throughout the operating room for various CBCT scan protocols. Image quality was assessed using tissue-equivalent inserts in chest and abdomen phantoms to evaluate bone and soft-tissue contrast-to-noise ratio as a function of dose, and task-specific protocols (i.e., visualization of bone or soft tissues) were defined. Results were applied in preclinical studies using a cadaveric torso simulating minimally invasive, transpedicular surgery. RESULTS Task-specific CBCT protocols identified include: thoracic bone visualization (100 kVp; 60 mAs; 1.8 mGy); lumbar bone visualization (100 kVp; 130 mAs; 3.2 mGy); thoracic soft-tissue visualization (100 kVp; 230 mAs; 4.3 mGy); and lumbar soft-tissue visualization (120 kVp; 460 mAs; 10.6 mGy), each at (0.3 × 0.3 × 0.9 mm³) voxel size. Alternative lower-dose, lower-resolution soft-tissue visualization protocols were identified (100 kVp; 230 mAs; 5.1 mGy) for the lumbar region at (0.3 × 0.3 × 1.5 mm³) voxel size. Half-scan orbit of the C-arm (x-ray tube traversing under the table) was dosimetrically advantageous (prepatient attenuation) with a nonuniform dose distribution (∼2× higher at the entrance side than at isocenter, and ∼3-4× lower at the exit side). The in-room dose (microsievert) per unit scan dose (milligray) ranged from ∼21 μSv/mGy on average at tableside to ∼0.1 μSv/mGy at 2.0 m distance from isocenter. All protocols involve surgical staff stepping behind a shield wall for each CBCT scan, therefore imparting ∼zero dose to staff. Protocol implementation in preclinical cadaveric studies demonstrated integration of the C-arm with a navigation system for spine surgery guidance: specifically, minimally invasive vertebroplasty in which the system provided accurate guidance and visualization of needle placement and bone cement distribution. Cumulative dose including multiple intraoperative scans was ∼11.5 mGy for CBCT-guided thoracic vertebroplasty and ∼23.2 mGy for lumbar vertebroplasty, with dose to staff at tableside reduced to ∼1 min of fluoroscopy time (∼40-60 μSv), compared to 5-11 min for the conventional approach. CONCLUSIONS Intraoperative CBCT using a high-performance mobile C-arm prototype demonstrates image quality suitable for guidance of spine surgery, with task-specific protocols providing an important basis for minimizing radiation dose while maintaining image quality sufficient for surgical guidance. Images demonstrate a significant advance in spatial resolution and soft-tissue visibility, and CBCT guidance offers the potential to reduce reliance on fluoroscopy, reducing cumulative dose to patient and staff. Integration with a surgical guidance system demonstrates precise tracking and visualization in up-to-date images (alleviating reliance on preoperative images only), including detection of errors or suboptimal surgical outcomes in the operating room.
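
The in-room exposure figures above allow a quick estimate of what an unshielded person at tableside would receive per scan. The following is a minimal sketch, assuming the reported ∼21 μSv/mGy (tableside) and ∼0.1 μSv/mGy (2.0 m from isocenter) conversion factors scale linearly with scan dose; the protocol doses are those quoted in the abstract, and the calculation is illustrative rather than a dosimetric model.

```python
# Back-of-the-envelope staff-exposure estimate per CBCT scan, using the
# in-room dose-per-unit-scan-dose figures quoted in the abstract above.
# Assumption: exposure scales linearly with scan dose (values are approximate).

SCAN_DOSE_MGY = {                 # task-specific protocol doses (mGy)
    "thoracic bone": 1.8,
    "lumbar bone": 3.2,
    "thoracic soft tissue": 4.3,
    "lumbar soft tissue": 10.6,
}

USV_PER_MGY_TABLESIDE = 21.0      # ~21 uSv per mGy of scan dose at tableside
USV_PER_MGY_AT_2M = 0.1           # ~0.1 uSv per mGy at 2.0 m from isocenter

for protocol, dose_mgy in SCAN_DOSE_MGY.items():
    tableside_usv = dose_mgy * USV_PER_MGY_TABLESIDE
    at_2m_usv = dose_mgy * USV_PER_MGY_AT_2M
    print(f"{protocol:>22s}: ~{tableside_usv:5.1f} uSv at tableside, ~{at_2m_usv:4.2f} uSv at 2 m")
```

In practice the abstract notes that staff step behind a shield wall for each scan, so the realized per-scan staff dose is approximately zero; the tableside figure represents the unshielded case.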


Physics in Medicine and Biology | 2012

Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery.

Yoshito Otake; Sebastian Schafer; J. W. Stayman; Wojciech Zbijewski; Gerhard Kleinszig; Rainer Graumann; A. J. Khanna; Jeffrey H. Siewerdsen

Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method, which involves manual counting of vertebral bodies under fluoroscopy, is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50,000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true centers of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD < 5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50,000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.
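
For readers unfamiliar with the similarity metric named above, the sketch below illustrates a gradient-information (GI) style measure between a digitally reconstructed radiograph (DRR) and a fluoroscopic image, following the commonly cited form w(angle) * min(|grad I1|, |grad I2|). The exact weighting, preprocessing, GPU implementation, DRR generation, and CMA-ES optimization loop used in LevelCheck are not reproduced here, so treat this as an illustrative assumption rather than the authors' code.

```python
import numpy as np

def gradient_information(drr: np.ndarray, fluoro: np.ndarray) -> float:
    """Gradient-information-style similarity between two 2D images.

    Rewards pixels whose gradients are strong in both images and point in
    similar (or opposite) directions: sum of w(angle) * min(|g1|, |g2|),
    where w = cos^2(angle) is 1 for parallel/anti-parallel gradients and
    0 for orthogonal ones. Details may differ from the paper's version.
    """
    gy1, gx1 = np.gradient(drr.astype(float))
    gy2, gx2 = np.gradient(fluoro.astype(float))
    mag1 = np.hypot(gx1, gy1)
    mag2 = np.hypot(gx2, gy2)

    eps = 1e-12                                   # guard against zero gradients
    cos_a = (gx1 * gx2 + gy1 * gy2) / (mag1 * mag2 + eps)
    weight = cos_a ** 2                           # equals (cos(2*angle) + 1) / 2

    return float(np.sum(weight * np.minimum(mag1, mag2)))

# Toy check: an image matches itself better than a shifted copy of itself.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(gradient_information(img, img) > gradient_information(img, np.roll(img, 5, axis=0)))
```

In the paper, a similarity of this kind is evaluated for candidate C-arm poses and maximized with CMA-ES; the sketch shows only the per-pose similarity computation.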


Physics in Medicine and Biology | 2014

Soft-tissue imaging with C-arm cone-beam CT using statistical reconstruction.

Adam S. Wang; J. Webster Stayman; Yoshito Otake; Gerhard Kleinszig; Sebastian Vogt; Gary L. Gallia; A. Jay Khanna; Jeffrey H. Siewerdsen

The potential for statistical image reconstruction methods such as penalized-likelihood (PL) to improve C-arm cone-beam CT (CBCT) soft-tissue visualization for intraoperative imaging over conventional filtered backprojection (FBP) is assessed in this work by making a fair comparison in relation to soft-tissue performance. A prototype mobile C-arm was used to scan anthropomorphic head and abdomen phantoms as well as a cadaveric torso at doses substantially lower than typical values in diagnostic CT, and the effects of dose reduction via tube current reduction and sparse sampling were also compared. Matched spatial resolution between PL and FBP was determined by the edge spread function of low-contrast (∼40-80 HU) spheres in the phantoms, which were representative of soft-tissue imaging tasks. PL using the non-quadratic Huber penalty was found to substantially reduce noise relative to FBP, especially at lower spatial resolution where PL provides a contrast-to-noise ratio increase up to 1.4-2.2× over FBP at 50% dose reduction across all objects. Comparison of sampling strategies indicates that soft-tissue imaging benefits from fully sampled acquisitions at dose above ∼1.7 mGy and benefits from 50% sparsity at dose below ∼1.0 mGy. Therefore, an appropriate sampling strategy along with the improved low-contrast visualization offered by statistical reconstruction demonstrates the potential for extending intraoperative C-arm CBCT to applications in soft-tissue interventions in neurosurgery as well as thoracic and abdominal surgeries by overcoming conventional tradeoffs in noise, spatial resolution, and dose.
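
As background for the non-quadratic penalty mentioned above, the sketch below shows the Huber function applied to neighboring-voxel differences, the usual form of an edge-preserving roughness penalty in penalized-likelihood reconstruction. The transition parameter delta and the toy volume are illustrative assumptions, not values from the study, and the full PL objective (likelihood term plus penalty) and its optimizer are not shown.

```python
import numpy as np

def huber(t: np.ndarray, delta: float) -> np.ndarray:
    """Huber penalty: quadratic for |t| <= delta, linear beyond.

    Small neighbor differences (noise) are penalized quadratically, while
    large differences (true edges) incur only a linear cost, which is why
    this penalty suppresses noise without blurring edges as strongly as a
    purely quadratic penalty would.
    """
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * a - 0.5 * delta ** 2)

def roughness_penalty(volume: np.ndarray, delta: float = 10.0) -> float:
    """Sum of Huber penalties over first-order neighbor differences (illustrative)."""
    total = 0.0
    for axis in range(volume.ndim):
        total += float(np.sum(huber(np.diff(volume, axis=axis), delta)))
    return total

# Toy check: adding noise to a piecewise-constant volume increases the penalty.
rng = np.random.default_rng(1)
clean = np.zeros((16, 16, 16))
clean[8:] = 100.0                                  # a single sharp "edge"
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
print(roughness_penalty(clean) < roughness_penalty(noisy))
```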


Medical Physics | 2012

Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

Hao Dang; Yoshito Otake; Sebastian Schafer; J. W. Stayman; Gerhard Kleinszig; Jeffrey H. Siewerdsen

PURPOSE Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. METHODS Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method were tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. RESULTS The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all anatomical sites, including challenging scenarios involving the presence of interventional tools. The reprojection error of marker localization was independent of the distance of the ARM from isocenter, and the overall TRE was dominated by the configuration of individual fiducials and distance from the target as predicted by theory. The median TRE increased with greater ARM-to-isocenter distance (e.g., for the Free-Form method, TRE increasing from 0.78 mm to 2.04 mm at distances of ∼75 mm and 370 mm, respectively). The median TRE within ∼200 mm distance was consistently lower than that of the manual method (TRE = 0.82 mm). Registration performance was independent of anatomical site (head, thorax, and abdomen). The Free-Form method demonstrated a statistically significant improvement (p = 0.0044) in reproducibility compared to manual registration (0.22 mm versus 0.30 mm, respectively). CONCLUSIONS Automatic image-to-world registration methods demonstrate the potential for improved accuracy, reproducibility, and workflow in CBCT-guided procedures. A Free-Form method was shown to exhibit robustness against anatomical site, with comparable or improved TRE compared to manual registration. It was also comparable or superior in performance to a Known-Model method in which the ARM configuration is specified as a predefined tool, thereby allowing configuration of fiducials on the fly or attachment to the patient.
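
The last step of the Free-Form method described above, rigid-body point-based registration of marker positions from image coordinates to tracker (world) coordinates, has a standard closed-form SVD solution. The sketch below shows that step only; the Hough-transform marker localization, backprojection, and Known-Model 3D-2D registration are not reproduced, and the variable names and toy fiducial layout are illustrative assumptions.

```python
import numpy as np

def rigid_point_registration(image_pts: np.ndarray, world_pts: np.ndarray):
    """Least-squares rigid transform (R, t) such that world ~ R @ image + t.

    image_pts, world_pts: (N, 3) arrays of corresponding marker positions,
    e.g. fiducials localized in CBCT image coordinates and by the tracker.
    Standard SVD-based (Kabsch/Arun) solution.
    """
    ci = image_pts.mean(axis=0)                    # centroids
    cw = world_pts.mean(axis=0)
    H = (image_pts - ci).T @ (world_pts - cw)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = cw - R @ ci
    return R, t

# Toy check: recover a known rotation and translation from six fiducials.
rng = np.random.default_rng(2)
pts = rng.random((6, 3)) * 200.0                   # fiducial positions, mm scale
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 40.0])
world = pts @ R_true.T + t_true
R_est, t_est = rigid_point_registration(pts, world)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

Once R and t are known, any point defined in image coordinates (e.g. a surgical target) can be mapped into the tracker's world frame, which is the mapping whose accuracy the target registration error in the study quantifies.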


Physics in Medicine and Biology | 2016

3D-2D image registration for target localization in spine surgery: Investigation of similarity metrics providing robustness to content mismatch

T. De Silva; Ali Uneri; M. D. Ketcha; S. Reaungamornrat; Gerhard Kleinszig; Sebastian Vogt; Nafi Aygun; Sheng Fu Lo; Jean Paul Wolinsky; Jeffrey H. Siewerdsen

In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric improved the registration accuracy and robustness in the presence of strong image content mismatch. This capability could offer valuable assistance and decision support in spine level localization in a manner consistent with clinical workflow.
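
For context on the gradient-based metrics compared above, the sketch below implements gradient correlation (GC) as the mean normalized cross-correlation of the x- and y-gradient images, a commonly used definition; the gradient orientation (GO) metric that proved most robust in the study instead emphasizes agreement of gradient direction. Neither the authors' exact formulations nor the multi-start optimization is reproduced here, so this is an illustrative assumption.

```python
import numpy as np

def _ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two equally shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0

def gradient_correlation(drr: np.ndarray, radiograph: np.ndarray) -> float:
    """Gradient correlation (GC): average NCC of the x- and y-gradient images.

    Comparing gradients rather than raw intensities makes the metric
    insensitive to global intensity offsets between the DRR and the
    intraoperative radiograph.
    """
    gy1, gx1 = np.gradient(drr.astype(float))
    gy2, gx2 = np.gradient(radiograph.astype(float))
    return 0.5 * (_ncc(gx1, gx2) + _ncc(gy1, gy2))

# Toy check: GC is unchanged by adding a constant offset to one image.
rng = np.random.default_rng(3)
img = rng.random((64, 64))
print(np.isclose(gradient_correlation(img, img), gradient_correlation(img, img + 50.0)))
```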


Spine | 2015

Automatic Localization of Target Vertebrae in Spine Surgery: Clinical Evaluation of the LevelCheck Registration Algorithm

Sheng Fu L Lo; Yoshito Otake; Varun Puvanesarajah; Adam S. Wang; Ali Uneri; Tharindu De Silva; Sebastian Vogt; Gerhard Kleinszig; Benjamin D. Elder; C. Rory Goodwin; Thomas A. Kosztowski; Jason Liauw; Mari L. Groves; Ali Bydon; Daniel M. Sciubba; Timothy F. Witham; Jean Paul Wolinsky; Nafi Aygun; Ziya L. Gokaslan; Jeffrey H. Siewerdsen

Study Design. A 3-dimensional-2-dimensional (3D-2D) image registration algorithm, “LevelCheck,” was used to automatically label vertebrae in intraoperative mobile radiographs obtained during spine surgery. Accuracy, computation time, and potential failure modes were evaluated in a retrospective study of 20 patients. Objective. To measure the performance of the LevelCheck algorithm using clinical images acquired during spine surgery. Summary of Background Data. In spine surgery, the potential for wrong-level surgery is significant due to the difficulty of localizing target vertebrae based solely on visual impression, palpation, and fluoroscopy. To remedy this difficulty and reduce the risk of wrong-level surgery, our team introduced a program (dubbed LevelCheck) to automatically localize target vertebrae in mobile radiographs using robust 3D-2D image registration to the preoperative computed tomography (CT) scan. Methods. Twenty consecutive patients undergoing thoracolumbar spine surgery, for whom both a preoperative CT scan and an intraoperative mobile radiograph were available, were retrospectively analyzed. A board-certified neuroradiologist determined the “true” vertebral levels in each radiograph. Registration of the preoperative CT scan to the intraoperative radiograph was calculated via LevelCheck, and projection distance errors were analyzed. Five hundred random initializations were performed for each patient, and algorithm settings (viz., the number of robust multistarts, ranging from 50 to 200) were varied to evaluate the trade-off between registration error and computation time. Failure mode analysis was performed by individually analyzing unsuccessful registrations (>5 mm distance error) observed with 50 multistarts. Results. At 200 robust multistarts (computation time of ∼26 s), the registration accuracy was 100% across all 10,000 trials. As the number of multistarts (and computation time) decreased, the registration remained fairly robust, down to 99.3% registration accuracy at 50 multistarts (computation time ∼7 s). Conclusion. The LevelCheck algorithm correctly identified target vertebrae in intraoperative mobile radiographs of the thoracolumbar spine, demonstrating acceptable computation time and compatibility with routinely obtained preoperative CT scans, and warranting investigation in prospective studies. Level of Evidence: N/A


Physics in Medicine and Biology | 2014

3D–2D registration for surgical guidance: effect of projection view angles on registration accuracy

Ali Uneri; Yoshito Otake; Adam S. Wang; Gerhard Kleinszig; Sebastian Vogt; A. J. Khanna; Jeffrey H. Siewerdsen

An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and the covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and a cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ∼0° to 180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers, and manual registration.


medical image computing and computer assisted intervention | 1999

Visualization for Planning and Simulation of Minimally Invasive Neurosurgical Procedures

Ludwig M. Auer; Arne Radetzky; C. Wimmer; Gerhard Kleinszig; F. Schroecker; Dorothee P. Auer; Hervé Delingette; Brian L. Davies; Dietrich Peter Pretschner

A unit for training, simulation, and planning of minimally invasive operations of the brain, called ROBO-SIM, is presented. It supports virtual patient positioning, planning of the surgical approach to a deep-seated target, anatomical orientation in the operating field, and manipulation of the virtual tissue. Methods are described for the visualization of the outer body surface (head), the virtual operating field in the depth of the brain, virtual surgical instruments, and deformations (displacement, fragmentation) of the virtual tissue caused by simulated surgical manipulations. Mainly volume-rendered data from 3D MRI are used. Surface views of the whole dataset are used for virtual patient positioning. For virtual endoscopic visualization of the small operating field within cavities of the brain (ventricles, cysts), a volume-rendering application of “Volumizer” has been developed, called “Flight-Volumizer”. Elastodynamic tissue deformation is achieved by a viscoelastic model on the surface of the cavities, which is simulated by neuro-fuzzy systems.


Medical Physics | 2012

Deformable registration of the inflated and deflated lung in cone-beam CT-guided thoracic surgery: initial investigation of a combined model- and image-driven approach.

Ali Uneri; Sajendra Nithiananthan; Sebastian Schafer; Yoshito Otake; J. Webster Stayman; Gerhard Kleinszig; Marc S. Sussman; Jerry L. Prince; Jeffrey H. Siewerdsen

PURPOSE Surgical resection is the preferred modality for curative treatment of early stage lung cancer, but localization of small tumors (<10 mm diameter) during surgery presents a major challenge that is likely to increase as more early-stage disease is detected incidentally and in low-dose CT screening. To overcome the difficulty of manual localization (fingers inserted through intercostal ports) and the cost, logistics, and morbidity of preoperative tagging (coil or dye placement under CT-fluoroscopy), the authors propose the use of intraoperative cone-beam CT (CBCT) and deformable image registration to guide targeting of small tumors in video-assisted thoracic surgery (VATS). A novel algorithm is reported for registration of the lung from its inflated state (prior to pleural breach) to the deflated state (during resection) to localize surgical targets and adjacent critical anatomy. METHODS The registration approach geometrically resolves images of the inflated and deflated lung using a coarse model-driven stage followed by a finer image-driven stage. The model-driven stage uses image features derived from the lung surfaces and airways: triangular surface meshes are morphed to capture bulk motion; concurrently, the airways generate graph structures from which corresponding nodes are identified. Interpolation of the sparse motion fields computed from the bounding surface and interior airways provides a 3D motion field that coarsely registers the lung and initializes the subsequent image-driven stage. The image-driven stage employs an intensity-corrected, symmetric form of the Demons method. The algorithm was validated over 12 datasets, obtained from porcine specimen experiments emulating CBCT-guided VATS. Geometric accuracy was quantified in terms of target registration error (TRE) in anatomical targets throughout the lung, and normalized cross-correlation. Variations of the algorithm were investigated to study the behavior of the model- and image-driven stages by modifying individual algorithmic steps and examining the effect in comparison to the nominal process. RESULTS The combined model- and image-driven registration process demonstrated accuracy consistent with the requirements of minimally invasive VATS in both target localization (∼3-5 mm within the target wedge) and critical structure avoidance (∼1-2 mm). The model-driven stage initialized the registration to within a median TRE of 1.9 mm (95% confidence interval (CI) maximum = 5.0 mm), while the subsequent image-driven stage yielded higher accuracy localization with 0.6 mm median TRE (95% CI maximum = 4.1 mm). The variations assessing the individual algorithmic steps elucidated the role of each step and in some cases identified opportunities for further simplification and improvement in computational speed. CONCLUSIONS The initial studies show the proposed registration method to successfully register CBCT images of the inflated and deflated lung. Accuracy appears sufficient to localize the target and adjacent critical anatomy within ∼1-2 mm and guide localization under conditions in which the target cannot be discerned directly in CBCT (e.g., subtle, nonsolid tumors). The ability to directly localize tumors in the operating room could provide a valuable addition to the VATS arsenal, obviate the cost, logistics, and morbidity of preoperative tagging, and improve patient safety. Future work includes in vivo testing, optimization of workflow, and integration with a CBCT image guidance system.
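
To illustrate the image-driven stage named above, the sketch below implements a basic 2D Thirion-style Demons iteration. The study uses an intensity-corrected, symmetric variant in 3D, initialized by the model-driven stage, so this simplified stand-in is an assumption for illustration only, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_2d(fixed: np.ndarray, moving: np.ndarray,
              n_iter: int = 100, sigma: float = 2.0) -> np.ndarray:
    """Basic Thirion Demons: displacement d such that moving(x + d) ~ fixed(x).

    Each iteration applies the classic demons force
        delta_d = (fixed - warped) * grad(fixed) / (|grad(fixed)|^2 + (fixed - warped)^2)
    and regularizes the accumulated field with Gaussian smoothing.
    Returns the displacement field with shape (2, H, W) in (row, col) order.
    """
    fixed = fixed.astype(float)
    moving = moving.astype(float)
    disp = np.zeros((2,) + fixed.shape)
    gy, gx = np.gradient(fixed)
    grad_sq = gx ** 2 + gy ** 2
    rows, cols = np.meshgrid(np.arange(fixed.shape[0]),
                             np.arange(fixed.shape[1]), indexing="ij")
    for _ in range(n_iter):
        # Warp the moving image with the current displacement field.
        warped = map_coordinates(moving, [rows + disp[0], cols + disp[1]],
                                 order=1, mode="nearest")
        diff = fixed - warped
        denom = grad_sq + diff ** 2
        force = np.where(denom > 1e-9, diff / denom, 0.0)
        disp[0] += force * gy
        disp[1] += force * gx
        disp[0] = gaussian_filter(disp[0], sigma)   # elastic-like regularization
        disp[1] = gaussian_filter(disp[1], sigma)
    return disp

# Toy check: recover a small known shift of a smooth blob.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 60.0)
moving = np.exp(-((yy - 35) ** 2 + (xx - 32) ** 2) / 60.0)   # shifted 3 px in y
disp = demons_2d(fixed, moving)
print(round(float(disp[0, 28, 32]), 1))   # should approach the known 3 px shift
```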


Spine | 2009

Insertion of the artificial disc replacement: a cadaver study comparing the conventional surgical technique and the use of a navigation system.

M. Rauschmann; John S. Thalgott; Madilyne Fogarty; Manos Nichlos; Gerhard Kleinszig; Mariusz Knap; Konstantinos Kafchitsas

Study Design. Comparison of total disc replacement (TDR) with and without computer-assisted surgical navigation. Objective. To test and evaluate the accuracy of computer-assisted navigation for the lumbar spine by comparing the traditional C-arm-aided insertion of an arthroplasty device to the navigation-aided insertion of the implant. Summary of Background Data. Previous studies have shown that poor placement of the CHARITÉ disc can be correlated to worse clinical results. Because of the parallax effect, exclusive use of fluoroscopy could make placement of the artificial disc less accurate. False positioning may also lead to spondylolisthesis, disc degeneration of the adjacent segment, subsidence of the disc, and failure of the implant. Methods. Ten human cadaver spine specimens were used at 3 lumbar segments (L3–L4, L4–L5, and L5–S1). Before implantation, all artificial discs were planned for “ideal” placement on a digital computed tomography image. Fifteen lumbar intervertebral disc prostheses (Depuy, Raynham, MA) were placed using Vector Vision image guidance (BrainLAB AG, Munich, Germany) by an inexperienced TDR surgeon. Fifteen lumbar intervertebral disc prostheses were placed with exclusive use of fluoroscopy by an experienced TDR surgeon. After insertion, DICOM computed tomography scans were analyzed using computer software to assess the placement accuracy of each disc prosthesis. Results. The navigated placement of the disc was significantly more accurate. Only 3 navigated disc prostheses were suboptimal and none was poorly placed. Conclusion. Surgical computer-assisted navigation may be a useful tool in the hands of a spine surgeon to achieve more accurate placement of the disc prosthesis. Because of the parallax effect, computer-assisted navigation offers more placement accuracy than standard fluoroscopy. Because the accurate placement of a total disc prosthesis has been correlated with better clinical outcome, further study regarding the navigation of TDR is essential.

Collaboration


Dive into Gerhard Kleinszig's collaborations.

Top Co-Authors

Ali Uneri (Johns Hopkins University)
M. D. Ketcha (Johns Hopkins University)
M. Jacobson (Johns Hopkins University)
J. Goerres (Johns Hopkins University)