Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Ramin Shahidi is active.

Publication


Featured research published by Ramin Shahidi.


Computer Aided Surgery | 2000

Comparative tracking error analysis of five different optical tracking systems

Rasool Khadem; Clement C. Yeh; Mohammad Sadeghi-Tehrani; Michael R. Bax; Jeremy A. Johnson; Jacqueline Nerney Welch; Eric P. Wilkinson; Ramin Shahidi

OBJECTIVE Effective utilization of an optical tracking system for image-based surgical guidance requires optimal placement of the dynamic reference frame (DRF) with respect to the tracking camera. Unlike other studies that measure the overall accuracy of a particular navigation system, this study investigates the precision of one component of the navigation system: the optical tracking system (OTS). The precision of OTS measurements is quantified as jitter. By measuring jitter, one can better understand how system inaccuracies depend on the position of the DRF with respect to the camera. MATERIALS AND METHODS Both FlashPoint™ (Image Guided Technologies, Inc., Boulder, Colorado) and Polaris™ (Northern Digital Inc., Ontario, Canada) optical tracking systems were tested in five different camera and DRF configurations. A linear testing apparatus with a software interface was designed to facilitate data collection. Jitter measurements were collected over a single quadrant within the camera viewing volume, as symmetry was assumed about the horizontal and vertical axes. RESULTS Excluding the highest 5% of jitter, the FlashPoint cameras had an RMS jitter range of 0.028 ± 0.012 mm for the 300 mm model, 0.051 ± 0.038 mm for the 580 mm model, and 0.059 ± 0.047 mm for the 1 m model. The Polaris camera had an RMS jitter range of 0.058 ± 0.037 mm with an active DRF and 0.115 ± 0.075 mm with a passive DRF. CONCLUSION Both FlashPoint and Polaris have jitter less than 0.11 mm, although the error distributions differ significantly. Total jitter for all systems is dominated by the component measured in the axis directed away from the camera.
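The study's precision metric, jitter, is the RMS scatter of repeated tracker readings of a stationary marker about their mean position. A minimal sketch of that computation (the function name and sample values are illustrative, not from the paper):

```python
import math

def rms_jitter(samples):
    """RMS deviation of repeated 3-D tracker readings from their mean.

    `samples` is a list of (x, y, z) positions reported for a stationary
    marker; jitter is the RMS distance of the readings from their mean.
    """
    n = len(samples)
    mean = tuple(sum(s[i] for s in samples) / n for i in range(3))
    sq_dists = [sum((s[i] - mean[i]) ** 2 for i in range(3)) for s in samples]
    return math.sqrt(sum(sq_dists) / n)

# A stationary marker with small readout noise along the depth axis:
readings = [(10.0, 5.0, 100.0), (10.0, 5.0, 100.1), (10.0, 5.0, 99.9)]
print(round(rms_jitter(readings), 4))  # → 0.0816
```

Collecting such samples over a grid of DRF positions yields the jitter maps the paper reports per camera model.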


IEEE Transactions on Medical Imaging | 2002

Implementation, calibration and accuracy testing of an image-enhanced endoscopy system

Ramin Shahidi; Michael R. Bax; Calvin R. Maurer; Jeremy A. Johnson; Eric P. Wilkinson; Bai Wang; Jay B. West; Martin J. Citardi; Kim Manwaring; Rasool Khadem

This paper presents a new method for image-guided surgery called image-enhanced endoscopy. Registered real and virtual endoscopic images (perspective volume renderings generated from the same view as the endoscope camera using a preoperative image) are displayed simultaneously; when combined with the ability to vary tissue transparency in the virtual images, this provides surgeons with the ability to see beyond visible surfaces and, thus, provides additional exposure during surgery. A mount with four photoreflective spheres is rigidly attached to the endoscope and its position and orientation is tracked using an optical position sensor. Generation of virtual images that are accurately registered to the real endoscopic images requires calibration of the tracked endoscope. The calibration process determines intrinsic parameters (that represent the projection of three-dimensional points onto the two-dimensional endoscope camera imaging plane) and extrinsic parameters (that represent the transformation from the coordinate system of the tracker mount attached to the endoscope to the coordinate system of the endoscope camera), and determines radial lens distortion. The calibration routine is fast, automatic, accurate and reliable, and is insensitive to rotational orientation of the endoscope. The routine automatically detects, localizes, and identifies dots in a video image snapshot of the calibration target grid and determines the calibration parameters from the sets of known physical coordinates and localized image coordinates of the target grid dots. Using nonlinear lens-distortion correction, which can be performed at real-time rates (30 frames per second), the mean projection error is less than 0.5 mm at distances up to 25 mm from the endoscope tip, and less than 1.0 mm up to 45 mm. 
Experimental measurements and point-based registration error theory show that the tracking error is about 0.5-0.7 mm at the tip of the endoscope and less than 0.9 mm for all points in the field of view of the endoscope camera at a distance of up to 65 mm from the tip. It is probable that much of the projection error is due to endoscope tracking error rather than calibration error. Two examples of clinical applications are presented to illustrate the usefulness of image-enhanced endoscopy. This method is a useful addition to conventional image-guidance systems, which generally show only the position of the tip (and sometimes the orientation) of a surgical instrument or probe on reformatted image slices.
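As a rough illustration of what the intrinsic parameters and radial lens distortion describe, here is a pinhole projection with a single radial-distortion coefficient. This is a generic camera-model sketch under simplifying assumptions, not the paper's calibration code:

```python
def project(point_cam, fx, fy, cx, cy, k1):
    """Project a 3-D point (camera coordinates, mm) to pixel coordinates
    using a pinhole model: focal lengths (fx, fy), principal point
    (cx, cy), and one radial-distortion coefficient k1."""
    x, y, z = point_cam
    xn, yn = x / z, y / z                # normalized image coordinates
    r2 = xn * xn + yn * yn
    d = 1.0 + k1 * r2                    # radial distortion factor
    return fx * xn * d + cx, fy * yn * d + cy

# A point 50 mm in front of the camera, 10 mm off-axis:
u, v = project((10.0, 0.0, 50.0), 500.0, 500.0, 320.0, 240.0, -0.2)
print(round(u, 1), v)  # → 419.2 240.0
```

Calibration estimates these parameters (plus the extrinsic tracker-mount-to-camera transform) from known target-grid coordinates and their localized image positions.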


Medical Physics | 2005

Design and application of an assessment protocol for electromagnetic tracking systems.

Johann Hummel; Michael R. Bax; Michael Figl; Yan Kang; Calvin R. Maurer; Wolfgang Birkfellner; Helmar Bergmann; Ramin Shahidi

This paper defines a simple protocol for competitive and quantified evaluation of electromagnetic tracking systems such as the NDI Aurora (A) and the Ascension microBIRD with dipole transmitter (B). It establishes new methods and a new phantom design that assess reproducibility and allow comparison of different tracking systems in a consistent environment. A machined base plate was designed and manufactured in which a 50 mm grid of holes was precisely drilled for position measurements. In the center, a circle of 32 equispaced holes enables the accurate measurement of rotation. The sensors can be clamped in a small mount which fits into pairs of grid holes on the base plate. Relative positional/orientational errors are found by subtracting the known distances/rotations between the machined locations from the differences of the mean observed positions/rotations. To measure the influence of metallic objects, we inserted rods made of steel (SST 303, SST 416), aluminum, and bronze into the sensitive volume between sensor and emitter. We calculated the fiducial registration error and fiducial location error with a standard stylus calibration for both tracking systems and assessed two different methods of stylus calibration. The positional jitter amounted to 0.14 mm (A) and 0.08 mm (B). A relative positional error of 0.96 mm ± 0.68 mm (range: −0.06 to 2.23 mm) for (A) and 1.14 mm ± 0.78 mm (range: −3.72 to 1.57 mm) for (B) was found for a given distance of 50 mm. The relative rotation error was found to be 0.51° (A)/0.04° (B). The most relevant distortion caused by metallic objects results from SST 416. The maximum error (4.2 mm for (A); ⩾100 mm for (B)) occurs when the rod is close to the sensor (20 mm). While (B) is more sensitive with respect to metallic objects, (A) is less accurate concerning orientation measurements. (B) showed a systematic error when distances are calculated.
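The protocol's relative positional error is the measured distance between the mean observed sensor positions at two grid holes minus the machined ground-truth distance between those holes. A minimal sketch (names and values hypothetical):

```python
import math

def relative_position_error(mean_pos_a, mean_pos_b, known_distance_mm):
    """Relative positional error per the protocol: measured distance
    between two mean observed sensor positions minus the machined
    (ground-truth) distance between the corresponding grid holes."""
    measured = math.dist(mean_pos_a, mean_pos_b)
    return measured - known_distance_mm

# Two mean sensor positions for grid holes machined 50 mm apart:
err = relative_position_error((0.0, 0.0, 0.0), (50.3, 0.0, 0.4), 50.0)
print(round(err, 3))  # → 0.302
```

Averaging many readings per hole before taking the difference is what separates this relative-error measure from the raw jitter figure.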


Physics in Medicine and Biology | 2006

Evaluation of a new electromagnetic tracking system using a standardized assessment protocol

Johann Hummel; Michael Figl; Wolfgang Birkfellner; Michael R. Bax; Ramin Shahidi; Calvin R. Maurer; Helmar Bergmann

This note uses a published protocol to evaluate a newly released 6-degrees-of-freedom electromagnetic tracking system (Aurora, Northern Digital Inc.). A practice for performance monitoring over time is also proposed. The protocol uses a machined base plate to measure relative error in position and orientation as well as the influence of metallic objects in the operating volume. Positional jitter (E_RMS) was found to be 0.17 mm ± 0.19 mm. A relative positional error of 0.25 mm ± 0.22 mm at 50 mm offsets and 0.97 mm ± 1.01 mm at 300 mm offsets was found. The mean relative rotation error was found to be 0.20° ± 0.14° for axial rotation and 0.91° ± 0.68° for longitudinal rotation. The most significant distortion caused by metallic objects is caused by 400-series stainless steel. A 9.4 mm maximum error occurred when the rod was closest to the emitter, 10 mm away. The improvement in accuracy compared to older generations of the Aurora is substantial.


Workshop on Biomedical Image Registration | 2003

Evaluation of Intensity-Based 2D-3D Spine Image Registration Using Clinical Gold-Standard Data

Daniel B. Russakoff; Torsten Rohlfing; Anthony Ho; Daniel H. Kim; Ramin Shahidi; John R. Adler; Calvin R. Maurer

In this paper, we evaluate the accuracy and robustness of intensity-based 2D-3D registration for six image similarity measures using clinical gold-standard spine image data from four patients. The gold-standard transformations are obtained using four bone-implanted fiducial markers. The three best similarity measures are mutual information, cross correlation, and gradient correlation. The mean target registration errors for these three measures range from 1.3 to 1.5 mm. We believe this is the first reported evaluation using clinical gold-standard data.
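Cross correlation, one of the three best-performing similarity measures, compares a DRR rendered at a candidate pose against the fluoroscopic image. A pure-Python sketch of normalized cross-correlation (an illustration of the measure, not the paper's implementation):

```python
import math

def cross_correlation(a, b):
    """Normalized cross-correlation between two images given as flat
    lists of pixel intensities of equal length (1.0 = perfect linear
    match, regardless of brightness and contrast differences)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# Two images that differ only in brightness and contrast still match:
print(round(cross_correlation([0, 1, 2, 3], [1, 3, 5, 7]), 6))  # → 1.0
```

The registration loop maximizes such a measure over the 3-D pose parameters; insensitivity to intensity scaling is why correlation-family measures suit DRR-to-fluoroscopy comparison.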


Medical Imaging 2003: Image Processing | 2003

Fast calculation of digitally reconstructed radiographs using light fields

Daniel B. Russakoff; Torsten Rohlfing; Daniel Rueckert; Ramin Shahidi; Daniel H. Kim; Calvin R. Maurer

Calculating digitally reconstructed radiographs (DRRs) is an important step in intensity-based fluoroscopy-to-CT image registration methods. Unfortunately, the standard techniques to generate DRRs involve ray casting and run in time O(n³), where we assume that n is approximately the size (in voxels) of one side of the DRR as well as one side of the CT volume. Because of this, generation of DRRs is typically the rate-limiting step in the execution time of intensity-based fluoroscopy-to-CT registration algorithms. We address this issue by extending light field rendering techniques from the computer graphics community to generate DRRs instead of conventional rendered images. Using light fields allows most of the computation to be performed in a preprocessing step; after this precomputation step, very accurate DRRs can be generated in time O(n²). Using a light field generated from 1,024 DRRs of resolution 256×256, we can create new DRRs that appear visually identical to ones generated by conventional ray casting. Importantly, the DRRs generated using the light field are computed over 300 times faster than DRRs generated using conventional ray casting (50 vs. 17,000 ms on a PC with a 2 GHz Intel Pentium 4 processor).
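The speedup comes from moving the expensive ray integration into a preprocessing step, after which each DRR pixel becomes a table lookup. A deliberately simplified sketch of that split (a real light field uses a 4-D two-plane ray parameterization with interpolation; here the table is a plain dict keyed by sampled rays):

```python
def precompute_light_field(integrate_ray, ray_grid):
    """Expensive preprocessing: cast one ray per sampled ray parameter
    and store its line integral through the CT volume."""
    return {r: integrate_ray(r) for r in ray_grid}

def render_drr(table, pixel_rays):
    """Cheap per-frame pass: one table lookup per DRR pixel."""
    return [table[r] for r in pixel_rays]

def integrate(ray):
    # Stand-in integrator; a real one sums CT attenuation along the ray.
    u, v = ray
    return float(u + v)

grid = [(u, v) for u in range(4) for v in range(4)]
table = precompute_light_field(integrate, grid)
print(render_drr(table, [(0, 1), (2, 3)]))  # → [1.0, 5.0]
```

The design trade is the classic one: O(n³)-scale work paid once up front buys O(n²) rendering inside the registration loop, where DRRs are generated thousands of times.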


Neurosurgery | 2008

Image-Guided Lateral Suboccipital Approach: Part 1—Individualized Landmarks for Surgical Planning

Alireza Gharabaghi; Steffen K. Rosahl; Günther C. Feigl; Thomas Liebig; Javad M. Mirzayan; Stefan Heckl; Ramin Shahidi; Marcos Tatagiba; Madjid Samii

OBJECTIVE Being situated close to the transverse and sigmoid sinus, the asterion has traditionally been viewed as a landmark for surgical approaches to the posterior fossa. Cadaveric studies, however, have shown its variability in relation to underlying anatomic structures. We have used an image-guidance technology to determine the precise anatomic relationship between the asterion and the underlying transverse-sigmoid sinus transition (TST) complex in patients scheduled for posterior fossa surgery. The applicability of three-dimensional (3-D) volumetric image-rendering for presurgical anatomic identification and individualization of a surgical landmark was evaluated. METHODS One-millimeter computed tomographic slices were combined with venous computed tomographic angiography in 100 patients, allowing for 3-D volumetric image-rendering of the cranial bone and the dural vasculature at the same time. The spatial relationship between the asterion and the TST was recorded bilaterally by using opacity modulation of the bony surface. The location of both the asterion and the TST could be confirmed during surgery in all of these patients. RESULTS It was possible to accurately visualize the asterion and the sinuses in a single volumetrically rendered 3-D image in more than 90% of the patients. The variability in the anatomic position of the asterion as shown in cadaveric studies was confirmed, providing an individualized landmark for the patients. In this series, the asterion was located from 2 mm medial to 7 mm lateral and from 10 mm inferior to 17 mm superior to the TST, respectively. CONCLUSION Volumetric image-rendering allows for precise in vivo measurements of anatomic distances in 3-D space. It is also a valuable tool for assessing the validity of traditional surgical landmarks and individualizing them for surgical planning.


Neurosurgery | 2005

Volumetric image guidance for motor cortex stimulation: integration of three-dimensional cortical anatomy and functional imaging.

Alireza Gharabaghi; Dieter Hellwig; Steffen K. Rosahl; Ramin Shahidi; Christoph Schrader; Hans-Joachim Freund; Madjid Samii

OBJECTIVE: Epidural electrical stimulation of the motor cortex is a promising treatment option in patients with intractable pain. Varying rates of success in long-term pain relief have been attributed to inaccurate positioning of the electrode array, partly because the sulcal landmarks are not directly visualized. We describe an integrated protocol for precise electrode placement, combining functional image guidance and intraoperative electrical stimulation in the awake patient. METHODS: Volumetric rendering of a three-dimensional (3-D) magnetic resonance data set was used to visualize the cortical surface and to superimpose functional magnetic resonance imaging data in six patients with refractory chronic pain. The intraoperative positioning of the quadripolar electrode array was monitored by functional 3-D image guidance. Continuous electrophysiological monitoring and clinical assessment of the motor effects complemented the procedure. RESULTS: Volumetrically rendered 3-D images were advantageous for the location of the burr hole over the perirolandic area by revealing individual cortical morphological features (e.g., the hand knob) and function at the same time. The exact position of the electrodes was verified reliably by cortical stimulation. No complications were observed throughout the procedures. CONCLUSION: The combination of 3-D functional neuronavigation, intraoperative electrical stimulation, and continuous motor output monitoring in awake patients provides optimal information for the identification of the appropriate somatotopic area of motor cortex. This combined imaging and stimulation approach for electrode positioning offers a safe and minimal invasive strategy for the treatment of intractable chronic pain in selected patients.


International Ultrasonics Symposium | 2000

A real-time freehand 3D ultrasound system for image-guided surgery

Jacqueline Nerney Welch; Jeremy A. Johnson; Michael R. Bax; Rana Badr; Ramin Shahidi

Current freehand 3D ultrasound techniques separate the scanning or acquisition step from the visualization step. The process leads to a single image volume dataset that can be rendered for viewing later. While satisfactory for diagnostic purposes, the method is not useful for surgical guidance where the anatomy must be visualized in real time. The Image Guidance Laboratories are currently developing a freehand 3D ultrasound system that will allow real-time updates to the scanned volume data as well as the capability to simultaneously view cross-sections through the volume and a volume-rendered perspective view. The equipment used is not unlike other freehand 3D ultrasound systems: an optical tracking system for locating the position and orientation of the ultrasound probe, a video frame grabber for capturing ultrasound frames, and a high-performance computer for performing real-time volume updates and volume rendering. The system incorporates novel methods for inserting new frames into, and removing expired frames from, the volume dataset in real time. This paper reports on current work in progress, and focuses on methods unique to achieving real-time 3D visualization using freehand 3D ultrasound.
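The novel part is inserting newly tracked frames into the volume dataset in real time. A toy sketch of the insertion step, assuming a `pose` function (hypothetical, not from the paper) that maps frame pixels to voxel indices using the tracked probe position and orientation:

```python
def insert_frame(volume, frame, pose):
    """Insert one tracked ultrasound frame into the reconstruction volume.

    `volume` maps (i, j, k) voxel indices to intensities, `frame` is a
    list of pixel rows, and `pose(px, py)` maps a frame pixel to a voxel
    index using the tracked probe position and orientation. Newer frames
    simply overwrite older data at the same voxel.
    """
    for py, row in enumerate(frame):
        for px, intensity in enumerate(row):
            volume[pose(px, py)] = intensity
    return volume

volume = {}
frame = [[10, 20], [30, 40]]              # a tiny 2x2 ultrasound frame
axial_pose = lambda px, py: (px, py, 0)   # hypothetical pixel-to-voxel map
insert_frame(volume, frame, axial_pose)
print(volume[(1, 1, 0)])  # → 40
```

A real system would also expire stale voxels and interpolate across gaps; this sketch only shows the write path that keeps the rendered volume current.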


Operative Neurosurgery | 2008

Image-Guided Lateral Suboccipital Approach: Part 2—Impact on Complication Rates and Operation Times

Alireza Gharabaghi; Steffen K. Rosahl; Günther C. Feigl; Sam Safavi-Abbasi; Javad M. Mirzayan; Stefan Heckl; Ramin Shahidi; Marcos Tatagiba; Madjid Samii

OBJECTIVE Image-guidance systems are widely available for surgical planning and intraoperative navigation. Recently, three-dimensional volumetric image-rendering technology has increasingly been applied in navigation systems to assist neurosurgical planning, e.g., for cranial base approaches. However, no systematic clinical study is available that focuses on the impact of this image-guidance technology on outcome parameters in suboccipital craniotomies. METHODS A total of 200 patients with pathologies located in the cerebellopontine angle were reviewed, 100 of whom underwent volumetric neuronavigation and 100 of whom underwent treatment without intraoperative image guidance. This retrospective study analyzed the impact of image guidance on complication rates (venous sinus injury, venous air embolism, postoperative morbidity caused by venous air embolism) and operation times for lateral suboccipital craniotomies performed with the patient in the semi-sitting position. RESULTS This study demonstrated a 4% incidence of injury to the transverse-sigmoid sinus complex in the image-guided group compared with a 15% incidence in the non-image-guided group. Venous air embolisms were detected in 8% of the image-guided patients and in 19% of the non-image-guided patients. These differences in complication rates were significant for both venous sinus injury and venous air embolism (P < 0.05). There was no difference in postoperative morbidity secondary to venous air embolism between the two groups. The mean time for craniotomy was 21 minutes in the image-guided group and 39 minutes in the non-image-guided group (P = 0.036). CONCLUSION Volumetric image guidance provides fast and reliable three-dimensional visualization of sinus anatomy in the posterior fossa, thereby significantly increasing speed and safety in lateral suboccipital approaches.
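The reported significance of the complication-rate difference (4% vs. 15% sinus injury, P < 0.05) is consistent with a standard 2×2 chi-squared test on the paper's counts. A pure-Python sketch of that check (an illustration; the abstract does not state which test the authors used):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Sinus injury: 4/100 image-guided vs. 15/100 non-image-guided patients.
chi2 = chi_square_2x2(4, 96, 15, 85)
print(round(chi2, 2))  # → 7.04; above the 3.84 critical value, so P < 0.05
```

With 1 degree of freedom, any statistic above 3.841 corresponds to P < 0.05, matching the abstract's claim.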

Collaboration


Dive into Ramin Shahidi's collaboration.

Top Co-Authors

Madjid Samii

Hannover Medical School
