Publication


Featured research published by Silvio Rizzi.


Surgical Neurology International | 2011

Virtual reality training in neurosurgery: Review of current status and future applications

Ali Alaraj; Michael Lemole; Joshua H. Finkle; Rachel Yudkowsky; Adam Wallace; Cristian Luciano; Pat Banerjee; Silvio Rizzi; Fady T. Charbel

Background: Over the years, surgical training has been changing, and years of tradition are being challenged by legal and ethical concerns for patient safety, work hour restrictions, and the cost of operating room time. Surgical simulation and skill training offer an opportunity to teach and practice advanced techniques before attempting them on patients. Simulation training can be as straightforward as using real instruments and video equipment to manipulate simulated “tissue” in a box trainer. More advanced virtual reality (VR) simulators are now available and ready for widespread use. Early systems have demonstrated their effectiveness and discriminative ability. Newer systems enable the development of comprehensive curricula and full procedural simulations. Methods: A PubMed review of the literature was performed for the MeSH terms “Virtual Reality”, “Augmented Reality”, “Simulation”, “Training”, and “Neurosurgery”. Relevant articles were retrieved and reviewed, covering the history and current status of VR simulation in neurosurgery. Results: Surgical organizations are calling for methods to ensure the maintenance of skills, advance surgical training, and credential surgeons as technically competent. The published literature on the application of VR simulation in neurosurgical training has evolved over the last decade, from data visualization (including stereoscopic evaluation) to more complex augmented reality models. With advances in computational analysis capabilities, fully immersive VR models are now available for neurosurgical training. Ventriculostomy catheter insertion, endoscopic, and endovascular simulations are used in neurosurgical residency training centers across the world. Recent studies have shown a correlation between proficiency on these simulators and levels of experience in the real world. Conclusion: Fully immersive technology is starting to be applied to the practice of neurosurgery. In the near future, detailed VR neurosurgical modules will evolve to be an essential part of the curriculum for training neurosurgeons.
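As an aside, the PubMed search described in the Methods above can be reproduced programmatically. The following is a minimal sketch against NCBI's public E-utilities esearch endpoint; the exact query string, result limit, and lack of an API key are illustrative choices, not details from the paper.

```python
# Minimal sketch of a PubMed search similar to the one described above,
# using NCBI's public E-utilities API (illustrative only; no API key or
# rate-limit handling is shown).
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Combine the search terms mentioned in the abstract (query form is an assumption).
query = '("Virtual Reality" OR "Augmented Reality") AND "Simulation" AND "Training" AND "Neurosurgery"'

params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": 100}
response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()

result = response.json()["esearchresult"]
print(f"{result['count']} matching records; first IDs: {result['idlist'][:5]}")
```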


Neurosurgery | 2013

Role of cranial and spinal virtual and augmented reality simulation using immersive touch modules in neurosurgical training.

Ali Alaraj; Fady T. Charbel; Daniel M. Birk; Mathew Tobin; Cristian Luciano; Pat Banerjee; Silvio Rizzi; Jeff Sorenson; Kevin T. Foley; Konstantin V. Slavin; Ben Roitberg

Recent studies have shown that mental script-based rehearsal and simulation-based training improve the transfer of surgical skills in various medical disciplines. Despite significant advances in technology and intraoperative techniques over the last several decades, surgical skills training during neurosurgical operations still carries significant risk of serious morbidity or mortality. Potentially avoidable technical errors are well recognized as contributing to poor surgical outcome. Surgical education is undergoing overwhelming change as a result of the reduction of work hours and current trends focusing on patient safety and linking reimbursement with clinical outcomes. Thus, there is a need for adjunctive means of neurosurgical training, a need that recent advances in simulation technology can help address. ImmersiveTouch is an augmented reality system that integrates a haptic device and a high-resolution stereoscopic display. This simulation platform uses multiple sensory modalities, re-creating many of the environmental cues experienced during an actual procedure. Modules available include ventriculostomy, bone drilling, percutaneous trigeminal rhizotomy, and simulated spinal modules such as pedicle screw placement, vertebroplasty, and lumbar puncture. We present our experience with the development of such augmented reality neurosurgical modules and the feedback from neurosurgical residents.


Simulation in healthcare : journal of the Society for Simulation in Healthcare | 2013

Practice on an augmented reality/haptic simulator and library of virtual brains improves residents' ability to perform a ventriculostomy

Rachel Yudkowsky; Cristian Luciano; Pat Banerjee; Alan Schwartz; Ali Alaraj; G. Michael Lemole; Fady T. Charbel; Kelly Smith; Silvio Rizzi; Richard W. Byrne; Bernard R. Bendok; David M. Frim

Introduction Ventriculostomy is a neurosurgical procedure for providing therapeutic cerebrospinal fluid drainage. Complications may arise during repeated attempts at placing the catheter in the ventricle. We studied the impact of simulation-based practice with a library of virtual brains on neurosurgery residents’ performance in simulated and live surgical ventriculostomies. Methods Using computed tomographic scans of actual patients, we developed a library of 15 virtual brains for the ImmersiveTouch system, a head- and hand-tracked augmented reality and haptic simulator. The virtual brains represent a range of anatomies including normal, shifted, and compressed ventricles. Neurosurgery residents participated in individual simulator practice on the library of brains including visualizing the 3-dimensional location of the catheter within the brain immediately after each insertion. Performance of participants on novel brains in the simulator and during actual surgery before and after intervention was analyzed using generalized linear mixed models. Results Simulator cannulation success rates increased after intervention, and live procedure outcomes showed improvement in the rate of successful cannulation on the first pass. However, the incidence of deeper, contralateral (simulator) and third-ventricle (live) placements increased after intervention. Residents reported that simulations were realistic and helpful in improving procedural skills such as aiming the probe, sensing the pressure change when entering the ventricle, and estimating how far the catheter should be advanced within the ventricle. Conclusions Simulator practice with a library of virtual brains representing a range of anatomies and difficulty levels may improve performance, potentially decreasing complications due to inexpert technique.


Neurosurgery | 2013

Percutaneous spinal fixation simulation with virtual reality and haptics

Cristian Luciano; Pat Banerjee; Jeffery M. Sorenson; Kevin T. Foley; Sameer A. Ansari; Silvio Rizzi; Anand V. Germanwala; Leonard I. Kranzler; Prashant Chittiboina; Ben Roitberg

BACKGROUND: In this study, we evaluated the use of a part-task simulator with 3-dimensional and haptic feedback as a training tool for percutaneous spinal needle placement. OBJECTIVE: To evaluate the learning effectiveness, in terms of entry point/target point accuracy, of percutaneous spinal needle placement on a high-performance augmented-reality and haptic technology workstation with the ability to control the duration of computer-simulated fluoroscopic exposure, thereby simulating an actual situation. METHODS: Sixty-three fellows and residents performed needle placement on the simulator. A virtual needle was percutaneously inserted into a virtual patient's thoracic spine derived from an actual patient computed tomography data set. RESULTS: Ten of 126 needle placement attempts by 63 participants ended in failure, for a failure rate of 7.93%. Across all 126 needle insertions, the average error (15.69 vs 13.91), average fluoroscopy exposure (4.6 vs 3.92), and average individual performance score (32.39 vs 30.71) improved from the first to the second attempt. A 2-sample t test of performance accuracy yielded P = .04, rejecting the null hypothesis of no improvement in performance accuracy from the first to the second attempt in the test session. CONCLUSION: The experiments showed evidence (P = .04) of improvement in performance accuracy from the first to the second percutaneous needle placement attempt. This result, combined with previous learning retention and/or face validity results of using the simulator for open thoracic pedicle screw placement and ventriculostomy catheter placement, supports the efficacy of augmented reality and haptics simulation as a learning tool.
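To make the reported comparison concrete, the sketch below runs a 2-sample t test on simulated first- versus second-attempt error scores. The arrays are hypothetical placeholders (loosely shaped around the averages quoted above), not the study data.

```python
# Hypothetical sketch of the two-sample t test described above.
# `first_attempt` and `second_attempt` are placeholder error scores,
# NOT the study data; lower values mean better accuracy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
first_attempt = rng.normal(loc=15.7, scale=4.0, size=63)   # errors on attempt 1
second_attempt = rng.normal(loc=13.9, scale=4.0, size=63)  # errors on attempt 2

t_stat, p_value = stats.ttest_ind(first_attempt, second_attempt)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```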


Neurological Research | 2014

Virtual reality spine surgery simulation: an empirical study of its usefulness

Jaime Gasco; Achal Patel; Juan Ortega-Barnett; Daniel Branch; Sohum Desai; Yong Fang Kuo; Cristian Luciano; Silvio Rizzi; Patrick Kania; Martin Matuyauskas; Pat Banerjee; Ben Roitberg

Objective: This study explores the usefulness of virtual simulation training for learning to place pedicle screws in the lumbar spine. Methods: Twenty-six senior medical students anonymously participated and were randomized into two groups (A = no simulation; B = simulation). Both groups were given 15 minutes to place two pedicle screws in a sawbones model. Students in Group A underwent traditional visual/verbal instruction, whereas students in Group B underwent training on pedicle screw placement in the ImmersiveTouch® simulator. The students in both groups then each placed two pedicle screws in a lumbar sawbones model that underwent triplanar thin-slice computerized tomography and subsequent analysis based on coronal entry point, axial and sagittal deviations, length error, and pedicle breach. The average number of errors per screw was calculated for each group. Semi-parametric regression analysis for clustered data was used, with generalized estimating equations accommodating a negative binomial distribution, to determine any statistically significant difference. Results: A total of 52 pedicle screws were analyzed. The reduction in the average number of errors per screw after a single session of simulation training was 53.7% (P = 0.0067). The average number of errors per screw in the simulation group was 0.96 versus 2.08 in the non-simulation group. The simulation group outperformed the non-simulation group in all variables measured. The three variables that benefited most were length error (86.7%), coronal error (71.4%), and pedicle breach (66.7%). Conclusions: Computer-based simulation appears to be a valuable teaching tool for non-experts in a highly technical procedural task such as pedicle screw placement that involves sequential learning, depth perception, and understanding of triplanar anatomy.
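The clustered-count analysis described above can be illustrated with statsmodels' GEE implementation. The data frame, column names, and simulated error counts below are hypothetical; only the modeling choices (negative binomial family, exchangeable correlation within each student) follow the abstract.

```python
# Hypothetical sketch of a GEE analysis of per-screw error counts,
# clustered by student, with a negative binomial family.
# Column names and data are illustrative, not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students, screws_per_student = 26, 2
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), screws_per_student),
    "simulation": np.repeat(rng.integers(0, 2, n_students), screws_per_student),
})
# Simulate fewer errors in the simulation group.
rate = np.where(df["simulation"] == 1, 1.0, 2.0)
df["errors"] = rng.poisson(rate)

model = smf.gee(
    "errors ~ simulation",
    groups="student",
    data=df,
    family=sm.families.NegativeBinomial(alpha=1.0),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```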


Neurosurgery | 2013

Sensory and motor skill testing in neurosurgery applicants: a pilot study using a virtual reality haptic neurosurgical simulator.

Ben Roitberg; Pat Banerjee; Cristian Luciano; Martin Matulyauskas; Silvio Rizzi; Patrick Kania; Jaime Gasco

BACKGROUND: Manual skill is important for surgeons, but current methods to evaluate sensory-motor skills in applicants to a surgical residency are limited. OBJECTIVE: To develop a method of testing sensory-motor skill using objective and reproducible virtual reality simulation. METHODS: We designed a set …


Neurological Research | 2014

Neurosurgical tactile discrimination training with haptic-based virtual reality simulation

Achal Patel; Nick Koshy; Juan Ortega-Barnett; Hoi C. Chan; Yong Fang Kuo; Cristian Luciano; Silvio Rizzi; Martin Matulyauskas; Patrick Kania; Pat Banerjee; Jaime Gasco

Objective: To determine if a computer-based simulation with haptic technology can help surgical trainees improve tactile discrimination using surgical instruments. Material and Methods: Twenty junior medical students participated in the study and were randomized into two groups. Subjects in Group A participated in virtual simulation training using the ImmersiveTouch simulator (ImmersiveTouch, Inc., Chicago, IL, USA) that required differentiating the firmness of virtual spheres using tactile and kinesthetic sensation via haptic technology. Subjects in Group B did not undergo any training. With their visual fields obscured, subjects in both groups were then evaluated on their ability to use the suction and bipolar instruments to find six elastothane objects with areas ranging from 1.5 to 3.5 cm² embedded in a urethane foam brain cavity model while relying on tactile and kinesthetic sensation only. Results: A total of 73.3% of the subjects in Group A (simulation training) were able to find the brain cavity objects, in comparison to 53.3% of the subjects in Group B (no training) (P = 0.0183). There was a statistically significant difference in the number of Group A subjects able to find smaller brain cavity objects (size ≤ 2.5 cm²) compared to Group B (72.5% vs 40%, P = 0.0032). On the other hand, no significant difference between Groups A and B was found in the number of subjects able to detect larger objects (size ≥ 3 cm²) (75% vs 80%, P = 0.7747). Conclusion: Virtual computer-based simulators with integrated haptic technology may improve the tactile discrimination required for microsurgical technique.


conference on automation science and engineering | 2007

GPU-based elastic-object deformation for enhancement of existing haptic applications

Cristian Luciano; Pat Banerjee; Silvio Rizzi

Most haptic libraries allow the user to feel the resistance of a flexible virtual object through the implementation of a point-based collision detection algorithm and a spring-damper model. Even though the user can feel the deformation at the contact point, the graphics library renders a rigid geometry, causing a conflict of senses in the user's mind. In most cases, the CPU utilization is maximized to achieve the required 1-kHz haptic frame rate without leaving any additional resources to also deform the geometry, while the Graphics Processing Unit (GPU) is underutilized. This paper proposes a computationally inexpensive and efficient GPU-based methodology to significantly enhance user perception in large existing haptic applications without compromising the original haptic feedback. To the best of our knowledge, this is the first implemented algorithm able to maintain a graphics frame rate of approximately 60 Hz as well as a haptics frame rate of 1 kHz when deforming complex geometry of approximately 160K vertices. The implementation of the algorithm in a virtual reality neurosurgical simulator has successfully handled, in real time, complex 3D isosurfaces created from medical MRI and CT images.
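The point-based spring-damper interaction that the paper builds on can be summarized in a few lines. The sketch below computes the reaction force for a haptic interaction point penetrating a surface; the stiffness and damping constants and the example geometry are arbitrary illustrative values, not parameters from the paper.

```python
# Minimal sketch of the point-based spring-damper force model referred to
# above: F = k * penetration - c * velocity, applied along the surface normal.
# Constants and geometry are illustrative, not taken from the paper.
import numpy as np

K_STIFFNESS = 800.0   # N/m, spring constant (arbitrary)
C_DAMPING = 2.0       # N*s/m, damping constant (arbitrary)

def spring_damper_force(tool_pos, tool_vel, surface_point, surface_normal):
    """Reaction force on the haptic tool when it penetrates the surface."""
    normal = surface_normal / np.linalg.norm(surface_normal)
    penetration = np.dot(surface_point - tool_pos, normal)
    if penetration <= 0.0:          # tool is outside the object
        return np.zeros(3)
    normal_vel = np.dot(tool_vel, normal)
    force_mag = K_STIFFNESS * penetration - C_DAMPING * normal_vel
    return max(force_mag, 0.0) * normal

# Example: tool 1 mm below a horizontal surface, still moving downward.
force = spring_damper_force(
    tool_pos=np.array([0.0, 0.0, -0.001]),
    tool_vel=np.array([0.0, 0.0, -0.05]),
    surface_point=np.zeros(3),
    surface_normal=np.array([0.0, 0.0, 1.0]),
)
print(force)  # expected: a force pushing the tool back up along +z
```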


international conference on conceptual structures | 2013

Estimation of Volume Rendering Efficiency with GPU in a Parallel Distributed Environment

Cristian Federico Perez Monte; Fabiana Piccoli; Cristian Luciano; Silvio Rizzi; Germán Bianchini; Paola Caymes Scutari

Visualization methods for medical imagery based on volumetric data constitute a fundamental tool for medical diagnosis, training, and pre-surgical planning. Often, large volume sizes and/or the complexity of the required computations present serious obstacles to reaching higher levels of realism and real-time performance. Performance and efficiency are two critical aspects in traditional algorithms based on complex lighting models. To overcome these problems, this paper presents PD-Render intra, a volume rendering algorithm for individual networked nodes in a parallel distributed architecture with a single GPU per node. The implemented algorithm is able to achieve photorealistic rendering as well as a high signal-to-noise ratio at interactive frame rates. Experiments show excellent results in terms of efficiency and performance for rendering medical volumes in real time.
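As a rough illustration of the slab-parallel idea (each node ray-casts its own sub-volume and the partial images are composited), here is a single-process NumPy sketch using a simple emission-absorption model. It is not the PD-Render intra algorithm, just the general pattern it distributes across nodes.

```python
# Illustrative single-process sketch of slab-based parallel volume rendering:
# each "node" ray-casts its own slab of the volume along the z axis and the
# partial results are composited front to back. This is NOT the PD-Render
# algorithm, just a minimal emission-absorption model in NumPy.
import numpy as np

def raycast_slab(slab, opacity_scale=0.05):
    """Front-to-back compositing of one slab along axis 0 (the ray direction).
    Returns per-pixel accumulated color and remaining transparency."""
    color = np.zeros(slab.shape[1:])
    transparency = np.ones(slab.shape[1:])
    for z in range(slab.shape[0]):
        sample = slab[z]
        alpha = np.clip(sample * opacity_scale, 0.0, 1.0)
        color += transparency * alpha * sample      # emission weighted by alpha
        transparency *= (1.0 - alpha)
    return color, transparency

def composite(partials):
    """Combine per-slab results in front-to-back order."""
    image = np.zeros_like(partials[0][0])
    transparency = np.ones_like(partials[0][1])
    for color, slab_transparency in partials:
        image += transparency * color
        transparency *= slab_transparency
    return image

rng = np.random.default_rng(2)
volume = rng.random((64, 32, 32))            # synthetic scalar volume
slabs = np.array_split(volume, 4, axis=0)    # one slab per "node"
partials = [raycast_slab(s) for s in slabs]  # would run in parallel, one per node
print(composite(partials).shape)             # (32, 32) final image
```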


conference on automation science and engineering | 2007

Automating the Extraction of 3D Models from Medical Images for Virtual Reality and Haptic Simulations

Silvio Rizzi; Pat Banerjee; Cristian Luciano

The Sensimmer platform represents our ongoing research on simultaneous haptics and graphics rendering of 3D models. For simulation of medical and surgical procedures using Sensimmer, 3D models must be obtained from medical imaging data, such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Image segmentation techniques are used to delineate the anatomies of interest in the images. 3D models are obtained from the segmentation, and their triangle count must be reduced for graphics and haptics rendering. This paper focuses on creating an integrated interface between Sensimmer and medical imaging devices, using available software. Existing tools are evaluated, and aspects that require further development are identified. Solutions to overcome limitations and increase the degree of automation of the process are examined.
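The imaging-to-model pipeline sketched in the abstract (segmentation or thresholding, isosurface extraction, then triangle reduction) can be outlined with widely available open-source tools. The snippet below is a self-contained illustration: it uses a synthetic sphere in place of segmented CT/MRI data and assumes trimesh's quadric decimation helper is available; it is not the Sensimmer tool chain.

```python
# Hypothetical sketch of the imaging-to-model pipeline described above:
# extract an isosurface from a scalar volume with marching cubes, then
# reduce the triangle count for interactive graphics/haptics rendering.
# In practice the volume would come from CT/MRI data (e.g. loaded with
# nibabel); a synthetic sphere is used here so the sketch is self-contained.
import numpy as np
from skimage import measure
import trimesh

# Synthetic volume: a filled sphere standing in for a segmented anatomy.
grid = np.indices((64, 64, 64)).astype(float)
volume = 24.0 - np.sqrt(((grid - 32.0) ** 2).sum(axis=0))

# Marching cubes: vertices, triangular faces, per-vertex normals, values.
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.0)
print(f"raw mesh: {len(verts)} vertices, {len(faces)} triangles")

# Triangle reduction (decimation), keeping roughly 25% of the faces.
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
reduced = mesh.simplify_quadric_decimation(face_count=len(faces) // 4)
print(f"reduced mesh: {len(reduced.vertices)} vertices, {len(reduced.faces)} triangles")
reduced.export("model_for_haptics.obj")  # placeholder output path
```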

Collaboration


Dive into Silvio Rizzi's collaborations.

Top Co-Authors

Cristian Luciano, University of Illinois at Chicago
Pat Banerjee, University of Illinois at Chicago
Achal Patel, University of Texas Medical Branch
Ali Alaraj, University of Illinois at Chicago
Fady T. Charbel, University of Illinois at Chicago
Jaime Gasco, University of Texas Medical Branch
Juan Ortega-Barnett, University of Texas Medical Branch
Rachel Yudkowsky, University of Illinois at Chicago