Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Balazs Vagvolgyi is active.

Publication


Featured research published by Balazs Vagvolgyi.


Urology | 2009

Augmented Reality During Robot-assisted Laparoscopic Partial Nephrectomy: Toward Real-Time 3D-CT to Stereoscopic Video Registration

Li-Ming Su; Balazs Vagvolgyi; Rahul Agarwal; Carol E. Reiley; Russell H. Taylor; Gregory D. Hager

OBJECTIVES To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy.

METHODS Stereoscopic video segments from two patients undergoing robot-assisted laparoscopic partial nephrectomy, one for a tumor and one for a partial staghorn renal calculus, were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. For both cases, we registered a segment of the video recording to the corresponding preoperative 3D computed tomography image. After calibrating the camera and overlay, a 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration.

RESULTS Our investigation demonstrated that we can identify and track the kidney surface in real time in intraoperative video recordings and semitransparently overlay the 3D models of the kidney, tumor (or stone), and collecting system. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm.

CONCLUSIONS Augmented reality overlay of reconstructed 3D computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology, without external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
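
The abstract names a modified iterative closest point (ICP) technique but does not detail the variant. As a hedged illustration, the sketch below implements classic point-to-point ICP with SVD-based rigid alignment and nearest neighbors from a k-d tree, run on synthetic points rather than surgical data.

```python
# Minimal point-to-point ICP sketch (classic algorithm, not the paper's
# modified variant). Aligns an observed surface point cloud to a model.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, model, iters=50, tol=1e-6):
    """Iteratively match source points to their nearest model points."""
    tree = cKDTree(model)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)
        R, t = best_rigid_transform(src, model[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total

# Toy usage: recover a known rotation/translation of a synthetic "kidney surface".
rng = np.random.default_rng(0)
model = rng.normal(size=(500, 3))
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
observed = model @ Rz.T + np.array([0.05, -0.02, 0.01])
R, t = icp(observed, model)
print(np.round(R, 3), np.round(t, 3))
```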


International Conference on Robotics and Automation | 2009

Tissue property estimation and graphical display for teleoperated robot-assisted surgery

Tomonori Yamamoto; Balazs Vagvolgyi; Kamini Balaji; Louis L. Whitcomb; Allison M. Okamura

Manual palpation of tissue and organs during a surgical procedure provides clinicians with valuable information for diagnosis and surgical planning. In present-day robot-assisted minimally invasive surgery systems, the lack of perceptible haptic feedback makes it challenging to detect a tumor in an organ or a calcified artery in heart tissue. This study presents an automated tissue property estimation method and a real-time graphical overlay that allow an operator to discriminate hard and soft tissues. We first experimentally evaluate the properties of an artificial tissue and compare seven possible mathematical tissue models. Self-validation as well as cross-validation confirm that the Hunt-Crossley model best describes the experimentally observed phantom tissue properties and is suitable for our purpose. Second, we present the development of a system in which the phantom tissue is palpated using a teleoperated surgical robot, and the stiffness of the Hunt-Crossley model is estimated in real time by recursive least squares. A real-time visual overlay representing tissue stiffness is created using a hue-saturation-luminance representation on a semi-transparent disc at the tissue surface. Hue depicts the stiffness at a palpated point, and saturation is calculated based on distance from the point. A simple interpolation technique creates a continuous stiffness color map. In an experiment, the graphical overlay successfully shows the location of an artificial calcified artery hidden in phantom tissue.
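
Since the abstract names the estimator (recursive least squares on the Hunt-Crossley model), a minimal sketch is possible. Below, the Hunt-Crossley exponent n is held fixed so the contact model f = K x^n + B x^n ẋ is linear in [K, B]; the indentation data, noise level, and parameter values are simulated for illustration, not taken from the paper.

```python
# Recursive least squares (RLS) estimate of Hunt-Crossley parameters
# f = K * x**n + B * x**n * xdot, with the exponent n held fixed so the
# model is linear in theta = [K, B]. Simulated palpation data only.
import numpy as np

def rls_update(theta, P, phi, f, lam=1.0):
    """One RLS step (lam < 1 would add exponential forgetting)."""
    g = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + g * (f - phi @ theta)    # innovation update
    P = (P - np.outer(g, phi) @ P) / lam     # covariance update
    return theta, P

n = 1.5                                 # assumed Hunt-Crossley exponent
K_true, B_true = 300.0, 20.0            # "unknown" phantom parameters
rng = np.random.default_rng(1)

theta = np.zeros(2)                     # [K, B] estimate
P = np.eye(2) * 1e4
t = np.linspace(0, 2, 2000)
x = 0.02 * (1 - np.cos(2 * np.pi * t))  # sinusoidal indentation [m]
xdot = np.gradient(x, t)
for xi, vi in zip(x, xdot):
    f = K_true * xi**n + B_true * xi**n * vi + rng.normal(0, 1e-3)
    phi = np.array([xi**n, xi**n * vi])
    theta, P = rls_update(theta, P, phi, f)
print("K est = %.1f, B est = %.2f" % (theta[0], theta[1]))
```

The estimated stiffness K at each palpated point would then drive the hue channel of the overlay described in the abstract.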


Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2008

Force-Feedback Surgical Teleoperator: Controller Design and Palpation Experiments

Mohsen Mahvash; James C. Gwilliam; Rahul Agarwal; Balazs Vagvolgyi; Li-Ming Su; David D. Yuh; Allison M. Okamura

In this paper, we develop and test a 6-degree-of-freedom surgical teleoperator that has four possible modes of operation: (1) direct force feedback, (2) graphical force feedback, (3) direct and graphical force feedback together, and (4) no force feedback. In all cases, visual feedback of the environment is provided via a head-mounted display. A position-position controller with local dynamic compensators provides the direct force feedback. The graphical force feedback is overlaid on the environment image and displays a bar whose height and color are related to the environment force estimated from the current applied to the actuators of the patient-side arm. We evaluate the performance of the teleoperator modes in assisting a user to find the location of stiff objects hidden inside a soft material, similar to a calcified artery hidden in heart tissue or a tumor in the prostate. Seven people used the teleoperator to perform palpation in these materials. Results showed that the direct force feedback mode minimizes palpation task error for the heart model.
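
A one-axis caricature of the position-position coupling named above: each arm is servoed toward the other's position, so contact on the patient side is reflected to the operator as a restoring force. The masses, gains, wall model, and operator model below are invented for illustration and are not the paper's tuned compensators.

```python
# One-axis position-position teleoperation sketch: master and slave are
# each servoed toward the other's position; contact with a stiff "wall"
# on the slave side is felt at the master. Symplectic Euler integration.
import numpy as np

def simulate(steps=5000, dt=1e-3, Kp=200.0, Kd=5.0, b=2.0):
    m_m = m_s = 0.5                     # arm masses [kg] (illustrative)
    x_m = v_m = x_s = v_s = 0.0
    wall, k_wall = 0.02, 5000.0         # stiff object at 2 cm
    log = []
    for i in range(steps):
        x_op = 0.04 * np.sin(2 * np.pi * 0.5 * i * dt)   # operator motion
        f_env = -k_wall * (x_s - wall) if x_s > wall else 0.0
        f_m = Kp * (x_s - x_m) + Kd * (v_s - v_m)        # reflected force
        f_s = Kp * (x_m - x_s) + Kd * (v_m - v_s) + f_env
        # operator hand modeled as a stiff spring to the commanded motion
        v_m += dt * (f_m + 400.0 * (x_op - x_m) - b * v_m) / m_m
        v_s += dt * (f_s - b * v_s) / m_s
        x_m += dt * v_m
        x_s += dt * v_s
        log.append((x_m, x_s, f_m))
    return np.array(log)

log = simulate()
print("peak reflected force: %.2f N" % np.abs(log[:, 2]).max())
```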


International Conference on Robotics and Automation | 2009

Effects of haptic and graphical force feedback on teleoperated palpation

James C. Gwilliam; Mohsen Mahvash; Balazs Vagvolgyi; Alexander Vacharat; David D. Yuh; Allison M. Okamura

Direct haptic feedback and graphical force feedback have both been hypothesized to improve the performance of robot-assisted surgery. In this study we evaluate the benefits of haptic and graphical force feedback on surgeon performance and tissue exploration behavior during a teleoperated palpation task on artificial tissues. Seven surgeon subjects (four experienced in robot-assisted surgery) used a 7-degree-of-freedom teleoperated surgical robot to identify a comparatively rigid target object (representing a calcified artery) in phantom heart models under the following feedback conditions: (1) direct haptic and graphical feedback, (2) direct haptic feedback only, (3) graphical feedback only, and (4) no feedback. To avoid the problems of force sensing in a minimally invasive surgical environment, we use a position-exchange controller with dynamics compensation for direct haptic feedback and a force estimator displayed via a tool-tip-tracking bar graph for graphical force feedback. Although the transparency of the system is limited with this approach, results show that direct haptic force feedback minimizes the forces applied to the tissue, while coupled haptic and graphical force feedback minimizes subject task error. For experienced surgeons, haptic force feedback substantially reduced task error independent of graphical feedback.
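
The graphical feedback relies on a force estimate derived without a force sensor. One standard way to form such an estimate from actuator currents, shown here for a hypothetical planar two-link arm, is a Jacobian-transpose relation; the kinematics, torque constants, and current readings below are made up, and the clinical system's details are not given in the abstract.

```python
# Sketch of current-based force estimation for a planar 2-link arm:
# joint torques tau ~ Kt * i (motor currents), and at quasi-static
# equilibrium tau = J(q)^T f, so f ~ pinv(J^T) @ tau. All values invented.
import numpy as np

def jacobian(q, l1=0.3, l2=0.25):
    """Planar 2-link tip Jacobian for joint angles q = [q1, q2]."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

Kt = np.array([0.08, 0.05])          # assumed torque constants [N*m/A]
q = np.array([0.4, 0.9])             # joint angles [rad]
i_meas = np.array([1.2, -0.6])       # measured motor currents [A]

tau = Kt * i_meas                    # joint torques from currents
f_tip = np.linalg.pinv(jacobian(q).T) @ tau
print("estimated tip force [N]:", np.round(f_tip, 3))

# A bar-graph overlay could then map the magnitude of f_tip to bar
# height and color at the tracked tool tip.
```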


International Journal of Medical Robotics and Computer Assisted Surgery | 2012

Assessing system operation skills in robotic surgery trainees

Rajesh Kumar; Amod Jog; Anand Malpani; Balazs Vagvolgyi; David D. Yuh; Hiep T. Nguyen; Gregory D. Hager; Chi Chiung Grace Chen

With increased use of robotic surgery in specialties including urology, development of training methods has also intensified. However, current approaches lack the ability to discriminate between operational and surgical skills.


The Journal of Thoracic and Cardiovascular Surgery | 2012

Objective Measures for Longitudinal Assessment of Robotic Surgery Training

Rajesh Kumar; Amod Jog; Balazs Vagvolgyi; Hiep T. Nguyen; Gregory D. Hager; Chi Chiung Grace Chen; David D. Yuh

OBJECTIVES Current robotic training approaches lack the criteria for automatically assessing and tracking (over time) technical skills separately from clinical proficiency. We describe the development and validation of a novel automated and objective framework for the assessment of training.

METHODS We record all system variables (stereo instrument video, hand and instrument motion, button and pedal events) from the da Vinci surgical systems using a portable archival system integrated with the robotic surgical system. Data can be collected unsupervised, and the archival system does not change system operations in any way. Our open-ended multicenter protocol is collecting surgical skill benchmarking data from 24 trainees until they reach surgical proficiency, subject only to their continued availability. Two independent experts performed structured (objective structured assessment of technical skills) assessments on longitudinal data from 8 novice and 4 expert surgeons to generate baseline data for training and to validate our computerized statistical analysis methods in identifying the ranges of operational and clinical skill measures.

RESULTS Objective differences in operational and technical skill between known experts and other subjects were quantified. The longitudinal learning curves and statistical analysis for trainee performance measures are reported. Graphic representations of the skills, developed for feedback to the trainees, are also included.

CONCLUSIONS We describe an open-ended longitudinal study and an automated motion recognition system capable of objectively differentiating between clinical and technical operational skills in robotic surgery. Our results demonstrate a convergence of trainee skill parameters toward those derived from expert robotic surgeons during the course of our training protocol.
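
The abstract does not enumerate the computed performance measures. As a hedged illustration of the kind of objective metrics derivable from recorded instrument motion, the sketch below computes three measures that are common in the skill assessment literature (task time, path length, and a jerk-based smoothness score); these are examples, not the paper's exact measures.

```python
# Sketch of objective motion-based skill measures computed from a
# recorded instrument trajectory. Illustrative metrics only.
import numpy as np

def skill_metrics(xyz, dt):
    """xyz: (N, 3) instrument tip positions sampled every dt seconds."""
    task_time = (len(xyz) - 1) * dt
    steps = np.diff(xyz, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    jerk = np.diff(xyz, n=3, axis=0) / dt**3      # third finite difference
    mean_sq_jerk = (np.linalg.norm(jerk, axis=1) ** 2).mean()
    return {"time_s": task_time,
            "path_m": path_length,
            "mean_sq_jerk": mean_sq_jerk}

# Toy trajectory: a smooth arc plus tremor-like noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1000)
traj = np.c_[0.1 * np.cos(t), 0.1 * np.sin(t), 0.01 * t]
traj += rng.normal(0, 1e-4, traj.shape)
print(skill_metrics(traj, dt=t[1] - t[0]))
```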


Medical Image Computing and Computer-Assisted Intervention | 2012

Hybrid tracking and mosaicking for information augmentation in retinal surgery

Rogério Richa; Balazs Vagvolgyi; Marcin Balicki; Gregory D. Hager; Russell H. Taylor

Current technical limitations in retinal surgery hinder the ability of surgeons to identify and localize surgical targets, increasing operating times and the risk of surgical error. In this paper we present a hybrid tracking and mosaicking method for augmented reality in retinal surgery. The system is a combination of direct and feature-based tracking methods. A novel extension of direct visual tracking, using a robust image similarity measure in color images, is also proposed. Several experiments conducted on phantom, in vivo rabbit, and human images attest to the ability of the method to cope with the challenging retinal surgery scenario. Applications of the proposed method to tele-mentoring and intraoperative guidance are demonstrated.
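
The robust similarity measure is not named in the abstract. Purely as an example of the class of measures used in illumination-robust direct tracking, the sketch below implements the sum of conditional variance (SCV) on single-channel patches; it is not claimed to be the paper's measure.

```python
# Sum of conditional variance (SCV) sketch: SSD after adapting the
# template to the patch's intensity mapping, estimated from their joint
# histogram. Robust to monotone illumination changes.
import numpy as np

def scv(patch, template, bins=32):
    """Lower is better. patch and template hold intensities in [0, 1)."""
    p = np.floor(patch.ravel() * (bins - 1)).astype(int)
    t = np.floor(template.ravel() * (bins - 1)).astype(int)
    joint = np.zeros((bins, bins))
    np.add.at(joint, (t, p), 1.0)
    # expected patch intensity for each template intensity bin
    denom = joint.sum(axis=1, keepdims=True)
    denom[denom == 0] = 1.0
    centers = (np.arange(bins) + 0.5) / bins
    expected = (joint * centers).sum(axis=1, keepdims=True) / denom
    adapted = expected[t].ravel()
    return float(((patch.ravel() - adapted) ** 2).sum())

rng = np.random.default_rng(3)
template = rng.random((32, 32))
same_scene = np.clip(0.5 * template + 0.2, 0, 1)   # illumination change
different = rng.random((32, 32))
print(scv(same_scene, template), "<", scv(different, template))
```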


Workshop on Applications of Computer Vision | 2008

Intraoperative Visualization of Anatomical Targets in Retinal Surgery

Ioana Fleming; Sandrine Voros; Balazs Vagvolgyi; Zachary A. Pezzementi; James T. Handa; Russell H. Taylor; Gregory D. Hager

Certain surgical procedures require a high degree of precise manual control within a very restricted area, and retinal surgeries are part of this group. During vitreoretinal surgery, the surgeon must visualize, using a microscope, an area spanning a few hundred microns in diameter and manually correct the pathology using direct-contact, free-hand techniques. In addition, the surgeon must find an effective compromise between magnification, depth perception, field of view, and clarity of view. Preoperative images are used to locate interventional targets and to assess and plan the surgical procedure. This paper proposes a method of fusing information contained in preoperative imagery, such as fundus and OCT images, with intraoperative video to increase accuracy in finding the target areas. We describe image processing methods for maintaining, in real time, registration with anatomical features and target areas. This registration allows us to produce information-enhanced displays that ensure the retinal surgeon is always in visual contact with the area of interest.
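
The abstract leaves the registration pipeline unspecified. A common baseline for registering a preoperative fundus image to a live video frame is feature matching plus a RANSAC-estimated homography; the OpenCV sketch below shows that generic pipeline, not necessarily the authors' method.

```python
# Generic feature-based registration: ORB keypoints, brute-force
# Hamming matching, and a RANSAC homography mapping the preoperative
# fundus image into the current video frame (OpenCV).
import cv2
import numpy as np

def register(preop_gray, frame_gray):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(preop_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H   # maps preoperative pixels into the current frame

# Usage: warp annotated target outlines into the live view with
# cv2.perspectiveTransform(points, H) before drawing the overlay.
```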


bioRxiv | 2018

Closed-Loop Control of Active Sensing Movements Regulates Sensory Slip.

Debojyoti Biswas; Luke A Arend; Sarah A. Stamper; Balazs Vagvolgyi; Eric S. Fortune; Noah J. Cowan

Active sensing involves the production of motor signals for the purpose of acquiring sensory information [1–3]. The most common form of active sensing, found across animal taxa and behaviors, involves the generation of movements—e.g. whisking [4–6], touching [7,8], sniffing [9,10], and eye movements [11]. Active-sensing movements profoundly affect the information carried by sensory feedback pathways [12–15] and are modulated by both top-down goals (e.g. measuring weight vs. texture [1,16]) and bottom-up stimuli (e.g. lights on/off [12]), but it remains unclear if and how these movements are controlled in relation to the ongoing feedback they generate. To investigate the control of movements for active sensing, we created an experimental apparatus for freely swimming weakly electric fish, Eigenmannia virescens, that modulates the gain of reafferent feedback by adjusting the position of a refuge based on real-time videographic measurements of fish position. We discovered that fish robustly regulate sensory slip via closed-loop control of active-sensing movements. Specifically, as fish performed the task of maintaining position inside the refuge [17–22], they dramatically up- or down-regulated fore-aft active-sensing movements in relation to a 4-fold change of experimentally modulated reafferent gain. These changes in swimming movements served to maintain a constant magnitude of sensory slip. The magnitude of sensory slip depended on the presence or absence of visual cues. These results indicate that fish use two controllers: one that controls the acquisition of information by regulating feedback from active sensing movements, and another that maintains position in the refuge, a control structure that may be ubiquitous in animals [23,24].
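
The closed loop can be caricatured in a few lines: if the refuge is servoed to the tracked fish position with gain (1 − γ), the sensory slip the fish experiences equals γ times its own movement, so holding slip constant requires movement amplitude proportional to 1/γ. The toy adaptation rule below is illustrative only; the apparatus, tracking, and gain values follow the paper.

```python
# Schematic of gain-modulated reafference: refuge at (1 - gamma) times
# fish position makes slip = gamma * fish movement. A toy "fish" adapts
# its fore-aft movement amplitude to hold slip magnitude constant.
target_slip = 1.0                       # desired slip amplitude (arbitrary units)
for gamma in [0.25, 0.5, 1.0, 2.0]:     # range of reafferent gains
    amp = 1.0                           # initial movement amplitude
    for _ in range(200):
        slip_amp = gamma * amp          # slip produced by the fish's movement
        amp += 0.1 * (target_slip - slip_amp)   # simple adaptation rule
    print("gamma = %.2f -> movement amplitude %.2f" % (gamma, amp))
```

The printout shows the qualitative prediction: movement amplitude settles near 1/γ, i.e., fish up-regulate movement when reafferent gain is low and down-regulate it when the gain is high.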


7th International Workshop on Augmented Environments for Computer-Assisted Interventions, AE-CAI 2012, Held in Conjunction with MICCAI 2012 | 2012

Interactive OCT Annotation and Visualization for Vitreoretinal Surgery

Marcin Balicki; Rogério Richa; Balazs Vagvolgyi; Peter Kazanzides; Peter L. Gehlbach; James T. Handa; Jin U. Kang; Russell H. Taylor

Vitreoretinal surgery is an extremely challenging surgical discipline requiring surgeons to use limited visualization to locate and safely operate on particularly delicate eye structures. Intraocular image guidance can potentially aid in localizing retinal targets, such as epiretinal membranes. This paper describes a system and methods for localizing difficult-to-identify anatomical features in the retina using video stereo-microscopy and intraocular OCT. We visually track the retinal motion and the relative position of a hand-held OCT probe to assemble an M-scan, the cross-sectional image of the anatomy corresponding to the trajectory of the probe across the retina. The surgeon is then able to interrogate the OCT image during the procedure by pointing a surgical instrument at the M-scan trajectory superimposed on the retina and displayed in 3D. The system is designed to provide relevant intraoperative imaging to increase surgical precision and minimize the surgeon's cognitive load. We describe our system and quantify its performance in a phantom eye.
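
A schematic of the M-scan assembly step: each one-dimensional A-scan is tagged with the tracked probe position in retina-fixed coordinates and binned along the swept trajectory to form a cross-sectional image. The shapes, binning scheme, and toy data below are invented; the real system derives positions from stereo video tracking.

```python
# Sketch of M-scan assembly from position-tagged OCT A-scans.
import numpy as np

def assemble_mscan(ascans, positions, n_cols=200):
    """ascans: (N, depth) A-scans; positions: (N,) arc length along the
    tracked trajectory. Returns a (depth, n_cols) M-scan image."""
    depth = ascans.shape[1]
    mscan = np.zeros((depth, n_cols))
    counts = np.zeros(n_cols)
    s = (positions - positions.min()) / np.ptp(positions)  # normalize to [0, 1]
    cols = np.minimum((s * n_cols).astype(int), n_cols - 1)
    for a, c in zip(ascans, cols):
        mscan[:, c] += a               # average A-scans landing in a column
        counts[c] += 1
    counts[counts == 0] = 1.0
    return mscan / counts

# Toy data: 500 A-scans along a sweep with a bright "layer" at varying depth.
rng = np.random.default_rng(4)
pos = np.sort(rng.random(500))
ascans = rng.random((500, 128)) * 0.1
layer = (40 + 20 * np.sin(2 * np.pi * pos)).astype(int)
ascans[np.arange(500), layer] = 1.0
img = assemble_mscan(ascans, pos)
print(img.shape, img.max())
```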

Collaboration


Dive into Balazs Vagvolgyi's collaboration.

Top Co-Authors

Anton Deguet

Johns Hopkins University

Marcin Balicki

Johns Hopkins University

James T. Handa

Johns Hopkins University