Niels H. Bakker
Delft University of Technology
Publications
Featured research published by Niels H. Bakker.
Journal of Vascular and Interventional Radiology | 2002
Niels H. Bakker; Dafina Tanase; Jim A. Reekers; C. A. Grimbergen
PURPOSE To provide an objective method to measure the efficiency of vascular and interventional procedures. MATERIALS AND METHODS The time-action analysis method is defined for peripheral vascular and interventional procedures. A taxonomy of actions is defined, geared specifically toward these procedures. The actions are: start-up/wrap-up, exchange, navigate, image, diagnose, treat, handle material, wait, compress puncture site, and unclassified. The recording method and analysis techniques are described. To show the type of data that can be obtained, the time-action analysis of 30 procedures is presented. RESULTS The results provide a detailed picture of the time spent on various actions. Of all actions, the most time is spent on compressing the puncture site (18.5%), whereas the most frequent action is the exchange of catheters, guide wires, and sheaths (20.4 times per procedure). Radiation exposure can be analyzed in detail, which can yield directions for possible reduction. For instance, 5.2%-8.3% of the total radiation exposure occurs during preparation of imaging to adjust the position of the patient table and set the image intensifier diaphragm. CONCLUSION Time-action analysis provides an objective measurement method to monitor and evaluate vascular and interventional procedures. Potential applications and limitations of the technique are discussed.
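The analysis underlying these figures is essentially an aggregation of timestamped action records into time shares and per-procedure frequencies. A minimal sketch in Python, assuming a simple (action, start, end) record format that the abstract does not specify:

    # Minimal sketch (not the authors' software) of aggregating recorded,
    # timestamped actions into time-action statistics. The record format and
    # field names are assumptions for illustration.
    from collections import defaultdict

    ACTIONS = {
        "start-up/wrap-up", "exchange", "navigate", "image", "diagnose",
        "treat", "handle material", "wait", "compress puncture site", "unclassified",
    }

    def summarize(procedures):
        """procedures: list of procedures, each a list of (action, start_s, end_s) tuples."""
        total_time = defaultdict(float)   # seconds spent per action, over all procedures
        counts = defaultdict(int)         # number of occurrences per action
        for actions in procedures:
            for action, start_s, end_s in actions:
                assert action in ACTIONS, f"unknown action: {action}"
                total_time[action] += end_s - start_s
                counts[action] += 1
        grand_total = sum(total_time.values())
        return {
            a: {
                "share_of_time_pct": 100.0 * total_time[a] / grand_total,
                "mean_frequency_per_procedure": counts[a] / len(procedures),
            }
            for a in total_time
        }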
IEEE Virtual Reality Conference | 1998
Niels H. Bakker; Peter J. Werkhoven; Peter O. Passenier
In most applications of virtual environments (VEs), such as training and design evaluation, a good sense of orientation in the VE is needed. Orientation performance when moving around in the real world relies on visual as well as proprioceptive feedback. However, the navigation metaphors used to move around a VE often lack proprioceptive feedback. Furthermore, the visual feedback in a VE is often relatively poor compared to the visual feedback available in the real world. Therefore, we have quantified the influence of visual and proprioceptive feedback on orientation performance in VEs. Subjects were immersed in a virtual forest and were asked to turn specific angles using three navigation metaphors, differing in the kind of proprioceptive feedback provided (no proprioceptive feedback, vestibular feedback, and vestibular and kinesthetic feedback). The results indicate that the most accurate turn performance is found when kinesthetic feedback is present, in a condition where subjects use their legs to turn around. This indicates that incorporating this kind of feedback in navigation metaphors is quite beneficial. Orientation based on the visual component alone is the least accurate, leading to progressively larger undershoots for larger angles.
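The key quantity behind the reported undershoot is the signed turn error (executed minus target angle) per feedback condition and target angle. A minimal sketch of that computation, with a hypothetical trial-record format not specified in the abstract:

    # Minimal sketch: mean signed turn error per (feedback condition, target angle).
    # Negative means the turn undershot the target. The (condition, target_deg,
    # executed_deg) record format is an assumption for illustration.
    from collections import defaultdict
    from statistics import mean

    def mean_signed_turn_error(trials):
        """trials: iterable of (condition, target_deg, executed_deg) tuples."""
        errors = defaultdict(list)
        for condition, target_deg, executed_deg in trials:
            errors[(condition, target_deg)].append(executed_deg - target_deg)
        return {key: mean(errs) for key, errs in errors.items()}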
Presence: Teleoperators and Virtual Environments | 2001
Niels H. Bakker; Peter J. Werkhoven; Peter O. Passenier
When moving around in the world, humans can use the motion sensations provided by their kinesthetic, vestibular, and visual senses to maintain their sense of direction. Previous research in virtual environments (VEs) has shown that this so-called path integration process is inaccurate in the case that only visual motion stimuli are present, which may lead to disorientation. In an experiment, we investigated whether participants can calibrate this visual path integration process for rotations; in other words, can they learn the relation between visual flow and the angle that they traverse in the VE? Results show that, by providing participants with knowledge of results (KR), they can indeed calibrate the biases in their path integration process, and also maintain their improved level of performance on a retention test the next day.
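The calibration procedure hinges on the knowledge-of-results loop: after each visually guided turn, the signed error between the target angle and the angle traversed in the VE is shown to the participant. A sketch of that loop, with hypothetical I/O callbacks; the experiment's actual apparatus and software are not described in this abstract:

    # Illustrative trial loop (not the experiment's software) showing knowledge
    # of results (KR): the participant turns using visual flow only, and the
    # signed error is fed back so the mapping between visual flow and turned
    # angle can be recalibrated. execute_turn and give_feedback are placeholders.
    def run_kr_block(target_angles_deg, execute_turn, give_feedback):
        """execute_turn(target) -> traversed angle in the VE; give_feedback(error) shows KR."""
        errors = []
        for target in target_angles_deg:
            traversed = execute_turn(target)  # visually guided turn
            error = traversed - target        # negative = undershoot
            give_feedback(error)              # knowledge of results after the trial
            errors.append(error)
        return errors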
Medical Physics | 2005
Bart Carelsen; Niels H. Bakker; Simon D. Strackee; Sjirk N. Boon; Mario Maas; Jörg Sabczynski; Cornelis A. Grimbergen; Geert J. Streekstra
Current methods for imaging joint motion are limited to either two-dimensional (2D) video fluoroscopy or to animated motions from a series of static three-dimensional (3D) images. 3D movement patterns can be detected from biplane fluoroscopy images matched with computed tomography images, but this involves several x-ray modalities and sophisticated 2D-to-3D matching for the complex wrist joint. We present a method for the acquisition of dynamic 3D images of a moving joint. In our method, a 3D-rotational x-ray (3D-RX) system is used to image a cyclically moving joint. The cyclic motion is synchronized to the x-ray acquisition to yield multiple sets of projection images, which are reconstructed into a series of time-resolved 3D images, i.e., four-dimensional rotational x-ray (4D-RX). To investigate the obtained image quality, the full width at half maximum (FWHM) of the point spread function (PSF), determined via the edge spread function, and the contrast-to-noise ratio (CNR) between air and phantom were measured on reconstructions of a bullet-and-rod phantom, using 4D-RX as well as stationary 3D-RX images. The CNR in volume reconstructions based on 251 projection images in the static situation and on 41 and 34 projection images of a moving phantom was 6.9, 3.0, and 2.9, respectively. The average FWHM of the PSF of these same images was, respectively, 1.1, 1.7, and 2.2 mm orthogonal to the direction of motion and 0.6, 0.7, and 1.0 mm parallel to it. The main deterioration of 4D-RX images compared to 3D-RX images is due to the low number of projection images used and not to the motion of the object. Using 41 projection images appears to be the best setting for the current system. Experiments on a postmortem wrist show the feasibility of the method for imaging 3D dynamic joint motion. We expect that 4D-RX will pave the way to improved assessment of joint disorders by detection of 3D dynamic motion patterns in joints.
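The core acquisition idea, sorting projection images into motion-phase bins before reconstructing one volume per bin, and the CNR metric quoted above can be outlined as follows. This is an illustrative sketch, not the authors' implementation, and the reconstruction step itself is omitted:

    # Minimal sketch of the gating idea behind 4D-RX: projections acquired during
    # cyclic joint motion are assigned to phase bins, and each bin is reconstructed
    # into its own 3D volume, giving a time-resolved series. Names are assumptions.
    import numpy as np

    def bin_projections_by_phase(acquisition_times_s, cycle_period_s, n_phase_bins):
        """Assign each projection image to a motion-phase bin in [0, n_phase_bins)."""
        phase = (np.asarray(acquisition_times_s) % cycle_period_s) / cycle_period_s
        return np.minimum((phase * n_phase_bins).astype(int), n_phase_bins - 1)

    def contrast_to_noise_ratio(volume, phantom_mask, air_mask):
        """CNR between phantom and air regions of a reconstructed volume."""
        signal = volume[phantom_mask].mean() - volume[air_mask].mean()
        noise = volume[air_mask].std()
        return abs(signal) / noise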
Laryngoscope | 2005
Niels H. Bakker; Peter J. F. M. Lohuis; Dirk Jan Menger; Gilbert J. Nolst Trenité; Wytske J. Fokkens; Cornelis A. Grimbergen
Objectives/Hypothesis: Current methods that measure cross‐sectional areas of the nasal passage on computed tomography (CT) do not determine the minimum cross‐sectional area that may be an important factor in nasal airway resistance. Objective measurement of the dimensions of the nasal passage may help in the diagnosis, as well as the choice and evaluation of surgical treatment for upper airway insufficiencies.
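Given a binary segmentation of the nasal passage on CT, the minimum cross-sectional area named in the objective can be obtained by taking the smallest in-plane airway area along the passage. A sketch under that assumption; the segmentation step and the paper's actual measurement method are not covered by this abstract:

    # A sketch, assuming a binary airway segmentation on an (approximately)
    # isotropic CT volume is already available. Axis choice and voxel spacing
    # are illustrative assumptions, not values from the study.
    import numpy as np

    def minimum_cross_sectional_area(airway_mask, voxel_area_mm2, axis=0):
        """airway_mask: 3D boolean array; returns the smallest non-empty slice area in mm^2."""
        areas = airway_mask.sum(axis=tuple(i for i in range(3) if i != axis)) * voxel_area_mm2
        nonzero = areas[areas > 0]
        return float(nonzero.min()) if nonzero.size else 0.0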
Human Factors | 2003
Niels H. Bakker; Peter O. Passenier; Peter J. Werkhoven
The type of navigation interface in a virtual environment (VE), head-slaved or indirect, determines whether or not proprioceptive feedback stimuli are present during movement. In addition, teleports can be used, which do not provide continuous movement but, rather, discontinuously displace the viewpoint over large distances. A two-part experiment was performed. The first part investigated whether head-slaved navigation provides an advantage for spatial learning in a VE. The second part investigated the role of anticipation when using teleports. The results showed that head-slaved navigation has an advantage over indirect navigation for the acquisition of spatial knowledge in a VE. Anticipating the destination of the teleport prevented disorientation after the displacement to a great extent, but not completely. The time needed for anticipation increased if the teleport involved a rotation of the viewing direction. This research shows the potential added value of a head-slaved navigation interface, for example when using a VE for training purposes, and provides practical guidelines for the use of teleports in VE applications.
Analysis, Design and Evaluation of Human-Machine Systems | 2001
Niels H. Bakker; Peter O. Passenier; Peter J. Werkhoven; Henk G. Stassen; Peter A. Wieringa
Spatial orientation in a Virtual Environment (VE) depends on visual recognition and on path integration, meaning that the traversed path is integrated from feedback stimuli (visual, vestibular, and kinesthetic). Which stimuli are available depends on whether an immersive interface with head-slaved movement is used or a non-immersive interface with indirect control of movement. Spatial orientation performance is investigated for both interfaces in two experiments. First, only path integration is investigated with different combinations of visual, vestibular, and kinesthetic feedback. Results show that an immersive interface that provides kinesthetic feedback improves path integration. Second, spatial learning is investigated for the different interfaces with both visual recognition and path integration. With an immersive interface the VE is learned faster than with a non-immersive interface.
Presence: Teleoperators & Virtual Environments | 1999
Niels H. Bakker; Peter J. Werkhoven; Peter O. Passenier
Sensors and Actuators A: Physical | 2006
Magdalena K. Chmarra; Niels H. Bakker; C. A. Grimbergen; Jenny Dankelman
Rhinology | 2005
Niels H. Bakker; Wytske J. Fokkens; Cornelis A. Grimbergen