Mikel Sagardia
German Aerospace Center
Publications
Featured research published by Mikel Sagardia.
International Conference on Robotics and Automation | 2011
Thomas Hulin; Katharina Hertkorn; Philipp Kremer; Simon Schätzle; Jordi Artigas; Mikel Sagardia; Franziska Zacharias; Carsten Preusche
This article accompanies a video that presents a bimanual haptic device composed of two DLR/KUKA Light-Weight Robot (LWR) arms. The LWRs have similar dimensions to human arms, and can be operated in torque and position control mode at an update rate of 1 kHz. The two robots are mounted behind the user, such that the overlap between the workspaces of the robots and the human arms is maximized. In order to enhance user interaction, various hand interfaces and additional tactile feedback devices can be used together with the robots. The presented system is equipped with a thorough safety architecture that ensures safe operation for both human and robot. Additionally, sophisticated control strategies improve performance and guarantee stability. The introduced haptic system is well suited for versatile applications in remote and virtual environments, especially for large unscaled movements.
Virtual Reality Software and Technology | 2010
Rene Weller; Mikel Sagardia; David Mainzer; Thomas Hulin; Gabriel Zachmann; Carsten Preusche
We present a benchmarking suite for rigid object collision detection and collision response schemes. The proposed benchmarking suite can evaluate both the performance as well as the quality of the collision response. The former is achieved by densely sampling the configuration space of a large number of highly detailed objects; the latter is achieved by a novel methodology that comprises a number of models for certain collision scenarios. With these models, we compare the force and torque signals both in direction and magnitude. Our device-independent approach allows objective predictions for physically-based simulations as well as 6-DOF haptic rendering scenarios. In the results, we show a comprehensive example application of our benchmarking suite, comparing two quite different algorithms. This demonstrates empirically that our methodology can serve as a standard evaluation framework.
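For illustration, a minimal sketch of the kind of signal comparison such a benchmark performs, assuming two algorithms' force outputs are sampled along the same configuration-space trajectory; the data and constants below are invented for the example and are not the authors' benchmark code:

```cpp
// Sketch: compare two force signals sample by sample in magnitude and direction.
// Hypothetical data; the real benchmark densely samples configuration-space paths.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

double norm(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

int main() {
    // Forces reported by two collision-response algorithms on the same trajectory.
    std::vector<Vec3> forceA = {{0, 0.0, 1.0}, {0, 0.1, 0.9}, {0, 0.3, 0.7}};
    std::vector<Vec3> forceB = {{0, 0.0, 1.1}, {0, 0.2, 0.9}, {0, 0.2, 0.8}};

    double magErr = 0.0, angErr = 0.0;
    for (std::size_t i = 0; i < forceA.size(); ++i) {
        magErr += std::fabs(norm(forceA[i]) - norm(forceB[i]));
        double c = dot(forceA[i], forceB[i]) / (norm(forceA[i]) * norm(forceB[i]));
        angErr += std::acos(std::max(-1.0, std::min(1.0, c)));  // angle between directions
    }
    std::printf("mean |dF| = %.3f N, mean angle = %.3f rad\n",
                magErr / forceA.size(), angErr / forceA.size());
    return 0;
}
```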
IEEE Virtual Reality Conference | 2012
Mikel Sagardia; Bernhard Weber; Thomas Hulin; Gerd Hirzinger; Carsten Preusche
This work presents an evaluation study of two different collision feedback modalities for virtual assembly verification: visual and force feedback. Forty-three subjects performed several assembly tasks (peg-in-hole, narrow passage) designed with two levels of difficulty. The haptic rendering algorithm used is based on voxel and point data structures. Both objective measures (time and collision performance) and subjective measures have been recorded and analyzed. The comparison of the feedback modalities revealed a clear and highly significant superiority of force feedback in virtual assembly scenarios. The objective data shows that whereas the assembly time is similar in most cases for both conditions, force collision feedback yields significantly smaller collision forces, which indicates higher assembly precision. The subjective ratings of the participants identify force feedback as the most appropriate condition for determining clearances and correcting collision configurations, and as the modality best suited to predict mountability.
International Conference on Computer Graphics and Interactive Techniques | 2013
Mikel Sagardia; Thomas Hulin
Collision detection, force computation, and proximity queries are fundamental in interactive gaming, assembly simulations, or virtual prototyping. However, many available methods have to find a trade-off between accuracy and the high computational speed required by haptics (1 kHz). [McNeely et al. 2006] presented the Voxmap-Pointshell (VPS) Algorithm, which enabled more reliable six-DoF haptic rendering between complex geometries than other approaches based on polygonal data structures. For each colliding object pair, this approach uses (i) a voxelmap, or voxelized representation of one object, and (ii) a pointshell, or point-sampled representation of the other object. In each cycle, the penetration of the points into the voxelized object is computed, which yields the collision force. [Barbič and James 2008] extended the VPS Algorithm to support deformable objects. This approach builds hierarchical data structures and distance fields that are updated during simulation as the objects deform.
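As an illustration of the penalty principle behind VPS-style rendering (not the DLR or Boeing implementation), the sketch below sums the penetrations of pointshell points inside a voxelized object to obtain a force; the distance query, data, and stiffness are placeholders:

```cpp
// Sketch of the penalty-force idea behind Voxmap-Pointshell rendering:
// sum penetrations of pointshell points that lie inside a voxelized object.
// Illustrative data structures and constants only.
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

struct PointShellPoint {
    Vec3 position;  // point on the surface of object B (world frame here)
    Vec3 normal;    // inward-pointing surface normal used as force direction
};

// Stand-in for a voxelmap / distance-field query on object A:
// returns penetration depth (> 0 if the point is inside A, 0 otherwise).
double penetrationAt(const Vec3& p) {
    // Toy example: object A is the half-space z < 0.
    return p.z < 0.0 ? -p.z : 0.0;
}

int main() {
    std::vector<PointShellPoint> shell = {
        {{0.0, 0.0, -0.02}, {0.0, 0.0, 1.0}},
        {{0.1, 0.0,  0.01}, {0.0, 0.0, 1.0}},
    };
    const double stiffness = 500.0;  // N/m per point (illustrative)

    Vec3 force = {0.0, 0.0, 0.0};
    for (const auto& pt : shell) {
        double d = penetrationAt(pt.position);
        if (d > 0.0) {  // point penetrates the voxelized object
            force.x += stiffness * d * pt.normal.x;
            force.y += stiffness * d * pt.normal.y;
            force.z += stiffness * d * pt.normal.z;
        }
    }
    std::printf("penalty force = (%.2f, %.2f, %.2f) N\n", force.x, force.y, force.z);
    return 0;
}
```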
International Conference on Virtual, Augmented and Mixed Reality | 2013
Bernhard Weber; Mikel Sagardia; Thomas Hulin; Carsten Preusche
In a laboratory study with N = 42 participants (thirty novices and twelve virtual reality (VR) specialists), we evaluated different variants of collision feedback in a virtual environment. Individuals had to perform several object manipulations (peg-in-hole, narrow passage) in a virtual assembly scenario with three different collision feedback modalities (visual vs. vibrotactile vs. force feedback) and two different task complexities (small vs. large peg or wide vs. narrow passage, respectively). The feedback modalities were evaluated in terms of assembly performance (completion time, movement precision) and subjective user ratings. Altogether, results indicate that high resolution force feedback provided by a robotic arm as input device is superior in terms of movement precision, mental workload, and spatial orientation compared to vibrotactile and visual feedback systems.
IEEE International Conference on Biomedical Robotics and Biomechatronics | 2014
Claudio Castellini; Katharina Hertkorn; Mikel Sagardia; David Sierra González; Markus Nowak
In this paper we evaluate ultrasound imaging as a human-machine interface in the context of rehabilitation. Ultrasound imaging can be used to estimate finger forces in real-time with a short and easy calibration procedure. Forces are individually predicted using a transducer fixed on the forearm, which leaves the hand completely free to operate. In this application, a standard ultrasound machine is connected to a virtual-reality environment in which a human operator can play a dynamic harmonium over two octaves, using any finger (including the thumb). The interaction in the virtual environment is managed via a fast collision detection algorithm and a physics engine. Ten human subjects were engaged in two games of increasing difficulty. Our experimental results, both objective and subjective, clearly show that both tasks could be accomplished to the required degree of precision and that the subjects underwent a typical learning curve. The learning happened uniformly, irrespective of the required finger, force or note. Such a system could be made portable, and has potential applications as a rehabilitation device for amputees and the muscle-impaired, even at home.
IEEE Virtual Reality Conference | 2013
Mikel Sagardia; Katharina Hertkorn; Thomas Hulin; Robin Wolff; Johannes Hummell; Janki Dodiya; Andreas Gerndt
The growth of space debris is becoming a serious problem. There is an urgent need for mitigation measures based on maintenance, repair and de-orbiting technologies. Our video presents a virtual reality framework in which robotic maintenance tasks of satellites can be simulated interactively. The two key components of this framework are a realistic virtual reality simulation and an immersive interaction device. The peculiarity of the virtual reality simulation is the combination of a physics engine based on Bullet with an extremely efficient haptic rendering algorithm inspired by an enhanced version of the Voxmap-Pointshell Algorithm. A central logic module controls all states and objects in the virtual world. To provide the human operator with optimal immersion into the virtual environment, the DLR bimanual haptic device is used as the interaction device. Equipped with two light-weight robot arms, this device is able to provide realistic haptic feedback at both human hands, while covering the major part of the human operator's workspace. The applicability of this system is enhanced by additional force sensors, active hand interfaces with an additional degree of freedom, smart safety technologies and intuitive robot data augmentation. Our platform can be used for verification or training purposes of robotic systems interacting in space environments.
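One common pattern for connecting a 1 kHz haptic renderer to a physics engine such as Bullet is a spring-damper "virtual coupling"; the abstract does not spell out the coupling used here, so the sketch below only illustrates that general pattern with plain structs and invented constants, not the actual Bullet or DLR interfaces:

```cpp
// Sketch of a spring-damper "virtual coupling" between a haptic device pose
// and a simulated rigid body, a common way to connect a 1 kHz haptic loop
// to a physics engine. Plain structs and invented constants; illustrative only.
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }

int main() {
    const double k = 300.0;   // coupling stiffness [N/m] (illustrative)
    const double b = 5.0;     // coupling damping  [Ns/m]
    const double dt = 0.001;  // 1 kHz haptic cycle
    const double mass = 1.0;  // simulated tool mass [kg]

    Vec3 devicePos = {0.0, 0.0, 0.05};  // pose measured on the haptic device
    Vec3 bodyPos = {0.0, 0.0, 0.0};     // proxy rigid body in the physics world
    Vec3 bodyVel = {0.0, 0.0, 0.0};

    for (int i = 0; i < 5; ++i) {
        // Spring-damper force pulling the simulated body towards the device pose.
        Vec3 f = k * (devicePos - bodyPos) + (-b) * bodyVel;
        // Integrate the body (a physics engine would also resolve contacts here).
        bodyVel = bodyVel + (dt / mass) * f;
        bodyPos = bodyPos + dt * bodyVel;
        // The reaction -f is what would be displayed on the haptic device.
        std::printf("cycle %d: display force = (%.2f, %.2f, %.2f) N\n",
                    i, -f.x, -f.y, -f.z);
    }
    return 0;
}
```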
Virtual Reality Software and Technology | 2016
Mikel Sagardia; Thomas Hulin; Katharina Hertkorn; Philipp Kremer; Simon Schätzle
We present a virtual reality platform which addresses and integrates some of the currently challenging research topics in the field of virtual assembly: realistic and practical scenarios with several complex geometries, bimanual six-DoF haptic interaction for hands and arms, and intuitive navigation in large workspaces. We put a special focus on our collision computation framework, which is able to display stiff and stable forces at 1 kHz using a combination of penalty- and constraint-based haptic rendering methods. Interaction with multiple arbitrary geometries is supported in real-time simulations, as well as several interfaces, allowing for collaborative training experiences. Performance results for an exemplary car assembly sequence are provided, showing the readiness of the system.
Virtual Reality Software and Technology | 2016
Mikel Sagardia; Thomas Hulin
Collision detection and force computation between complex geometries are essential technologies for virtual reality and robotic applications. Penalty-based haptic rendering algorithms provide a fast collision computation solution, but they cannot avoid the undesired interpenetration between virtual objects, and have difficulties with thin non-watertight geometries. God object methods or constraint-based haptic rendering approaches have been shown to solve this problem, but are typically complex to implement and computationally expensive. This paper presents an easy-to-implement god object approach applied to six-DoF penalty-based haptic rendering algorithms. Contact regions are synthesized into penalty force and torque values, which are then used to compute the position of the god object on the surface. The pose of this surface proxy is then used to render stiff and stable six-DoF contacts with friction. Independently of the complexity of the geometries used, our implementation runs in only around 5 μs, and the results show a maximum penetration error equal to the resolution used in the penalty-based haptic rendering algorithm.
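The core idea can be sketched under strong simplifying assumptions (translation only, a single contact, invented constants); this is an illustration of the principle, not the paper's implementation: the aggregated penalty force suggests how far to push a surface proxy out of penetration, and a stiffer spring to that proxy is then rendered to the user.

```cpp
// Sketch of the god-object idea on top of a penalty-based renderer.
// Translational-only toy version with illustrative constants.
#include <cstdio>

struct Vec3 { double x, y, z; };

int main() {
    const double penaltyStiffness = 500.0;   // stiffness of the penalty model [N/m]
    const double displayStiffness = 3000.0;  // stiffer coupling rendered to the user [N/m]

    // Output of the penalty-based algorithm for the current object pose.
    Vec3 penaltyForce = {0.0, 0.0, 2.5};   // N, roughly normal to the contact
    Vec3 objectPos = {0.0, 0.0, -0.005};   // penetrating pose of the haptic object

    // Estimate penetration depth from the penalty force and move the proxy
    // (god object) back onto the surface along the force direction.
    double depth = penaltyForce.z / penaltyStiffness;  // |F| / k (force is along +z here)
    Vec3 proxyPos = {objectPos.x, objectPos.y, objectPos.z + depth};

    // Render a stiff spring between the proxy and the penetrating object pose.
    Vec3 displayForce = {0.0, 0.0, displayStiffness * (proxyPos.z - objectPos.z)};

    std::printf("proxy z = %.4f m, display force z = %.2f N\n",
                proxyPos.z, displayForce.z);
    return 0;
}
```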
International Conference on Robotics and Automation | 2016
Korbinian Nottensteiner; Mikel Sagardia; Andreas Stemmer; Christoph Hermann Borst
The observation of robotic assembly tasks is required as feedback for decisions and for adapting the task execution to the current situation. A sequential Monte Carlo observation algorithm is proposed, which uses a fast and accurate collision detection algorithm as a reference model for the contacts between complex-shaped parts. The main contribution of the paper is the extension of the classic random motion model in the propagation step with sampling methods known from the domain of probabilistic roadmap planning, in order to increase the sample density in narrow passages of the configuration space. As a result, the observation performance can be improved and the risk of sample impoverishment reduced. Experimental validation is provided for a peg-in-hole task executed by a light-weight robot arm equipped with joint torque sensors.
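A toy one-dimensional sketch of such a particle-filter observation scheme follows; the motion model, the "narrow passage" band, and the Gaussian likelihood are placeholders for the paper's collision-detection-based reference model and its actual state space:

```cpp
// Sketch of the sequential Monte Carlo observation idea: particles over the
// part state are propagated (random motion plus extra samples in a narrow
// region) and weighted by how well a contact model explains the measurement.
// 1D toy version; names and constants are illustrative.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> motionNoise(0.0, 0.002);     // random motion model
    std::uniform_real_distribution<double> narrow(0.009, 0.011);  // "narrow passage" band

    const double measuredDepth = 0.010;  // e.g. insertion depth inferred from sensing [m]
    const double sigma = 0.001;          // measurement noise [m]

    std::vector<double> particles(200, 0.0);
    std::vector<double> weights(particles.size(), 1.0);

    // Propagation: classic random motion for most particles, plus a fraction
    // drawn directly inside the narrow region (PRM-inspired densification).
    for (std::size_t i = 0; i < particles.size(); ++i) {
        particles[i] = (i % 5 == 0) ? narrow(rng) : particles[i] + motionNoise(rng);
    }

    // Weighting: likelihood of each hypothesis given the measurement
    // (in the paper, this is where the collision-detection reference model enters).
    double best = particles[0], bestW = 0.0;
    for (std::size_t i = 0; i < particles.size(); ++i) {
        double d = particles[i] - measuredDepth;
        weights[i] = std::exp(-0.5 * d * d / (sigma * sigma));
        if (weights[i] > bestW) { bestW = weights[i]; best = particles[i]; }
    }
    std::printf("most likely insertion depth: %.4f m\n", best);
    return 0;
}
```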