
Publication


Featured research published by Christian Sandor.


International Symposium on Mixed and Augmented Reality | 2001

Design of a component-based augmented reality framework

Martin Bauer; Bernd Bruegge; Gudrun Klinker; Asa MacWilliams; Thomas Reicher; Stefan Riss; Christian Sandor; Martin Wagner

The authors propose a new approach to building augmented reality (AR) systems using a component-based software framework. This has advantages for all parties involved with AR systems. A project manager can reuse existing components in new applications; an end user can reconfigure their system by plugging modules together; an application developer can view the system at a high level of abstraction; and a component developer can focus on technical problems. Our proposed framework consists of reusable distributed services for key subproblems of AR, the middleware to combine them, and an extensible software architecture. We have implemented services for tracking, modeling real and virtual objects, modeling structured navigation or maintenance instructions, and multimodal user interfaces. As a working proof of our concept, we have built an indoor and outdoor campus navigation system using different modes of tracking and user interaction.
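The component idea described above, services advertising what they provide and a middleware wiring matching components together at runtime, can be illustrated with a toy sketch. All names here (`Middleware`, `TrackingService`, `NavigationUI`) are hypothetical stand-ins for illustration, not the framework's actual API:

```python
class Middleware:
    """Toy service registry: wires components by matching abilities to needs."""
    def __init__(self):
        self.providers = {}  # ability name -> providing service

    def register(self, service):
        for ability in service.abilities:
            self.providers[ability] = service

    def resolve(self, need):
        return self.providers[need]

class TrackingService:
    """A component advertising the 'pose' ability."""
    abilities = ["pose"]
    def pose(self):
        return (1.0, 2.0, 0.5)  # dummy position for the sketch

class NavigationUI:
    """A component that needs 'pose'; it never references TrackingService directly."""
    abilities = []
    needs = ["pose"]
    def render(self, mw):
        x, y, z = mw.resolve("pose").pose()
        return f"user at ({x}, {y}, {z})"

mw = Middleware()
mw.register(TrackingService())
out = NavigationUI().render(mw)
```

The point of the decoupling is that a different tracker (e.g. indoor vs. outdoor) can be swapped in by registering another provider of "pose", with no change to the UI component.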


IEEE Virtual Reality Conference | 2009

Improving Spatial Perception for Augmented Reality X-Ray Vision

Benjamin Avery; Christian Sandor; Bruce H. Thomas

Augmented reality x-ray vision allows users to see through walls and view real occluded objects and locations. We present an augmented reality x-ray vision system that employs multiple view modes to support new visualizations that provide depth cues and spatial awareness to users. The edge overlay visualization provides depth cues to make hidden objects appear to be behind walls, rather than floating in front of them. Utilizing this edge overlay, the tunnel cut-out visualization provides details about occluding layers between the user and remote location. Inherent limitations of these visualizations are addressed by our addition of view modes allowing the user to obtain additional detail by zooming in, or an overview of the environment via an overhead exocentric view.


International Symposium on Mixed and Augmented Reality | 2005

Experimental evaluation of an augmented reality visualization for directing a car driver's attention

Marcus Tönnis; Christian Sandor; Gudrun Klinker; Christian Lange; Heiner Bubb

With recent advances in head-up display technology for cars, augmented reality has become an interesting means of supporting the driving task by guiding a driver's attention. We have set up an experiment to compare two different approaches to informing the driver about dangerous situations around the car. One approach used AR to visualize the source of danger in the driver's frame of reference, while the other presented information in an egocentric frame of reference. Both approaches were evaluated in user tests.


International Symposium on Mixed and Augmented Reality | 2010

An Augmented Reality X-Ray system based on visual saliency

Christian Sandor; Andrew Cunningham; Arindam Dey; Ville-Veikko Mattila

In the past, several systems have been presented that enable users to view occluded points of interest using Augmented Reality X-ray visualizations. It is challenging to design a visualization that provides correct occlusions between occluder and occluded objects while maximizing legibility. We have previously published an Augmented Reality X-ray visualization that renders edges of the occluder region over the occluded region to facilitate correct occlusions while providing foreground context. While this approach is simple and works in a wide range of situations, it provides only minimal context of the occluder object.
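The edge-overlay idea described above, drawing the occluder's edges over the hidden scene so that occluded objects read as being behind the wall, can be sketched in a few lines of NumPy. This is a simplified illustration under assumed inputs (grayscale images in [0, 1], a plain gradient-magnitude edge detector), not the authors' implementation, and it omits the saliency weighting that the paper adds:

```python
import numpy as np

def edge_overlay_xray(occluder, occluded, edge_thresh=0.1):
    """Composite an occluded scene behind the edges of its occluder.

    occluder, occluded: float grayscale images in [0, 1], same shape.
    Returns the hidden scene with the occluder's edges drawn on top,
    a minimal version of the edge-overlay X-ray depth cue.
    """
    # Approximate edge strength with finite differences (a stand-in
    # for the more sophisticated edge extraction in the actual system).
    gy, gx = np.gradient(occluder)
    edges = np.sqrt(gx**2 + gy**2)
    mask = (edges > edge_thresh).astype(float)
    # Show the hidden scene everywhere, but keep occluder edges visible.
    return occluded * (1.0 - mask) + occluder * mask

# Tiny synthetic example: a bright square "wall" hiding a gradient scene.
wall = np.zeros((32, 32)); wall[8:24, 8:24] = 1.0
scene = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
result = edge_overlay_xray(wall, scene)
```

Inside the wall's flat interior the mask is zero, so the hidden scene shows through; only along the wall's outline do occluder pixels survive, which is what supplies the occlusion cue.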


Ubiquitous Computing | 2005

A rapid prototyping software infrastructure for user interfaces in ubiquitous augmented reality

Christian Sandor; Gudrun Klinker

Recent user interface concepts, such as multimedia, multimodal, wearable, ubiquitous, tangible, or augmented-reality-based (AR) interfaces, each cover different approaches that are all needed to support complex human–computer interaction. Increasingly, an overarching approach towards building what we call ubiquitous augmented reality (UAR) user interfaces that include all of the just mentioned concepts will be required. To this end, we present a user interface architecture that can form a sound basis for combining several of these concepts into complex systems. We explain in this paper the fundamentals of DWARF’s user interface framework (DWARF standing for distributed wearable augmented reality framework) and an implementation of this architecture. Finally, we present several examples that show how the framework can form the basis of prototypical applications.


International Symposium on Mixed and Augmented Reality | 2013

Kinect for interactive AR anatomy learning

Ma Meng; Pascal Fallavollita; Tobias Blum; Ulrich Eck; Christian Sandor; Simon Weidert; Jens Waschke; Nassir Navab

Anatomy education is a challenging but crucial element in training medical professionals, and also in the general education of pupils. Our research group has previously developed a prototype of an Augmented Reality (AR) magic mirror which allows intuitive visualization of realistic anatomical information on the user. However, the current overlay is imprecise, as the magic mirror depends on the skeleton output from Kinect. These imprecisions affect the quality of education and learning. Hence, together with clinicians, we have defined bone landmarks which users can easily touch on their body while standing in front of the sensor. We demonstrate that these landmarks allow the proper deformation of medical data within the magic mirror and onto the human body, resulting in a more precise augmentation.


International Symposium on Mixed and Augmented Reality | 2009

Physical-virtual tools for spatial augmented reality user interfaces

Michael R. Marner; Bruce H. Thomas; Christian Sandor

This paper presents a new user interface methodology for Spatial Augmented Reality systems. The methodology is based on a set of physical tools that are overloaded with logical functions. Visual feedback presents the logical mode of the tool to the user by projecting graphics onto the physical tools. This approach makes the tools malleable in their functionality, with this change conveyed to the user by changing the projected information. Our prototype application implements a two handed technique allowing an industrial designer to digitally airbrush onto an augmented physical model, masking the paint using a virtualized stencil.


Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2007

Visuo-Haptic Systems: Half-Mirrors Considered Harmful

Christian Sandor; Shinji Uchiyama; Hiroyuki Yamamoto

In recent years, systems that allow users to see and touch virtual objects in the same space (visuo-haptic systems) have been investigated. Most research projects employ a half-mirror, while few use a video see-through, head-mounted display (HMD). The work presented in this paper points out advantages of the HMD-based approach. First, we present an experiment that analyzes human performance in a target acquisition task, comparing a half-mirror system with an HMD system. Our main finding is that a half-mirror significantly reduces performance. Second, we present an HMD-based painting application, which introduces new interaction techniques that could not be implemented with a half-mirror display. We believe that our findings could inspire other researchers employing a half-mirror to reconsider their approach.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2014

Lessons learned: Evaluating visualizations for occluded objects in handheld augmented reality

Arindam Dey; Christian Sandor

Handheld devices like smartphones and tablets have emerged as one of the most promising platforms for Augmented Reality (AR). The increased usage of these portable handheld devices has enabled handheld AR applications to reach end-users; hence, it is timely and important to seriously consider the user experience of such applications. AR visualizations for occluded objects enable an observer to look through objects. Such visualizations have been evaluated predominantly using Head-Worn Displays (HWDs); handheld devices have rarely been used. However, unless we gain a better understanding of the perceptual and cognitive effects of handheld AR systems, effective interfaces for handheld devices cannot be designed. Similarly, human perception of AR systems in outdoor environments, which provide a higher degree of variation than indoor environments, has only been insufficiently explored. In this paper, we present insights acquired from five experiments we performed using handheld devices in outdoor locations. We provide design recommendations for handheld AR systems equipped with visualizations for occluded objects. Our key conclusions are the following:
(1) Use of visualizations for occluded objects improves the depth perception of occluded objects akin to non-occluded objects.
(2) To support different scenarios, handheld AR systems should provide multiple visualizations for occluded objects to complement each other.
(3) Visual clutter in AR visualizations reduces the visibility of occluded objects and deteriorates depth judgment; depth judgment can be improved by providing clear visibility of the occluded objects.
(4) Similar to virtual reality interfaces, both egocentric and exocentric distances are underestimated in handheld AR.
(5) Depth perception will improve if handheld AR systems can dynamically adapt their geometric field of view (GFOV) to match the display field of view (DFOV).
(6) Large handheld displays are hard to carry and use; however, they enable users to better grasp the depth of multiple graphical objects presented simultaneously.
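Conclusion (5) rests on the standard relation between a display's physical geometry and the visual angle it subtends at the eye. A minimal sketch, using hypothetical display dimensions rather than values from the paper:

```python
import math

def display_fov_deg(display_width_m, viewing_distance_m):
    """Display field of view (DFOV): the horizontal visual angle the
    screen subtends at the user's eye, from its physical width and
    the viewing distance."""
    return math.degrees(2 * math.atan(display_width_m / (2 * viewing_distance_m)))

# Example: a hypothetical 16 cm wide handheld screen held at 40 cm.
gfov = display_fov_deg(0.16, 0.40)  # roughly 22.6 degrees
```

Matching the rendering camera's geometric field of view (GFOV) to this DFOV makes on-screen object sizes consistent with the real scene behind the device, which is the condition under which the reported depth-perception improvement would apply.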


Handbook of Augmented Reality | 2011

Visualization Techniques for Augmented Reality

Denis Kalkofen; Christian Sandor; Sean White; Dieter Schmalstieg

Visualizations in real-world environments benefit from the visual interaction between real and virtual imagery. However, compared to traditional visualizations, a number of problems have to be solved in order to achieve effective visualizations within Augmented Reality (AR). This chapter provides an overview of techniques to handle the main obstacles in AR visualizations. It discusses spatial integration of virtual objects within real-world environments, techniques to rearrange objects within mixed environments, and visualizations which adapt to their environmental context.

Collaboration


Dive into Christian Sandor's collaborations.

Top Co-Authors

Hirokazu Kato
Nara Institute of Science and Technology

Takafumi Taketomi
Nara Institute of Science and Technology

Goshiro Yamamoto
Nara Institute of Science and Technology

Alexander Plopski
Nara Institute of Science and Technology

Ulrich Eck
University of South Australia

Arindam Dey
University of South Australia

Damien Constantine Rompapas
Nara Institute of Science and Technology

Bruce H. Thomas
University of South Australia

Marc Ericson C. Santos
Nara Institute of Science and Technology