
Publications


Featured research published by Yohan Baillot.


IEEE Computer Graphics and Applications | 2001

Recent advances in augmented reality

Ronald Azuma; Yohan Baillot; Reinhold Behringer; Steven Feiner; Simon J. Julier; Blair MacIntyre

In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer the reader to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.


Computers & Graphics | 2001

User Interface Management Techniques for Collaborative Mobile Augmented Reality

Tobias Höllerer; Steven Feiner; Drexel Hallaway; Blaine Bell; Marco Lanzagorta; Dennis G. Brown; Simon J. Julier; Yohan Baillot; Lawrence J. Rosenblum

Mobile Augmented Reality Systems (MARS) have the potential to revolutionize the way in which information is provided to users. Virtual information can be directly integrated with the real world surrounding the mobile user, who can interact with it to display related information, to pose and resolve queries, and to collaborate with other users. However, we believe that the benefits of MARS will only be achieved if the user interface (UI) is actively managed so as to maximize the relevance and minimize the confusion of the virtual material relative to the real world. This article addresses some of the steps involved in this process, focusing on the design and layout of the mobile user’s overlaid virtual environment. The augmented view of the user’s surroundings presents an interface to context-dependent operations, many of which are related to the objects in view—the augmented world is the user interface. We present three user interface design techniques that are intended to make this interface as obvious and clear to the user as possible: information filtering, UI component design, and view management. Information filtering helps select the most relevant information to present to the user. UI component design determines the format in which this information should be conveyed, based on the available display resources and tracking accuracy. For example, the absence of high accuracy position tracking would favor body- or screen-stabilized components over world-stabilized ones that would need to be exactly registered with the physical objects to which they refer. View management attempts to ensure that the virtual objects that are displayed visually are arranged appropriately with regard to their projections on the view plane. For example, the relationships among objects should be as unambiguous as possible, and physical or virtual objects should not obstruct the user’s view of more important physical or virtual objects in the scene. We illustrate these interface design techniques using our prototype collaborative, cross-site MARS environment, which is composed of mobile and non-mobile augmented reality and virtual reality systems.
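
The paper itself publishes no code; as a rough sketch of the UI component design rule described above, the following Python chooses a stabilization mode for an annotation from an estimated tracking accuracy. The class names and thresholds are illustrative assumptions, not values from the paper.

from dataclasses import dataclass
from enum import Enum, auto


class Stabilization(Enum):
    WORLD = auto()   # registered to a physical location; needs accurate tracking
    BODY = auto()    # fixed relative to the user's body
    SCREEN = auto()  # fixed in display coordinates; needs no position tracking


@dataclass
class TrackingState:
    position_error_m: float       # estimated position-tracking error (meters)
    orientation_error_deg: float  # estimated orientation-tracking error (degrees)


def choose_stabilization(tracking: TrackingState,
                         max_world_error_m: float = 0.1) -> Stabilization:
    """Pick a display mode, degrading gracefully as tracking accuracy drops
    (a rough reading of the UI component design rule; thresholds invented)."""
    if tracking.position_error_m <= max_world_error_m:
        return Stabilization.WORLD   # accurate enough to register with objects
    if tracking.orientation_error_deg <= 5.0:
        return Stabilization.BODY    # orientation still usable
    return Stabilization.SCREEN      # fall back to a heads-up style layout


if __name__ == "__main__":
    print(choose_stabilization(TrackingState(0.05, 1.0)))   # Stabilization.WORLD
    print(choose_stabilization(TrackingState(2.00, 2.0)))   # Stabilization.BODY
    print(choose_stabilization(TrackingState(2.00, 20.0)))  # Stabilization.SCREEN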


International Symposium on Wearable Computers | 2001

Authoring of physical models using mobile computers

Yohan Baillot; Dennis G. Brown; Simon J. Julier

Context-aware computers rely on user and physical models to describe the context of a user. In this paper, we focus on the problem of developing and maintaining a physical model of the environment using a mobile computer. We describe a set of tools for automatically creating and modifying three-dimensional contextual information. The tools can be utilized across multiple hardware platforms with different capabilities, operating in collaboration with one another. We demonstrate the capabilities of the tools using two mobile platforms. One of them, a mobile augmented reality system, is used to construct a geometric model of an indoor environment that is then visualized on the same platform.
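
As a loose illustration of what such authoring tools might maintain, here is a minimal Python sketch of a shared physical model with create and modify operations. All names and structures are hypothetical, since the paper does not publish its data model.

from dataclasses import dataclass, field


@dataclass
class Vec3:
    x: float
    y: float
    z: float


@dataclass
class ModeledObject:
    """A piece of three-dimensional contextual information, e.g. a wall or door."""
    name: str
    vertices: list  # polygon outline in world coordinates


@dataclass
class PhysicalModel:
    """A physical model of the environment that several mobile platforms
    could create and modify collaboratively."""
    objects: dict = field(default_factory=dict)

    def add(self, obj: ModeledObject) -> None:
        self.objects[obj.name] = obj

    def move(self, name: str, offset: Vec3) -> None:
        obj = self.objects[name]
        obj.vertices = [Vec3(v.x + offset.x, v.y + offset.y, v.z + offset.z)
                        for v in obj.vertices]


if __name__ == "__main__":
    model = PhysicalModel()
    model.add(ModeledObject("door-1", [Vec3(0, 0, 0), Vec3(1, 0, 0),
                                       Vec3(1, 0, 2), Vec3(0, 0, 2)]))
    model.move("door-1", Vec3(0.1, 0.0, 0.0))  # refine placement after a new measurement
    print(model.objects["door-1"].vertices[0])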


IEEE Virtual Reality Conference | 2006

A Perceptual Matching Technique for Depth Judgments in Optical, See-Through Augmented Reality

J.E. Swan; Mark A. Livingston; Harvey S. Smallman; D. B. Brown; Yohan Baillot; Joseph L. Gabbard; Deborah Hix

A fundamental problem in optical, see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need to both place graphics in arbitrary spatial relationships with real-world objects, and to know that users will perceive them in the same relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as x-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses techniques for measuring egocentric depth judgments in both virtual and augmented environments. It then describes a perceptual matching task and experimental design for measuring egocentric AR depth judgments at medium- and far-field distances of 5 to 45 meters. The experiment studied the effect of field of view, the x-ray vision condition, multiple distances, and practice on the task. The paper relates some of the findings to the well-known problem of depth underestimation in virtual environments, and further reports evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at 23 meters. It also gives a quantification of how much more difficult the x-ray vision condition makes the task, and then concludes with ideas for improving the experimental methodology.
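
For readers who want to run the same error analysis on their own data, a minimal sketch: the signed judgment error as a percentage of true distance, where negative values mark the underestimation the paper discusses and positive values the overestimation it reports beyond roughly 23 meters. The trial data below is invented for illustration, not taken from the experiment.

from statistics import mean


def signed_error_percent(judged_m: float, actual_m: float) -> float:
    """Signed depth-judgment error as a percentage of the true distance:
    negative means underestimation, positive means overestimation."""
    return 100.0 * (judged_m - actual_m) / actual_m


# Hypothetical matched-distance trials (actual_m, judged_m), shaped to
# mirror the bias switch the paper reports, not its measurements.
trials = [(5.0, 4.6), (15.0, 14.2), (25.0, 25.9), (45.0, 48.1)]

for actual, judged in trials:
    print(f"{actual:5.1f} m -> {signed_error_percent(judged, actual):+6.1f} %")

print("mean bias:",
      round(mean(signed_error_percent(j, a) for a, j in trials), 2), "%")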


International Symposium on Mixed and Augmented Reality | 2003

A tracker alignment framework for augmented reality

Yohan Baillot; Simon J. Julier; Dennis G. Brown; Mark A. Livingston

To achieve accurate registration, the transformations that locate the tracking system components with respect to the environment must be known. These transformations relate the base of the tracking system to the virtual world and the tracking system's sensor to the graphics display. In this paper we present a unified, general calibration method for calculating these transformations. A user is asked to align the display with objects in the real world. Using this method, the sensor-to-display and tracker-base-to-world transformations can be determined with as few as three measurements.
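
To make the transformation chain concrete, here is a small NumPy sketch of the registration pipeline this calibration serves. The matrix values are placeholders; the actual solver, which recovers the two unknown transforms from the user's alignments, is described in the paper and not reproduced here.

import numpy as np


def rigid(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


def world_to_display(p_world, T_world_to_base, T_base_to_sensor, T_sensor_to_display):
    """Registration chain: a world point is mapped through the tracker base,
    the tracked sensor, and finally the display. The calibration solves for
    T_world_to_base and T_sensor_to_display (the two unknowns);
    T_base_to_sensor comes from the tracker at each instant."""
    p = np.append(np.asarray(p_world, float), 1.0)
    return (T_sensor_to_display @ T_base_to_sensor @ T_world_to_base @ p)[:3]


# Illustrative identity/translation values only; real values would come from
# the alignment measurements described in the paper.
I3 = np.eye(3)
print(world_to_display(
    [1.0, 0.0, 0.0],
    rigid(I3, np.array([0.0, 0.0, 0.0])),    # world -> tracker base
    rigid(I3, np.array([0.0, 1.0, 0.0])),    # tracker base -> sensor (from tracker)
    rigid(I3, np.array([0.0, 0.0, -0.5]))))  # sensor -> display (calibrated)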


Handbook of Augmented Reality | 2011

Military Applications of Augmented Reality

Mark A. Livingston; Lawrence J. Rosenblum; Dennis G. Brown; Gregory S. Schmidt; Simon J. Julier; Yohan Baillot; J. Edward Swan; Zhuming Ai; Paul Maassel

This chapter reviews military benefits and requirements that have led to a series of research efforts in augmented reality (AR) and related systems for the military over the past few decades, beginning with the earliest specific application of AR. While by no means a complete list, we note some themes from the various projects and discuss ongoing research at the Naval Research Laboratory. Two of the most important thrusts within these applications are the user interface and human factors. We summarize our research and place it in the context of the field.


Hawaii International Conference on System Sciences | 2004

A cost-effective usability evaluation progression for novel interactive systems

Deborah Hix; Joseph L. Gabbard; J. E. Swan; Mark A. Livingston; Tobias Höllerer; Simon J. Julier; Yohan Baillot; D. B. Brown

This paper reports on user interface design and evaluation for a mobile, outdoor, augmented reality (AR) application. This novel system, called the Battlefield Augmented Reality System (BARS), supports information presentation and entry for situation awareness in an urban warfighting setting. To our knowledge, this is the first time usability engineering has been systematically and extensively applied to the development of a real-world AR system. Our BARS team has applied a cost-effective progression of usability engineering activities from the very beginning of BARS development. We discuss how we first applied cycles of structured expert evaluations to BARS user interface development, employing user interface mockups representing occluded (non-visible) objects. Then we discuss how the results of these evaluations informed our subsequent user-based statistical and formative evaluations, and present these evaluations and their outcomes. Finally, we discuss how and why this sequence of evaluation types is cost-effective.


IEEE Virtual Reality Conference | 2003

An event-based data distribution mechanism for collaborative mobile augmented reality and virtual environments

Dennis G. Brown; Simon J. Julier; Yohan Baillot; Mark A. Livingston

The full power of mobile augmented and virtual reality systems is realized when these systems are connected to one another, to immersive virtual environments, and to remote information servers. Connections are usually made through wireless networks. However, wireless networks cannot guarantee connectivity, and their bandwidth can be highly constrained. We present a robust event-based data distribution mechanism for mobile augmented reality and virtual environments. It is based on replicated databases, pluggable networking protocols, and communication channels. We demonstrate the mechanism in the Battlefield Augmented Reality System (BARS), a situation awareness system composed of several mobile augmented reality systems, immersive and desktop-based virtual reality systems, a 2D map-based multimodal system, handheld PCs, and other sources of information.
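
A toy sketch of such an event-based mechanism, assuming in-process delivery in place of a real pluggable transport: each host holds a replica that converges by applying the same stream of events, so a host that drops off a constrained wireless link can catch up by replaying what it missed. Names and structures are illustrative, not BARS source code.

from typing import Callable, Dict, List


class Channel:
    """A named communication channel carrying events for one kind of data.
    Transports (e.g. unreliable multicast vs. reliable unicast) would plug in
    underneath; here, delivery is an in-process callback list."""

    def __init__(self, name: str):
        self.name = name
        self.subscribers: List[Callable[[dict], None]] = []

    def publish(self, event: dict) -> None:
        for deliver in self.subscribers:
            deliver(event)


class ReplicatedDatabase:
    """Each host keeps a replica and converges by applying the same events."""

    def __init__(self):
        self.state: Dict[str, dict] = {}

    def apply(self, event: dict) -> None:
        self.state[event["object_id"]] = event["attributes"]


if __name__ == "__main__":
    channel = Channel("situation-awareness")
    replicas = [ReplicatedDatabase() for _ in range(3)]  # e.g. mobile AR, VR, map UI
    for r in replicas:
        channel.subscribers.append(r.apply)

    channel.publish({"object_id": "vehicle-7",
                     "attributes": {"lat": 38.87, "lon": -77.01}})
    assert all(r.state == replicas[0].state for r in replicas)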


3D Synthetic Environment Reconstruction | 2001

Urban terrain modeling for augmented reality applications

Simon J. Julier; Yohan Baillot; Marco Lanzagorta; Lawrence J. Rosenblum; Dennis G. Brown

Augmented reality (AR) systems have arguably some of the most stringent requirements of any kind of three-dimensional synthetic graphics system. AR systems register computer graphics (such as annotations, diagrams, and models) directly with objects in the real world. Most AR applications require the graphics to be precisely aligned with the environment. For example, if the AR system shows wireframe versions of actual buildings, they cannot appear noticeably offset from the real buildings. To this end, an accurate tracking system and a detailed model of the environment are required. Constructing these models is an extremely challenging task, as even a small error in the model (on the order of tens of centimeters or larger) can lead to significant misregistration, undermining the effectiveness of an AR system. Also, models of urban structures contain a very large number of different objects (buildings, doors, and windows, to name a few). This chapter discusses the problem of developing a detailed synthetic model of an urban environment for a mobile augmented reality system. We review, describe, and compare the effectiveness of a number of different modeling paradigms against traditional manual techniques. These techniques include photogrammetry methods (using automatic, semi-automatic, and manual segmentation) and three-dimensional scanning methods (such as aircraft-mounted LIDAR), as well as conventional manual techniques.
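
A back-of-the-envelope Python sketch of why tens-of-centimeters model errors matter: it computes the angular and approximate on-screen misregistration a positional model error produces at a given viewing distance, assuming a simple pinhole display model whose parameters are made up for illustration, not taken from the chapter.

import math


def angular_error_deg(model_error_m: float, distance_m: float) -> float:
    """Angle subtended at the viewer by a positional error in the model."""
    return math.degrees(math.atan2(model_error_m, distance_m))


def pixel_error(model_error_m: float, distance_m: float,
                fov_deg: float = 40.0, width_px: int = 1280) -> float:
    """Approximate on-screen misregistration under a simple pinhole display
    model (field of view and resolution are illustrative assumptions)."""
    return angular_error_deg(model_error_m, distance_m) * width_px / fov_deg


# A 0.2 m model error on a building 20 m away:
print(f"{angular_error_deg(0.2, 20.0):.2f} deg, "
      f"~{pixel_error(0.2, 20.0):.0f} px on screen")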


IEEE Virtual Reality Conference | 2003

Evaluation of the ShapeTape tracker for wearable, mobile interaction

Yohan Baillot; Joshua J. Eliason; Greg S. Schmidt; J. E. Swan; Dennis G. Brown; Simon J. Julier; Mark A. Livingston; Lawrence J. Rosenblum

We describe two engineering experiments designed to evaluate the effectiveness of Measurand's ShapeTape for wearable, mobile interaction. Our initial results suggest that the ShapeTape is not appropriate for interactions that require a high degree of accuracy. However, ShapeTape is capable of reproducing the qualitative motion the user is performing and thus could be used to support 3D gesture-based interaction.
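
The paper does not describe reconstruction code, but a minimal planar sketch of the tape's sensing principle helps explain why small per-segment errors accumulate along its length: the shape is recovered by chaining fixed-length segments and integrating the bend measured at each sensing point. The real device also senses twist for full 3D shape; the values below are invented for illustration.

import math


def reconstruct_2d(bend_angles_rad, segment_len_m):
    """Reconstruct a planar tape shape by chaining fixed-length segments,
    accumulating the bend measured at each sensing point."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for bend in bend_angles_rad:
        heading += bend
        x += segment_len_m * math.cos(heading)
        y += segment_len_m * math.sin(heading)
        points.append((x, y))
    return points


# A gentle, uniform curve over eight 6 cm segments (illustrative values):
for px, py in reconstruct_2d([math.radians(10)] * 8, 0.06):
    print(f"({px:+.3f}, {py:+.3f})")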

Collaboration


Dive into Yohan Baillot's collaborations.

Top Co-Authors

Simon J. Julier
University College London

Dennis G. Brown
United States Naval Research Laboratory

Mark A. Livingston
United States Naval Research Laboratory

Lawrence J. Rosenblum
United States Naval Research Laboratory

J. E. Swan
United States Naval Research Laboratory

J. Edward Swan
Mississippi State University

Greg S. Schmidt
United States Naval Research Laboratory