
Publications


Featured research published by Julian Looser.


International Conference on Computer Graphics and Interactive Techniques | 2004

Augmenting the science centre and museum experience

Eric Woods; Mark Billinghurst; Julian Looser; Graham Aldridge; Deidre Brown; Barbara Garrie; Claudia Nelles

Recent advances in computer graphics and interactive techniques have increased the visual quality and flexibility of Augmented Reality (AR) applications. This, in turn, has increased the viability of applying AR to educational exhibits for use in science centres, museums, libraries and other education centres. This article outlines a selection of five projects developed at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) that have explored different techniques for applying AR to educational exhibits. These exhibits have received very positive feedback and appear to offer educational benefits in spatial, temporal and contextual conceptualisation, providing kinaesthetic, explorative and knowledge-challenging stimuli. The controls available to a user (turning a page, moving an AR marker, moving their head and moving a slider) provide sufficient freedom to create many interaction scenarios that serve educative outcomes. While the use of virtual media provides many advantages, creating new content is still quite difficult, requiring specialist software and skills. Usability observations are also shared.


International Conference on Computer Graphics and Interactive Techniques | 2004

Through the looking glass: the use of lenses as an interface tool for Augmented Reality interfaces

Julian Looser; Mark Billinghurst; Andy Cockburn

In this paper we present new interaction techniques for virtual environments. Based on an extension of 2D MagicLenses, we have developed techniques involving 3D lenses, information filtering and semantic zooming. These techniques provide users with a natural, tangible interface for selectively zooming in and out of specific areas of interest in an Augmented Reality scene. They use rapid and fluid animation to help users assimilate the relationship between views of detailed focus and global context. As well as supporting zooming, the technique is readily applied to semantic information filtering, in which only the pertinent information subtypes within a filtered region are shown. We describe our implementations, preliminary user feedback and future directions for this research.
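To make the lens idea concrete, below is a minimal geometric sketch, not the paper's implementation; the names (Lens, seen_through_lens, visible_objects) and the data layout are invented for illustration. It shows how a hand-held lens quad can act as a focus-and-context filter: objects whose line of sight from the eye passes through the lens rectangle are shown only if they match the lens's semantic filter, while everything outside the lens is left untouched.

```python
# Minimal sketch (assumptions, not the paper's code): which scene objects are
# seen "through" a hand-held lens quad from the viewer's eye, and semantic
# filtering applied only to those objects.
from dataclasses import dataclass
import numpy as np

@dataclass
class Lens:
    center: np.ndarray      # lens centre in world coordinates
    u: np.ndarray           # unit vector along the lens width
    v: np.ndarray           # unit vector along the lens height
    half_w: float
    half_h: float
    show_types: set         # semantic filter, e.g. {"wiring", "plumbing"}

def seen_through_lens(eye, obj_pos, lens):
    """Return True if the ray from the eye to the object passes through the lens rectangle."""
    n = np.cross(lens.u, lens.v)                  # lens plane normal
    d = obj_pos - eye
    denom = d @ n
    if abs(denom) < 1e-9:
        return False                              # ray parallel to the lens plane
    t = ((lens.center - eye) @ n) / denom
    if not (0.0 < t < 1.0):
        return False                              # lens is not between eye and object
    hit = eye + t * d - lens.center
    return abs(hit @ lens.u) <= lens.half_w and abs(hit @ lens.v) <= lens.half_h

def visible_objects(eye, objects, lens):
    """Focus+context: inside the lens show only the filtered subtypes, outside show everything."""
    out = []
    for pos, obj_type in objects:
        inside = seen_through_lens(eye, np.asarray(pos, float), lens)
        if not inside or obj_type in lens.show_types:
            out.append((pos, obj_type, inside))
    return out

if __name__ == "__main__":
    lens = Lens(center=np.array([0.0, 0.0, 1.0]),
                u=np.array([1.0, 0.0, 0.0]), v=np.array([0.0, 1.0, 0.0]),
                half_w=0.1, half_h=0.1, show_types={"wiring"})
    objects = [((0.0, 0.0, 2.0), "wiring"), ((0.0, 0.05, 2.0), "walls"),
               ((1.0, 0.0, 2.0), "walls")]
    print(visible_objects(np.zeros(3), objects, lens))
```

Semantic zooming could be layered on the same test by swapping an object's level of detail, rather than its visibility, when it falls inside the lens.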


International Conference on Computer Graphics and Interactive Techniques | 2005

Designing augmented reality interfaces

Mark Billinghurst; Raphael Grasset; Julian Looser

Most interactive computer graphics appear on a screen separate from the real world and the user's surroundings. However, this does not always have to be the case. In augmented reality (AR) interfaces, three-dimensional virtual images appear superimposed over real objects. AR applications typically use head-mounted or handheld displays to make computer graphics appear in the user's environment.


International Symposium on Mixed and Augmented Reality | 2008

ComposAR: An intuitive tool for authoring AR applications

Hartmut Seichter; Julian Looser; Mark Billinghurst

This paper introduces ComposAR, a tool to allow a wide audience to author AR and MR applications. It is unique in that it supports both visual programming and interpretive scripting, and an immediate mode for runtime testing. ComposAR is written in Python which means the user interface and runtime behavior can be easily customized and third-party modules can be incorporated into the authoring environment. We describe the design philosophy and the resulting user interface, lessons learned and directions for future research.
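The combination of interpretive scripting and an immediate mode for runtime testing can be pictured with a small hot-reloading loop. The sketch below is purely illustrative and is not ComposAR's actual API; the file name user_behaviour.py and the update(scene, dt) hook are assumptions.

```python
# Illustrative sketch of "interpretive scripting + immediate mode": the user's
# Python script is re-executed whenever it changes, so edited behaviour shows
# up at runtime without restarting the application.
import os, time

SCRIPT = "user_behaviour.py"          # hypothetical user script defining update(scene, dt)

def load_script(path):
    env = {}
    with open(path) as f:
        exec(compile(f.read(), path, "exec"), env)   # interpretive: no build step
    return env

def run(scene, fps=30):
    env, last_mtime = {}, 0.0
    while True:
        mtime = os.path.getmtime(SCRIPT)
        if mtime != last_mtime:                      # script edited -> reload immediately
            env, last_mtime = load_script(SCRIPT), mtime
        update = env.get("update")
        if callable(update):
            update(scene, 1.0 / fps)                 # user-defined per-frame behaviour
        time.sleep(1.0 / fps)
```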


International Conference on Computer Graphics and Interactive Techniques | 2007

An evaluation of virtual lenses for object selection in augmented reality

Julian Looser; Mark Billinghurst; Raphael Grasset; Andy Cockburn

This paper reports the results of an experiment to compare three different selection techniques in a tabletop tangible augmented reality interface. Object selection is an important task in all direct manipulation interfaces because it precedes most other manipulation and navigation actions. Previous work on tangible virtual lenses for visualisation has prompted the exploration of how selection techniques can be incorporated into these tools. In this paper a selection technique based on virtual lenses is compared with the traditional approaches of virtual hand and virtual pointer methods. The Lens technique is found to be faster, require less physical effort to use, and is preferred by participants over the other techniques. These results can be useful in guiding the development of future augmented reality interfaces.
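For readers unfamiliar with the baseline conditions, the hedged sketch below shows what the virtual hand and virtual pointer selection tests amount to in their simplest form (bounding spheres, invented function names, not the study's code). The Lens condition reuses the eye-through-quad test sketched earlier in this list, combined with a confirmation action to commit the selection.

```python
# Simplified selection predicates for the two baseline techniques compared in
# the experiment. Objects are approximated by bounding spheres.
import numpy as np

def virtual_hand_select(hand_pos, obj_pos, obj_radius, grab_radius=0.05):
    """Virtual hand: select when the hand is within reach of the object's surface."""
    dist = np.linalg.norm(np.asarray(obj_pos, float) - np.asarray(hand_pos, float))
    return dist <= obj_radius + grab_radius

def virtual_pointer_select(ray_origin, ray_dir, obj_pos, obj_radius):
    """Virtual pointer: select when a ray cast from the hand passes through the object."""
    ray_origin = np.asarray(ray_origin, float)
    obj_pos = np.asarray(obj_pos, float)
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    t = (obj_pos - ray_origin) @ d                 # distance along the ray to the closest point
    closest = ray_origin + max(t, 0.0) * d
    return np.linalg.norm(obj_pos - closest) <= obj_radius
```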


International Symposium on Mixed and Augmented Reality | 2007

A 3D Flexible and Tangible Magic Lens in Augmented Reality

Julian Looser; Raphael Grasset; Mark Billinghurst

The Magic Lens concept is a focus and context technique which facilitates the visualization of complex and dense data. In this paper, we propose a new type of 3D tangible Magic Lens in the form of a flexible sheet. We describe new interaction techniques associated with this tool, and demonstrate how it can be applied in different AR applications.


International Symposium on Mixed and Augmented Reality | 2006

Transitional interface: concept, issues and framework

Raphael Grasset; Julian Looser; Mark Billinghurst

Transitional Interfaces have emerged as a new way to interact and collaborate between different interactive spaces such as reality, virtual reality and augmented reality environments. In this paper we explore this concept further. We introduce a descriptive model of the concept, its collaborative aspect and how it can be generalized to describe natural and continuous transitions between contexts (e.g. across space, scale, viewpoints, and representation).
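One simple way to picture such a continuous transition is a single blend parameter that moves the viewer between an AR context (camera locked to the tracked head pose, live video background) and a VR context (free camera, no video). The sketch below is an assumption-laden illustration of that idea, not the descriptive model proposed in the paper; the smoothstep easing and the scale range are invented.

```python
# Hedged sketch of a continuous AR -> VR transition driven by one parameter.
import numpy as np

def lerp(a, b, t):
    return (1.0 - t) * np.asarray(a, float) + t * np.asarray(b, float)

def transition_view(tracked_cam_pos, vr_cam_pos, t):
    """t = 0 gives the pure AR context, t = 1 the pure VR context."""
    t = min(max(t, 0.0), 1.0)
    eased = t * t * (3.0 - 2.0 * t)              # smoothstep for a fluid transition
    cam_pos = lerp(tracked_cam_pos, vr_cam_pos, eased)
    video_opacity = 1.0 - eased                  # real-world video fades out
    world_scale = lerp(1.0, 10.0, eased)         # example scale transition (assumed range)
    return cam_pos, video_opacity, world_scale
```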


International Conference on 3D Web Technology | 2011

Cross-media agent platform

Radoslaw Niewiadomski; Mohammad Obaid; Elisabetta Bevacqua; Julian Looser; Le Quoc Anh; Catherine Pelachaud

We have developed a general-purpose, modular architecture for an embodied conversational agent (ECA). Our agent is able to communicate using verbal and nonverbal channels such as gaze, facial expressions, and gestures. Our architecture follows the SAIBA framework, which defines a three-step process and communication protocols. In our implementation of the SAIBA architecture we focus on flexibility and introduce different levels of customization. In particular, our system is able to display the same communicative intention with different embodiments, be it a virtual agent or a robot. Moreover, our framework is independent of the animation player technology: agent animations can be displayed across different media, such as a web browser, virtual reality, or augmented reality. In this paper we present our agent architecture and its main features.
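As background, the SAIBA framework referred to here separates intent planning (typically expressed in FML) from behaviour planning (typically expressed in BML) and behaviour realisation. The sketch below illustrates that separation with a swappable realiser per embodiment; the class names and the XML snippets are illustrative stand-ins, not the platform's actual interfaces.

```python
# Simplified SAIBA-style pipeline: intent -> behaviour -> realisation, with the
# realiser chosen per embodiment so the same intention drives an agent or a robot.
from abc import ABC, abstractmethod

class IntentPlanner:
    def plan(self, communicative_intention: str) -> str:
        # Stage 1: communicative intent, typically encoded as FML
        return f"<fml><performative type='{communicative_intention}'/></fml>"

class BehaviourPlanner:
    def plan(self, fml: str) -> str:
        # Stage 2: map the intent to multimodal behaviours, typically encoded as BML
        return "<bml><gaze target='user'/><gesture lexeme='beat'/><speech>Hello!</speech></bml>"

class Realizer(ABC):
    @abstractmethod
    def realize(self, bml: str) -> None: ...

class VirtualAgentRealizer(Realizer):
    def realize(self, bml: str) -> None:
        print("render animation in player:", bml)    # e.g. web, VR or AR player

class RobotRealizer(Realizer):
    def realize(self, bml: str) -> None:
        print("send motor commands for:", bml)       # same intention, different embodiment

def run(intention: str, realizer: Realizer) -> None:
    fml = IntentPlanner().plan(intention)
    bml = BehaviourPlanner().plan(fml)
    realizer.realize(bml)

if __name__ == "__main__":
    run("greet", VirtualAgentRealizer())
    run("greet", RobotRealizer())
```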


International Symposium on Mixed and Augmented Reality | 2009

Multitouch interaction for Tangible User Interfaces

Hartmut Seichter; Raphael Grasset; Julian Looser; Mark Billinghurst

We introduce a novel touch-based interaction technique for Tangible User Interfaces (TUIs) in Augmented Reality (AR) applications. The technique allows for direct access and manipulation of virtual content on a registered tracking target, is robust and lightweight, and can be applied in numerous tracking and interaction scenarios.
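A core ingredient of any such technique is expressing a detected touch in the coordinate frame of the tracked target. The following sketch assumes a pinhole camera model and uses an invented function name; it is not the paper's tracking pipeline. It unprojects a 2D touch point onto the target's plane using the camera intrinsics and the tracked pose.

```python
# Map a 2D touch in the camera image onto the plane of a tracked target, so
# virtual content registered to that target can be manipulated in its own frame.
import numpy as np

def touch_on_target(px, py, K, R, t):
    """px, py: touch in pixels; K: 3x3 intrinsics; R, t: target pose in the camera frame.
    Returns the touch point in target coordinates (on the target's z = 0 plane), or None."""
    ray_cam = np.linalg.inv(K) @ np.array([px, py, 1.0])   # viewing ray in the camera frame
    # The camera maps X_cam = R @ X_target + t, so X_target = R.T @ (X_cam - t).
    origin_t = -R.T @ t                                    # camera centre in the target frame
    dir_t = R.T @ ray_cam                                  # ray direction in the target frame
    if abs(dir_t[2]) < 1e-9:
        return None                                        # ray parallel to the target plane
    s = -origin_t[2] / dir_t[2]                            # intersect the plane z = 0
    if s <= 0:
        return None                                        # intersection behind the camera
    hit = origin_t + s * dir_t
    return hit[:2]                                         # 2D point on the target
```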


International Symposium on Ubiquitous Virtual Reality | 2008

A Sensor-Based Interaction for Ubiquitous Virtual Reality Systems

Dongpyo Hong; Julian Looser; Hartmut Seichter; Mark Billinghurst; Woontack Woo

In this paper, we propose a sensor-based interaction for ubiquitous virtual reality (U-VR) systems that allows users to interact implicitly or explicitly through sensors. Thanks to advances in sensor technology, we can utilize sensory data as a means of user interaction. To show the feasibility of the proposed method, we extend the ComposAR augmented reality (AR) authoring tool to add support for sensor-based interaction. In this way the user can write simple scripts to rapidly prototype interaction with virtual 3D content through a sensor. We believe that the proposed method provides natural user interactions for U-VR systems.
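In the spirit of the simple scripts the abstract mentions, here is a hypothetical prototype that maps a sensed tilt angle onto a virtual object's rotation each frame. The sensor source, the VirtualObject class and the smoothing constant are invented for illustration and are not the actual ComposAR extension API.

```python
# Hypothetical sensor-driven interaction script: a virtual object implicitly
# follows a sensed tilt angle, with simple smoothing.
import math, time

def read_tilt_sensor():
    """Stand-in for a real sensor driver; here we synthesise a tilt angle in degrees."""
    return 30.0 * math.sin(time.time())

class VirtualObject:
    def __init__(self, name):
        self.name, self.rotation_deg = name, 0.0

def update(obj, smoothing=0.2):
    """Implicit interaction: the object eases towards the sensed tilt."""
    target = read_tilt_sensor()
    obj.rotation_deg += smoothing * (target - obj.rotation_deg)

if __name__ == "__main__":
    teapot = VirtualObject("teapot")
    for _ in range(5):
        update(teapot)
        print(f"{teapot.name} rotation: {teapot.rotation_deg:.1f} deg")
        time.sleep(0.1)
```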

Collaboration


Dive into Julian Looser's collaborations.

Top Co-Authors

Mark Billinghurst (University of South Australia)
Hartmut Seichter (Graz University of Technology)
Andy Cockburn (University of Canterbury)
Marcia Lyons (Victoria University of Wellington)
Claudia Nelles (University of Canterbury)
Eric Woods (University of Canterbury)
Joshua Savage (University of Canterbury)