Maxime Cordeil
Monash University
Publications
Featured research published by Maxime Cordeil.
IEEE Transactions on Visualization and Computer Graphics | 2017
Maxime Cordeil; Tim Dwyer; Karsten Klein; Bireswar Laha; Kim Marriott; Bruce H. Thomas
High-quality immersive display technologies are becoming mainstream with the release of head-mounted displays (HMDs) such as the Oculus Rift. These devices potentially represent an affordable alternative to the more traditional, centralised CAVE-style immersive environments. One driver for the development of CAVE-style immersive environments has been collaborative sense-making. Despite this, there has been little research on the effectiveness of collaborative visualisation in CAVE-style facilities, especially with respect to abstract data visualisation tasks. Indeed, very few studies have focused on the use of these displays to explore and analyse abstract data such as networks, and there have been no formal user studies investigating collaborative visualisation of abstract data in immersive environments. In this paper we present the results of the first such study. It explores the relative merits of HMD and CAVE-style immersive environments for collaborative analysis of network connectivity, a common and important task involving abstract data. We find significant differences between the two conditions in task completion time and the physical movements of the participants within the space: participants using the HMD were faster, while the CAVE2 condition introduced an asymmetry in movement between collaborators. Otherwise, the affordances for collaborative data analysis offered by the low-cost HMD condition did not differ measurably from those of the CAVE2 in terms of accuracy or communication. These results are notable, given that the latest HMDs will soon be accessible (in terms of cost and potentially ubiquity) to a massive audience.
IEEE Transactions on Visualization and Computer Graphics | 2018
Benjamin Bach; Ronell Sicat; Johanna Beyer; Maxime Cordeil; Hanspeter Pfister
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in situ at the spatial position of the 3D hologram. The tablet can interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still the fastest and most precise in almost all cases.
user interface software and technology | 2017
Maxime Cordeil; Andrew Cunningham; Tim Dwyer; Bruce H. Thomas; Kim Marriott
We introduce ImAxes, an immersive system for exploring multivariate data using fluid, modeless interaction. The basic interface element is an embodied data axis. The user can manipulate these axes like physical objects in the immersive environment and combine them into sophisticated visualisations. The type of visualisation that appears depends on the proximity and relative orientation of the axes with respect to one another, which we describe with a formal grammar. This straightforward composability leads to a number of emergent visualisations and interactions, which we review and then demonstrate with a detailed multivariate data analysis use case.
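To make the composition idea concrete, the sketch below maps the poses of two embodied axes to a visualisation type based on their proximity and relative orientation. The thresholds, rule set, and names are illustrative assumptions, not the paper's actual grammar.

```python
# Illustrative sketch only: the paper describes a formal grammar over axis
# proximity and orientation; the thresholds and rules below are hypothetical.
import numpy as np

PROXIMITY_THRESHOLD = 0.3   # metres; assumed value, not from the paper
PARALLEL_ANGLE = 15         # degrees
PERPENDICULAR_ANGLE = 75    # degrees

def angle_between(d1, d2):
    """Unsigned angle in degrees between two axis direction vectors."""
    cos = abs(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

def compose(axis_a, axis_b):
    """Map two embodied axes to a visualisation type.

    Each axis is an (origin, direction) pair of 3D numpy arrays in world space.
    """
    (oa, da), (ob, db) = axis_a, axis_b
    if np.linalg.norm(oa - ob) > PROXIMITY_THRESHOLD:
        return "separate 1D plots (no composition)"   # axes too far apart
    angle = angle_between(da, db)
    if angle >= PERPENDICULAR_ANGLE:
        return "2D scatterplot"                       # roughly orthogonal axes
    if angle <= PARALLEL_ANGLE:
        return "parallel coordinates"                 # roughly parallel axes
    return "no composition"
```

For example, two axes held close together and roughly perpendicular would compose into a 2D scatterplot, while nearby parallel axes would form a parallel-coordinates plot.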
international conference on human-computer interaction | 2013
Cheryl Savery; Christophe Hurter; Rémi Lesbordes; Maxime Cordeil; T. C. Nicholas Graham
For expert interfaces, it is not obvious whether providing multiple modes of interaction, each tuned to different sub-tasks, leads to a better user experience than providing a more limited set. In this paper, we investigate this question in the context of air traffic control. We present and analyze an augmented flight strip board offering several forms of interaction, including touch, digital pen and physical paper objects. We explore the technical challenges of adding finger detection to such a flight strip board and evaluate how expert air traffic controllers interact with the resulting system. We find that users are able to quickly adapt to the wide range of offered modalities. Users were not overburdened by the choice of different modalities, and did not find it difficult to determine the appropriate modality to use for each interaction.
ieee pacific visualization symposium | 2017
Maxime Cordeil; Benjamin Bach; Yongchao Li; Elliott Wilson; Tim Dwyer
We introduce the concept of “spatio-data coordination” (SD coordination), which defines the mapping of user actions in physical space into the space of data in a visualisation. SD coordination is intended to lower the user's cognitive load when exploring complex multi-dimensional data such as biomedical data, multiple data attributes vs. time in a space-time-cube visualisation, or three-dimensional projections of three-or-higher-dimensional data sets. To inform the design of interaction devices that allow for SD coordination, we define a design space and demonstrate it with sketches and early prototypes of three exemplar devices for SD coordinated interaction.
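As a rough illustration of what such a physical-to-data mapping could look like, the sketch below linearly maps a calibrated physical tracking volume into a data volume. The class, its name, and the ranges are hypothetical; the paper defines a design space rather than an implementation.

```python
# Hypothetical sketch of spatio-data coordination: a linear mapping from a
# calibrated physical region (e.g., the tracked volume above a desk) into
# the data domain of the visualisation. All names and ranges are illustrative.
import numpy as np

class SDCoordinator:
    def __init__(self, phys_min, phys_max, data_min, data_max):
        self.phys_min = np.asarray(phys_min, dtype=float)
        self.phys_max = np.asarray(phys_max, dtype=float)
        self.data_min = np.asarray(data_min, dtype=float)
        self.data_max = np.asarray(data_max, dtype=float)

    def to_data(self, phys_pos):
        """Map a tracked physical position to data-space coordinates."""
        t = (np.asarray(phys_pos) - self.phys_min) / (self.phys_max - self.phys_min)
        t = np.clip(t, 0.0, 1.0)   # clamp to the calibrated region
        return self.data_min + t * (self.data_max - self.data_min)

# Example: the z axis of a space-time cube carries time (in days).
coord = SDCoordinator(phys_min=[0, 0, 0], phys_max=[0.5, 0.5, 0.5],
                      data_min=[0, 0, 0], data_max=[100, 100, 365])
print(coord.to_data([0.25, 0.1, 0.5]))   # -> [ 50.  20. 365.]
```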
electronic imaging | 2017
Björn Sommer; David G. Barnes; Sarah E. Boyd; Tom Chandler; Maxime Cordeil; Tobias Czauderna; Mathias Klapperstück; Karsten Klein; Toan Nguyen; Falk Schreiber
Immersive Analytics investigates how novel interaction and display technologies may support analytical reasoning and decision making. The Immersive Analytics initiative of Monash University started in early 2014. Over the last few years, a number of projects have been developed or extended in this context to meet the requirements of semi- or fully-immersive stereoscopic environments. Different technologies are used for this purpose: CAVE2™ (a 330-degree large-scale visualization environment which can be used for educative and scientific group presentations, analyses and discussions), stereoscopic Powerwalls (miniCAVEs, representing a segment of the CAVE2 and used for development and communication), Fishtanks, and/or HMDs (such as the Oculus, the VIVE, and mobile HMD approaches). Apart from CAVE2™, all systems are or will be employed at both Monash University and the University of Konstanz, especially to investigate collaborative Immersive Analytics. In addition, sensiLab extends most of the previous approaches by involving all senses: 3D visualization is combined with multi-sensory feedback, 3D printing, and robotics in a scientific-artistic-creative environment.
Proceedings of the 2016 ACM Companion on Interactive Surfaces and Spaces | 2016
Maxime Cordeil; Tim Dwyer; Christophe Hurter
In this paper we review the activities of Air Traffic Control and Management (ATC/M) and present scenarios that illustrate current and future challenges in this domain. In particular, we look at those challenges that can be tackled with the use of immersion. We introduce the concepts of an immersive Remote Tower and Collaborative Immersive Trajectory analysis. These make use of immersive technologies such as head-mounted displays (HMDs) or large, tiled displays to immerse users in their tasks, better supporting the management and analysis of the complex data produced in this domain.
Computer Graphics Forum | 2018
Yalong Yang; Bernhard Jenny; Tim Dwyer; Kim Marriott; Haohui Chen; Maxime Cordeil
This paper explores different ways to render world-wide geographic maps in virtual reality (VR). We compare: (a) a 3D exocentric globe, where the user's viewpoint is outside the globe; (b) a flat map (rendered to a plane in VR); (c) an egocentric 3D globe, with the viewpoint inside the globe; and (d) a curved map, created by projecting the map onto a section of a sphere which curves around the user. In all four visualisations the geographic centre can be smoothly adjusted with a standard handheld VR controller, and the user, through a head-tracked headset, can physically move around the visualisation. For distance comparison, the exocentric globe is more accurate than the egocentric globe and the flat map. For area comparison, more time is required with the exocentric and egocentric globes than with the flat and curved maps. For direction estimation, the exocentric globe is more accurate and faster than the other visual presentations. Our study participants had a weak preference for the exocentric globe. Generally, the curved map had benefits over the flat map. In almost all cases the egocentric globe was found to be the least effective visualisation. Overall, our results provide support for the use of exocentric globes for geographic visualisation in mixed reality.
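For a sense of how the presentations differ geometrically, the sketch below maps latitude/longitude to 3D positions for three of them: the exocentric globe, the flat map, and the curved map. The radii, scales, and curved-map angular span are assumed values, not those used in the study.

```python
# Minimal sketch of three of the four map presentations; parameters are
# illustrative assumptions, not the study's actual geometry.
import numpy as np

def to_globe(lat_deg, lon_deg, radius=1.0):
    """Place a geographic point on an exocentric globe of the given radius."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([radius * np.cos(lat) * np.cos(lon),
                     radius * np.sin(lat),
                     radius * np.cos(lat) * np.sin(lon)])

def to_flat_map(lat_deg, lon_deg, scale=1.0):
    """Place a point on a flat (equirectangular) map plane in VR."""
    return np.array([scale * lon_deg / 180.0, scale * lat_deg / 90.0, 0.0])

def to_curved_map(lat_deg, lon_deg, radius=2.0, span=np.pi / 2):
    """Wrap the flat map onto a section of a sphere curving around the user.

    `span` is the assumed angular half-extent of the section; at lat = lon = 0
    the point sits directly in front of the user at distance `radius`.
    """
    az = span * lon_deg / 180.0        # azimuth within the section
    el = span / 2 * lat_deg / 90.0     # elevation within the section
    return np.array([radius * np.cos(el) * np.sin(az),
                     radius * np.sin(el),
                     radius * np.cos(el) * np.cos(az)])
```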
international conference on human computer interaction | 2011
Maxime Cordeil; Christophe Hurter; Stéphane Conversy
Exploring a multidimensional dataset with visualization requires transitioning between points of view. In order to enable users to understand these transitions, visualizations can employ progressive 3D rotations. However, existing implementations of progressive 3D rotation exhibit perception problems in visualizations of cluttered scenes. In this paper, we present a first experiment showing that existing 3D rotation is effective for tracking marks, but that cluttered scenes hinder perception of rotation. We then propose to set the axis of rotation on the graphical marks of interest, and ran a second experiment showing that this focus-centered rotation improves perception of relative arrangement.
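The core of the proposed technique is rotating the scene about a pivot placed on the mark of interest instead of the scene origin. The sketch below shows the standard translate-rotate-translate construction for such a focus-centered rotation; it is an illustrative reimplementation, not the authors' code.

```python
# Focus-centered rotation: rotate scene points about an axis that passes
# through a chosen pivot (the graphical mark of interest). Illustrative only.
import numpy as np

def rotate_about_pivot(points, pivot, axis, angle_rad):
    """Rotate Nx3 `points` by `angle_rad` around `axis` through `pivot`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    x, y, z = axis
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    # Rodrigues rotation matrix for the given axis and angle.
    R = np.array([
        [c + x*x*(1-c),   x*y*(1-c) - z*s, x*z*(1-c) + y*s],
        [y*x*(1-c) + z*s, c + y*y*(1-c),   y*z*(1-c) - x*s],
        [z*x*(1-c) - y*s, z*y*(1-c) + x*s, c + z*z*(1-c)  ],
    ])
    # Translate the pivot to the origin, rotate, then translate back,
    # so the mark of interest stays fixed while the scene turns around it.
    return (np.asarray(points, dtype=float) - pivot) @ R.T + pivot
```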
IEEE Transactions on Visualization and Computer Graphics | 2018
Christophe Hurter; Nathalie Henry Riche; Steven M. Drucker; Maxime Cordeil; Richard Alligier; Romain Vuillemot
Visualizing 3D trajectories to extract insights about their similarities and spatial configuration is a critical task in several domains. Air traffic controllers, for example, deal with large quantities of aircraft routes to optimize safety in airspace, and neuroscientists attempt to understand neuronal pathways in the human brain by visualizing bundles of fibers from DTI images. Extracting insights from masses of 3D trajectories is challenging, as the multiple three-dimensional lines have complex geometries and may overlap, cross, or even merge with each other, making it impossible to follow individual trajectories in dense areas. As trajectories are inherently spatial and three-dimensional, we propose FiberClay: a system to display and interact with 3D trajectories in immersive environments. FiberClay renders a large quantity of trajectories in real time using GP-GPU techniques. FiberClay also introduces a new set of interactive techniques for composing complex queries in 3D space, leveraging immersive environment controllers and user position. These techniques enable an analyst to select and compare sets of trajectories with specific geometries and data properties. We conclude by discussing insights found using FiberClay with domain experts in air traffic control and neurology.
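One way to think about such 3D queries is as a membership test against a brushed volume. The sketch below selects every trajectory that passes through a spherical brush positioned by a controller; FiberClay evaluates queries like this on the GPU, whereas numpy stands in here, and all names are illustrative assumptions.

```python
# Hedged sketch of a 3D brush query over trajectories; a CPU stand-in for
# the GPU evaluation described in the paper. Names are illustrative.
import numpy as np

def trajectories_through_brush(trajectories, brush_center, brush_radius):
    """Return indices of trajectories with at least one point in the brush.

    `trajectories` is a list of Nx3 arrays of 3D sample points;
    the brush is a sphere given by its center and radius.
    """
    center = np.asarray(brush_center, dtype=float)
    r2 = brush_radius ** 2
    hits = []
    for i, traj in enumerate(trajectories):
        d2 = np.sum((np.asarray(traj) - center) ** 2, axis=1)  # squared dists
        if np.any(d2 <= r2):
            hits.append(i)
    return hits
```

Composing queries then amounts to combining such selections with set operations (union, intersection, difference) as the analyst sweeps successive brushes through the scene.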