Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Barrett Ens is active.

Publications


Featured research published by Barrett Ens.


Human Factors in Computing Systems | 2014

Exploring the use of hand-to-face input for interacting with head-worn displays

Marcos Serrano; Barrett Ens; Pourang Irani

We propose the use of Hand-to-Face input, a method to interact with head-worn displays (HWDs) that involves contact with the face. We explore Hand-to-Face interaction to find suitable techniques for common mobile tasks. We evaluate this form of interaction with document navigation tasks and examine its social acceptability. In a first study, users identify the cheek and forehead as predominant areas for interaction and agree on gestures for tasks involving continuous input, such as document navigation. These results guide the design of several Hand-to-Face navigation techniques and reveal that gestures performed on the cheek are more efficient and less tiring than interactions directly on the HWD. Initial results on the social acceptability of Hand-to-Face input allow us to further refine our design choices and reveal unforeseen findings: some gestures are considered culturally inappropriate, and gender plays a role in the selection of specific Hand-to-Face interactions. From our overall results, we provide a set of guidelines for developing effective Hand-to-Face interaction techniques.
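
The abstract does not describe an implementation, but the core mapping, turning a continuous cheek gesture into document navigation, is easy to picture in code. Below is a minimal sketch under assumed conventions; the sensor format, gain value, and function names are illustrative, not the authors' system.

```python
# Illustrative sketch only: maps successive cheek-contact positions from a
# hypothetical face-area sensor to vertical document-scroll deltas. The gain
# and coordinate convention are assumptions, not the paper's implementation.

def cheek_swipe_to_scroll(samples, gain=3.0):
    """samples: time-ordered (x, y) contact points in sensor units.
    Returns per-step vertical scroll deltas in document units."""
    deltas = []
    for (_, y0), (_, y1) in zip(samples, samples[1:]):
        deltas.append(gain * (y1 - y0))  # vertical swipe drives scrolling
    return deltas

if __name__ == "__main__":
    trace = [(0.0, 0.0), (0.1, 0.5), (0.2, 1.2), (0.2, 2.0)]
    print(cheek_swipe_to_scroll(trace))  # roughly [1.5, 2.1, 2.4]
```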


Human Factors in Computing Systems | 2014

The personal cockpit: a spatial interface for effective task switching on head-worn displays

Barrett Ens; Rory Finnegan; Pourang Irani

As wearable computing goes mainstream, we must improve the state of interface design to keep users productive with natural-feeling interactions. We present the Personal Cockpit, a solution for mobile multitasking on head-worn displays. We appropriate empty space around the user to situate virtual windows for use with direct input. Through a design-space exploration, we run a series of user studies to fine-tune our layout of the Personal Cockpit. In our final evaluation, we compare our design against two baseline interfaces for switching between everyday mobile applications. This comparison highlights the deficiencies of current view-fixed displays, as the Personal Cockpit provides a 40% improvement in application switching time. We demonstrate several useful implementations and discuss important problems for future implementation of our design on current and near-future wearable devices.
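
To make the idea of appropriating the space around the user concrete, here is a small geometry sketch of one plausible body-centric layout: windows spaced along an arc at arm's reach. The radius, angular span, and coordinate convention are assumptions, not the Personal Cockpit's actual parameters.

```python
# Minimal geometry sketch (not the authors' code): place n virtual windows on
# a horizontal arc in front of the user, one way to realise a body-centric
# "cockpit" layout. Radius and angular span are assumed values.
import math

def cockpit_layout(n_windows, radius=0.6, span_deg=120.0, height=0.0):
    """Return (x, y, z) positions for n windows on an arc around the user.
    The user sits at the origin looking down the +z axis; radius in metres."""
    positions = []
    for i in range(n_windows):
        # Spread windows evenly across the arc, centred on the view direction.
        t = 0.5 if n_windows == 1 else i / (n_windows - 1)
        angle = math.radians((t - 0.5) * span_deg)
        positions.append((radius * math.sin(angle), height,
                          radius * math.cos(angle)))
    return positions

if __name__ == "__main__":
    for pos in cockpit_layout(3):
        print("window at %.2f, %.2f, %.2f" % pos)
```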


Human Factors in Computing Systems | 2012

See me, see you: a lightweight method for discriminating user touches on tabletop displays

Hong Zhang; Xing-Dong Yang; Barrett Ens; Hai-Ning Liang; Pierre Boulanger; Pourang Irani

Tabletop systems provide a versatile space for collaboration, yet, in many cases, are limited by the inability to differentiate the interactions of simultaneous users. We present See Me, See You, a lightweight approach for discriminating user touches on a vision-based tabletop. We contribute a valuable characterization of finger orientation distributions of tabletop users. We exploit this biometric trait with a machine learning approach to allow the system to predict the correct position of users as they touch the surface. We achieve accuracies as high as 98% in simple situations and above 92% in more challenging conditions, such as two-handed tasks. We show high acceptance from users, who can self-correct prediction errors without significant costs. See Me, See You is a viable solution for providing simple yet effective support for multi-user application features on tabletops.
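
The abstract's key observation, that finger orientation at the moment of touch tends to point back toward the toucher's seat, can be illustrated with a toy classifier. The nearest-mean-angle rule and the seating angles below are illustrative stand-ins for the paper's machine learning approach.

```python
# Sketch of the core idea as read from the abstract: assign each touch to the
# user whose typical finger orientation is angularly closest. The rule and
# the seating angles are illustrative assumptions, not the paper's model.
import math

# Hypothetical mean finger orientations (degrees) for four seated users,
# measured on the table surface with 0 = pointing "north".
USER_MEAN_ORIENTATION = {"user_south": 0.0, "user_east": 270.0,
                         "user_north": 180.0, "user_west": 90.0}

def angular_distance(a, b):
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def predict_user(touch_orientation_deg):
    """Assign a touch to the user with the closest typical orientation."""
    return min(USER_MEAN_ORIENTATION,
               key=lambda u: angular_distance(
                   touch_orientation_deg, USER_MEAN_ORIENTATION[u]))

if __name__ == "__main__":
    print(predict_user(15.0))   # user_south
    print(predict_user(265.0))  # user_east
```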


Symposium on Spatial User Interaction | 2014

Ethereal planes: a design framework for 2D information space in 3D mixed reality environments

Barrett Ens; Juan David Hincapié-Ramos; Pourang Irani

Information spaces are virtual workspaces that help us manage information by mapping it to the physical environment. This widely influential concept has been interpreted in a variety of forms, often in conjunction with mixed reality. We present Ethereal Planes, a design framework that ties together many existing variations of 2D information spaces. Ethereal Planes is aimed at assisting the design of user interfaces for next-generation technologies such as head-worn displays. From an extensive literature review, we encapsulated the common attributes of existing novel designs in seven design dimensions. Mapping the reviewed designs to the framework dimensions reveals a set of common usage patterns. We discuss how the Ethereal Planes framework can be methodically applied to help inspire new designs. We provide a concrete example of the framework's utility during the design of the Personal Cockpit, a window management system for head-worn displays.


User Interface Software and Technology | 2015

Candid Interaction: Revealing Hidden Mobile and Wearable Computing Activities

Barrett Ens; Tovi Grossman; Fraser Anderson; Justin Matejka; George W. Fitzmaurice

The growth of mobile and wearable technologies has often made it difficult to understand what people in our surroundings are doing with their technology. In this paper, we introduce the concept of candid interaction: techniques for providing awareness about our mobile and wearable device usage to others in the vicinity. We motivate and ground this exploration through a survey on current attitudes toward device usage during interpersonal encounters. We then explore a design space for candid interaction through seven prototypes that leverage a wide range of technological enhancements, such as Augmented Reality, shape memory muscle wire, and wearable projection. Preliminary user feedback on our prototypes highlights the trade-offs between the benefits of sharing device activity and the need to protect user privacy.


Symposium on Spatial User Interaction | 2016

Combining Ring Input with Hand Tracking for Precise, Natural Interaction with Spatial Analytic Interfaces

Barrett Ens; Ahmad Byagowi; Teng Han; Juan David Hincapié-Ramos; Pourang Irani

Current wearable interfaces are designed to support short-duration tasks known as micro-interactions. To support productive interfaces for everyday analytic tasks, designers can leverage natural input methods such as direct manipulation and pointing. Such natural methods are now available in virtual, mobile environments thanks to miniature depth cameras mounted on head-worn displays (HWDs). However, these techniques have drawbacks, such as fatigue and limited precision. To overcome these limitations, we explore combined input: hand tracking data from a head-mounted depth camera, and input from a small ring device. We demonstrate how a variety of input techniques can be implemented using this novel combination of devices. We harness these techniques for use with Spatial Analytic Interfaces: multi-application, spatial UIs for in-situ, analytic taskwork on wearable devices. This research demonstrates how combined input from multiple wearable devices holds promise for high-precision, low-fatigue interaction techniques to support Spatial Analytic Interfaces on HWDs.
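
One way to picture the combined-input idea: absolute but coarse hand-tracking samples place a cursor, and small relative deltas from the ring refine it at reduced gain. Everything below (the class, method names, and gain) is a hypothetical sketch, not the authors' system.

```python
# Sketch (assumptions throughout) of fusing the two input streams described
# in the abstract: coarse hand-tracking positions set the cursor, and precise
# ring deltas nudge it at a reduced gain. Device APIs are hypothetical.

class CombinedCursor:
    """Fuses coarse hand-tracking positions with precise ring deltas."""

    def __init__(self, ring_gain=0.1):
        self.ring_gain = ring_gain  # ring input is scaled down for precision
        self.x, self.y = 0.0, 0.0

    def on_hand_sample(self, hand_x, hand_y):
        # Hand tracking sets the absolute (coarse) cursor position.
        self.x, self.y = hand_x, hand_y

    def on_ring_delta(self, dx, dy):
        # Ring input nudges the cursor by small, precise amounts.
        self.x += self.ring_gain * dx
        self.y += self.ring_gain * dy

if __name__ == "__main__":
    cursor = CombinedCursor()
    cursor.on_hand_sample(100.0, 50.0)  # coarse placement near the target
    cursor.on_ring_delta(8.0, -4.0)     # fine correction onto the target
    print(cursor.x, cursor.y)           # 100.8 49.6
```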


Human-Computer Interaction with Mobile Devices and Services | 2011

Characterizing user performance with assisted direct off-screen pointing

Barrett Ens; David Ahlström; Andy Cockburn; Pourang Irani

The limited viewport size of mobile devices requires that users continuously acquire information that lies beyond the edge of the screen. Recent hardware solutions are capable of continually tracking a user's finger around the device. This has created new opportunities for interactive solutions, such as direct off-screen pointing: the ability to directly point at objects that are outside the viewport. We empirically characterize user performance with direct off-screen pointing when assisted by target cues. We predict time and accuracy outcomes for direct off-screen pointing with existing and derived models. We validate the models with good results (R² ≥ 0.9) and reveal that direct off-screen pointing takes up to four times longer than pointing at visible targets, depending on the desired accuracy tradeoff. Pointing accuracy degrades logarithmically with target distance. We discuss design implications in the context of several real-world applications.
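
The abstract does not give the model forms, but pointing models of this kind are typically Fitts'-law style, with time growing logarithmically in the distance-to-width ratio, consistent with the logarithmic accuracy trend reported. A sketch with made-up coefficients:

```python
# Illustrative Fitts'-law style prediction, T = a + b * log2(D/W + 1).
# The coefficients below are placeholders, not the paper's fitted values.
import math

def predicted_time(distance, width, a=0.3, b=0.25):
    """Predicted pointing time in seconds for a target of a given width
    at a given distance (same units for both)."""
    index_of_difficulty = math.log2(distance / width + 1.0)
    return a + b * index_of_difficulty

if __name__ == "__main__":
    # Off-screen targets sit farther away, so difficulty (and time) grows
    # logarithmically with distance, matching the reported trend.
    for d in (50, 100, 200, 400):
        print(d, round(predicted_time(d, width=20), 2))
```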


Symposium on Spatial User Interaction | 2015

Spatial Constancy of Surface-Embedded Layouts across Multiple Environments

Barrett Ens; Eyal Ofek; Neil D. B. Bruce; Pourang Irani

We introduce a layout manager that exploits the robust sensing capabilities of next-generation head-worn displays by embedding virtual application windows in the user's surroundings. With the aim of allowing users to find applications quickly, our approach leverages spatial memory of a known body-centric configuration. The layout manager balances multiple constraints to keep layouts consistent across environments while observing geometric and visual features specific to each locale. We compare various constraint weighting schemas and discuss outcomes of this approach applied to models of two test environments.
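
A layout manager that "balances multiple constraints" suggests a weighted cost minimized over candidate placements. The sketch below shows one plausible shape for such a score; the constraint terms, surface features, and weights are assumptions, not the paper's schema.

```python
# Sketch of a constraint-weighted placement score in the spirit of the
# abstract: score each candidate surface by distance from the window's
# remembered body-centric position plus surface-specific penalties.
# Constraint names, features, and weights are illustrative assumptions.
import math

def placement_cost(candidate, remembered, weights):
    """Lower is better. candidate/remembered: dicts with 'pos' (x, y, z)
    plus per-surface features; weights: dict of constraint weights."""
    dx = [c - r for c, r in zip(candidate["pos"], remembered["pos"])]
    spatial = math.sqrt(sum(d * d for d in dx))      # keep layouts consistent
    visual = candidate.get("clutter", 0.0)           # avoid busy surfaces
    geometric = candidate.get("tilt_deg", 0.0) / 90  # prefer upright surfaces
    return (weights["spatial"] * spatial +
            weights["visual"] * visual +
            weights["geometric"] * geometric)

def best_surface(candidates, remembered, weights):
    return min(candidates, key=lambda c: placement_cost(c, remembered, weights))

if __name__ == "__main__":
    remembered = {"pos": (0.5, 1.2, 1.0)}
    candidates = [
        {"name": "wall", "pos": (0.6, 1.3, 1.1), "clutter": 0.2, "tilt_deg": 0},
        {"name": "desk", "pos": (0.4, 0.8, 0.6), "clutter": 0.1, "tilt_deg": 90},
    ]
    weights = {"spatial": 1.0, "visual": 0.5, "geometric": 0.5}
    print(best_surface(candidates, remembered, weights)["name"])  # wall
```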


International Conference on Computer Graphics and Interactive Techniques | 2017

Exploring enhancements for remote mixed reality collaboration

Thammathip Piumsomboon; Arindam Day; Barrett Ens; Youngho Lee; Gun A. Lee; Mark Billinghurst

In this paper, we explore techniques for enhancing remote Mixed Reality (MR) collaboration in terms of communication and interaction. We created CoVAR, an MR system for remote collaboration between an Augmented Reality (AR) user and an Augmented Virtuality (AV) user. Awareness cues and an AV-Snap-to-AR interface were proposed to enhance communication; collaborative natural interaction and AV-User-Body-Scaling were implemented to enhance interaction. We conducted an exploratory study examining the awareness cues and collaborative gaze, and the results showed the benefits of the proposed techniques for enhancing communication and interaction.
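
As one concrete reading of "awareness cues": if the partner's gaze target is outside the local user's field of view, display a directional indicator toward it. The sketch below is an illustrative reconstruction, not CoVAR's code.

```python
# Illustrative gaze-awareness cue (assumptions throughout, not CoVAR's
# implementation): report whether a remote partner's gaze target is visible
# to the local user, and if not, which way to turn to find it.
import math

def gaze_cue(user_forward, target_dir, fov_deg=90.0):
    """Both arguments are unit 2D direction vectors (top-down view).
    Returns 'visible' or a hint with the angle to turn toward the target."""
    dot = user_forward[0] * target_dir[0] + user_forward[1] * target_dir[1]
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle <= fov_deg / 2:
        return "visible"
    # The cross product's sign tells us whether to turn left or right.
    cross = user_forward[0] * target_dir[1] - user_forward[1] * target_dir[0]
    return ("turn left %.0f deg" if cross > 0 else "turn right %.0f deg") % angle

if __name__ == "__main__":
    print(gaze_cue((0.0, 1.0), (0.0, 1.0)))   # visible
    print(gaze_cue((0.0, 1.0), (-1.0, 0.0)))  # turn left 90 deg
```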


Software Visualization | 2014

ChronoTwigger: A Visual Analytics Tool for Understanding Source and Test Co-evolution

Barrett Ens; Daniel J. Rea; Roiy Shpaner; Hadi Hemmati; James Everett Young; Pourang Irani

Applying visual analytics to large software systems can help users comprehend the wealth of information produced by source repository mining. One concept of interest is the co-evolution of test code with source code, or how source and test files develop together over time. For example, understanding how the testing pace compares to the development pace can help test managers gauge the effectiveness of their testing strategy. A useful concept that has yet to be effectively incorporated into a co-evolution visualization is co-change. Co-change is a quantity that identifies correlations between software artifacts, and we propose using this to organize our visualization in order to enrich the analysis of co-evolution. In this paper, we create, implement, and study an interactive visual analytics tool that displays source and test file changes over time (co-evolution) while grouping files that change together (co-change). Our new technique improves the analyst's ability to infer information about the software development process and its relationship to testing. We discuss the development of our system and the results of a small pilot study with three participants. Our findings show that our visualization can lead to inferences that are not easily made using other techniques alone.
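
Co-change itself is simple to compute from a commit history: count how often pairs of files are modified in the same commit. A minimal sketch, with an assumed input format:

```python
# Sketch of the co-change quantity described in the abstract: count how often
# pairs of files appear together in the same commit. The input format and the
# raw-count measure are simplifying assumptions for illustration.
from collections import Counter
from itertools import combinations

def co_change_counts(commits):
    """commits: iterable of sets of file paths changed together.
    Returns a Counter mapping unordered file pairs to co-change counts."""
    counts = Counter()
    for files in commits:
        for pair in combinations(sorted(files), 2):
            counts[pair] += 1
    return counts

if __name__ == "__main__":
    history = [
        {"src/parser.py", "tests/test_parser.py"},
        {"src/parser.py", "tests/test_parser.py", "src/lexer.py"},
        {"src/lexer.py"},
    ]
    for pair, n in co_change_counts(history).most_common():
        print(n, pair)
```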

Collaboration


Dive into Barrett Ens's collaborations.

Top Co-Authors

Mark Billinghurst
University of South Australia

Gun A. Lee
University of South Australia

Hui Shyong Yeo
University of St Andrews