Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Colin Swindells is active.

Publication


Featured research published by Colin Swindells.


IEEE Transactions on Visualization and Computer Graphics | 2010

eSeeTrack—Visualizing Sequential Fixation Patterns

Hoi Ying Tsang; Melanie Tory; Colin Swindells

We introduce eSeeTrack, an eye-tracking visualization prototype that facilitates exploration and comparison of sequential gaze orderings in a static or a dynamic scene. It extends current eye-tracking data visualizations by extracting patterns of sequential gaze orderings, displaying these patterns in a way that does not depend on the number of fixations on a scene, and enabling users to compare patterns from two or more sets of eye-gaze data. Extracting such patterns was very difficult with previous visualization techniques. eSeeTrack combines a timeline and a tree-structured visual representation to embody three aspects of eye-tracking data that users are interested in: duration, frequency and orderings of fixations. We demonstrate the usefulness of eSeeTrack via two case studies on surgical simulation and retail store chain data. We found that eSeeTrack allows ordering of fixations to be rapidly queried, explored and compared. Furthermore, our tool provides an effective and efficient mechanism to determine pattern outliers. This approach can be effective for behavior analysis in a variety of domains that are described at the end of this paper.
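The core pattern-extraction idea, counting recurring orderings of fixated areas of interest (AOIs) in a way that does not depend on the total number of fixations, can be illustrated with a simple n-gram count over a chronological AOI sequence. This sketch is illustrative only; the function and AOI names are not from the paper.

```python
from collections import Counter

def sequential_patterns(fixations, n=3):
    """Count every length-n ordering of fixated AOIs in a gaze trace.

    `fixations` is a chronological list of AOI labels; the counts reflect
    only which orderings recur, not how large the scene is.
    """
    return Counter(tuple(fixations[i:i + n])
                   for i in range(len(fixations) - n + 1))

# Example: a gaze trace over three hypothetical AOIs
trace = ["menu", "canvas", "palette", "menu", "canvas", "palette", "menu"]
top = sequential_patterns(trace, n=3).most_common(1)[0]
```

Comparing two such counters (e.g., novice versus expert traces) is one way to surface the ordering differences and outliers the abstract describes.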


Human Factors in Computing Systems | 2007

Exploring affective design for physical controls

Colin Swindells; Karon E. MacLean; Kellogg S. Booth; Michael J. Meitner

Physical controls such as knobs, sliders, and buttons are experiencing a revival as many computing systems progress from personal computing architectures towards ubiquitous computing architectures. We demonstrate a process for measuring and comparing visceral emotional responses of a physical control to performance results of a target acquisition task. In our user study, participants experienced mechanical and rendered friction, inertia, and detent dynamics as they turned a haptic knob towards graphical targets of two different widths and amplitudes. Together, this process and user study provide novel affect- and performance-based design guidance to developers of physical controls for emerging ubiquitous computing environments. Our work bridges extensive human factors work in mechanical systems that peaked in the 1960s, to contemporary trends, with a goal of integrating mechatronic controls into emerging ubiquitous computing systems.


International Conference on Multimodal Interfaces | 2003

TorqueBAR: an ungrounded haptic feedback device

Colin Swindells; Alex Unden; Tao Sang

Kinesthetic feedback is a key mechanism by which people perceive object properties during their daily tasks, particularly inertial properties. For example, transporting a glass of water without spilling, or dynamically positioning a handheld tool such as a hammer, both require inertial kinesthetic feedback. We describe a prototype for a novel ungrounded haptic feedback device, the TorqueBAR, that exploits a kinesthetic awareness of dynamic inertia to simulate complex coupled motion as both a display and input device. As a user tilts the TorqueBAR to sense and control computer programmed stimuli, the TorqueBAR's centre-of-mass changes in real time according to the user's actions. We evaluate the TorqueBAR using both quantitative and qualitative techniques, and we describe possible applications for the device such as video games and real-time robot navigation.
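The sensation such a device produces comes from moving a mass along the bar, so the torque felt at the grip follows from first principles. The sketch below is plain rigid-body arithmetic under stated assumptions (a point mass, tilt measured from horizontal); it is not the TorqueBAR's control code.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def felt_torque(mass_kg, offset_m, tilt_rad):
    """Gravitational torque about the grip from a slider mass sitting
    `offset_m` along a bar tilted `tilt_rad` from horizontal:
    tau = m * g * d * cos(theta)."""
    return mass_kg * G * offset_m * math.cos(tilt_rad)

def offset_for_torque(target_nm, mass_kg, tilt_rad):
    """Where to move the mass so the user feels `target_nm` about the grip."""
    return target_nm / (mass_kg * G * math.cos(tilt_rad))
```

Driving the mass position in real time from the second function is one minimal way a centre-of-mass display could track a commanded torque.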


User Interface Software and Technology | 2000

System lag tests for augmented and virtual environments

Colin Swindells; John Dill; Kellogg S. Booth

We describe a simple technique for accurately calibrating the temporal lag in augmented and virtual environments within the Enhanced Virtual Hand Lab (EVHL), a collection of hardware and software to support research on goal-directed human hand motion. Lag is the sum of various delays in the data pipeline associated with sensing, processing, and displaying information from the physical world to produce an augmented or virtual world. Our main calibration technique uses a modified phonograph turntable to provide easily tracked periodic motion, reminiscent of the pendulum-based calibration technique of Liang, Shaw and Green. Measurements show a three-frame (50 ms) lag for the EVHL. A second technique, which uses a specialized analog sensor that is part of the EVHL, provides a “closed loop” calibration capable of sub-frame accuracy. Knowing the lag to sub-frame accuracy enables a predictive tracking scheme to compensate for the end-to-end lag in the data pipeline. We describe both techniques and the EVHL environment in which they are used.
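The turntable method reduces lag measurement to simple arithmetic: at a known rotation rate, the angle by which the rendered marker trails the physical one converts directly into time. This is an illustrative reconstruction of that arithmetic, not the EVHL code.

```python
import math

def lag_seconds(angular_offset_rad, rpm):
    """End-to-end lag from the angle by which the rendered marker
    trails the physical marker on a turntable spinning at `rpm`."""
    omega = rpm * 2.0 * math.pi / 60.0  # angular velocity, rad/s
    return angular_offset_rad / omega

# On a 33 1/3 rpm turntable (about 3.49 rad/s), a 10-degree trail
# corresponds to roughly 50 ms of lag.
```

Averaging the offset over many revolutions is what makes the periodic motion attractive compared with one-shot measurements.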


Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2006

The Role of Prototyping Tools for Haptic Behavior Design

Colin Swindells; Evgeny Maksakov; Karon E. MacLean; Victor Chung

We describe key affordances required by tools for developing haptic behaviors. Haptic icon design involves the envisioning, expression and iterative modification of haptic behavior representations. These behaviors are then rendered on a haptic device. For example, a sinusoidal force vs. position representation rendered on a haptic knob would produce the feeling of detents. Our contribution is twofold. We introduce a custom haptic icon prototyper that includes novel interaction features, and we then use the lessons learnt from its development plus our experiences with a variety of haptic devices to present and argue high-level design choices for such prototyping tools in general.
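The sinusoidal force-versus-position example in the abstract can be written directly as a render function: torque is a sine of knob angle, so the knob is pulled toward evenly spaced rest points that feel like detents. The amplitude and spacing values below are illustrative, not from the paper.

```python
import math

def detent_torque(angle_rad, detent_spacing_rad, amplitude_nm):
    """Sinusoidal force-vs-position haptic behavior: restoring torque
    at `angle_rad`. Stable zero crossings sit at integer multiples of
    `detent_spacing_rad`, which the user feels as detents."""
    return -amplitude_nm * math.sin(
        2.0 * math.pi * angle_rad / detent_spacing_rad)

# At a detent centre the torque is zero; just past the centre the torque
# opposes the motion, so the knob snaps back to the nearest detent.
```

A prototyping tool of the kind described would let a designer sketch this torque curve graphically and feel it on the knob immediately.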


Computer Graphics International | 2004

Comparing CAVE, wall, and desktop displays for navigation and wayfinding in complex 3D models

Colin Swindells; Barry A. Po; Ima Hajshirmohammadi; Brian Corrie; John Dill; Brian D. Fisher; Kellogg S. Booth

Computer-aided design (CAD) and 3D visualization techniques are at the heart of many engineering processes such as aircraft, ship, and automobile design. These visualization tasks require users to navigate or wayfind through complex 3D geometric models consisting of millions of parts. Despite numerous studies, it remains unclear whether large-screen displays improve user performance for such activities. We present a user study comparing standard desktop, immersive room (i.e., CAVE), and wall displays with 3D stereo/head-tracking, and mono/no head-tracking. We observed individual differences between users and found that the presence of contextual structure greatly impacted performance, suggesting that providing structure and developing interaction techniques accommodating a wide range of users yields better performance than focusing on display characteristics alone.


Human Factors in Computing Systems | 2008

Implicit user-adaptive system engagement in speech and pen interfaces

Sharon Oviatt; Colin Swindells; Alexander M. Arthur

As emphasis is placed on developing mobile, educational, and other applications that minimize cognitive load on users, it is becoming more essential to explore interfaces based on implicit engagement techniques so users can remain focused on their tasks. In this research, data were collected with 12 pairs of students who solved complex math problems using a tutorial system that they engaged over 100 times per session entirely implicitly via speech amplitude or pen pressure cues. Results revealed that users spontaneously, reliably, and substantially adapted these forms of communicative energy to designate and repair an intended interlocutor in a computer-mediated group setting. Furthermore, this behavior was harnessed to achieve system engagement accuracies of 75-86%, with accuracies highest using speech amplitude. However, students had limited awareness of their own adaptations. Finally, while continually using these implicit engagement techniques, students maintained their performance level at solving complex mathematics problems throughout a one-hour session.


Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2007

Capturing the Dynamics of Mechanical Knobs

Colin Swindells; Karon E. MacLean

We present a novel experimental apparatus for the capture and replay of physical controls (mechanical knobs), as well as a set of acquired models and a design discussion related to the characterization approach taken here. Our work extends existing research by addressing problems surrounding identification of physical controls, including sensor gripping techniques for arbitrary target devices; and improved hardware and algorithm combinations for finer capture resolutions. Models were acquired from 5 real knobs, based on 2nd order model fits to torque and kinematic results of swept-sine excitations.
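A second-order knob model relates torque linearly to the measured kinematics, tau = J·theta_ddot + b·theta_dot + k·theta, so given logged swept-sine data the inertia, damping, and stiffness fall out of an ordinary least-squares fit. This NumPy sketch with synthetic data shows the shape of such a fit; it is not the authors' apparatus code, and the parameter values are invented.

```python
import numpy as np

def fit_knob_model(theta, theta_dot, theta_ddot, torque):
    """Least-squares fit of inertia J, damping b, and stiffness k in
    tau = J*theta_ddot + b*theta_dot + k*theta."""
    A = np.column_stack([theta_ddot, theta_dot, theta])
    (J, b, k), *_ = np.linalg.lstsq(A, torque, rcond=None)
    return J, b, k

# Synthetic swept-sine check: generate kinematics from a frequency sweep,
# build torque from known parameters, and recover them with the fit.
t = np.linspace(0, 2, 2000)
theta = np.sin(2 * np.pi * (1 + 2 * t) * t)   # chirp excitation
theta_dot = np.gradient(theta, t)
theta_ddot = np.gradient(theta_dot, t)
tau = 1e-4 * theta_ddot + 2e-3 * theta_dot + 5e-2 * theta
J, b, k = fit_knob_model(theta, theta_dot, theta_ddot, tau)
```

The swept-sine excitation matters because it exercises the knob across frequencies, keeping the three regressor columns well conditioned.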


Advances in Human-computer Interaction | 2012

Comparing horizontal and vertical surfaces for a collaborative design task

Brianna Potvin; Colin Swindells; Melanie Tory; Margaret-Anne D. Storey

We investigate the use of different surface orientations for collaborative design tasks. Specifically, we compare horizontal and vertical surface orientations used by dyads performing a collaborative design task while standing. We investigate how the display orientation influences group participation including face-to-face contact, total discussion, and equality of physical and verbal participation among participants. Our results suggest that vertical displays better support face-to-face contact whereas side-by-side arrangements encourage more discussion. However, display orientation has little impact on equality of verbal and physical participation, and users do not consistently prefer one orientation over the other. Based on our findings, we suggest that further investigation into the differences between horizontal and vertical orientations is warranted.


IEEE VGTC Conference on Visualization | 2009

Comparing parameter manipulation with mouse, pen, and slider user interfaces

Colin Swindells; Melanie Tory; Rebecca Dreezer

Visual fixation on one's tool(s) takes much attention away from one's primary task. Following the belief that the best tools ‘disappear’ and become invisible to the user, we present a study comparing visual fixations (eye gaze within locations on a graphical display) and performance for mouse, pen, and physical slider user interfaces. Participants conducted a controlled, yet representative, color matching task that required user interaction representative of many data exploration tasks such as parameter exploration of medical or fuel cell data. We demonstrate that users may spend up to 95% fewer visual fixations on physical sliders versus standard mouse and pen tools without any loss in performance for a generalized visual performance task.

Collaboration


Dive into Colin Swindells's collaborations.

Top Co-Authors

Karon E. MacLean, University of British Columbia
Kellogg S. Booth, University of British Columbia
John Dill, Simon Fraser University
Bin Zheng, University of Alberta
Michael J. Meitner, University of British Columbia