Matthew D Plumlee
University of New Hampshire
Publications
Featured research published by Matthew D Plumlee.
ACM Transactions on Computer-Human Interaction | 2006
Matthew D Plumlee; Colin Ware
To investigate large information spaces effectively, it is often necessary to employ navigation mechanisms that allow users to view information at different scales. Some tasks require frequent movements and scale changes to search for details and compare them. We present a model that makes predictions about user performance on such comparison tasks with different interface options. A critical factor embodied in this model is the limited capacity of visual working memory, which allows the cost of visits made via fixating eye movements to be compared with the cost of visits that require user interaction with the mouse. This model is tested with an experiment that compares a zooming user interface with a multi-window interface for a multiscale pattern-matching task. The results closely matched predictions in task performance times; however, error rates were much higher with zooming than with multiple windows. We hypothesized that subjects made more visits in the multi-window condition, and ran a second experiment using an eye tracker to record the pattern of fixations. This revealed that subjects made far more visits back and forth between pattern locations when able to use eye movements than they made with the zooming interface. The results suggest that only a single graphical object was held in visual working memory for comparisons mediated by eye movements, reducing errors by reducing the load on visual working memory. Finally, we propose a design heuristic: extra windows are needed when visual comparisons must be made involving patterns of greater complexity than can be held in visual working memory.
IEEE Computer Graphics and Applications | 2006
Colin Ware; Roland J. Arsenault; Matthew D Plumlee; David N. Wiley
A new collaboration between visualization experts, engineers, and marine biologists has made it possible, for the first time, to see and study the foraging behavior of humpback whales. Our study's primary objective was furthering the science of marine mammal ethology. We also had a second objective: field testing GeoZui4D, an innovative test bench for investigating effective ways of navigating through time-varying geospatial data.
Oceans Conference | 2001
Colin Ware; Matthew D Plumlee; Roland J. Arsenault; Larry A. Mayer; Shep Smith
GeoZui3D stands for geographic zooming user interface. It is a new visualization software system designed for interpreting multiple sources of 3D data. The system supports gridded terrain models, triangular meshes, curtain plots, and a number of other display objects. A novel center-of-workspace interaction method unifies a number of aspects of the interface: it provides a simple viewpoint control method, helps link multiple views, and is ideal for stereoscopic viewing. GeoZui3D has a number of features to support real-time input. Through a CORBA interface, external entities can influence the position and state of objects in the display. Extra windows can be attached to moving objects, allowing their position and data to be monitored. We describe the application of this system for heterogeneous data fusion, for multibeam QC, and for ROV/AUV monitoring.
Proceedings of the International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2003) | 2003
Matthew D Plumlee; Colin Ware
Frame-of-reference interaction consists of a unified set of 3D interaction techniques for exploratory navigation of large virtual spaces in nonimmersive environments. It is based on a conceptual framework that considers navigation from a cognitive perspective, as a way of facilitating changes in user attention from one reference frame to another, rather than from the mechanical perspective of moving a camera between different points of interest. All of our techniques link multiple frames of reference in some meaningful way. Some techniques link multiple windows within a zooming environment while others allow seamless changes of user focus between static objects, moving objects, and groups of moving objects. We present our techniques as they are implemented in GeoZui3D, a geographic visualization system for ocean data.
Interactive 3D Graphics and Games | 2003
Matthew D Plumlee; Colin Ware
It is common for 3D visualization systems to provide multiple points of view to a user, but there have been few solutions to the problem of linking these views so that users can understand the spatial relationships between them. Toward developing guidelines for view-linking devices, we have carried out two experiments that compare the utility of three different classes of linking devices: a directional proxy, tethers from one view to another, and a track-up map coupling. The task we apply them to is what we call the multi-perspective identification task: subjects are asked to identify an item seen in a local, forward-looking view in the context of a global, overhead view. Our results indicate that the directional proxy is the most beneficial device, and that the track-up map coupling is also beneficial. The results suggest that tethers provide little benefit. The results also suggest that when multiple local views are present, it may be beneficial to emphasize one window as being of primary interest.
Information Visualization | 2013
Colin Ware; Matthew D Plumlee
Weather maps commonly display several variables at once, usually a subset of the following: atmospheric pressure, surface wind speed and direction, surface temperature, cloud cover, and precipitation. Most often, a single variable is mapped separately and occasionally two are shown together. But sometimes there is an attempt to show three or four variables with a result that is difficult to interpret because of visual interference between the graphical elements. As a design exercise, we set the goal of finding out if it is possible to show three variables (two 2D scalar fields and one 2D vector field) simultaneously so that values can be accurately read using keys for all variables, a reasonable level of detail is shown, and important meteorological features stand out clearly. Our solution involves employing three perceptual “channels”: a color channel, a texture channel, and a motion channel in order to perceptually separate the variables and make them independently readable. We describe a set of interactive weather displays, which enable users to view two meteorological scalar fields of various kinds and a field showing wind patterns. To evaluate the method, we implemented three alternative representations each simultaneously showing temperature, atmospheric pressure, wind speed, and direction. Both animated and static variants of our new design were compared to a conventional solution and a glyph-based solution. The evaluation tested the abilities of participants both to read values using a key and to see meteorological patterns in the data. Our new scheme was superior, especially in the representation of wind patterns using the motion channel. It also performed well enough in the representation of pressure using the texture channel to suggest it as a viable design alternative.
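The channel-separation idea can be illustrated with a minimal sketch (hypothetical mappings, not the authors' implementation): temperature drives a color ramp, pressure modulates the contrast of a stripe texture, and wind advects the texture phase between animation frames to form the motion channel.

```python
import numpy as np

def color_channel(temp):
    # Map normalized temperature [0, 1] to a blue-to-red ramp (H, W, 3).
    t = np.clip(temp, 0, 1)
    return np.stack([t, np.zeros_like(t), 1 - t], axis=-1)

def stripe_texture(x, freq=8):
    # Periodic stripe pattern in [0, 1]; `x` carries the phase.
    return 0.5 + 0.5 * np.sin(2 * np.pi * freq * x)

def frame(temp, pressure, wind_u, t=0.0):
    # Composite one animation frame: wind shifts the stripe phase over time
    # (the motion channel), pressure scales stripe contrast (the texture
    # channel), and temperature sets the color channel.
    h, w = temp.shape
    x = np.linspace(0, 1, w)[None, :] - t * wind_u
    tex = np.clip(pressure, 0, 1) * stripe_texture(x)
    return np.clip(color_channel(temp) * (0.6 + 0.4 * tex[..., None]), 0, 1)
```

Because each variable occupies its own perceptual channel, changing one field leaves the others readable, which is the property the evaluation tested.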
Advanced Visual Interfaces | 2002
Matthew D Plumlee; Colin Ware
Zooming and multiple windows are two techniques designed to address the focus-in-context problem. We present a theoretical model of performance that models the relative benefits of these techniques when used by humans for completing a task involving comparisons between widely separated groups of objects. The crux of the model is its cognitive component: the strength of multiple windows comes in the way they aid visual working memory. The task to which we apply our model is multiscale comparison, in which a user begins with a known visual pattern and searches for an identical or similar pattern among distracters. The model predicts that zooming should be better for navigating between a few distant locations when demands on visual memory are low, but that multiple windows are more efficient when demands on visual memory are higher, or there are several distant locations that must be investigated. To evaluate our model we conducted an experiment in which users performed a multiscale comparison task using both zooming and multiple-window interfaces. The results confirm the general predictions of our model.
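The trade-off the model captures can be sketched in a few lines of Python (an illustrative simplification, not the authors' published equations; the visit costs, working-memory capacity, and comparison time below are all hypothetical values):

```python
import math

def expected_visits(pattern_chunks, vwm_capacity=3):
    # Round trips needed to compare a pattern of `pattern_chunks` chunks
    # when only `vwm_capacity` chunks fit in visual working memory per visit.
    return math.ceil(pattern_chunks / vwm_capacity)

def predicted_time(pattern_chunks, visit_cost, vwm_capacity=3, compare_time=0.5):
    # Total predicted comparison time: each visit pays the navigation cost
    # plus a fixed encoding/comparison time (seconds, hypothetical).
    return expected_visits(pattern_chunks, vwm_capacity) * (visit_cost + compare_time)

# Hypothetical per-visit navigation costs: a zooming interface must traverse
# scale space on every trip, while a second window needs only a mouse
# movement (or, cheaper still, an eye movement).
ZOOM_VISIT_COST = 3.0    # seconds
WINDOW_VISIT_COST = 0.3  # seconds
```

Under these assumptions a pattern that fits in working memory needs a single visit, so zooming remains competitive; once the pattern outgrows working memory, the extra round trips amplify the zooming penalty, which is the qualitative prediction the experiment confirmed.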
Exploring Geovisualization | 2005
Colin Ware; Matthew D Plumlee
This chapter describes the structure of space in terms of both perception and action and discusses the implications for 3D GIS interfaces. It presents some of the properties of visual space with reference to perceptual mechanisms and design implications for interactive GIS. It also highlights the cost of acquiring knowledge through navigation. A navigation mechanism should afford rapid and simple navigation in such a way that maximal cognitive resources are retained for decision-making. The navigation mechanism should afford context as well as focal information. Focal information is that which is the immediate focus of attention and hence most frequently loaded into visual working memory. In addition, human perception of space is very different from a simple Cartesian model; the visual system supports a number of coordinate systems, including retina-based and egocentric head/torso-based ones. Humans are also, to a very limited extent, capable of constructing a view of space that approximates Cartesian space, but individuals vary greatly in their ability to mentally imagine 3D structures in this way.
Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2002
Rick Komerska; Colin Ware; Matthew D Plumlee
We build upon a new interaction style for 3D interfaces, called center-of-workspace interaction. This style of interaction is defined with respect to a central fixed point in 3D space, conceptually within arm's length of the user. For demonstration, we show a haptically enabled fish tank VR that utilizes a set of interaction widgets to support rapid navigation within a large virtual space. Fish tank VR refers to the creation of a small but high-quality virtual reality that combines a number of technologies, such as head-tracking and stereo glasses, to their mutual advantage.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2006
Briana M. Sullivan; Colin Ware; Matthew D Plumlee
Inexperienced helmsmen often oversteer because of the lag between changing the rudder angle and a change in the vessel's heading. Predictive displays are a common way of mitigating the effect of lag on human control. Accordingly, we developed a predictive display to show the position and heading of a vessel a short time in the future. With this display, the helmsman's task becomes that of keeping the predictor on the planned path; in effect, the predictor is steered, not the vessel. Our predictive model was statistical, based on data gathered from a 40-foot survey vessel carrying out a variety of maneuvers while the position, heading, speed through the water, and rudder angle were continuously recorded. The advantage of such a predictor is that it can, in principle, be generated automatically, without any need for a model of hull shape or vessel dynamics. We evaluated the predictor by having both experienced and inexperienced helmsmen steer a predefined figure-of-eight course. The results showed a substantial reduction in cross-track error for inexperienced participants, to the point that their performance was indistinguishable from that of the more experienced helmsmen.
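A statistical predictor of this kind can be sketched with ordinary least squares (a hypothetical toy model on synthetic data, not the authors' actual predictor or their recorded maneuvers): fit the short-horizon heading change as a linear function of the current rudder angle and turn rate.

```python
import numpy as np

def fit_predictor(rudder, turn_rate, future_heading_change):
    # Least-squares fit: heading change over a short horizon as a linear
    # function of current rudder angle and turn rate, plus an intercept.
    X = np.column_stack([rudder, turn_rate, np.ones_like(rudder)])
    coeffs, *_ = np.linalg.lstsq(X, future_heading_change, rcond=None)
    return coeffs

def predict(coeffs, rudder, turn_rate):
    # Predicted heading change for the display's look-ahead symbol.
    return coeffs[0] * rudder + coeffs[1] * turn_rate + coeffs[2]

# Synthetic stand-in for the recorded maneuvers (degrees, deg/s):
rng = np.random.default_rng(0)
rudder = rng.uniform(-30, 30, 200)
turn_rate = 0.1 * rudder + rng.normal(0, 0.2, 200)
future = 0.5 * rudder + 2.0 * turn_rate + rng.normal(0, 0.5, 200)

coeffs = fit_predictor(rudder, turn_rate, future)
```

In the real system, the regressors would come from the vessel's logged position, heading, speed through the water, and rudder angle, and the fitted model would drive the on-screen predictor symbol, with no hull-shape or dynamics model required.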