Publication


Featured research published by Andrew Bragdon.


Human Factors in Computing Systems | 2010

Code bubbles: a working set-based interface for code understanding and maintenance

Andrew Bragdon; Robert C. Zeleznik; Steven P. Reiss; Suman Karumuri; William Cheung; Joshua Kaplan; Christopher Coleman; Ferdi Adeputra; Joseph J. LaViola

Developers spend significant time reading and navigating code fragments spread across multiple locations. The file-based nature of contemporary IDEs makes it prohibitively difficult to create and maintain a simultaneous view of such fragments. We propose a novel user interface metaphor for code understanding based on collections of lightweight, editable fragments called bubbles, which form concurrently visible working sets. We present the results of a qualitative usability evaluation, along with a quantitative study indicating that Code Bubbles significantly improved code-understanding time while reducing navigation interactions, relative to a widely used IDE, for two controlled tasks.


International Conference on Software Engineering | 2010

Code bubbles: rethinking the user interface paradigm of integrated development environments

Andrew Bragdon; Steven P. Reiss; Robert C. Zeleznik; Suman Karumuri; William Cheung; Joshua Kaplan; Christopher Coleman; Ferdi Adeputra; Joseph J. LaViola

Today's integrated development environments (IDEs) are hampered by their dependence on files and file-based editing. We propose a novel user interface based on collections of lightweight, editable fragments, called bubbles, which when grouped together form concurrently visible working sets. In this paper we describe the design of a prototype IDE user interface for Java based on working sets. A quantitative evaluation shows that developers could expect to view a sizeable number of functions concurrently with relatively few UI operations. A qualitative user evaluation with 23 professional developers indicates a high level of excitement, interest, and potential benefits and uses.


Interactive Tabletops and Surfaces | 2011

Code space: touch + air gesture hybrid interactions for supporting developer meetings

Andrew Bragdon; Robert DeLine; Ken Hinckley; Meredith Ringel Morris

We present Code Space, a system that contributes touch + air gesture hybrid interactions to support co-located, small-group developer meetings by democratizing access, control, and sharing of information across multiple personal devices and public displays. Our system uses a combination of a shared multi-touch screen, mobile touch devices, and Microsoft Kinect sensors. We describe cross-device interactions, which combine in-air pointing, for social disclosure of commands, targeting, and mode setting, with touch, for command execution and precise gestures. In a formative study, professional developers were positive about the interaction design, and most felt that pointing with hands or devices and forming hand postures are socially acceptable. Users also felt that the techniques adequately disclosed who was interacting and that existing social protocols would help to dictate most permissions, but also felt that our lightweight permission feature helped presenters manage incoming content.


User Interface Software and Technology | 2010

Hands-on math: a page-based multi-touch and pen desktop for technical work and problem solving

Robert C. Zeleznik; Andrew Bragdon; Ferdi Adeputra; Hsu-Sheng Ko

Students, scientists, and engineers have to choose between the flexible, free-form input of pencil and paper and the computational power of Computer Algebra Systems (CAS) when solving mathematical problems. Hands-On Math is a multi-touch and pen-based system that attempts to unify these approaches by providing virtual paper enhanced to recognize mathematical notation, as a means of providing in situ access to CAS functionality. Pages can be created and organized on a large pannable desktop, and mathematical expressions can be computed, graphed, and manipulated using a set of uni- and bi-manual interactions which facilitate rapid exploration by eliminating tedious and error-prone transcription tasks. Analysis of a qualitative pilot evaluation indicates the potential of our approach and highlights usability issues with the novel techniques used.


Human Factors in Computing Systems | 2009

GestureBar: improving the approachability of gesture-based interfaces

Andrew Bragdon; Robert C. Zeleznik; Brian Williamson; Tim Miller; Joseph J. LaViola

GestureBar is a novel, approachable UI for learning gestural interactions that enables a walk-up-and-use experience in the same class as standard menu and toolbar interfaces. GestureBar leverages the familiar, clean look of a common toolbar, but in place of executing commands, it richly discloses how to execute commands with gestures, through animated images, detail tips, and an out-of-document practice area. GestureBar's simple design is also general enough for use with any recognition technique and for integration with standard, non-gestural UI components. We evaluate GestureBar in a formal experiment showing that users can perform complex, ecologically valid tasks in a purely gestural system without training, introduction, or prior gesture experience when using GestureBar, discovering and learning a high percentage of the gestures needed to perform the tasks optimally, and significantly outperforming a state-of-the-art crib sheet. The relative contribution of the major design elements of GestureBar is also explored. A second experiment shows that GestureBar is preferred to a basic crib sheet and two enhanced crib sheet variations.


International Conference on Software Engineering | 2012

Debugger canvas: industrial experience with the code bubbles paradigm

Robert DeLine; Andrew Bragdon; Kael Rowan; Jens Jacobsen; Steven P. Reiss

At ICSE 2010, the Code Bubbles team from Brown University and the Code Canvas team from Microsoft Research presented similar ideas for new user experiences for an integrated development environment. Since then, the two teams formed a collaboration, along with the Microsoft Visual Studio team, to release Debugger Canvas, an industrial version of the Code Bubbles paradigm. With Debugger Canvas, a programmer debugs her code as a collection of code bubbles, annotated with call paths and variable values, on a two-dimensional pan-and-zoom surface. In this experience report, we describe new user interface ideas, describe the rationale behind our design choices, evaluate the performance overhead of the new design, and provide user feedback based on lab participants, post-release usage data, and a user survey and interviews. We conclude that the code bubbles paradigm does scale to existing customer code bases, is best implemented as a mode in the existing user experience rather than as a replacement, and is most useful when the user has long or complex call paths, a large or unfamiliar code base, or complex control patterns, like factories or dynamic linking.


Interactive Tabletops and Surfaces | 2010

Gesture play: motivating online gesture learning with fun, positive reinforcement and physical metaphors

Andrew Bragdon; Arman Uguray; Daniel Wigdor; Stylianos Anagnostopoulos; Robert C. Zeleznik; Rutledge Feman

Learning a set of gestures requires a non-trivial investment of time from novice users. We propose a novel approach based on positive reinforcement for motivating the online learning of multi-touch gestures: introducing simple, game-like elements to make gesture learning fun and enjoyable. We develop three metaphors, button widgets, animated spring widgets, and physical props, as primitives for simple, physically based puzzles which afford the disclosure of static and dynamic hand gestures. Using these metaphors, we implemented a gesture set representing 14 of 16 gesture types in an established hand gesture taxonomy. We present the results of a quantitative and qualitative evaluation which indicate this approach motivates gesture rehearsal more than video demonstrations do, while memory recall was equivalent overall but improved in the short term, for controlled tasks.


Human Factors in Computing Systems | 2011

Gesture select: acquiring remote targets on large displays without pointing

Andrew Bragdon; Hsu-Sheng Ko

When working at a large wall display, even one only partially utilized, many targets are likely to be distant from the user, requiring walking, which is slow and interrupts workflow. We propose a novel technique for selecting remote targets called Gesture Select, in which users draw an initial mark in a target's direction; rectilinear gestures represented as icons are dynamically overlaid on targets within a region of interest; the user then continues by drawing the continuation mark corresponding to the target, to select it. We also present extensions to this technique that support working with remote content for an extended period and learning gesture shortcuts. A formal experiment indicates Gesture Select significantly outperformed direct selection for mid/far targets. Further analysis suggests Gesture Select performance is principally affected by the extent to which users can read the gestures, influenced by distance and perspective warping, and by the gesture complexity in the region of interest. The results of a second 2-D experiment with labeled targets indicate Gesture Select significantly outperformed direct selection and an existing technique.


Symposium on 3D User Interfaces | 2008

Poster: Effects of Head Tracking and Stereo on Non-Isomorphic 3D Rotation

Joseph J. LaViola; Andrew S. Forsberg; John Huffman; Andrew Bragdon

We present an experimental study that explores how head tracking and stereo affect user performance when rotating 3D virtual objects using isomorphic and non-isomorphic rotation techniques. Our experiment compares isomorphic with non-isomorphic rotation utilizing four different display modes (no head tracking/no stereo, head tracking/no stereo, no head tracking/stereo, and head tracking/stereo) and two different angular error thresholds for task completion. Our results indicate that rotation error is significantly lower when subjects perform the task using non-isomorphic 3D rotation with head tracking/stereo than with no head tracking/no stereo. In addition, subjects performed the rotation task with significantly less error with head tracking/stereo and no head tracking/stereo than with no head tracking/no stereo, regardless of rotation technique. Subjects also highly rated the importance of stereo and non-isomorphic amplification in the 3D rotation task.


Sketch-Based Interfaces and Modeling | 2008

An empirical study in pen-centric user interfaces: diagramming

Andrew S. Forsberg; Andrew Bragdon; Joseph J. LaViola; Sashi Raghupathy; Robert C. Zeleznik

We present a user study aimed at understanding the applicability of pen computing in desktop environments. The study applied three mouse-and-keyboard-based and three pen-based interaction techniques to six variations of a diagramming task. We ran 18 subjects from a general population; the key finding was that while the mouse-and-keyboard techniques were generally comparable to or faster than the pen techniques, subjects ranked the pen techniques higher and enjoyed them more. Our contribution is the results of a formal user study suggesting that pen user interfaces have broader applicability, and stronger subjective preference, than the niche PDA and mobile market they currently serve.

Collaboration


Dive into Andrew Bragdon's collaborations.

Top Co-Authors
Joseph J. LaViola

University of Central Florida
