Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Miguel A. Nacenta is active.

Publication


Featured research published by Miguel A. Nacenta.


ACM Conference on Hypertext | 2008

Seeing things in the clouds: the effect of visual features on tag cloud selections

Scott Bateman; Carl Gutwin; Miguel A. Nacenta

Tag clouds are a popular method for visualizing and linking socially-organized information on websites. Tag clouds represent variables of interest (such as popularity) in the visual appearance of the keywords themselves - using text properties such as font size, weight, or colour. Although tag clouds are becoming common, there is still little information about which visual features of tags draw the attention of viewers. As tag clouds attempt to represent a wider range of variables with a wider range of visual properties, it becomes difficult to predict what will appear visually important to a viewer. To investigate this issue, we carried out an exploratory study that asked users to select tags from clouds that manipulated nine visual properties. Our results show that font size and font weight have stronger effects than intensity, number of characters, or tag area; but when several visual properties are manipulated at once, there is no one property that stands out above the others. This study adds to the understanding of how visual properties of text capture the attention of users, indicates general guidelines for designers of tag clouds, and provides a study paradigm and starting points for future studies. In addition, our findings may be applied more generally to the visual presentation of textual hyperlinks as a way to provide more information to web navigators.
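
To make the encoding concrete, here is a minimal sketch (in Python, with made-up tags, counts, and a pixel range chosen only for illustration) of how a tag cloud might map popularity to font size and weight, the two properties the study found to have the strongest effects. This is not the study's code.

    # Minimal sketch: map tag popularity to font size and weight for an HTML tag cloud.
    # The tags, counts, and pixel range below are hypothetical illustration values.
    tags = {"visualization": 42, "tabletop": 17, "gesture": 29, "perspective": 8}

    def font_size(count, counts, min_px=12, max_px=36):
        lo, hi = min(counts), max(counts)
        if hi == lo:
            return (min_px + max_px) // 2
        # Linear interpolation between the least and most popular tag.
        return round(min_px + (count - lo) / (hi - lo) * (max_px - min_px))

    counts = list(tags.values())
    median = sorted(counts)[len(counts) // 2]
    spans = ['<span style="font-size:{}px; font-weight:{}">{}</span>'.format(
                 font_size(c, counts), "bold" if c >= median else "normal", t)
             for t, c in tags.items()]
    print(" ".join(spans))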


Human Factors in Computing Systems | 2005

A comparison of techniques for multi-display reaching

Miguel A. Nacenta; Dzmitry Viktorovich Aliakseyeu; Sriram Subramanian; Carl Gutwin

Recent advances in multi-user collaboration have seen a proliferation of interaction techniques for moving digital objects from one device to another. However, little is known about how these techniques work in realistic situations, or how they compare to one another. We conducted a study to compare the efficiency of six techniques for moving objects from a tablet to a tabletop display. We compared the techniques in four different distance ranges and with three movement directions. We found that techniques like the Radar View and Pick-and-Drop, which have a control-to-display ratio of 1, are significantly faster for object movement than techniques that have smaller control-to-display ratios. We also found that spatial manipulation of objects was faster than pressure-based manipulation.
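
For readers unfamiliar with the term, the control-to-display (C:D) ratio relates a movement in the input space to the movement it produces on the display; a ratio of 1 means the two are equal. The sketch below (Python; the distances are invented for illustration, and this is not the paper's code) shows how a smaller ratio amplifies the same input motion.

    # Minimal sketch of the control-to-display (C:D) ratio: how far an object moves on
    # the target display for a given movement in the input space. Values are illustrative.
    def display_motion(input_motion_mm, cd_ratio):
        """Display movement produced by a control (input) movement.

        cd_ratio = 1.0 reproduces the input motion exactly; ratios below 1 amplify it,
        so the same input movement covers more of the display.
        """
        return input_motion_mm / cd_ratio

    print(display_motion(50, 1.0))   # 50 mm of input travel -> 50 mm on the display
    print(display_motion(50, 0.25))  # the same travel -> 200 mm on the display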


Human Factors in Computing Systems | 2013

Memorability of pre-designed and user-defined gesture sets

Miguel A. Nacenta; Yemliha Kamber; Yizhou Qiang; Per Ola Kristensson

We studied the memorability of free-form gesture sets for invoking actions. We compared three types of gesture sets: user-defined gesture sets, gesture sets designed by the authors, and random gesture sets in three studies with 33 participants in total. We found that user-defined gestures are easier to remember, both immediately after creation and on the next day (up to a 24% difference in recall rate compared to pre-designed gestures). We also discovered that the differences between gesture sets are mostly due to association errors (rather than gesture form errors), that participants prefer user-defined sets, and that they think user-defined gestures take less time to learn. Finally, we contribute a qualitative analysis of the tradeoffs involved in gesture type selection and share our data and a video corpus of 66 gestures for replicability and further analysis.


User Interface Software and Technology | 2007

E-conic: a perspective-aware interface for multi-display environments

Miguel A. Nacenta; Satoshi Sakurai; Tokuo Yamaguchi; Yohei Miki; Yuichi Itoh; Yoshifumi Kitamura; Sriram Subramanian; Carl Gutwin

Multi-display environments compose displays that can be at different locations and angles relative to the user; as a result, it can become very difficult to manage windows, read text, and manipulate objects. We investigate the idea of perspective as a way to solve these problems in multi-display environments. We first identify basic display and control factors that are affected by perspective, such as visibility, fracture, and sharing. We then present the design and implementation of E-conic, a multi-display multi-user environment that uses location data about displays and users to dynamically correct perspective. We carried out a controlled experiment to test the benefits of perspective correction in basic interaction tasks like targeting, steering, aligning, pattern-matching and reading. Our results show that perspective correction significantly and substantially improves user performance in all these tasks.


Graphics Interface | 2007

The effects of interaction technique on coordination in tabletop groupware

Miguel A. Nacenta; David Pinelle; Dane Stuckel; Carl Gutwin

The interaction techniques that are used in tabletop groupware systems (such as pick-and-drop or pantograph) can affect the way that people collaborate. However, little is known about these effects, making it difficult for designers to choose appropriate techniques when building tabletop groupware. We carried out an exploratory study to determine how several different types of interaction techniques (pantograph, telepointers, radar views, drag-and-drop, and laser beam) affected coordination and awareness in two tabletop tasks (a game and a storyboarding activity). We found that the choice of interaction technique significantly affected coordination measures, performance measures, and preference - but that the effects were different for the two different tasks. Our study shows that the choice of tabletop interaction technique does indeed matter, and provides insight into how tabletop systems can better support group work.


Human Factors in Computing Systems | 2006

Perspective cursor: perspective-based interaction for multi-display environments

Miguel A. Nacenta; Samer Sallam; Bernard Champoux; Sriram Subramanian; Carl Gutwin

Multi-display environments and smart meeting rooms are now becoming more common. These environments build a shared display space from a variety of devices: tablets, projected surfaces, tabletops, and traditional monitors. Since the different display surfaces are usually not organized in a single plane, traditional schemes for stitching the displays together can cause problems for interaction. However, there is a more natural way to compose display space: using perspective. In this paper, we develop interaction techniques for multi-display environments that are based on the user's perspective on the room. We designed the Perspective Cursor, a mapping of cursor to display space that appears natural and logical from wherever the user is located. We conducted an experiment to compare two perspective-based techniques, the Perspective Cursor and a beam-based technique, with traditional stitched displays. We found that both perspective techniques were significantly faster for targeting tasks than the traditional technique, and that Perspective Cursor was the most preferred method. Our results show that integrating perspective into the design of multi-display environments can substantially improve performance.
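
The geometric core of a perspective-based mapping can be sketched as a ray cast from the user's estimated eye position and intersected with a display plane. The code below is a simplified illustration with invented coordinates; it is not the Perspective Cursor implementation, which also has to track heads and displays in a shared coordinate frame.

    # Minimal sketch of the geometry behind a perspective-based cursor: intersect a ray
    # from the user's eye with a display plane. Coordinates are invented for illustration.
    import numpy as np

    def ray_plane_intersection(eye, direction, plane_point, plane_normal):
        """Return the point where the ray eye + t*direction meets the plane, or None."""
        denom = np.dot(plane_normal, direction)
        if abs(denom) < 1e-9:           # ray runs parallel to the display plane
            return None
        t = np.dot(plane_normal, plane_point - eye) / denom
        if t < 0:                       # the display is behind the user
            return None
        return eye + t * direction

    eye = np.array([0.0, 1.6, 0.0])           # user's eye position, metres
    direction = np.array([0.3, -0.2, 1.0])    # direction the cursor is being steered
    direction = direction / np.linalg.norm(direction)
    hit = ray_plane_intersection(eye, direction,
                                 plane_point=np.array([0.0, 0.9, 1.5]),   # point on a tabletop
                                 plane_normal=np.array([0.0, 1.0, 0.0]))  # tabletop faces up
    print(hit)   # where the cursor lands on that display, in room coordinates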


Human-Computer Interaction | 2009

There and Back Again: Cross-Display Object Movement in Multi-Display Environments

Miguel A. Nacenta; Carl Gutwin; Dzmitry Viktorovich Aliakseyeu; Sriram Subramanian

Multi-display environments (MDEs) are now becoming common, and are becoming more complex, with more displays and more types of display in the environment. One crucial requirement specific to MDEs is that users must be able to move objects from one display to another; this cross-display movement is a frequent and fundamental part of interaction in any application that spans two or more display surfaces. Although many cross-display movement techniques exist, the differences between MDEs—the number, location, and mixed orientation of displays, and the characteristics of the task they are being designed for—require that interaction techniques be chosen carefully to match the constraints of the particular environment. As a way to facilitate interaction design in MDEs, we present a taxonomy that classifies cross-display object movement techniques according to three dimensions: the referential domain that determines how displays are selected, the relationship of the input space to the display configuration, and the control paradigm for executing the movement. These dimensions are based on a descriptive model of the task of cross-display object movement. The taxonomy also provides an analysis of current research that designers and researchers can use to understand the differences between categories of interaction techniques.
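
As a small illustration of how such a taxonomy can be used descriptively, the sketch below encodes the three dimensions as enumerations and classifies two example techniques; the category values and the assignments are simplified assumptions made for illustration, not the paper's exact vocabulary.

    # Minimal sketch: the taxonomy's three dimensions as a small data structure.
    # The enum values and the example classifications are simplified assumptions.
    from dataclasses import dataclass
    from enum import Enum

    class ReferentialDomain(Enum):      # how the destination display is selected
        SPATIAL = "spatial"
        NON_SPATIAL = "non-spatial"

    class InputSpaceRelation(Enum):     # how the input space relates to the displays
        LITERAL = "literal"
        PLANAR = "planar"
        PERSPECTIVE = "perspective"

    class ControlParadigm(Enum):        # how the movement itself is executed
        CLOSED_LOOP = "closed-loop"
        OPEN_LOOP = "open-loop"

    @dataclass
    class CrossDisplayTechnique:
        name: str
        referential_domain: ReferentialDomain
        input_space: InputSpaceRelation
        control: ControlParadigm

    techniques = [
        CrossDisplayTechnique("Pick-and-Drop", ReferentialDomain.SPATIAL,
                              InputSpaceRelation.LITERAL, ControlParadigm.CLOSED_LOOP),
        CrossDisplayTechnique("Perspective Cursor", ReferentialDomain.SPATIAL,
                              InputSpaceRelation.PERSPECTIVE, ControlParadigm.CLOSED_LOOP),
    ]
    for t in techniques:
        print(t.name, t.referential_domain.value, t.input_space.value, t.control.value)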


Interactive Tabletops and Surfaces | 2010

A set of multi-touch graph interaction techniques

Sebastian Schmidt; Miguel A. Nacenta; Raimund Dachselt; M. Sheelagh T. Carpendale

Interactive node-link diagrams are useful for describing and exploring data relationships in many domains such as network analysis and transportation planning. We describe a multi-touch interaction technique set (IT set) that focuses on edge interactions for node-link diagrams. The set includes five techniques (TouchPlucking, TouchPinning, TouchStrumming, TouchBundling and PushLens) and provides the flexibility to combine them in either sequential or simultaneous actions in order to address edge congestion.
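
To illustrate the kind of edge-level manipulation the set is built around, here is a rough sketch that bends an edge away from a touch point by turning the straight segment into a quadratic curve; it is a simplified stand-in written for this summary, not the implementation of TouchPlucking or the other techniques.

    # Minimal sketch of one edge-congestion idea: bend an edge away from a touch point
    # by giving the straight segment a quadratic control point pushed away from the
    # finger. Illustrative only; not the paper's implementation.
    import math

    def bend_edge(p0, p1, touch, push=40.0):
        """Return a quadratic Bezier control point that moves the edge away from `touch`."""
        mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
        dx, dy = mid[0] - touch[0], mid[1] - touch[1]
        dist = math.hypot(dx, dy) or 1.0
        return (mid[0] + push * dx / dist, mid[1] + push * dy / dist)

    def bezier_point(p0, ctrl, p1, t):
        """Point on the quadratic curve p0 -> ctrl -> p1 at parameter t in [0, 1]."""
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * ctrl[0] + t ** 2 * p1[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * ctrl[1] + t ** 2 * p1[1]
        return (x, y)

    ctrl = bend_edge((0, 0), (200, 0), touch=(100, 10))
    print([bezier_point((0, 0), ctrl, (200, 0), t / 4) for t in range(5)])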


Advanced Visual Interfaces | 2012

The cost of display switching: a comparison of mobile, large display and hybrid UI configurations

Umar Rashid; Miguel A. Nacenta; Aaron J. Quigley

Attaching a large external display can help a mobile device user view more content at once. This paper reports on a study investigating how different configurations of input and output across displays affect performance, subjective workload and preferences in map, text and photo search tasks. Experimental results show that a hybrid configuration, where visual output is distributed across displays, performs worst or ties for worst in all tasks. A mobile device-controlled large display configuration performs best in the map search task and ties for best in the text and photo search tasks (with a mobile-only configuration). After a detailed analysis of the performance differences across UI configurations, we give recommendations for the design of distributed user interfaces.


Human Factors in Computing Systems | 2013

Multi-touch rotation gestures: performance and ergonomics

Eve E. Hoggan; John Williamson; Antti Oulasvirta; Miguel A. Nacenta; Per Ola Kristensson; Anu Lehtiö

Rotations performed with the index finger and thumb involve some of the most complex motor actions among common multi-touch gestures, yet little is known about the factors affecting performance and ergonomics. This note presents results from a study where the angle, direction, diameter, and position of rotations were systematically manipulated. Subjects were asked to perform the rotations as quickly as possible without losing contact with the display, and were allowed to skip rotations that were too uncomfortable. The data show surprising interaction effects among the variables, and help us identify whole categories of rotations that are slow and cumbersome for users.

Collaboration


Dive into Miguel A. Nacenta's collaborations.

Top Co-Authors

Carl Gutwin, University of Saskatchewan
Regan L. Mandryk, University of Saskatchewan
Gonzalo Gabriel Méndez, Escuela Superior Politecnica del Litoral
Uta Hinrichs, University of St Andrews