Publication


Featured research published by Daniel Wigdor.


human factors in computing systems | 2007

Direct-touch vs. mouse input for tabletop displays

Clifton Forlines; Daniel Wigdor; Chia Shen; Ravin Balakrishnan

We investigate the differences -- in terms of both quantitative performance and subjective preference -- between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.


user interface software and technology | 2007

LucidTouch: a see-through mobile device

Daniel Wigdor; Clifton Forlines; Patrick Baudisch; John C. Barnwell; Chia Shen

Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion of the mobile device itself being semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.
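
A minimal sketch of the two mechanisms the abstract describes, under assumptions of our own (the screen size, overlay opacity, and mirroring of rear contacts are illustrative, not taken from the paper):

    # Sketch of LucidTouch-style back-of-device input and pseudo-transparency.
    # SCREEN_W, SCREEN_H and HAND_ALPHA are hypothetical values for illustration.

    SCREEN_W, SCREEN_H = 480, 320      # assumed display size in pixels
    HAND_ALPHA = 0.35                  # assumed translucency of the hand overlay

    def back_touch_to_screen(x_back, y_back):
        """A finger on the rear panel is mirrored left-right relative to the
        front view, so flip the x axis; y passes through unchanged."""
        return SCREEN_W - 1 - x_back, y_back

    def blend_pixel(ui_rgb, hand_rgb, alpha=HAND_ALPHA):
        """Composite one pixel of the captured hand image over the UI pixel,
        creating the semi-transparent 'see-through' effect."""
        return tuple(round((1 - alpha) * u + alpha * h)
                     for u, h in zip(ui_rgb, hand_rgb))

    if __name__ == "__main__":
        print(back_touch_to_screen(10, 50))               # -> (469, 50)
        print(blend_pixel((255, 255, 255), (90, 60, 40))) # hand faintly visible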


user interface software and technology | 2003

TiltText: using tilt for text input to mobile phones

Daniel Wigdor; Ravin Balakrishnan

TiltText, a new technique for entering text into a mobile phone, is described. The standard 12-button text entry keypad of a mobile phone forces ambiguity when the 26-letter Roman alphabet is mapped in the traditional manner onto keys 2-9. The TiltText technique uses the orientation of the phone to resolve this ambiguity, by tilting the phone in one of four directions to choose which character on a particular key to enter. We first discuss implementation strategies, and then present the results of a controlled experiment comparing TiltText to MultiTap, the most common text entry technique. The experiment included 10 participants who each entered a total of 640 phrases of text chosen from a standard corpus, over a period of about five hours. The results show that text entry speed, including correction for errors, using TiltText was 23% faster than MultiTap by the end of the experiment, despite a higher error rate for TiltText. TiltText is thus amongst the fastest known language-independent techniques for entering text into mobile phones.
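
A minimal sketch of the disambiguation idea: the pressed key narrows the choice to its printed letters and the tilt direction selects one. The direction-to-letter assignment below is an assumption for illustration, not necessarily the paper's mapping:

    # Sketch of tilt-based disambiguation for a 12-button phone keypad.

    KEYPAD = {
        "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
    }
    DIRECTIONS = ("left", "away", "right", "toward")   # four tilt directions

    def tilt_text(key, tilt):
        """Return the character selected by pressing `key` while tilting the
        phone in direction `tilt`; no tilt falls back to the digit itself."""
        letters = KEYPAD[key]
        if tilt is None:
            return key                      # neutral orientation enters the digit
        index = DIRECTIONS.index(tilt)
        if index >= len(letters):
            raise ValueError(f"key {key} has no letter for tilt {tilt}")
        return letters[index]

    if __name__ == "__main__":
        print(tilt_text("7", "toward"))     # -> 's'
        print(tilt_text("5", None))         # -> '5'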


user interface software and technology | 2004

Multi-finger gestural interaction with 3d volumetric displays

Tovi Grossman; Daniel Wigdor; Ravin Balakrishnan

Volumetric displays provide interesting opportunities and challenges for 3D interaction and visualization, particularly when used in a highly interactive manner. We explore this area through the design and implementation of techniques for interactive direct manipulation of objects with a 3D volumetric display. Motion tracking of the user's fingers provides for direct gestural interaction with the virtual objects, through manipulations on and around the display's hemispheric enclosure. Our techniques leverage the unique features of volumetric displays, including a 360° viewing volume that enables manipulation from any viewpoint around the display, as well as natural and accurate perception of true depth information in the displayed 3D scene. We demonstrate our techniques within a prototype 3D geometric model building application.
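
A minimal sketch, under assumptions not stated in the abstract, of one way a tracked contact on the hemispheric enclosure could drive selection inside the volume: cast a ray from the contact point toward the center and pick the nearest object:

    # Sketch: map a surface contact on a hemispheric enclosure to a selection ray.
    import math

    RADIUS = 1.0   # hypothetical enclosure radius in display-space units

    def contact_to_ray(azimuth, elevation):
        """Convert a tracked contact (spherical angles on the enclosure) into a
        ray origin on the surface and a unit direction toward the center."""
        x = RADIUS * math.cos(elevation) * math.cos(azimuth)
        y = RADIUS * math.cos(elevation) * math.sin(azimuth)
        z = RADIUS * math.sin(elevation)
        norm = math.sqrt(x * x + y * y + z * z)
        return (x, y, z), (-x / norm, -y / norm, -z / norm)

    def pick(objects, origin, direction, max_dist=0.1):
        """Return the object center closest to the ray, or None if too far."""
        def dist_to_ray(p):
            # point-to-line distance via the cross product with the unit direction
            v = tuple(p[i] - origin[i] for i in range(3))
            c = (v[1] * direction[2] - v[2] * direction[1],
                 v[2] * direction[0] - v[0] * direction[2],
                 v[0] * direction[1] - v[1] * direction[0])
            return math.sqrt(sum(x * x for x in c))
        best = min(objects, key=dist_to_ray, default=None)
        return best if best is not None and dist_to_ray(best) <= max_dist else None

    if __name__ == "__main__":
        origin, direction = contact_to_ray(math.pi / 4, math.pi / 6)
        print(pick([(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)], origin, direction))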


advanced visual interfaces | 2008

Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces

Peter Brandl; Clifton Forlines; Daniel Wigdor; Michael Haller; Chia Shen

Many research projects have demonstrated the benefits of bimanual interaction for a variety of tasks. When choosing bimanual input, system designers must select the input device that each hand will control. In this paper, we argue for the use of pen and touch two-handed input, and describe an experiment in which users were faster and committed fewer errors using pen and touch input in comparison to using either touch and touch or pen and pen input while performing a representative bimanual task. We present design principles and an application in which we applied our design rationale toward the creation of a learnable set of bimanual, pen and touch input commands.
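
A minimal sketch, purely illustrative rather than the study software, of the division of labor the abstract argues for: events are routed by device, so the touch hand does coarse positioning while the pen hand performs precise strokes:

    # Sketch: route input events by device so each hand plays a distinct role.

    def handle_event(event, canvas):
        """Dispatch an event dict of the form
        {"device": "pen"|"touch", "type": ..., plus coordinates}."""
        if event["device"] == "touch":
            # coarse, non-dominant-hand work: pan the sheet under the pen
            if event["type"] == "drag":
                canvas["offset"] = (canvas["offset"][0] + event["dx"],
                                    canvas["offset"][1] + event["dy"])
        elif event["device"] == "pen":
            # precise, dominant-hand work: ink a point in sheet coordinates
            if event["type"] == "move":
                canvas["stroke"].append((event["x"] - canvas["offset"][0],
                                         event["y"] - canvas["offset"][1]))
        return canvas

    if __name__ == "__main__":
        canvas = {"offset": (0, 0), "stroke": []}
        handle_event({"device": "touch", "type": "drag", "dx": 10, "dy": 0}, canvas)
        handle_event({"device": "pen", "type": "move", "x": 25, "y": 40}, canvas)
        print(canvas)   # stroke point recorded relative to the panned sheet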


human factors in computing systems | 2009

WeSpace: the design development and deployment of a walk-up and share multi-surface visual collaboration system

Daniel Wigdor; Hao Jiang; Clifton Forlines; Michelle A. Borkin; Chia Shen

We present WeSpace -- a collaborative work space that integrates a large data wall with a multi-user multi-touch table. WeSpace has been developed for a population of scientists who frequently meet in small groups for data exploration and visualization. It provides a low overhead walk-up and share environment for users with their own personal applications and laptops. We present our year-long effort from initial ethnographic studies, to iterations of design, development and user testing, to the current experiences of these scientists carrying out their collaborative research in the WeSpace. We shed light on the utility, the value of the multi-touch table, the manifestation, usage patterns and the changes in their workflow that WeSpace has brought about.
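
A minimal sketch, assuming a hypothetical architecture rather than the published system, of the walk-up-and-share idea: laptops register as sources and the data wall tiles whatever is currently connected:

    # Sketch: tile connected laptop feeds across a shared data wall.
    from dataclasses import dataclass

    WALL_W, WALL_H = 7680, 2160   # assumed data-wall resolution

    @dataclass
    class Source:
        owner: str                # which laptop/scientist the feed belongs to

    def layout_wall(sources):
        """Give each connected source an equal-width column on the wall."""
        if not sources:
            return {}
        col_w = WALL_W // len(sources)
        return {s.owner: (i * col_w, 0, col_w, WALL_H)
                for i, s in enumerate(sources)}

    if __name__ == "__main__":
        print(layout_wall([Source("alice"), Source("bob"), Source("carol")]))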


conference on computer supported cooperative work | 2010

WeSearch: supporting collaborative search and sensemaking on a tabletop display

Meredith Ringel Morris; Jarrod Lombardo; Daniel Wigdor

Groups of users often have shared information needs -- for example, business colleagues need to conduct research relating to joint projects and students must work together on group homework assignments. In this paper, we introduce WeSearch, a collaborative Web search system designed to leverage the benefits of tabletop displays for face-to-face collaboration and organization tasks. We describe the design of WeSearch and explain the interactions it affords. We then describe an evaluation in which eleven groups used WeSearch to conduct real collaborative search tasks. Based on our study's findings, we analyze the effectiveness of the features introduced by WeSearch.


international world wide web conferences | 2002

Hunter gatherer: interaction support for the creation and management of within-web-page collections

m.c. schraefel; Yuxiang Zhu; David Modjeska; Daniel Wigdor; Shengdong Zhao

Hunter Gatherer is an interface that lets Web users carry out three main tasks: (1) collect components from within Web pages; (2) represent those components in a collection; (3) edit those component collections. Our research shows that while the practice of making collections of content from within Web pages is common, it is not frequent, due in large part to poor interaction support in existing tools. We engaged with users in task analysis as well as iterative design reviews in order to understand the interaction issues that are part of within-Web-page collection making and to design an interaction that would support that process. We report here on that design development, as well as on the evaluations of the tool that evolved from that process, and the future work stemming from these results, in which our critical question is: what happens to users' perceptions and expectations of web-based information (their web-based information management practices) when they can treat this information as harvestable, recontextualizable data, rather than as fixed pages?
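
A minimal sketch of the record a within-page collection plausibly needs to keep for each harvested component (the page, the location within the page, and the content itself); the field names are assumptions, not taken from the tool:

    # Sketch: represent within-page clippings and the collection that holds them.
    from dataclasses import dataclass, field

    @dataclass
    class Clipping:
        page_url: str      # the page the component came from
        selector: str      # where in the page it lives (e.g. a CSS selector)
        text: str          # the harvested content itself

    @dataclass
    class Collection:
        title: str
        items: list = field(default_factory=list)

        def collect(self, clipping):        # task 1: collect components
            self.items.append(clipping)

        def remove(self, index):            # task 3: edit the collection
            del self.items[index]

    if __name__ == "__main__":
        c = Collection("tabletop interaction")
        c.collect(Clipping("https://example.org/page", "#abstract", "We explore..."))
        print(c)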


interactive tabletops and surfaces | 2009

ShadowGuides: visualizations for in-situ learning of multi-touch and whole-hand gestures

Dustin Freeman; Hrvoje Benko; Meredith Ringel Morris; Daniel Wigdor

We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user's current hand posture as interpreted by the system (feedback) and available postures and completion paths necessary to finish the gesture (feedforward). Our experiment compared participants learning gestures with ShadowGuides to those learning with video-based instruction. We found that participants learning with ShadowGuides remembered more gestures and expressed significantly higher preference for the help system.
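
A minimal sketch of the feedback/feedforward split described above: feedback echoes the posture the recognizer currently sees, while feedforward lists the registered gestures still reachable from it together with their remaining completion steps. The gesture table is invented for illustration:

    # Sketch: compute feedback and feedforward from a recognized hand posture.

    GESTURES = {
        "rotate":  ["two-finger touch", "circular drag"],
        "zoom":    ["two-finger touch", "spread fingers"],
        "dismiss": ["flat palm", "swipe left"],
    }

    def guide(current_posture, steps_done=1):
        """Return (feedback, feedforward) for the posture the system recognizes."""
        feedback = f"recognized posture: {current_posture}"
        feedforward = {name: steps[steps_done:]
                       for name, steps in GESTURES.items()
                       if steps[:steps_done] == [current_posture]}
        return feedback, feedforward

    if __name__ == "__main__":
        fb, ff = guide("two-finger touch")
        print(fb)
        print(ff)   # rotate and zoom remain available, with their completion paths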


ieee international workshop on horizontal interactive human computer systems | 2006

Rotation and translation mechanisms for tabletop interaction

Mark S. Hancock; Frédéric Vernier; Daniel Wigdor; Sheelagh Carpendale; Chia Shen

A digital tabletop offers several advantages over other groupware form factors for collaborative applications. However, users of a tabletop system do not share a common perspective for the display of information: what is presented right side up to one participant is upside down for another. In this paper, we survey five different rotation and translation techniques for objects displayed on a direct touch digital tabletop display. We analyze their suitability for interactive tabletops in light of their respective input and output degrees of freedom, as well as the precision and completeness provided by each. We describe various tradeoffs that arise when considering which, when and where each of these techniques might be most useful.
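
A minimal sketch of one family of techniques such a survey covers, two-point rotation and translation, where one contact sets an object's position and the angle to a second contact sets its orientation; this illustrates the general idea rather than reimplementing any of the five surveyed techniques:

    # Sketch: derive an object's position and orientation from two touch points.
    import math

    def two_point_rt(primary, secondary):
        """Return (x, y, angle_radians) for an object held by two contacts:
        it follows the primary contact and rotates to face the secondary one."""
        x, y = primary
        angle = math.atan2(secondary[1] - y, secondary[0] - x)
        return x, y, angle

    if __name__ == "__main__":
        print(two_point_rt((100, 100), (150, 100)))   # angle 0.0: facing right
        print(two_point_rt((100, 100), (100, 150)))   # rotated 90 degrees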

Collaboration


Dive into Daniel Wigdor's collaborations.

Top Co-Authors


Clifton Forlines

Mitsubishi Electric Research Laboratories


Alan Esenther

Mitsubishi Electric Research Laboratories
