Publication


Featured research published by Edward Tse.


computers in entertainment | 2007

Multimodal multiplayer tabletop gaming

Edward Tse; Saul Greenberg; Chia Shen; Clifton Forlines

There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this article we argue that a digital table is a conducive form factor for general co-located home gaming as it affords: (a) seating in collaboratively relevant positions that give all equal opportunity to reach into the surface and share a common view; (b) rich whole-handed gesture input usually seen only when handling physical objects; (c) the ability to monitor how others use space and access objects on the surface; and (d) the ability to communicate with each other and interact on top of the surface via gestures and verbal utterance. Our thesis is that multimodal gesture and speech input benefits collaborative interaction over such a digital table. To investigate this thesis, we designed a multimodal, multiplayer gaming environment that allows players to interact directly atop a digital table via speech and rich whole-hand gestures. We transform two commercial single-player computer games, representing a strategy and simulation game genre, to work within this setting.
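The gesture-and-speech input this abstract describes implies some form of multimodal fusion: a spoken command and a roughly concurrent gesture must be paired before being dispatched as one action. As a rough illustration only (the paper's actual engine is not shown here, and all names below are hypothetical), a minimal time-window fusion step might look like:

```python
from dataclasses import dataclass

# Illustrative sketch of time-window multimodal fusion: a speech command
# ("move here") is paired with the nearest gesture event (a touch point)
# that occurred within a small window around it.

@dataclass
class GestureEvent:
    x: float
    y: float
    t: float  # timestamp in seconds

@dataclass
class SpeechEvent:
    command: str
    t: float

def fuse(speech: SpeechEvent, gestures: list[GestureEvent],
         window: float = 1.0):
    """Return (command, x, y) if a gesture falls within `window`
    seconds of the spoken command, else None."""
    candidates = [g for g in gestures if abs(g.t - speech.t) <= window]
    if not candidates:
        return None
    nearest = min(candidates, key=lambda g: abs(g.t - speech.t))
    return (speech.command, nearest.x, nearest.y)
```

The window size is the key design parameter: too short and deictic speech ("put it *here*") misses its gesture; too long and one user's speech can bind to another user's touch.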


advanced visual interfaces | 2006

Enabling interaction with single user applications through speech and gestures on a multi-user tabletop

Edward Tse; Chia Shen; Saul Greenberg; Clifton Forlines

Co-located collaborators often work over physical tabletops with rich geospatial information. Previous research shows that people use gestures and speech as they interact with artefacts on the table and communicate with one another. With the advent of large multi-touch surfaces, developers are now applying this knowledge to create appropriate technical innovations in digital table design. Yet they are limited by the difficulty of building a truly useful collaborative application from the ground up. In this paper, we circumvent this difficulty by: (a) building a multimodal speech and gesture engine around the Diamond Touch multi-user surface, and (b) wrapping existing, widely-used off-the-shelf single-user interactive spatial applications with a multimodal interface created from this engine. Through case studies of two quite different geospatial systems -- Google Earth and Warcraft III -- we show the new functionalities, feasibility and limitations of leveraging such single-user applications within a multi-user, multimodal tabletop. This research informs the design of future multimodal tabletop applications that can exploit single-user software conveniently available in the market. We also contribute (1) a set of technical and behavioural affordances of multimodal interaction on a tabletop, and (2) lessons learnt from the limitations of single user applications.
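Wrapping an unmodified single-user application, as this abstract describes, generally means translating recognized multimodal commands into the keyboard and mouse input the application already understands. A hypothetical mapping table, not taken from the paper (the command names and key bindings below are invented for illustration), might look like:

```python
# Hypothetical sketch: translating recognized multimodal commands into
# synthetic keyboard/mouse events for an unmodified single-user app.

KEY_BINDINGS = {
    "zoom in": ["+"],       # e.g. a map viewer's zoom key
    "zoom out": ["-"],
    "tilt up": ["PageUp"],
}

def translate(command: str, touch_point=None):
    """Return a list of synthetic input events for the wrapped app."""
    events = []
    if touch_point is not None:
        # Move the (single) system cursor to where the user touched,
        # so the app's own hit-testing applies.
        events.append(("mouse_move", touch_point))
    for key in KEY_BINDINGS.get(command, []):
        events.append(("key_press", key))
    return events
```

The single system cursor is also where the one-user-per-computer limitation shows: only one translated event stream can reach the wrapped application at a time.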


conference on computer supported cooperative work | 2004

Avoiding interference: how people use spatial separation and partitioning in SDG workspaces

Edward Tse; Jonathan Histon; Stacey D. Scott; Saul Greenberg

Single Display Groupware (SDG) lets multiple co-located people, each with their own input device, interact simultaneously over a single communal display. While SDG is beneficial, there is risk of interference: when two people are interacting in close proximity, one person can raise an interface component (such as a menu, dialog box, or movable palette) over another person's working area, thus obscuring and hindering the other's actions. Consequently, researchers have developed special-purpose interaction components to mitigate interference. Yet is interference common in practice? If not, then SDG versions of conventional interface components could prove more suitable. We hypothesize that collaborators spatially separate their activities to the extent that they partition their workspace into distinct areas when working on particular tasks, thus reducing the potential for interference. We tested this hypothesis by observing co-located people performing a set of collaborative drawing exercises in an SDG workspace, where we paid particular attention to the locations of their simultaneous interactions. We saw that spatial separation and partitioning occurred consistently and naturally across all participants, rarely requiring any verbal negotiation. Particular divisions of the space varied, influenced by seating position and task semantics. These results suggest that people naturally avoid interfering with one another by spatially separating their actions. This has design implications for SDG interaction techniques, especially in how conventional widgets can be adapted to an SDG setting.
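The notion of interference studied here can be made concrete by checking whether two users' working areas overlap. As a rough illustration only (this is not the paper's analysis method), each user's area can be approximated by the bounding box of their recent interaction points:

```python
# Illustrative sketch (not from the paper): flagging potential
# interference as overlap between two users' working areas, each
# approximated by the bounding box of that user's recent touch points.

def bbox(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def overlaps(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    # Axis-aligned rectangles intersect iff they overlap on both axes.
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def interference(points_a, points_b):
    """True if the two users' working areas intersect."""
    return overlaps(bbox(points_a), bbox(points_b))
```

Under the paper's finding that people spontaneously partition the workspace, such a check would fire rarely in practice.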


designing interactive systems | 2008

Exploring true multi-user multimodal interaction over a digital table

Edward Tse; Saul Greenberg; Chia Shen; Clifton Forlines; Ryo Kodama

True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study, where we implemented an application that supports the KJ creativity method as used by industrial designers. Four key design issues emerged that have a significant impact on how people would use such a multi-user multimodal system. First, parallel work is affected by the design of multimodal commands. Second, individual mode switches can be confusing to collaborators, especially if speech commands are used. Third, establishing personal and group territories can hinder particular tasks that require artefact neutrality. Finally, timing needs to be considered when designing joint multimodal commands. We also describe our model view controller architecture for true multi-user multimodal interaction.
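The model-view-controller architecture mentioned at the end of this abstract is not detailed here; as a loose sketch of the general idea only (class and method names are hypothetical), per-user controllers can tag each command with the user who issued it before routing it to a shared model:

```python
# Rough sketch of a model-view-controller split for true multi-user
# input, assuming only that each command carries the identity of the
# user who issued it. Names are illustrative, not from the paper.

class SharedModel:
    def __init__(self):
        self.notes = []          # shared state, e.g. KJ-method notes
        self.views = []          # views to refresh on every change

    def add_note(self, user, text):
        self.notes.append((user, text))
        for view in self.views:
            view.refresh(self)

class Controller:
    """One controller per user routes that user's commands to the model."""
    def __init__(self, user, model):
        self.user = user
        self.model = model

    def handle(self, command, payload):
        if command == "add note":
            self.model.add_note(self.user, payload)
```

Keeping one controller per user is what allows the parallel, simultaneous work the abstract discusses: commands from different people never have to share a single input channel.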


human factors in computing systems | 2007

How pairs interact over a multimodal digital table

Edward Tse; Chia Shen; Saul Greenberg; Clifton Forlines

Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single user applications. We contribute to the understanding of these environments in two ways. First, we saw that speech and gesture commands served double duty as both commands to the computer, and as implicit communication to others. Second, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, and they performed interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to the intricate understanding of multi-user multimodal digital table interaction.


Archive | 2009

Collaborative Tabletop Research and Evaluation

Chia Shen; Kathy Ryall; Clifton Forlines; Alan Esenther; Frédéric Vernier; Katherine Everitt; Mike Wu; Daniel Wigdor; Meredith Ringel Morris; Mark S. Hancock; Edward Tse

Tables provide a large and natural interface for supporting direct manipulation of visual content, for human-to-human interactions and for collaboration, coordination, and parallel problem solving. However, the direct-touch table metaphor also presents considerable challenges, including the need for input methods that transcend traditional mouse- and keyboard-based designs.


ieee international workshop on horizontal interactive human computer systems | 2007

Multimodal Split View Tabletop Interaction Over Existing Applications

Edward Tse; Saul Greenberg; Chia Shen; John Barnwell; Sam Shipman; Darren Leigh

While digital tables can be used with existing applications, they are typically limited by the one user per computer assumption of current operating systems. In this paper, we explore multimodal split view interaction - a tabletop whose surface is split into two adjacent projected views - that leverages how people can interact with three types of existing applications in this setting. Independent applications let people see and work on separate systems. Shared screens let people see a twinned view of a single user application. True groupware lets people work in parallel over large digital workspaces. Atop these, we add multimodal speech and gesture interaction capability to enhance interpersonal awareness during loosely coupled work.


Archive | 2005

Supporting Lightweight Customization for Meeting Environments

Edward Tse; Saul Greenberg

Digital wall-sized displays commonly support authoring and presentation in face-to-face meetings. Yet most meeting applications show not only meeting content (i.e., the material being developed) but authoring tools as well: the usual controls, palettes, and menus. Attendees are distracted when the author navigates the (usually complex) interface as part of the authoring process, and the tools themselves unnecessarily clutter the display. The problem is that current customization techniques are not suited for meeting environments, as complex customization interfaces take attention away from the meeting agenda, making customization a socially unacceptable practice.


IEEE Computer Graphics and Applications | 2006

Informing the Design of Direct-Touch Tabletops

Chia Shen; Kathy Ryall; Clifton Forlines; Alan Esenther; Frédéric Vernier; Katherine Everitt; Mike Wu; Daniel Wigdor; Meredith Ringel Morris; Mark S. Hancock; Edward Tse


australasian user interface conference | 2004

Rapidly prototyping Single Display Groupware through the SDGToolkit

Edward Tse; Saul Greenberg

Collaboration


Dive into Edward Tse's collaborations.

Top Co-Authors

Clifton Forlines, Mitsubishi Electric Research Laboratories

Mike Wu, University of Toronto

Alan Esenther, Mitsubishi Electric Research Laboratories

Kathy Ryall, Mitsubishi Electric Research Laboratories

Frédéric Vernier, Centre national de la recherche scientifique