
Publication


Featured research published by Edward C. Kaiser.


international conference on multimodal interfaces | 2003

Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality

Edward C. Kaiser; Alex Olwal; David R. McGee; Hrvoje Benko; Andrea Corradini; Xiaoguang Li; Philip R. Cohen; Steven Feiner

We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents. The referential agents employ visible or invisible volumes that can be attached to 3D trackers in the environment, and which use a time-stamped history of the objects that intersect them to derive statistics for ranking potential referents. We discuss the means by which the system supports mutual disambiguation of these modalities and information sources, and show through a user study how mutual disambiguation accounts for over 45% of the successful 3D multimodal interpretations. An accompanying video demonstrates the system in action.
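The referential agents' use of a time-stamped intersection history to rank potential referents can be sketched as follows. This is a hypothetical simplification, not the paper's implementation: the class name, the exponential decay weighting, and the API are all assumptions.

```python
from collections import defaultdict

class ReferentialAgent:
    """Sketch: rank potential referents from a time-stamped history of
    objects that intersected the agent's (visible or invisible) volume."""

    def __init__(self, decay=0.5):
        self.decay = decay    # weight decay per second of age (assumed)
        self.history = []     # (timestamp, object_id) pairs

    def record_intersection(self, object_id, timestamp):
        self.history.append((timestamp, object_id))

    def rank_referents(self, now):
        # Recency-weighted counts: recent intersections score higher.
        scores = defaultdict(float)
        for ts, obj in self.history:
            scores[obj] += self.decay ** (now - ts)
        return sorted(scores.items(), key=lambda kv: -kv[1])

agent = ReferentialAgent(decay=0.5)
agent.record_intersection("chair", timestamp=0.0)
agent.record_intersection("table", timestamp=1.0)
agent.record_intersection("table", timestamp=2.0)
print(agent.rank_referents(now=2.0)[0][0])  # "table"
```

In the real system these per-agent rankings would then be fused with the gesture and speech hypotheses during mutual disambiguation.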


international conference on computer communications | 2005

Design and implementation of network puzzles

Wu-chi Feng; Edward C. Kaiser; A. Luu

Client puzzles have been proposed in a number of protocols as a mechanism for mitigating the effects of distributed denial of service (DDoS) attacks. In order to provide protection against simultaneous attacks across a wide range of applications and protocols, however, such puzzles must be placed at a layer common to all of them: the network layer. Placing puzzles at the IP layer fundamentally changes the service paradigm of the Internet, allowing any device within the network to push load back onto those it is servicing. An advantage of network layer puzzles over previous puzzle mechanisms is that they can be applied to all traffic from malicious clients, making it possible to defend against arbitrary attacks as well as making previously voluntary mechanisms mandatory. In this paper, we outline goals which must be met for puzzles to be deployed effectively at the network layer. We then describe the design, implementation, and evaluation of a system that meets these goals by supporting efficient, fine-grained control of puzzles at the network layer. In particular, we describe modifications to existing puzzle protocols that allow them to work at the network layer, a hint-based hash-reversal puzzle that allows for the generation and verification of fine-grained puzzles at line speed in the fast path of high-speed routers, and an iptables implementation that supports transparent deployment at arbitrary locations in the network.
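The idea behind a hint-based hash-reversal puzzle can be illustrated with a toy sketch: the issuer publishes a hash of a secret value plus a hint that narrows the client's brute-force search to a small window, so puzzle difficulty is tunable while verification stays a single hash. This is an illustration only; the paper's puzzles operate on network traffic at line speed, and all names and parameters here are assumptions.

```python
import hashlib
import secrets

def make_puzzle(difficulty_bits=12):
    """Issuer side: pick a secret x, publish its hash plus a hint that
    narrows the client's search to a window of 2**difficulty_bits values."""
    x = (1 << 32) | secrets.randbits(32)
    target = hashlib.sha256(x.to_bytes(8, "big")).digest()
    hint = x - secrets.randbits(difficulty_bits)   # window start, <= x
    return target, hint, difficulty_bits

def solve_puzzle(target, hint, difficulty_bits):
    """Client side: brute-force the hinted window until the hash matches.
    Client work scales with the window size; verification is one hash."""
    for candidate in range(hint, hint + 2**difficulty_bits):
        if hashlib.sha256(candidate.to_bytes(8, "big")).digest() == target:
            return candidate
    return None

target, hint, bits = make_puzzle()
answer = solve_puzzle(target, hint, bits)
assert hashlib.sha256(answer.to_bytes(8, "big")).digest() == target
```

Widening or narrowing the hint window gives the fine-grained difficulty control the paper describes.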


acm multimedia | 2003

Panoptes: scalable low-power video sensor networking technologies

Wu-chi Feng; Brian Code; Edward C. Kaiser; Mike Shea; Wu-chang Feng; Louis Bavoil

This demonstration will show the video sensor networking technologies developed at the OGI School of Science and Engineering. The general purpose video sensors allow programmers to create application-specific filtering, power management, and event triggering mechanisms. The demo will show a handful of video sensors operating under a variety of conditions including intermittent network connectivity as one might see in an environmental observation application.
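An event-triggering mechanism of the kind such a sensor might run can be caricatured as simple frame differencing: sleep while the scene is static, wake the pipeline when enough pixels change. This is a hypothetical illustration, not Panoptes code; frames are flat pixel lists and both thresholds are assumptions.

```python
def motion_trigger(prev_frame, frame, threshold=0.1):
    """Fire an event when the fraction of pixels that changed between
    consecutive frames exceeds the threshold."""
    changed = sum(1 for a, b in zip(prev_frame, frame) if abs(a - b) > 16)
    return changed / len(frame) > threshold

idle = [0] * 100
active = [0] * 88 + [255] * 12       # 12% of pixels changed
print(motion_trigger(idle, idle))    # False: scene static, sensor can sleep
print(motion_trigger(idle, active))  # True: wake the capture pipeline
```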


international conference on multimodal interfaces | 2004

A multimodal learning interface for sketch, speak and point creation of a schedule chart

Edward C. Kaiser; David Demirdjian; Alexander Gruenstein; Xiaoguang Li; John Niekrasz; Matt Wesson; Sanjeev Kumar

We present a video demonstration of an agent-based test bed application for ongoing research into multi-user, multimodal, computer-assisted meetings. The system tracks a two-person scheduling meeting: one person standing at a touch-sensitive whiteboard creating a Gantt chart, while another person looks on in view of a calibrated stereo camera. The stereo camera performs real-time, untethered, vision-based tracking of the onlooker's head, torso and limb movements, which in turn are routed to a 3D-gesture recognition agent. Using speech, 3D deictic gesture and 2D object de-referencing, the system is able to track the onlooker's suggestion to move a specific milestone. The system also has a speech recognition agent capable of recognizing out-of-vocabulary (OOV) words as phonetic sequences. Thus when a user at the whiteboard speaks an OOV label name for a chart constituent while also writing it, the OOV speech is combined with letter sequences hypothesized by the handwriting recognizer to yield an orthography, pronunciation and semantics for the new label. These are then learned dynamically by the system and become immediately available for future recognition.
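The combination of handwriting letter hypotheses with phonetically recognized OOV speech can be approximated by a much simpler sketch: score each handwriting hypothesis against a letter-ized rendering of the phone sequence and keep the closest. The function names and the naive phone-to-letter mapping are assumptions; the actual system fuses full recognizer lattices.

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def fuse_hypotheses(handwriting_hyps, phone_letters):
    """Pick the handwriting hypothesis closest to the (letter-ized)
    phone sequence produced by the OOV speech recognizer."""
    return min(handwriting_hyps, key=lambda h: edit_distance(h, phone_letters))

# Handwriting recognizer proposes letter sequences; the spoken phones are
# crudely mapped to letters (here simply "kanban" for illustration).
hyps = ["kanben", "kanban", "hanhan"]
print(fuse_hypotheses(hyps, "kanban"))  # "kanban"
```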


international conference on acoustics speech and signal processing | 1999

PROFER: predictive, robust finite-state parsing for spoken language

Edward C. Kaiser; Michael Johnston; Peter A. Heeman

The natural language processing component of a speech understanding system is commonly a robust, semantic parser, implemented as either a chart-based transition network, or as a generalized left-right (GLR) parser. In contrast, we are developing a robust, semantic parser that is a single, predictive finite-state machine. Our approach is motivated by our belief that such a finite-state parser can ultimately provide an efficient vehicle for tightly integrating higher-level linguistic knowledge into speech recognition. We report on our development of this parser, with an example of its use, and a description of how it compares to both finite-state predictors and chart-based semantic parsers, while combining the elements of both.
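A toy finite-state semantic parser in this spirit might look like the following sketch: transitions consume matching tokens and fill semantic slots, while out-of-grammar tokens are skipped for robustness. The grammar, slot names, and skip-based robustness policy are illustrative assumptions, not PROFER itself.

```python
# Transition table: (state, token) -> (next_state, optional (slot, value)).
TRANSITIONS = {
    ("start", "move"):       ("object", None),
    ("object", "milestone"): ("date", ("action", "move-milestone")),
    ("date", "friday"):      ("done", ("target-day", "friday")),
}

def parse(tokens):
    state, slots = "start", {}
    for tok in tokens:
        step = TRANSITIONS.get((state, tok))
        if step is None:
            continue  # robustness: skip tokens the grammar cannot place
        state, slot = step
        if slot:
            slots[slot[0]] = slot[1]
    return slots

print(parse("please move that milestone to friday".split()))
# {'action': 'move-milestone', 'target-day': 'friday'}
```

Because every step is a table lookup, such a machine can also run predictively alongside a speech recognizer, which is the integration the paper argues for.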


network and system support for games | 2008

Stealth measurements for cheat detection in on-line games

Wu-chang Feng; Edward C. Kaiser; Travis T. Schluessler

As a result of physically owning the client machine, cheaters in network games currently have the upper hand when it comes to avoiding detection by anti-cheat software. To address this problem and turn the tables on cheaters, this paper examines an approach for cheat detection based on the use of stealth measurements via tamper-resistant hardware. To support this approach, we examine a range of cheat methods and a number of measurements that such hardware could perform to detect them.
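One measurement in this spirit is a code-integrity check, sketched here in Python purely for illustration. In practice the measurement would run in tamper-resistant hardware outside the reach of the host OS, and the byte values below are made up.

```python
import hashlib

def measure(code_bytes):
    """Hash a region of the game's code; a mismatch against the
    known-good baseline indicates the client binary was patched."""
    return hashlib.sha256(code_bytes).hexdigest()

clean = bytes([0x55, 0x48, 0x89, 0xE5, 0xC3])  # stand-in code bytes
baseline = measure(clean)

patched = bytearray(clean)
patched[0] ^= 0xFF                              # a cheat flips one byte
print(measure(bytes(patched)) == baseline)      # False: tampering detected
```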


international conference on multimodal interfaces | 2006

Using redundant speech and handwriting for learning new vocabulary and understanding abbreviations

Edward C. Kaiser

New language constantly emerges from complex, collaborative human-human interactions like meetings, such as when a presenter handwrites a new term on a whiteboard while saying it. Fixed-vocabulary recognizers fail on such new terms, which often are critical to dialogue understanding. We present a proof-of-concept multimodal system that combines information from handwriting and speech recognition to learn the spelling, pronunciation and semantics of out-of-vocabulary terms from single instances of redundant multimodal presentation (e.g. saying a term while handwriting it). For the task of recognizing the spelling and semantics of abbreviated Gantt chart labels across a held-out test series of five scheduling meetings, we show a significant relative error rate reduction of 37% when our learning methods are used and allowed to persist across the meeting series, as opposed to when they are not used.
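The abbreviation-understanding side of this can be caricatured with a subsequence test: accept a handwritten abbreviation if its letters occur in order within the redundantly spoken term. This is a deliberately crude stand-in for the paper's phonetic alignment, and the function name is an assumption.

```python
def is_abbreviation(abbrev, term):
    """True if the handwritten abbreviation's letters occur in order
    within the spoken term (a simple subsequence test)."""
    remaining = iter(term.lower())
    return all(ch in remaining for ch in abbrev.lower())

# Presenter writes "reqs" on the chart while saying "requirements":
print(is_abbreviation("reqs", "requirements"))  # True
print(is_abbreviation("reqs", "schedule"))      # False
```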


international conference on multimodal interfaces | 2006

Collaborative multimodal photo annotation over digital paper

Paulo Barthelmess; Edward C. Kaiser; Xiao Huang; David McGee; Philip R. Cohen

The availability of metadata annotations over media content such as photos is known to enhance retrieval and organization, particularly for large data sets. The greatest challenge for obtaining annotations remains getting users to perform the large amount of tedious manual work that is required. In this paper we introduce an approach for semi-automated labeling based on extraction of metadata from naturally occurring conversations of groups of people discussing pictures among themselves. As the burden for structuring and extracting metadata is shifted from users to the system, new recognition challenges arise. We explore how multimodal language can help in 1) detecting a concise set of meaningful labels to be associated with each photo, 2) achieving robust recognition of these key semantic terms, and 3) facilitating label propagation via multimodal shortcuts. Analysis of the data of a preliminary pilot collection suggests that handwritten labels may be highly indicative of the semantics of each photo, as indicated by the correlation of handwritten terms with high frequency spoken ones. We point to initial directions exploring a multimodal fusion technique to recover robust spelling and pronunciation of these high-value terms from redundant speech and handwriting.
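The observed correlation between handwritten labels and frequently spoken terms suggests a simple label filter, sketched here under stated assumptions: whitespace tokenization, a case-insensitive match, and an arbitrary frequency threshold.

```python
from collections import Counter

def candidate_labels(handwritten_terms, spoken_transcript, min_count=2):
    """Keep handwritten terms that also occur frequently in the speech
    around a photo: multimodal redundancy marks them as meaningful labels."""
    freq = Counter(spoken_transcript.lower().split())
    return [t for t in handwritten_terms if freq[t.lower()] >= min_count]

speech = "look at the beach here the beach was amazing and so sunny"
print(candidate_labels(["beach", "xyz"], speech))  # ['beach']
```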


network and system support for games | 2009

PlayerRating: A reputation system for multiplayer online games

Edward C. Kaiser; Wu-chang Feng

In multiplayer online games, players interact with each other using aliases, which unfortunately enable antisocial behavior. Vague rules and limited policing mean that only the very worst offenders are ever disciplined. This paper presents PlayerRating, a distributed reputation system specifically designed for online games. It leverages the prior experiences of a player's peers to determine the reputability of all other peers, allowing well-behaved players to safely congregate and avoid interaction with antisocial peers. The system has been implemented as an interface add-on for the game World of Warcraft and is evaluated theoretically and experimentally.
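A peer-weighted reputation scheme of this general shape can be sketched as a small fixed-point iteration: each player's score is the average of ratings about them, weighted by how reputable the raters themselves are, so a griefer's down-votes carry little weight. The weighting function, rating scale, and iteration count are assumptions, not PlayerRating's actual algorithm.

```python
def reputation(ratings, iterations=10):
    """`ratings` maps rater -> {ratee: value in [-1, 1]}.  Returns a
    reputation score per player, computed by repeatedly re-weighting
    each rating by the rater's own current score."""
    players = set(ratings) | {p for r in ratings.values() for p in r}
    rep = {p: 1.0 for p in players}           # start everyone trusted
    for _ in range(iterations):
        new = {}
        for p in players:
            # Map rater score [-1, 1] to weight [0, 1].
            weighted = [((rep[r] + 1) / 2, v)
                        for r, rated in ratings.items()
                        for q, v in rated.items() if q == p]
            total = sum(w for w, _ in weighted)
            new[p] = sum(w * v for w, v in weighted) / total if total else rep[p]
        rep = new
    return rep

ratings = {
    "alice":   {"bob": 1.0, "mallory": -1.0},
    "bob":     {"alice": 1.0, "mallory": -1.0},
    "mallory": {"alice": -1.0, "bob": -1.0},  # griefer down-votes everyone
}
rep = reputation(ratings)
print(rep["mallory"] < 0 < rep["bob"])  # True
```

Unlike this global sketch, the real system computes scores locally at each client from whatever ratings have propagated to it.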


Proceedings of the 1st ACM international workshop on Human-centered multimedia | 2006

Human-centered collaborative interaction

Paulo Barthelmess; Edward C. Kaiser; Rebecca Lunsford; David McGee; Philip R. Cohen; Sharon Oviatt

Recent years have witnessed an increasing shift in interest from single-user multimedia/multimodal interfaces towards support for interaction among groups of people working closely together, e.g. during meetings or problem-solving sessions. However, the introduction of technology to support collaborative practices has not been devoid of problems. It is not uncommon that technology meant to support collaboration may introduce disruptions and reduce group effectiveness. Human-centered multimedia and multimodal approaches hold a promise of providing substantially enhanced user experiences by focusing attention on human perceptual and motor capabilities, and on actual user practices. In this paper we examine the problem of providing effective support for collaboration, focusing on the role of human-centered approaches that take advantage of multimodality and multimedia. We show illustrative examples that demonstrate human-centered multimodal and multimedia solutions that provide mechanisms for dealing with the intrinsic complexity of human-human interaction support.

Collaboration


Explore Edward C. Kaiser's collaborations.

Top Co-Authors

Paulo Barthelmess (University of Colorado Boulder)
Wu-chang Feng (Portland State University)
Wu-chi Feng (Portland State University)
Ronald A. Cole (University of Colorado Boulder)