Publications


Featured research published by Jennifer Lai.


Meeting of the Association for Computational Linguistics | 1991

Aligning sentences in parallel corpora

Peter F. Brown; Jennifer Lai; Robert L. Mercer

In this paper we describe a statistical technique for aligning sentences with their translations in two parallel corpora. In addition to certain anchor points that are available in our data, the only information about the sentences that we use for calculating alignments is the number of tokens that they contain. Because we make no use of the lexical details of the sentences, the alignment computation is fast and therefore practical for application to very large collections of text. We have used this technique to align several million sentences in the English-French Hansard corpora and have achieved an accuracy in excess of 99% in a randomly selected set of 1000 sentence pairs that we checked by hand. We show that even without the benefit of anchor points the correlation between the lengths of aligned sentences is strong enough that we should expect to achieve an accuracy of between 96% and 97%. Thus, the technique may be applicable to a wider variety of texts than we have yet tried.
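
A minimal sketch of the paper's core idea, aligning sentences by token count alone via dynamic programming over alignment "beads", appears below. The bead penalties and the Gaussian-style length-mismatch cost are illustrative assumptions, not the distributions estimated from Hansard data in the paper.

```python
# Length-based sentence alignment sketch: only per-sentence token counts are
# used, as in Brown, Lai & Mercer (1991). The bead priors and the Gaussian-
# style mismatch cost below are assumed values for illustration.
import math

def bead_cost(src_len, tgt_len):
    """Penalty for pairing a source span of src_len tokens with a target
    span of tgt_len tokens (lower is better); assumed length statistics."""
    if src_len == 0 and tgt_len == 0:
        return 0.0
    mean, var = 1.0, 0.25                       # assumed length-ratio stats
    delta = (tgt_len - src_len * mean) / math.sqrt(var * max(src_len, 1))
    return delta * delta                        # negative log Gaussian, up to constants

def align(src_lens, tgt_lens):
    """Dynamic program over 1-1, 1-0, 0-1, 2-1 and 1-2 beads;
    returns aligned (source-span, target-span) index pairs."""
    INF = float("inf")
    beads = [(1, 1, 0.0), (1, 0, 4.0), (0, 1, 4.0), (2, 1, 2.0), (1, 2, 2.0)]
    n, m = len(src_lens), len(tgt_lens)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            for di, dj, penalty in beads:
                if i + di > n or j + dj > m:
                    continue
                c = cost[i][j] + penalty + bead_cost(
                    sum(src_lens[i:i + di]), sum(tgt_lens[j:j + dj]))
                if c < cost[i + di][j + dj]:
                    cost[i + di][j + dj] = c
                    back[i + di][j + dj] = (di, dj)
    pairs, i, j = [], n, m                      # trace back the best bead path
    while i > 0 or j > 0:
        di, dj = back[i][j]
        pairs.append(((i - di, i), (j - dj, j)))
        i, j = i - di, j - dj
    return list(reversed(pairs))

# Toy usage: token counts of five source and five target sentences.
print(align([12, 7, 20, 5, 9], [13, 8, 11, 10, 6]))
```

Because the cost depends only on token counts, each cell of the dynamic program is constant-time, which is what makes the method practical for corpora of millions of sentences.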


Communications of the ACM | 2004

Guidelines for multimodal user interface design

Leah Reeves; Jennifer Lai; James A. Larson; Sharon L. Oviatt; T. S. Balaji; Stéphanie Buisine; Penny Collings; Philip R. Cohen; Ben J. Kraal; Jean-Claude Martin; Michael F. McTear; Thiru Vilwamalai Raman; Kay M. Stanney; Hui Su; Qian Ying Wang



International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2004

Presence versus availability: the design and evaluation of a context-aware communication client

James Fogarty; Jennifer Lai; Jim Christensen

Although electronic communication plays an important role in the modern workplace, the interruptions created by poorly-timed attempts to communicate are disruptive. Prior work suggests that sharing an indication that a person is currently busy might help to prevent such interruptions, because people could wait for a person to become available before attempting to initiate communication. We present a context-aware communication client that uses the built-in microphones of laptop computers to sense nearby speech. Combining this speech detection sensor data with location, computer, and calendar information, our system models availability for communication, a concept that is distinct from the notion of presence found in widely-used systems. In a four-week study of the system with 26 people, we examined the use of this additional context. To our knowledge, this is the first field study to quantitatively examine how people use automatically sensed context and availability information to make decisions about when and how to communicate with colleagues. Participants appear to have used the provided context as an indication of presence, rather than considering availability. Our results raise the interesting question of whether sharing an indication that a person is currently unavailable will actually reduce inappropriate interruptions.
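
As an illustration of the kind of context fusion the client performs, the sketch below combines the four context sources named in the abstract (speech detection, location, computer activity, calendar) into a single availability score. The weights, the `availability` function itself, and its signature are assumptions for illustration; the paper's actual model is not reproduced here.

```python
# Illustrative fusion of sensed context into one availability score.
# All weights are assumed values, not the paper's model.
def availability(speech_detected: bool, in_office: bool,
                 computer_active: bool, in_meeting: bool) -> float:
    """Return a 0..1 availability-for-communication estimate."""
    score = 1.0
    if speech_detected:   # nearby speech suggests a conversation in progress
        score -= 0.5
    if in_meeting:        # calendar shows a scheduled meeting
        score -= 0.4
    if not in_office:     # location data says the person is away
        score -= 0.3
    if computer_active:   # recent keyboard/mouse activity: present and reachable
        score += 0.1
    return max(0.0, min(1.0, score))

print(availability(speech_detected=True, in_office=True,
                   computer_active=True, in_meeting=False))  # -> 0.6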


Human Factors in Computing Systems | 2004

Examining the robustness of sensor-based statistical models of human interruptibility

James Fogarty; Scott E. Hudson; Jennifer Lai

Current systems often create socially awkward interruptions or unduly demand attention because they have no way of knowing if a person is busy and should not be interrupted. Previous work has examined the feasibility of using sensors and statistical models to estimate human interruptibility in an office environment, but left open some questions about the robustness of such an approach. This paper examines several dimensions of robustness in sensor-based statistical models of human interruptibility. We show that real sensors can be constructed with sufficient accuracy to drive the predictive models. We also create statistical models for a much broader group of people than was studied in prior work. Finally, we examine the effects of training data quantity on the accuracy of these models and consider tradeoffs associated with different combinations of sensors. As a whole, our analyses demonstrate that sensor-based statistical models of human interruptibility can provide robust estimates for a variety of office workers in a range of circumstances, and can do so with accuracy as good as or better than people. Integrating these models into systems could support a variety of advances in human-computer interaction and computer-mediated communication.
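
The sketch below illustrates the general recipe the paper evaluates: simple binary sensor readings fed to an off-the-shelf statistical classifier that predicts interruptibility. The feature set, the synthetic training rows, and the choice of a decision tree are illustrative assumptions, not the paper's exact sensors or model.

```python
# Sensor-based interruptibility estimation sketch. The features, labels, and
# data below are assumed for illustration; the paper uses real office sensors.
from sklearn.tree import DecisionTreeClassifier

# Each row: [speech_detected, phone_off_hook, keyboard_active, door_open, guest_present]
X = [
    [1, 0, 0, 0, 1],  # talking with a guest
    [0, 1, 0, 1, 0],  # on the phone
    [0, 0, 1, 1, 0],  # typing, door open
    [0, 0, 0, 1, 0],  # idle, door open
    [1, 1, 0, 0, 0],  # speaking on the phone
    [0, 0, 1, 0, 0],  # typing, door closed
]
y = [0, 0, 1, 1, 0, 0]  # 1 = interruptible, 0 = do not interrupt (assumed labels)

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(model.predict([[0, 0, 0, 1, 1]]))  # estimate for a new sensor reading
```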


Human Factors in Computing Systems | 1997

MedSpeak: report creation with continuous speech recognition

Jennifer Lai; John Vergo

MedSpeak/Radiology is a product that allows radiologists to create, edit and manage reports using real-time, continuous speech recognition. Speech is used both to navigate through the application and to dictate reports. The system is multi-modal, accepting input by voice, mouse or keyboard. This paper reports on how we addressed the critical user need of high throughput in our interface design, and ways of supporting both error prevention and error correction with continuous speech. User studies suggest that for this task there was low tolerance for accuracy less than 100%, but the additional time required for corrections was considered by many radiologists to be acceptable in view of the overall reduction in report turnaround time.


Human Factors in Computing Systems | 2001

On the road and on the Web?: comprehension of synthetic and human speech while driving

Jennifer Lai; Karen Cheng; Paul Green; Omer Tsimhoni

In this study 24 participants drove a simulator while listening to three types of messages in both synthesized speech and recorded human speech. The messages consisted of short navigation messages, medium-length (approximately 100 words) email messages, and longer news stories (approximately 200 words). After each message the participant was presented with a series of multiple-choice questions to measure comprehension of the message. Driving performance was recorded. Findings show that for the low driving workload conditions in the study (cruise control, predictable two-lane road with no intersections, invariant lead car), driving performance was not affected by listening to messages. This was true for both the synthesized speech and natural speech. Comprehension of messages in synthetic speech was significantly lower than for recorded human speech for all message types.


Human Factors in Computing Systems | 2001

Shall we mix synthetic speech and human speech?: impact on users' performance, perception, and attitude

Li Gong; Jennifer Lai

Because it is impractical to record human voice for ever-changing dynamic content such as email messages and news, many commercial speech applications use human speech for fixed prompts and synthetic speech (TTS) for the dynamic content. However, this mixing approach may not be optimal from a consistency perspective. A 2-condition between-group experiment (N = 24) was conducted to compare two versions of a virtual-assistant interface (mixing human voice and TTS vs. TTS-only). Users interacted with the virtual assistant to manage some email and calendar tasks. Their task performance, self-perception of task performance, and attitudinal responses were measured. Users interacting with the TTS-only interface performed the task significantly better, while users interacting with the mixed-voices interface thought they did better and had more positive attitudinal responses. Explanations and design implications are suggested.


Human Factors in Computing Systems | 2000

The effect of task conditions on the comprehensibility of synthetic speech

Jennifer Lai; David Wood; Michael Considine

A study was conducted with 78 subjects to evaluate the comprehensibility of synthetic speech for various tasks ranging from short, simple e-mail messages to longer news articles on mostly obscure topics. Comprehension accuracy for each subject was measured for synthetic speech and for recorded human speech. Half the subjects were allowed to take notes while listening; the other half were not. Findings show that there was no significant difference in comprehension of synthetic speech among the five different text-to-speech engines used. Subjects who did not take notes performed significantly worse for all synthetic voice tasks when compared to recorded speech tasks. Performance for synthetic speech in the non-note-taking condition degraded as the task got longer and more complex. When taking notes, subjects also did significantly worse within the synthetic voice condition averaged across all six tasks. However, average performance scores for the last three tasks in this condition show comparable results for human and synthetic speech, reflective of a training effect.


International Conference on Multimedia and Expo | 2009

Who is the expert? Analyzing gaze data to predict expertise level in collaborative applications

Yan Liu; Pei-Yun Hsueh; Jennifer Lai; Mirweis Sangin; Marc-Antoine Nüssli; Pierre Dillenbourg

In this paper, we analyze complex gaze tracking data in a collaborative task and apply machine learning models to automatically predict skill-level differences between participants. Specifically, we present findings that address the two primary challenges for this prediction task: (1) extracting meaningful features from the gaze information, and (2) casting the prediction task as a machine learning (ML) problem. The results show that our approach based on profile hidden Markov models is up to 96% accurate and can make the determination as fast as one minute into the collaboration, with only 5% of gaze observations registered. We also provide a qualitative analysis of gaze patterns that reveal the relative expertise level of the paired users in a collaborative learning user study.
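
The sketch below shows the likelihood-based classification scheme in miniature: fit one HMM per expertise class on gaze traces, then label a new trace by whichever model scores it higher. The synthetic 2-D gaze coordinates and the plain Gaussian HMMs (via the hmmlearn package) are illustrative assumptions; the paper uses profile hidden Markov models over richer gaze features.

```python
# One-HMM-per-class expertise prediction sketch over gaze sequences.
# Synthetic data and plain Gaussian HMMs stand in for the paper's profile HMMs.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
expert_gaze = rng.normal([0.3, 0.3], 0.05, size=(200, 2))   # tight fixations
novice_gaze = rng.normal([0.5, 0.5], 0.20, size=(200, 2))   # scattered gaze

# Fit one model per class on that class's gaze observations.
expert_hmm = GaussianHMM(n_components=3, random_state=0).fit(expert_gaze)
novice_hmm = GaussianHMM(n_components=3, random_state=0).fit(novice_gaze)

# Classify an unseen trace by comparing per-model log-likelihoods.
trace = rng.normal([0.32, 0.28], 0.06, size=(60, 2))
label = "expert" if expert_hmm.score(trace) > novice_hmm.score(trace) else "novice"
print(label)
```

Scoring a partial trace is what allows an early decision: the log-likelihood comparison can be run on however many observations have arrived, which is consistent with the paper's one-minute, 5%-of-observations result.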


Human Factors in Computing Systems | 2004

Facilitating mobile communication with multimodal access to email messages on a cell phone

Jennifer Lai

This paper reports on a user trial (N=17) that compares the use of two systems for accessing email messages on a telephone handset. The first system uses graphic output and telephone keypad input, while the second system has both graphic and speech output, with keypad and speech as input. To our knowledge, this trial represents the first evaluation of a fully functioning multimodal system that uses natural language understanding on a phone, and was dependent on the 3G network currently available in Australia. Participants saw significantly greater value in the multimodal interaction, and rated their experience with the multimodal system significantly more positively than the unimodal system. They were also significantly more inclined to use and recommend the multimodal system over the current unimodal product offering. While we expected to see some mixed usage of modalities in the multimodal system, participants used speech predominantly, falling back to GUI selection only after encountering multiple speech recognition failures in a row.

