Publication


Featured research published by Rachel Coulston.


Human Factors in Computing Systems | 2005

Individual differences in multimodal integration patterns: what are they and why do they exist?

Sharon Oviatt; Rebecca Lunsford; Rachel Coulston

Techniques for information fusion are at the heart of multimodal system design. To develop new user-adaptive approaches for multimodal fusion, the present research investigated the stability and underlying cause of major individual differences that have been documented between users in their multimodal integration pattern. Longitudinal data were collected from 25 adults as they interacted with a map system over six weeks. Analyses of 1,100 multimodal constructions revealed that everyone had a dominant integration pattern, either simultaneous or sequential, which was 95-96% consistent and remained stable over time. In addition, coherent behavioral and linguistic differences were identified between these two groups. Whereas performance speed was comparable, sequential integrators made only half as many errors and excelled during new or complex tasks. Sequential integrators also had more precise articulation (e.g., fewer disfluencies), although their speech rate was no slower. Finally, sequential integrators more often adopted terse and direct command-style language, with a smaller and less varied vocabulary, which appeared focused on achieving error-free communication. These distinct interaction patterns are interpreted as deriving from fundamental differences in reflective-impulsive cognitive style. Implications of these findings are discussed for the design of adaptive multimodal systems with substantially improved performance characteristics.
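The simultaneous-versus-sequential distinction above can be illustrated with a small sketch. This is not the authors' analysis code; it assumes each multimodal construction is reduced to a pair of timestamped intervals (speech and pen input) and labels the construction "simultaneous" if the intervals overlap, "sequential" otherwise, then takes the majority label as the user's dominant integration pattern:

```python
# Hedged illustration: classify each multimodal construction by
# whether its speech and pen intervals overlap in time, then report
# the user's dominant pattern and how consistent it is.

def label(speech, pen):
    """speech/pen are (start, end) times in seconds."""
    overlap = min(speech[1], pen[1]) - max(speech[0], pen[0])
    return "simultaneous" if overlap > 0 else "sequential"

def dominant_pattern(constructions):
    labels = [label(s, p) for s, p in constructions]
    top = max(set(labels), key=labels.count)
    consistency = labels.count(top) / len(labels)
    return top, consistency

data = [((0.0, 1.2), (0.5, 1.0)),   # overlapping -> simultaneous
        ((2.0, 3.0), (3.4, 3.9)),   # gap -> sequential
        ((5.0, 6.1), (5.2, 5.8))]   # overlapping -> simultaneous
pattern, consistency = dominant_pattern(data)
print(pattern)  # simultaneous
```

A system adapting to individual differences, as the paper proposes, could use such a classification to choose between parallel and serial fusion strategies per user.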


International Conference on Multimodal Interfaces | 2002

Multimodal interaction during multiparty dialogues: initial results

Philip R. Cohen; Rachel Coulston; Kelly Krout

Groups of people involved in collaboration on a task often incorporate the objects in their mutual environment into their discussion. With this comes physical reference to these 3-D objects, including: gesture, gaze, haptics, and possibly other modalities, over and above the speech we commonly associate with human-human communication. From a technological perspective, this human style of communication not only poses the challenge for researchers to create multimodal systems capable of integrating input from various modalities, but also to do it well enough that it supports, but does not interfere with the primary goal of the collaborators, which is their own human-human interaction. This paper offers a first step towards building such multimodal systems for supporting face-to-face collaborative work by providing both qualitative and quantitative analyses of multiparty multimodal dialogues in a field setting.


International Conference on Multimodal Interfaces | 2004

Multimodal interaction under exerted conditions in a natural field setting

Sanjeev Kumar; Philip R. Cohen; Rachel Coulston

This paper evaluates the performance of a multimodal interface under exerted conditions in a natural field setting. The subjects in the present study engaged in a strenuous activity while multimodally performing map-based tasks using handheld computing devices. This activity made the users breathe heavily and become fatigued during the course of the study. We found that the performance of both speech and gesture recognizers degraded as a function of exertion, while the overall multimodal success rate was stable. This stabilization is accounted for by the mutual disambiguation of modalities, which increases significantly with exertion. The system performed better for subjects with a greater level of physical fitness, as measured by their running speed, with more stable multimodal performance and a later degradation of speech and gesture recognition as compared with subjects who were less fit. The findings presented in this paper have a significant impact on design decisions for multimodal interfaces targeted towards highly mobile and exerted users in field environments.
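The stabilizing mechanism named above, mutual disambiguation, can be sketched as n-best list fusion. The example below is an illustrative toy, not the system described in the paper: each recognizer returns an n-best list of (hypothesis, score), and the fusion step keeps only semantically compatible speech/gesture pairs, so a lower-ranked but compatible hypothesis can win over a modality's erroneous top choice. The command names and compatibility set are invented for the example:

```python
# Toy mutual-disambiguation sketch: rescore joint hypotheses from two
# modality n-best lists, pruning semantically inconsistent pairs.

def fuse(speech_nbest, gesture_nbest, compatible):
    """Return the highest-scoring jointly compatible (speech, gesture) pair."""
    best, best_score = None, float("-inf")
    for s_hyp, s_score in speech_nbest:
        for g_hyp, g_score in gesture_nbest:
            if not compatible(s_hyp, g_hyp):
                continue  # prune pairs that make no joint sense
            score = s_score * g_score
            if score > best_score:
                best, best_score = (s_hyp, g_hyp), score
    return best

# The top speech hypothesis "zoom" (0.6) pairs with "circle area"
# for a joint score of 0.18, but the lower-ranked "place flag" (0.4)
# pairs with the top gesture "point" (0.7) for 0.28 and wins.
speech = [("zoom", 0.6), ("place flag", 0.4)]
gesture = [("point", 0.7), ("circle area", 0.3)]
ok = {("place flag", "point"), ("zoom", "circle area")}
print(fuse(speech, gesture, lambda s, g: (s, g) in ok))
# ('place flag', 'point')
```

This is why overall multimodal success can stay stable even as each individual recognizer degrades under exertion: errors in one modality are repaired by constraints from the other.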


Journal of the Acoustical Society of America | 2003

Predicting children’s hyperarticulate speech during human-computer error resolution

Sharon L. Oviatt; Rachel Coulston; Courtney Darves

When speaking to interactive systems, people sometimes hyperarticulate, or adopt a clarified form of speech that has been associated with increased recognition errors. The goal of the present study was to provide a comprehensive assessment of the type and magnitude of linguistic adaptations in children’s speech during human-computer error resolution, and to compare these adaptations with those typical of adult hyperarticulation. A study was conducted in which twenty-four 7- to 10-year-old children interacted with a simulated conversational system, which permitted a comparison of their verbatim repetitions immediately before and after system recognition errors. Matched original-repeat utterance pairs then were analyzed for acoustic, prosodic, and phonological adaptations. Like adult speech, the primary hyperarticulate changes in children’s speech included durational phenomena such as lengthening of pauses and the speech segment, and a more deliberate, hyper-clear articulatory style. However, children’s spe...


International Conference on Multimodal Interfaces | 2004

When do we interact multimodally?: cognitive load and multimodal communication patterns

Sharon Oviatt; Rachel Coulston; Rebecca Lunsford


International Conference on Multimodal Interfaces | 2003

Toward a theory of organized multimodal integration patterns during human-computer interaction

Sharon Oviatt; Rachel Coulston; Stefanie Tomko; Benfang Xiao; Rebecca Lunsford; R. Matthews Wesson; Lesley M. Carmichael


ACM Transactions on Computer-Human Interaction | 2004

Toward adaptive conversational interfaces: Modeling speech convergence with animated personas

Sharon L. Oviatt; Courtney Darves; Rachel Coulston


Conference of the International Speech Communication Association | 2002

Amplitude convergence in children’s conversational speech with animated personas

Rachel Coulston; Sharon L. Oviatt; Courtney Darves


International Conference on Multimodal Interfaces | 2003

Modeling multimodal integration patterns and performance in seniors: toward adaptive processing of individual differences

Benfang Xiao; Rebecca Lunsford; Rachel Coulston; R. Matthews Wesson; Sharon Oviatt


International Conference on Multimodal Interfaces | 2005

Audio-visual cues distinguishing self- from system-directed speech in younger and older adults

Rebecca Lunsford; Sharon Oviatt; Rachel Coulston

Collaboration


Dive into Rachel Coulston’s collaborations.
