Rebecca Lunsford
Oregon Health & Science University
Publications
Featured research published by Rebecca Lunsford.
human factors in computing systems | 2005
Sharon Oviatt; Rebecca Lunsford; Rachel Coulston
Techniques for information fusion are at the heart of multimodal system design. To develop new user-adaptive approaches for multimodal fusion, the present research investigated the stability and underlying cause of major individual differences that have been documented between users in their multimodal integration pattern. Longitudinal data were collected from 25 adults as they interacted with a map system over six weeks. Analyses of 1,100 multimodal constructions revealed that everyone had a dominant integration pattern, either simultaneous or sequential, which was 95-96% consistent and remained stable over time. In addition, coherent behavioral and linguistic differences were identified between these two groups. Whereas performance speed was comparable, sequential integrators made only half as many errors and excelled during new or complex tasks. Sequential integrators also had more precise articulation (e.g., fewer disfluencies), although their speech rate was no slower. Finally, sequential integrators more often adopted terse and direct command-style language, with a smaller and less varied vocabulary, which appeared focused on achieving error-free communication. These distinct interaction patterns are interpreted as deriving from fundamental differences in reflective-impulsive cognitive style. Implications of these findings are discussed for the design of adaptive multimodal systems with substantially improved performance characteristics.
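The simultaneous-versus-sequential distinction above can be made concrete with a small sketch. This is a hypothetical illustration, not the authors' analysis code; the timestamp layout and function names are assumptions.

```python
# Hypothetical sketch: classify a user's dominant multimodal integration
# pattern from speech/pen timestamps. Each construction is assumed to be
# ((speech_start, speech_end), (pen_start, pen_end)) in seconds.
from collections import Counter

def integration_pattern(speech, pen):
    """Label one construction 'simultaneous' if the speech and pen
    intervals overlap at all, otherwise 'sequential'."""
    s_start, s_end = speech
    p_start, p_end = pen
    overlaps = s_start < p_end and p_start < s_end
    return "simultaneous" if overlaps else "sequential"

def dominant_pattern(constructions):
    """Return (dominant label, consistency ratio) for one user's data."""
    labels = Counter(integration_pattern(speech, pen) for speech, pen in constructions)
    label, count = labels.most_common(1)[0]
    return label, count / sum(labels.values())

# Example: two overlapping (simultaneous) constructions and one sequential one.
user_data = [((0.0, 1.2), (0.5, 2.0)),
             ((3.0, 4.0), (3.1, 3.9)),
             ((6.0, 7.0), (7.4, 8.0))]
print(dominant_pattern(user_data))  # ('simultaneous', 0.666...)
```

The consistency ratio is the analogue of the 95-96% figure reported in the abstract: the fraction of a user's constructions that match their dominant pattern.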
international conference on multimodal interfaces | 2006
Alexander M. Arthur; Rebecca Lunsford; Matt Wesson; Sharon Oviatt
To support research and development of next-generation multimodal interfaces for complex collaborative tasks, a comprehensive new infrastructure has been created for collecting and analyzing time-synchronized audio, video, and pen-based data during multi-party meetings. The infrastructure needs to be unobtrusive, to collect rich data from multiple information sources with high temporal fidelity, and to support the collection and annotation of simulation-driven studies of natural human-human-computer interaction. Furthermore, it must be flexibly extensible to facilitate exploratory research. This paper describes both the infrastructure put in place to record, encode, play back, and annotate the meeting-related media data, and the simulation environment used to prototype novel system concepts.
international conference on multimodal interfaces | 2006
Rebecca Lunsford; Sharon Oviatt; Alexander M. Arthur
There currently is considerable interest in developing new open-microphone engagement techniques for speech and multimodal interfaces that perform robustly in complex mobile and multiparty field environments. State-of-the-art audio-visual open-microphone engagement systems aim to eliminate the need for explicit user engagement by processing more implicit cues that a user is addressing the system, which results in lower cognitive load for the user. This is an especially important consideration for mobile and educational interfaces due to the higher load required by explicit system engagement. In the present research, longitudinal data were collected with six triads of high-school students who engaged in peer tutoring on math problems with the aid of a simulated computer assistant. Results revealed that amplitude was 3.25 dB higher when users addressed a computer rather than a human peer when no lexical marker of intended interlocutor was present, and 2.4 dB higher for all data. These basic results were replicated for both matched and adjacent utterances to computer versus human partners. With respect to dialogue style, speakers did not direct a higher ratio of commands to the computer, although such dialogue differences have been assumed in prior work. Results of this research reveal that amplitude is a powerful cue marking a speaker's intended addressee, which should be leveraged to design more effective microphone engagement during computer-assisted multiparty interactions.
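As a rough sketch of how the reported amplitude cue might be leveraged for microphone engagement: this is an illustration only, and the baseline estimation, threshold, and function names are assumptions rather than part of the paper.

```python
# Illustrative sketch: use relative amplitude as a cue that an utterance is
# addressed to the computer rather than a human peer.
import math

def mean_amplitude_db(samples):
    """Mean RMS amplitude of an utterance, in dB relative to full scale."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))

def addressed_to_computer(utterance_db, human_directed_baseline_db, margin_db=2.4):
    """Treat the utterance as computer-directed when it is louder than the
    speaker's human-directed baseline by at least margin_db. The 2.4 dB
    default echoes the average difference reported across all data."""
    return utterance_db - human_directed_baseline_db >= margin_db
```

In practice such a cue would be combined with lexical and visual evidence; the sketch only shows the thresholding idea.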
Proceedings of the 1st ACM international workshop on Human-centered multimedia | 2006
Paulo Barthelmess; Edward C. Kaiser; Rebecca Lunsford; David McGee; Philip R. Cohen; Sharon Oviatt
Recent years have witnessed an increasing shift in interest from single-user multimedia/multimodal interfaces towards support for interaction among groups of people working closely together, e.g. during meetings or problem-solving sessions. However, the introduction of technology to support collaborative practices has not been devoid of problems. It is not uncommon that technology meant to support collaboration may introduce disruptions and reduce group effectiveness. Human-centered multimedia and multimodal approaches hold the promise of providing substantially enhanced user experiences by focusing attention on human perceptual and motor capabilities, and on actual user practices. In this paper we examine the problem of providing effective support for collaboration, focusing on the role of human-centered approaches that take advantage of multimodality and multimedia. We show illustrative examples that demonstrate human-centered multimodal and multimedia solutions that provide mechanisms for dealing with the intrinsic complexity of human-human interaction support.
conference of the international speech communication association | 2016
Peter A. Heeman; Rebecca Lunsford; Andy McMillin; J. Scott Yaruss
In treating people who stutter, clinicians often have their clients read a story in order to determine their stuttering frequency. As the client is speaking, the clinician annotates each disfluency. For further analysis of the client's speech, it is useful to have a word transcription of what was said. However, as these are real-time annotations, they are not always correct, and they usually lag where the actual disfluency occurred. We have built a tool that rescores a word lattice taking into account the clinician's annotations. In this paper, we describe how we incorporate the clinician's annotations, and the improvement over a baseline version. This approach of leveraging clinician annotations can be used for other clinical tasks where a word transcription is useful for further or richer analysis.
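A heavily simplified sketch of the underlying idea, rescoring recognition hypotheses against lagging clinician annotations: this is a toy illustration under assumed data structures, not the authors' lattice-rescoring implementation.

```python
# Toy illustration: boost hypotheses whose disfluent words line up with a
# clinician annotation, allowing for the annotation's lag behind the event.
def rescore(hypotheses, annotation_times, max_lag=2.0, bonus=1.0):
    """hypotheses: list of (score, words), where each word is
    (token, start_time, is_disfluent). An annotation at time t is assumed
    to refer to a disfluency that began up to max_lag seconds earlier."""
    best = None
    for score, words in hypotheses:
        explained = sum(
            1 for t in annotation_times
            if any(disfluent and t - max_lag <= start <= t
                   for _, start, disfluent in words)
        )
        total = score + bonus * explained
        if best is None or total > best[0]:
            best = (total, words)
    return best  # (rescored score, word sequence) of the best hypothesis
```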
conference of the international speech communication association | 2016
Rebecca Lunsford; Peter A. Heeman; Emma Rennie
This paper examines the pauses, gaps and overlaps associated with turn-taking in order to better understand how people engage in this activity, which should lead to more natural and effective spoken dialogue systems. This paper makes three advances in studying these durations. First, we take into account the type of turn-taking event, carefully treating interruptions, dual starts, and delayed backchannels, as these can make it appear that turn-taking is more disorderly than it really is. Second, we do not view turn-transitions in isolation, but consider turn-transitions and turn-continuations together, as equal alternatives of what could have occurred. Third, we use the distributions of turn-transition and turn-continuation offsets (gaps, overlaps, and pauses) to shed light on the extent to which turn-taking is negotiated by the two conversants versus controlled by the current speaker.
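A minimal sketch of how turn-transition and turn-continuation offsets could be computed from timestamped utterances; the data layout is an assumption for illustration and the paper's corpus and annotation scheme are not reproduced here.

```python
# Assumed input: a list of (speaker, start, end) utterances ordered by start
# time. Negative offsets are overlaps; positive offsets are gaps (between
# speakers) or pauses (within a speaker's own turn).
def offsets(utterances):
    transitions, continuations = [], []
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(utterances, utterances[1:]):
        offset = start_b - end_a
        (continuations if spk_a == spk_b else transitions).append(offset)
    return transitions, continuations

turns = [("A", 0.0, 2.1), ("B", 2.3, 4.0), ("B", 4.6, 5.5), ("A", 5.3, 7.0)]
trans, cont = offsets(turns)
print(trans)  # approximately [0.2, -0.2]: a 200 ms gap, then a 200 ms overlap
print(cont)   # [0.6]: a 600 ms pause before B continues the turn
```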
conference of the international speech communication association | 2016
Emma Rennie; Rebecca Lunsford; Peter A. Heeman
Although so is a recognized discourse marker, little work has explored its uses in turn-taking, especially when it is not followed by additional speech. In this paper we explore the use of the discourse marker so as it pertains to turn-taking and turn-releasing. Specifically, we compare the duration and intensity of so when used to take a turn, mid-utterance, and when releasing a turn. We found that durations of turn-retaining tokens are generally shorter than turn-releases; we also found that turn-retaining tokens tend to be lower in intensity than the following speech. These trends of turn-taking behavior alongside certain lexical and prosodic features may prove useful for the development of speech-recognition software.
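A small illustrative sketch of the duration and intensity comparison described above; the token data layout and field names are assumptions, not the paper's.

```python
# Group "so" tokens by role and report mean duration plus the mean intensity
# difference between the token and the speech that follows it.
from statistics import mean

def summarize_so_tokens(tokens):
    """tokens: dicts with 'role' ('retain' or 'release'), 'duration_s',
    'so_db' (token intensity), and 'following_db' (following speech)."""
    by_role = {}
    for tok in tokens:
        by_role.setdefault(tok["role"], []).append(tok)
    return {
        role: {
            "mean_duration_s": mean(t["duration_s"] for t in group),
            "mean_rise_db": mean(t["following_db"] - t["so_db"] for t in group),
        }
        for role, group in by_role.items()
    }
```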
international conference on multimodal interfaces | 2004
Sharon Oviatt; Rachel Coulston; Rebecca Lunsford
international conference on multimodal interfaces | 2003
Sharon Oviatt; Rachel Coulston; Stefanie Tomko; Benfang Xiao; Rebecca Lunsford; R. Matthews Wesson; Lesley M. Carmichael
international conference on multimodal interfaces | 2003
Benfang Xiao; Rebecca Lunsford; Rachel Coulston; R. Matthews Wesson; Sharon Oviatt