Publication


Featured research published by Ellen Gurman Bard.


Language | 1996

Magnitude Estimation of Linguistic Acceptability.

Ellen Gurman Bard; David Robertson; Antonella Sorace

Judgments of linguistic acceptability constitute an important source of evidence for theoretical and applied linguistics, but are typically elicited and represented in ways which limit their utility. This paper describes how MAGNITUDE ESTIMATION, a technique used in psychophysics, can be adapted for eliciting acceptability judgments. Magnitude estimation of linguistic acceptability is shown to solve the measurement scale problems which plague conventional techniques; to provide data which make fine distinctions robustly enough to yield statistically significant results of linguistic interest; to be usable in a consistent way by linguistically naive speaker-hearers; and to allow replication across groups of subjects. Methodological pitfalls are discussed and suggestions are offered for new approaches to the analysis and measurement of linguistic acceptability.
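
As a purely illustrative sketch (not code from the paper), magnitude-estimation judgments are often made comparable across participants by expressing each rating relative to the rating the same participant gave a reference ("modulus") sentence and then log-transforming; the function name and sample numbers below are invented for the example.

    import math

    def normalize_magnitude_estimates(ratings, modulus_rating):
        """Normalize one participant's magnitude-estimation ratings.

        Each raw rating is divided by the rating the same participant gave
        to the reference (modulus) sentence, then log-transformed so that
        ratios of acceptability become differences on a common scale.
        """
        return [math.log10(r / modulus_rating) for r in ratings]

    # Hypothetical participant: modulus judged 50, three test sentences 25, 50, 100.
    print(normalize_magnitude_estimates([25, 50, 100], 50))
    # [-0.301..., 0.0, 0.301...]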


Cognitive Science | 2012

Behavior matching in multimodal communication is synchronized

Max M. Louwerse; Rick Dale; Ellen Gurman Bard; Patrick Jeuniaux

A variety of theoretical frameworks predict the resemblance of behaviors between two people engaged in communication, in the form of coordination, mimicry, or alignment. However, little is known about the time course of the behavior matching, even though there is evidence that dyads synchronize oscillatory motions (e.g., postural sway). This study examined the temporal structure of nonoscillatory actions (language, facial, and gestural behaviors) produced during a route communication task. The focus was the temporal relationship between matching behaviors in the interlocutors (e.g., facial behavior in one interlocutor vs. the same facial behavior in the other interlocutor). Cross-recurrence analysis revealed that within each category tested (language, facial, gestural), interlocutors synchronized matching behaviors, at temporal lags short enough to provide imitation of one interlocutor by the other, from one conversational turn to the next. Both social and cognitive variables predicted the degree of temporal organization. These findings suggest that the temporal structure of matching behaviors provides low-level and low-cost resources for human interaction.
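
As a rough sketch of the general idea behind categorical cross-recurrence (my own illustration, not the authors' analysis code), each interlocutor's behavior stream can be coded into time bins and matching codes counted at a range of temporal lags; the variable names and toy data below are hypothetical.

    def cross_recurrence_profile(series_a, series_b, max_lag):
        """Proportion of matching behavior codes at each lag.

        series_a, series_b: equal-length lists of categorical codes
        (e.g., which facial expression, if any, occurs in each time bin).
        Positive lags mean series_b trails series_a.
        """
        n = len(series_a)
        profile = {}
        for lag in range(-max_lag, max_lag + 1):
            matches = total = 0
            for i in range(n):
                j = i + lag
                if 0 <= j < n:
                    total += 1
                    matches += (series_a[i] == series_b[j])
            profile[lag] = matches / total if total else float("nan")
        return profile

    # Hypothetical example: interlocutor B mirrors A's smile one bin later.
    a = ["smile", "none", "none", "smile", "none", "none"]
    b = ["none", "smile", "none", "none", "smile", "none"]
    print(cross_recurrence_profile(a, b, max_lag=2))  # peak at lag = +1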


Attention Perception & Psychophysics | 1988

The recognition of words after their acoustic offsets in spontaneous speech: Effects of subsequent context

Ellen Gurman Bard; Richard Shillcock; Gerry T. M. Altmann

Three experiments are presented that investigated the recognition of words after their acoustic offsets in conversational speech. Utterances randomly selected from the speech of 24 individuals (total N = 288) were gated in one-word increments and heard by 12 listeners each. Of the successful recognitions, 21% occurred after the acoustic offset of the word in question and in the presence of subsequent context. The majority of late recognitions implicate subsequent context in the recognition process. Late recognitions were distributed nonrandomly with respect to the characteristics of the stimulus word tokens. Control experiments demonstrated that late recognitions were not artifacts of eliminating discourse context, of imposing artificial word boundaries, or of repeating words within successive gated presentations. The effects could be replicated only if subsequent context was available. The implications are discussed for models of word recognition in continuous speech.


Human-Robot Interaction | 2008

The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue

Mary Ellen Foster; Ellen Gurman Bard; Markus Guhe; Robin L. Hill; Jon Oberlander; Alois Knoll

Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with an increasing amount of recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully-elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context; however, other sources of contextual information are not normally used. Also, the generated references normally consist only of language and, possibly, deictic pointing gestures. When referring to objects in the context of a task-based interaction involving jointly manipulating objects, a much richer notion of context is available, which permits a wider range of referring options. In particular, when conversational partners cooperate on a mutual task in a shared environment, objects can be made accessible simply by manipulating them as part of the task. We demonstrate that such expressions are common in a corpus of human-human dialogues based on constructing virtual objects, and then describe how this type of reference can be incorporated into the output of a humanoid robot that engages in similar joint construction dialogues with a human partner.
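
A hypothetical sketch of the underlying choice logic (not the authors' implemented system): when the robot is already manipulating the target object, a minimal haptic-ostensive reference suffices; otherwise a deictic or fully elaborated description is needed. All names below are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class ObjectContext:
        """Hypothetical context for one candidate referent."""
        name: str                # e.g., "red cube"
        being_manipulated: bool  # robot is currently holding or moving it
        mutually_visible: bool   # both partners can see it

    def choose_reference(obj: ObjectContext) -> str:
        """Sketch of a context-sensitive referring-expression choice.

        Manipulating an object already makes it accessible, so a minimal
        haptic-ostensive reference ("this one", uttered while handling it)
        suffices; otherwise fall back to richer descriptions.
        """
        if obj.being_manipulated:
            return "this one"                     # haptic-ostensive
        if obj.mutually_visible:
            return f"that {obj.name}"             # deictic plus short description
        return f"the {obj.name} we used earlier"  # fully elaborated linguistic reference

    print(choose_reference(ObjectContext("red cube", True, True)))  # -> "this one"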


International Conference on Spoken Language Processing | 1996

The DCIEM map task corpus: spontaneous dialogue under sleep deprivation and drug treatment

Ellen Gurman Bard; Catherine Sotillo; Anne H. Anderson; M. Taylor

This paper describes a resource designed for the general study of spontaneous speech under the stress of sleep deprivation. It is a corpus of 216 unscripted task-oriented dialogues produced by normal adults in the course of a major sleep deprivation study. The study itself examined continuous task performance through baseline, sleepless and recovery periods by groups treated with placebo or one of two drugs (Modafinil, d-amphetamine) reputed to counter the effects of sleep deprivation. The dialogues were all produced while carrying out the route communication task used in the HCRC Map Task Corpus. Pairs of talkers collaborated to reproduce on one partner's schematic map a route preprinted on the other's. Controlled differences between the maps and use of labelled imaginary locations limit genre, vocabulary and effects of real-world knowledge. The designs for the construction of maps and the allocation of subjects to maps make the corpus a controlled elicitation experiment. Each talker participated in 12 dialogues over the course of the study. Preliminary examinations of dialogue length and task performance measures indicate effects of drug treatment, sleep deprivation and number of conversational partners. The corpus is available to researchers interested in all levels of speech and dialogue analysis, in both normal and stressed conditions.


Behavior Research Methods | 2010

Eyetracking for two-person tasks with manipulation of a virtual world.

Jean Carletta; Robin L. Hill; Craig Nicol; Tim Taylor; Jan de Ruiter; Ellen Gurman Bard

Eyetracking facilities are typically restricted to monitoring a single person viewing static images or prerecorded video. In the present article, we describe a system that makes it possible to study visual attention in coordination with other activity during joint action. The software links two eyetracking systems in parallel and provides an on-screen task. By locating eye movements against dynamic screen regions, it permits automatic tracking of moving on-screen objects. Using existing SR technology, the system can also cross-project each participant’s eyetrack and mouse location onto the other’s on-screen work space. Keeping a complete record of eyetrack and on-screen events in the same format as subsequent human coding, the system permits the analysis of multiple modalities. The software offers new approaches to spontaneous multimodal communication: joint action and joint attention. These capacities are demonstrated using an experimental paradigm for cooperative on-screen assembly of a two-dimensional model. The software is available under an open source license.
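
For illustration only (not the published software), the core bookkeeping of locating gaze against dynamic screen regions can be sketched as checking whether a gaze sample falls inside an on-screen object's logged bounding box at the sample's timestamp; the data layout below is an assumption.

    from bisect import bisect_right

    def object_box_at(time_ms, position_log):
        """Return the (x, y, w, h) box of a moving object at a given time.

        position_log: list of (timestamp_ms, x, y, w, h) entries sorted by
        time; the most recent entry at or before time_ms is taken to hold.
        """
        times = [t for t, *_ in position_log]
        i = bisect_right(times, time_ms) - 1
        if i < 0:
            return None
        _, x, y, w, h = position_log[i]
        return (x, y, w, h)

    def gaze_on_object(gaze_x, gaze_y, time_ms, position_log):
        """True if a gaze sample lands inside the object's box at that time."""
        box = object_box_at(time_ms, position_log)
        if box is None:
            return False
        x, y, w, h = box
        return x <= gaze_x <= x + w and y <= gaze_y <= y + h

    # Hypothetical log: the object moves right between 0 ms and 500 ms.
    log = [(0, 100, 100, 50, 50), (500, 200, 100, 50, 50)]
    print(gaze_on_object(120, 120, 250, log))  # True: inside the 0 ms box
    print(gaze_on_object(220, 120, 600, log))  # True: inside the 500 ms box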


Journal of Child Language | 1994

The unintelligibility of speech to children: effects of referent availability

Ellen Gurman Bard; Anne H. Anderson

Speech addressed to children is supposed to be helpfully redundant, but redundant or predictable words addressed to adults tend to lose intelligibility. Word tokens extracted from the spontaneous speech of the parents of 12 children aged 1;10 to 3;0 and presented in isolation to adult listeners showed loss of intelligibility when the words were redundant because they had occurred in repetitions of an utterance (Experiment 1) or referred to an entity which was physically present when named (Experiment 2). Though children (N = 64; mean age 3;5, S.D. 6.1 months) recognized fewer excerpted object names than adults (N = 40), less intelligible tokens appeared to induce child listeners to rely on the words' extra-linguistic context during the recognition process (Experiment 3), much as such tokens normally induce adults to rely on discourse context. It is proposed that interpreting parental utterances with reference to non-verbal context furthers linguistic development.


Attention Perception & Psychophysics | 1997

Limited visual control of the intelligibility of speech in face-to-face dialogue

Anne H. Anderson; Ellen Gurman Bard; Catherine Sotillo; Alison Newlands; G. Doherty-Sneddon

Speakers are thought to articulate individual words in running speech less carefully whenever additional nonacoustic information can help listeners recognize what is said (Fowler & Housum, 1987; Lieberman, 1963). Comparing single words excerpted from spontaneous dialogues and control tokens of the same words read by the same speakers in lists, Experiment 1 yielded a significant but general effect of visual context: Tokens introducing 71 new entities in dialogues in which participants could see one another's faces were more degraded (less intelligible to 54 naive listeners) than were tokens of the same words from dialogues with sight lines blocked. Loss of clarity was not keyed to moment-to-moment visual behavior. Subjects with clear sight lines looked at each other too rarely to account for the observed effect. Experiment 2 revealed that tokens of 60 words uttered while subjects were looking at each other were significantly less degraded (in length and in intelligibility to 72 subjects) vis-à-vis controls than were spontaneous tokens of the same words produced when subjects were looking elsewhere. Intelligibility loss was mitigated only when listeners looked at speakers. Two separate visual effects are discussed, one of the global availability and the other of the local use of the interlocutor's face.


Human Language Technology | 1993

The HCRC Map Task corpus: natural dialogue for speech recognition

Henry S. Thompson; Anne H. Anderson; Ellen Gurman Bard; G. Doherty-Sneddon; Alison Newlands; Catherine Sotillo

The HCRC Map Task corpus has been collected and transcribed in Glasgow and Edinburgh, and recently published on CD-ROM. This effort was made possible by funding from the British Economic and Social Research Council. The corpus is composed of 128 two-person conversations in both high-quality digital audio and orthographic transcriptions, amounting to 18 hours and 150,000 words respectively. The experimental design is quite detailed and complex, allowing a number of different phonemic, syntactico-semantic and pragmatic contrasts to be explored in a controlled way. The corpus is a uniquely valuable resource for speech recognition research in particular, as we move from developing systems intended for controlled use by familiar users to systems intended for less constrained circumstances and naive or occasional users. Examples supporting this claim are given, including preliminary evidence of the phonetic consequences of second mention and the impact of different styles of referent negotiation on communicative efficacy.


Resuscitation | 2014

Dispatch-assisted CPR: Where are the hold-ups during calls to emergency dispatchers? A preliminary analysis of caller–dispatcher interactions during out-of-hospital cardiac arrest using a novel call transcription technique

Gareth Clegg; Richard Lyon; Scott James; Holly P. Branigan; Ellen Gurman Bard; Gerry Egan

BACKGROUND Survival from out-of-hospital cardiac arrest (OHCA) is dependent on the chain of survival. Early recognition of cardiac arrest and provision of bystander cardiopulmonary resuscitation (CPR) are key determinants of OHCA survival. Emergency medical dispatchers play a key role in cardiac arrest recognition and giving telephone CPR advice. The interaction between caller and dispatcher can influence the time to bystander CPR and the quality of resuscitation. We sought to pilot the use of emergency call transcription to audit and evaluate the hold-ups in performing dispatch-assisted CPR. METHODS A retrospective case selection of 50 consecutive suspected OHCA was performed. Audio recordings of calls were downloaded from the emergency medical dispatch centre computer database. All calls were transcribed using proprietary software and voice dialogue was compared with the corresponding stage on the Medical Priority Dispatch System (MPDS). Time to progress through each stage and the number of caller-dispatcher interactions were calculated. RESULTS Of the 50 downloaded calls, 47 were confirmed cases of OHCA. Call transcription was successfully completed for all OHCA calls. Bystander CPR was performed in 39 (83%) of these. In the remaining cases, the caller decided the patient was beyond help (n = 7) or the caller said that they were physically unable to perform CPR (n = 1). MPDS stages varied substantially in time to completion. Stage 9 (determining if the patient is breathing through airway instructions) took the longest time to complete (median = 59 s, IQR 22-82 s). Stage 11 (giving CPR instructions) also took relatively long to complete compared to the other stages (median = 46 s, IQR 37-75 s). Stage 5 (establishing the patient's age) took the shortest time to complete (median = 5.5 s, IQR 3-9 s). CONCLUSION Transcription of OHCA emergency calls and comparison of caller-dispatcher interaction to MPDS stage is feasible. Confirming whether a patient is breathing and completing CPR instructions required the longest time and the most interactions between caller and dispatcher. Use of call transcription has the potential to identify key factors in caller-dispatcher interaction that could improve time to CPR, and further research is warranted in this area.
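
As a hypothetical sketch of the timing analysis (invented data, not the study's), per-stage durations can be derived from the transcribed stage start times of each call and then summarized with medians and interquartile ranges.

    from statistics import median, quantiles

    def stage_durations(stage_timestamps):
        """Seconds spent in each MPDS stage of one call.

        stage_timestamps: list of (stage_number, start_time_s) in call order;
        each stage is taken to end when the next one starts.
        """
        durations = {}
        for (stage, start), (_, nxt) in zip(stage_timestamps, stage_timestamps[1:]):
            durations[stage] = nxt - start
        return durations

    def summarize(per_call_durations, stage):
        """Median and interquartile range of one stage's duration across calls."""
        values = [d[stage] for d in per_call_durations if stage in d]
        q1, _, q3 = quantiles(values, n=4)
        return median(values), (q1, q3)

    # Hypothetical calls with start times (s) for stages 5, 9, 11, and 12.
    calls = [stage_durations([(5, 0), (9, 6), (11, 65), (12, 110)]),
             stage_durations([(5, 0), (9, 4), (11, 55), (12, 100)]),
             stage_durations([(5, 0), (9, 5), (11, 60), (12, 105)])]
    print(summarize(calls, 9))  # median and IQR of stage-9 durations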

Collaboration


Dive into Ellen Gurman Bard's collaboration.

Top Co-Authors

Alice Turk (University of Edinburgh)

Markus Guhe (University of Edinburgh)