Publication


Featured research published by Amy Isard.


IEEE Intelligent Systems | 2003

Speaking the users' languages

Amy Isard; Jon Oberlander; Colin Matheson; Ion Androutsopoulos

The authors describe a system that generates descriptions of museum objects tailored to the user. The texts presented to adults, children, and experts differ in several ways, from the choice of words to the complexity of the sentence forms. M-PIRO can currently generate text in three languages: English, Greek, and Italian. The grammar resources are as language-independent as possible. M-PIRO's system architecture is significantly more modular than that of its predecessor, ILEX. In particular, the linguistic resources, database, and user-modeling subsystems are now separate from the components that perform natural language generation and speech synthesis.


International Conference on Multimodal Interfaces | 2012

Two people walk into a bar: dynamic multi-party social interaction with a robot agent

Mary Ellen Foster; Andre Gaschler; Manuel Giuliani; Amy Isard; Maria Pateraki; Ronald P. A. Petrick

We introduce a humanoid robot bartender that is capable of dealing with multiple customers in a dynamic, multi-party social setting. The robot system incorporates state-of-the-art components for computer vision, linguistic processing, state management, high-level reasoning, and robot control. In a user evaluation, 31 participants interacted with the bartender in a range of social situations. Most customers successfully obtained a drink from the bartender in all scenarios, and the factors that had the greatest impact on subjective satisfaction were task success and dialogue efficiency.


Speech Communication | 2001

The MATE workbench – An annotation tool for XML coded speech corpora

David McKelvie; Amy Isard; Andreas Mengel; Morten Baun Møller; Michael Grosse; Marion Klein

This paper describes the design and implementation of the MATE workbench, a program which provides support for the annotation of speech and text. It provides facilities for flexible display and editing of such annotations, and complex querying of a resulting corpus. The workbench offers a more flexible approach than most existing annotation tools, which were often designed with a specific annotation scheme in mind. Any annotation scheme can be used with the MATE workbench, provided it is coded using XML markup (linked to the speech signal, if available, using certain conventions). The workbench uses a transformation language to define specialised editors optimised for particular annotation tasks, with suitable display formats and allowable editing operations tailored to the task. The workbench is written in Java, which means that it is platform-independent. This paper outlines the design of the workbench software and compares it with other annotation programs.


International Conference on Natural Language Generation | 2006

Individuality and Alignment in Generated Dialogues

Amy Isard; Carsten Brockmann; Jon Oberlander

It would be useful to enable dialogue agents to project, through linguistic means, their individuality or personality. Equally, each member of a pair of agents ought to adjust its language (to a greater or lesser extent) to match that of its interlocutor. We describe CRAG, which generates dialogues between pairs of agents, who are linguistically distinguishable, but able to align. CRAG-2 makes use of OPENCCG and an over-generation and ranking approach, guided by a set of language models covering both personality and alignment. We illustrate with examples of output, and briefly note results from user studies with the earlier CRAG-1, indicating how CRAG-2 will be further evaluated. Related work is discussed, along with current limitations and future directions.
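The overgenerate-and-rank step can be sketched in a few lines. This is an illustrative toy, not CRAG's implementation: the bigram model and candidate realisations below are invented for the example.

```python
import math

def bigram_logprob(sentence, model):
    """Score a candidate realisation under a bigram language model."""
    words = sentence.lower().split()
    # Unseen bigrams get a small floor probability instead of zero.
    return sum(math.log(model.get((a, b), 1e-6))
               for a, b in zip(words, words[1:]))

# A toy "personality" model: bigrams this speaker tends to produce.
EXTRAVERT_MODEL = {("that", "is"): 0.2, ("is", "great"): 0.3,
                   ("is", "quite"): 0.01, ("quite", "good"): 0.01}

# Overgenerate several realisations, then rank by model score.
candidates = ["that is great", "it is quite good"]
best = max(candidates, key=lambda s: bigram_logprob(s, EXTRAVERT_MODEL))
```

Varying the model per agent yields linguistically distinguishable output; folding an interlocutor-derived model into the score would approximate alignment.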


International Conference on Multimodal Interfaces | 2013

Comparing task-based and socially intelligent behaviour in a robot bartender

Manuel Giuliani; Ronald P. A. Petrick; Mary Ellen Foster; Andre Gaschler; Amy Isard; Maria Pateraki; Markos Sigalas

We address the question of whether service robots that interact with humans in public spaces must express socially appropriate behaviour. To do so, we implemented a robot bartender which is able to take drink orders from humans and serve drinks to them. By using a high-level automated planner, we explore two different robot interaction styles: in the task-only setting, the robot simply fulfils its goal of asking customers for drink orders and serving them drinks; in the socially intelligent setting, the robot additionally acts in a manner socially appropriate to the bartender scenario, based on the behaviour of humans observed in natural bar interactions. The results of a user study show that the interactions with the socially intelligent robot were somewhat more efficient, but the two implemented behaviour settings had only a small influence on the subjective ratings. However, there were objective factors that influenced participant ratings: the overall duration of the interaction had a positive influence on the ratings, while the number of system order requests had a negative influence. We also found a cultural difference: German participants gave the system higher pre-test ratings than participants who interacted in English, although the post-test scores were similar.


Speech Communication | 1997

SSML: a speech synthesis markup language

Paul Taylor; Amy Isard

This paper describes the Speech Synthesis Markup Language, SSML, which has been designed as a platform-independent interface standard for speech synthesis systems. The paper discusses the need for standardisation in speech synthesizers and how this will help builders of systems make better use of synthesis. Next, the features of SSML (based on SGML, the Standard Generalised Markup Language) are discussed, and details of the Edinburgh SSML interpreter are given as a guide to implementing an SSML-based synthesizer.
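As an illustration only, the fragment below shows the general shape of SGML-style synthesis markup. The element and attribute names here are invented for the example; they are not the Edinburgh SSML element inventory, which also predates and differs from the later W3C SSML recommendation.

```xml
<!-- Hypothetical SGML-style synthesis markup; tag names are illustrative,
     not the actual Edinburgh SSML elements. -->
<ssml>
  <sentence>
    Welcome to the <emphasis>Edinburgh</emphasis> speech synthesizer.
  </sentence>
  <sentence rate="slow">
    Markup controls prosody independently of any one synthesis engine.
  </sentence>
</ssml>
```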


Robot and Human Interactive Communication | 2014

Handling uncertain input in multi-user human-robot interaction

Simon Keizer; Mary Ellen Foster; Andre Gaschler; Manuel Giuliani; Amy Isard; Oliver Lemon

In this paper we present results from a user evaluation of a robot bartender system which handles state uncertainty derived from speech input by using belief tracking and generating appropriate clarification questions. We present a combination of state estimation and action selection components in which state uncertainty is tracked and exploited, and compare it to a baseline version that uses standard speech recognition confidence score thresholds instead of belief tracking. The results suggest that users are served fewer incorrect drinks when the uncertainty is retained in the state.
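The contrast between threshold-based confidence handling and belief tracking can be sketched as follows. This is an illustrative toy, not the paper's estimator: the update rule, threshold, and drink names are invented for the example.

```python
# Contrast a confidence-threshold baseline with simple belief tracking
# over repeated, noisy speech-recognition hypotheses for a drink order.

CONFIRM_THRESHOLD = 0.8

def baseline_decision(hypothesis, confidence):
    """Act on a single ASR result: serve if confident enough, else clarify."""
    action = "serve" if confidence >= CONFIRM_THRESHOLD else "clarify"
    return (action, hypothesis)

def update_belief(belief, hypothesis, confidence):
    """Accumulate evidence for each candidate order across turns."""
    belief = dict(belief)
    # Weight the hypothesis by its confidence; spread the remainder
    # uniformly over the other candidates (a crude noise model).
    others = [h for h in belief if h != hypothesis]
    belief[hypothesis] = belief.get(hypothesis, 0.0) + confidence
    for h in others:
        belief[h] += (1.0 - confidence) / max(len(others), 1)
    total = sum(belief.values())
    return {h: p / total for h, p in belief.items()}

# Two noisy turns: the user said "cola" twice, but each single
# recognition falls below the baseline's confirmation threshold.
belief = {"cola": 0.5, "water": 0.5}
for hyp, conf in [("cola", 0.6), ("cola", 0.7)]:
    belief = update_belief(belief, hyp, conf)

best = max(belief, key=belief.get)
```

The baseline would ask a clarification question on both turns, while the tracked belief in "cola" keeps rising across them, so uncertainty retained in the state can resolve without serving an incorrect drink.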


Language, Cognition and Neuroscience | 2014

Task-based evaluation of context-sensitive referring expressions in human–robot dialogue

Mary Ellen Foster; Manuel Giuliani; Amy Isard

The standard referring-expression generation task involves creating stand-alone descriptions intended solely to distinguish a target object from its context. However, when an artificial system refers to objects in the course of interactive, embodied dialogue with a human partner, this is a very different setting; the references found in situated dialogue are able to take into account the aspects of the physical, interactive and task-level context, and are therefore unlike those found in corpora of stand-alone references. Also, the dominant method of evaluating generated references involves measuring corpus similarity. In an interactive context, though, other extrinsic measures such as task success and user preference are more relevant – and numerous studies have repeatedly found little or no correlation between such extrinsic metrics and the predictions of commonly used corpus-similarity metrics. To explore these issues, we introduce a humanoid robot designed to cooperate with a human partner on a joint construction task. We then describe the context-sensitive reference-generation algorithm that was implemented for use on this robot, which was inspired by the referring phenomena found in the Joint Construction Task corpus of human–human joint construction dialogues. The context-sensitive algorithm was evaluated through two user studies comparing it to a baseline algorithm, using a combination of objective performance measures and subjective user satisfaction scores. In both studies, the objective task performance and dialogue quality were found to be the same for both versions of the system; however, in both cases, the context-sensitive system scored more highly on subjective measures of interaction quality.


Journal of the Acoustical Society of America | 1991

Characterizing the change from casual to careful style in spontaneous speech.

Maxine Eskenazi; Amy Isard

An examination has been carried out to determine which elements speakers use when changing style from casual to careful speech in a dialog situation. If automatic characterization of what an individual chooses to use to be more clearly understood can be achieved, benefits for speech synthesizers would include clearer and more natural speech. The present study uses data collected in a "spontaneous" situation, where 12 subjects each had two scenarios to act out with a "Wizard" partner guiding the course of the dialog. At two crucial points, the Wizard pretended not to be able to hear the subject, saying "comment?" ("what?"). The subject's exchanges just before and after the Wizard's interjection were compared. Acoustic (voicing and formants, for example), phonological (number of segments and reduction phenomena), and prosodic (especially intensity and duration) elements have been examined. Results show which of these are modified when the change of style occurs. The study underlines the fact that th...


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

Using Ellipsis Detection and Word Similarity for Transformation of Spoken Language into Grammatically Valid Sentences

Manuel Giuliani; Thomas Marschall; Amy Isard

When humans speak, they often use grammatically incorrect sentences, which is a problem for grammar-based language processing methods, since these expect input that is valid for the grammar. We present two methods for transforming spoken language into grammatically correct sentences. The first is an algorithm for automatic ellipsis detection, which finds ellipses in spoken sentences and searches a combinatory categorial grammar for suitable words to fill them. The second is an algorithm that computes the semantic similarity of two words using WordNet, which we use to find alternatives to words that are unknown to the grammar. In an evaluation, we show that using these two methods leads to a 38.64% increase in parseable sentences on a test set of spoken sentences collected during a human-robot interaction experiment.
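The WordNet-based word-similarity idea can be sketched with a path-based measure over a toy taxonomy. This is illustrative only: the paper uses the real WordNet hierarchy, and the words, hypernym links, and scoring below are invented for the example.

```python
# WordNet-style path similarity over a tiny hand-built hypernym taxonomy.
HYPERNYM = {
    "beer": "beverage",
    "cola": "beverage",
    "beverage": "substance",
    "chair": "furniture",
    "furniture": "artifact",
    "artifact": "entity",
    "substance": "entity",
}

def ancestors(word):
    """Return the chain from a word up to the taxonomy root."""
    chain = [word]
    while chain[-1] in HYPERNYM:
        chain.append(HYPERNYM[chain[-1]])
    return chain

def path_similarity(a, b):
    """1 / (1 + length of the path through the lowest common ancestor)."""
    chain_a, chain_b = ancestors(a), ancestors(b)
    for i, node in enumerate(chain_a):
        if node in chain_b:
            return 1.0 / (1 + i + chain_b.index(node))
    return 0.0

def best_known_alternative(word, vocabulary):
    """Pick the in-grammar word most similar to an unknown one."""
    return max(vocabulary, key=lambda v: path_similarity(word, v))
```

With this taxonomy, an out-of-grammar "beer" maps to the in-grammar "cola" rather than "chair", because they share the nearby ancestor "beverage".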

Collaboration


Dive into Amy Isard's collaborations.

Top Co-Authors
Peter Bell

University of Edinburgh
