Publication


Featured research published by Arash Eshghi.


Computer Supported Cooperative Work | 2008

Communication Spaces

Patrick G. T. Healey; Graham White; Arash Eshghi; Ahmad J. Reeves; Ann Light

Concepts of space are fundamental to our understanding of human action and interaction. The common sense concept of uniform, metric, physical space is inadequate for design. It fails to capture features of social norms and practices that can be critical to the success of a technology. The concept of ‘place’ addresses these limitations by taking account of the different ways a space may be understood and used. This paper argues for the importance of a third concept: communication space. Motivated by Heidegger’s discussion of ‘being-with’ this concept addresses differences in interpersonal ‘closeness’ or mutual-involvement that are a constitutive feature of human interaction. We apply the concepts of space, place and communication space to the analysis of a corpus of interactions from an online community, ‘Walford’, which has a rich communicative ecology. A novel measure of sequential integration of conversational turns is proposed as an index of mutual-involvement. We demonstrate systematic differences in mutual-involvement that cannot be accounted for in terms of space or place and conclude that a concept of communication space is needed to address the organisation of human encounters in this community.


annual meeting of the special interest group on discourse and dialogue | 2016

Training an adaptive dialogue policy for interactive learning of visually grounded word meanings

Yanchao Yu; Arash Eshghi; Oliver Lemon

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor. The system integrates an incremental, semantic parsing/generation framework - Dynamic Syntax and Type Theory with Records (DS-TTR) - with a set of visual classifiers that are learned throughout the interaction and which ground the meaning representations that it produces. We use this system in interaction with a simulated human tutor to study the effects of different dialogue policies and capabilities on the accuracy of learned meanings, learning rates, and efforts/costs to the tutor. We show that the overall performance of the learning agent is affected by (1) who takes initiative in the dialogues; (2) the ability to express/use their confidence level about visual attributes; and (3) the ability to process elliptical and incrementally constructed dialogue turns. Ultimately, we train an adaptive dialogue policy which optimises the trade-off between classifier accuracy and tutoring costs.


Behavioral and Brain Sciences | 2013

Well, that's one way: interactivity in parsing and production.

Christine Howes; Patrick G. T. Healey; Arash Eshghi; Julian Hough

We present empirical evidence from dialogue that challenges some of the key assumptions in the Pickering & Garrod (P&G) model of speaker-hearer coordination in dialogue. The P&G model also invokes an unnecessarily complex set of mechanisms. We show that a computational implementation, currently in development and based on a simpler model, can account for more of this type of dialogue data.


constraint solving and language processing | 2012

Probabilistic Grammar Induction in an Incremental Semantic Framework

Arash Eshghi; Matthew Purver; Julian Hough; Yo Sato

We describe a method for learning an incremental semantic grammar from a corpus in which sentences are paired with logical forms as predicate-argument structure trees. Working in the framework of Dynamic Syntax, and assuming a set of generally available compositional mechanisms, we show how lexical entries can be learned as probabilistic procedures for the incremental projection of semantic structure, providing a grammar suitable for use in an incremental probabilistic parser. By inducing these from a corpus generated using an existing grammar, we demonstrate that this results in both good coverage and compatibility with the original entries, without requiring annotation at the word level. We show that this semantic approach to grammar induction has the novel ability to learn the syntactic and semantic constraints on pronouns.


annual meeting of the special interest group on discourse and dialogue | 2008

Quantifying Ellipsis in Dialogue: an index of mutual understanding

Marcus Colman; Arash Eshghi; Patrick G. T. Healey

This paper presents a coding protocol that allows naive users to annotate dialogue transcripts for anaphora and ellipsis. Cohen's kappa statistic demonstrates that the protocol is sufficiently robust in terms of reliability. It is proposed that quantitative ellipsis data may be used as an index of mutual-engagement. Current and potential uses of ellipsis coding are described.
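The reliability statistic the abstract reports can be illustrated with a minimal sketch. This is not the authors' protocol or data, just the standard two-annotator Cohen's kappa computation; the labels in the usage example are hypothetical:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled the same.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations of four dialogue turns by two coders:
a = ["ellipsis", "anaphora", "ellipsis", "none"]
b = ["ellipsis", "anaphora", "none", "none"]
kappa = cohens_kappa(a, b)  # 0.636..., i.e. substantial agreement beyond chance
```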


meeting of the association for computational linguistics | 2016

Interactively Learning Visually Grounded Word Meanings from a Human Tutor.

Yanchao Yu; Arash Eshghi; Oliver Lemon

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor. The system integrates an incremental, semantic parsing/generation framework - Dynamic Syntax and Type Theory with Records (DS-TTR) - with a set of visual classifiers that are learned throughout the interaction and which ground the meaning representations that it produces. We use this system in interaction with a simulated human tutor to study the effect of different dialogue policies and capabilities on the accuracy of learned meanings, learning rates, and efforts/costs to the tutor. We show that the overall performance of the learning agent is affected by (1) who takes initiative in the dialogues; (2) the ability to express/use their confidence level about visual attributes; and (3) the ability to process elliptical as well as incrementally constructed dialogue turns.


meeting of the association for computational linguistics | 2017

Learning how to Learn: An Adaptive Dialogue Agent for Incrementally Learning Visually Grounded Word Meanings.

Yanchao Yu; Arash Eshghi; Oliver Lemon

We present an optimised multi-modal dialogue agent for interactive learning of visually grounded word meanings from a human tutor, trained on real human-human tutoring data. Within a life-long interactive learning period, the agent, trained using Reinforcement Learning (RL), must be able to handle natural conversations with human users and achieve good learning performance (accuracy) while minimising human effort in the learning process. We train and evaluate this system in interaction with a simulated human tutor, which is built on the BURCHAK corpus -- a Human-Human Dialogue dataset for the visual learning task. The results show that: 1) The learned policy can coherently interact with the simulated user to achieve the goal of the task (i.e. learning visual attributes of objects, e.g. colour and shape); and 2) it finds a better trade-off between classifier accuracy and tutoring costs than hand-crafted rule-based policies, including ones with dynamic policies.


Theoretical Linguistics | 2017

Grammars as Mechanisms for Interaction: The Emergence of Language Games

Arash Eshghi; Oliver Lemon

In their article “Language as Mechanisms for Interaction” Kempson et al. have provided the research community with numerous real examples of complex dialogue phenomena – in particular various examples of split utterances. From the point of view of developers of real-world spoken dialogue systems (one of the perspectives that we will take in this commentary), this paper presents both a treasure-trove of examples for developers, and an important set of challenges for current work on implementing truly natural conversational interfaces. This paper is therefore of great value to developers of dialogue systems and conversational agents, in that it presents both some very challenging real data and a sustained argument for new ways of conceptualizing the traditional syntax/semantics/pragmatics interfaces, which have dominated traditional computational linguistics. In this commentary, we hope to draw out some of the implications of the data and the arguments, and to explain some new research directions which are now underway in order to meet the various challenges that the paper presents. It is uncontentious that dialogue, the most common and natural setting for language acquisition and use, is highly fragmentary, full of interruptions, role changes, restarts, corrections, continuations and overlaps, without any of these necessarily respecting the boundaries of the sentence. Until the pioneering work of Ginzburg and Cooper (2004); Ginzburg (2012); Poesio and Rieser (2010) and a few others, and indeed that of the authors of this fine work, this data had been largely ignored by traditional linguistics as instances of defective performance, relative to an abstract model of the ideal, competent speaker. This dialogue data has been notoriously difficult to capture within mainstream models of the syntax-semantics interface, as the authors of the target article (henceforth, Kempson et al.) themselves forcefully argue. This is further evidenced by the


international conference on natural language generation | 2016

Incremental Generation of Visually Grounded Language in Situated Dialogue (demonstration system)

Yanchao Yu; Arash Eshghi; Oliver Lemon

We present a multi-modal dialogue system for interactive learning of perceptually grounded word meanings from a human tutor (Yu et al., ). The system integrates an incremental, semantic, and bidirectional grammar framework – Dynamic Syntax and Type Theory with Records (DS-TTR; Eshghi et al., 2012; Kempson et al., 2001) – with a set of visual classifiers that are learned throughout the interaction and which ground the semantic/contextual representations that it produces (cf. Kennington & Schlangen (2015), where words, rather than semantic atoms, are grounded in visual classifiers). Our approach extends Dobnik et al. (2012) in integrating perception (vision in this case) and language within a single formal system: Type Theory with Records (TTR (Cooper, 2005)). The combination of deep semantic representations in TTR with an incremental grammar (Dynamic Syntax) allows for complex multi-turn dialogues to be parsed and generated (Eshghi et al., 2015). These include clarification interaction, corrections, ellipsis and utterance continuations (see e.g. the dialogue in Fig. 1).


empirical methods in natural language processing | 2015

Comparing Attribute Classifiers for Interactive Language Grounding

Yanchao Yu; Arash Eshghi; Oliver Lemon

We address the problem of interactively learning perceptually grounded word meanings in a multimodal dialogue system. We design a semantic and visual processing system to support this and illustrate how they can be integrated. We then focus on comparing the performance (Precision, Recall, F1, AUC) of three state-of-the-art attribute classifiers for the purpose of interactive language grounding (MLKNN, DAP, and SVMs), on the aPascal-aYahoo datasets. In prior work, results were presented for object classification using these methods for attribute labelling, whereas we focus on their performance for attribute labelling itself. We find that while these methods can perform well for some of the attributes (e.g. head, ears, furry) none of these models has good performance over the whole attribute set, and none supports incremental learning. This leads us to suggest directions for future work.
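The per-attribute comparison the abstract describes rests on standard binary classification metrics. As a minimal sketch (not the authors' evaluation code; the example labels are hypothetical), precision, recall and F1 for one attribute can be computed as:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = attribute present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical gold vs. predicted labels for the attribute "furry" on five images:
gold = [1, 1, 1, 0, 0]
pred = [1, 1, 0, 1, 0]
p, r, f = precision_recall_f1(gold, pred)  # each 2/3 here: one miss, one false alarm
```

Averaging these scores over the whole attribute set is what exposes the gap the paper reports: good performance on a few attributes but weak performance overall.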

Collaboration


Dive into Arash Eshghi's collaboration.

Top Co-Authors

Yanchao Yu
Heriot-Watt University

Matthew Purver
Queen Mary University of London

Patrick G. T. Healey
Queen Mary University of London

Yo Sato
University of Hertfordshire