
Publications


Featured research published by Elizabeth Black.


Scalable Uncertainty Management | 2009

An Argument-Based Approach to Using Multiple Ontologies

Elizabeth Black; Anthony Hunter; Jeff Z. Pan

Logic-based argumentation offers an approach to querying and revising multiple ontologies that are inconsistent or incoherent. A common assumption for logic-based argumentation is that an argument is a pair ⟨Φ, α⟩, where Φ is a minimal subset of the knowledgebase such that Φ is consistent and Φ entails the claim α. Using dialogue games, agents (each with its own ontology) can exchange arguments and counterarguments concerning formulae of interest. In this paper, we present a novel framework for logic-based argumentation with ontological knowledge. As far as we know, this is the first proposal for argumentation with multiple ontologies via dialogues. It allows two agents to discuss the answer to queries concerning their knowledge (even if it is inconsistent) without one agent having to copy all of its ontology to the other, and without the other agent having to expend time and effort merging that ontology with its own. Furthermore, it offers the potential for the agents to incrementally improve their knowledge based on the dialogue by checking how it differs from the other agent's.
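The argument definition above can be sketched for a small propositional case. This is an illustrative stand-in, not the paper's system: formulas are represented as Python predicates over truth assignments, and all names below are hypothetical.

```python
from itertools import combinations, product

def models(variables):
    """Enumerate all truth assignments over the given propositional variables."""
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def consistent(premises, variables):
    """Phi is consistent: some assignment satisfies every premise."""
    return any(all(p(m) for p in premises) for m in models(variables))

def entails(premises, claim, variables):
    """Phi entails alpha: every assignment satisfying Phi satisfies alpha."""
    return all(claim(m) for m in models(variables) if all(p(m) for p in premises))

def arguments_for(kb, claim, variables):
    """All arguments <Phi, alpha>: minimal consistent subsets of kb entailing the claim."""
    found = []
    for size in range(len(kb) + 1):              # smallest subsets first
        for phi in combinations(kb, size):
            if any(set(f) <= set(phi) for f in found):
                continue                          # a smaller argument already covers this
            if consistent(phi, variables) and entails(phi, claim, variables):
                found.append(phi)
    return found

# Toy knowledge base: rain, rain -> umbrella, umbrella.
rain = lambda m: m["rain"]
rain_to_umbrella = lambda m: (not m["rain"]) or m["umbrella"]
umbrella = lambda m: m["umbrella"]

args = arguments_for([rain, rain_to_umbrella, umbrella], umbrella, ["rain", "umbrella"])
```

Here `args` contains the two minimal supports, `{umbrella}` and `{rain, rain -> umbrella}`; a dialogue move would put forward one such pair as an argument for the claim.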


Autonomous Agents and Multi-Agent Systems | 2009

An inquiry dialogue system

Elizabeth Black; Anthony Hunter

The majority of existing work on agent dialogues considers negotiation, persuasion or deliberation dialogues; we focus on inquiry dialogues, which allow agents to collaborate in order to find new knowledge. We present a general framework for representing dialogues and give the details necessary to generate two subtypes of inquiry dialogue that we define: argument inquiry dialogues allow two agents to share knowledge to jointly construct arguments; warrant inquiry dialogues allow two agents to share knowledge to jointly construct dialectical trees (essentially a tree with an argument at each node in which a child node is a counter argument to its parent). Existing inquiry dialogue systems only model dialogues, meaning they provide a protocol which dictates what the possible legal next moves are but not which of these moves to make. Our system not only includes a dialogue-game style protocol for each subtype of inquiry dialogue that we present, but also a strategy that selects exactly one of the legal moves to make. We propose a benchmark against which we compare our dialogues, being the arguments that can be constructed from the union of the agents’ beliefs, and use this to define soundness and completeness properties that we show hold for all inquiry dialogues generated by our system.
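The dialectical-tree evaluation mentioned above can be sketched with the usual recursive marking: a node is undefeated exactly when every child (every counterargument to it) is defeated. The tree shape and argument labels here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    argument: str                                   # label for the argument at this node
    children: list = field(default_factory=list)    # counterarguments to this node

def undefeated(node):
    """Mark the tree: a node is undefeated iff all of its counterarguments are defeated."""
    return all(not undefeated(child) for child in node.children)

# a is countered by b, and b is in turn countered by d.
tree = Node("a", [Node("b", [Node("d")])])
```

Leaves are trivially undefeated, so here d defeats b, which reinstates the root a; a warrant inquiry dialogue aims to jointly build such a tree for its topic and read the warrant off the root's status.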


Journal of Logic and Computation | 2012

A Relevance-theoretic Framework for Constructing and Deconstructing Enthymemes

Elizabeth Black; Anthony Hunter

In most proposals for logic-based models of argumentation dialogues between agents, the arguments exchanged are logical arguments of the form ⟨Φ, α⟩, where Φ is a set of formulae (called the support) and α is a formula (called the claim) such that Φ is consistent and Φ entails α. However, arguments presented by real-world agents do not normally fit the mould of being logical arguments. They are normally enthymemes, and so they only explicitly represent some of the premises for entailing their claim and/or they do not explicitly state their claim. For example, for a claim that ‘you need an umbrella today’, a husband may give his wife the premise ‘the weather report predicts rain’. Clearly, the premise does not entail the claim, but it is easy for the wife to identify the assumed knowledge used by the husband in order to reconstruct the intended argument correctly (i.e. ‘if the weather report predicts rain, then you need an umbrella’). Whilst humans are constantly handling examples like this, proposals for logic-based formalizations of the process remain underdeveloped. In this article, we present a logic-based framework for handling enthymemes, some design features of which are influenced by aspects of relevance theory (proposed by Sperber and Wilson). In particular, we use the ideas of maximizing cognitive effect and minimizing cognitive effort in order to enable a proponent of an intended logical argument to construct an enthymeme appropriate for the intended recipient, and for the intended recipient to deconstruct the intended logical argument from the enthymeme. We relate our framework back to Sperber and Wilson's relevance theory via some formal properties.
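The umbrella example suggests a minimal sketch of both directions, assuming a shared-knowledge set that both parties are taken to hold. This ignores the relevance-theoretic effect/effort measures of the actual framework, and the helper names are hypothetical.

```python
def construct_enthymeme(premises, claim, shared):
    """Speaker: omit premises the recipient is assumed to hold already
    (less effort to communicate, same intended argument)."""
    return [p for p in premises if p not in shared], claim

def deconstruct_enthymeme(explicit, claim, shared):
    """Recipient: restore omitted premises from the assumed shared knowledge.
    (This naively restores all shared premises; the real framework selects
    only those relevant to the claim.)"""
    return sorted(set(explicit) | shared), claim

shared = {"if the weather report predicts rain, then you need an umbrella"}
intended = (["the weather report predicts rain",
             "if the weather report predicts rain, then you need an umbrella"],
            "you need an umbrella")

enthymeme = construct_enthymeme(*intended, shared)
recovered = deconstruct_enthymeme(*enthymeme, shared)
```

The enthymeme keeps only the premise the recipient cannot be assumed to know, and the recipient recovers the full intended support from it.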


International Workshop on Computational Logic and Multi-Agent Systems | 2014

Automated planning of simple persuasion dialogues

Elizabeth Black; Amanda Coles; Sara Bernardini

We take a simple form of non-adversarial persuasion dialogue in which one participant (the persuader) aims to convince the other (the responder) to accept the topic of the dialogue by asserting sets of beliefs. The responder replies honestly to indicate whether it finds the topic to be acceptable (we make no prescription as to what formalism and semantics must be used for this, only assuming some function for determining acceptable beliefs from a logical knowledge base). Our persuader has a model of the responder, which assigns probabilities to sets of beliefs, representing the likelihood that each set is the responder’s actual beliefs. The beliefs the persuader chooses to assert and the order in which it asserts them (i.e. its strategy) can impact on the success of the dialogue and the success of a particular strategy cannot generally be guaranteed (because of the uncertainty over the responder’s beliefs). We define our persuasion dialogue as a classical planning problem, which can then be solved by an automated planner to generate a strategy that maximises the chance of success given the persuader’s model of the responder; this allows us to exploit the power of existing automated planners, which have been shown to be efficient in many complex domains. We provide preliminary results that demonstrate how the efficiency of our approach scales with the number of beliefs.
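The setup can be sketched with a brute-force search over assertion sets standing in for the classical planner: the responder model maps candidate belief sets to probabilities, and the acceptability function below is a hypothetical stand-in for whichever semantics is plugged in.

```python
from itertools import chain, combinations

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def best_assertion_set(persuader_beliefs, responder_model, acceptable, topic):
    """Pick the assertion set maximising the probability, under the responder
    model, that the topic is acceptable once the assertions are added."""
    def success_prob(asserted):
        return sum(p for beliefs, p in responder_model.items()
                   if acceptable(set(beliefs) | set(asserted), topic))
    return max(powerset(persuader_beliefs), key=success_prob)

# Hypothetical example: topic "t" is acceptable whenever both a and b are believed.
acceptable = lambda beliefs, topic: {"a", "b"} <= beliefs
responder_model = {frozenset({"a"}): 0.7, frozenset(): 0.3}
plan = best_assertion_set(["a", "b"], responder_model, acceptable, "t")
```

Unlike the paper's compilation to a classical planning problem (which also orders the assertions and exploits the responder's replies), this exhaustive search only illustrates the objective being maximised.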


International Workshop on Theory and Applications of Formal Argumentation | 2015

Reasons and Options for Updating an Opponent Model in Persuasion Dialogues

Elizabeth Black; Anthony Hunter

Dialogical argumentation allows agents to interact by constructing and evaluating arguments through a dialogue. Numerous proposals have been made for protocols for dialogical argumentation, and recently there is interest in developing better strategies for agents to improve their own outcomes from the interaction by using an opponent model to guide their strategic choices. However, there is a lack of clear formal reasons for why or how such a model might be useful, or how it can be maintained. In this paper, we consider a simple type of persuasion dialogue, investigate options for using and updating an opponent model, and identify conditions under which such use of a model is beneficial.
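One of the simplest update options can be sketched as conditioning: when the opponent's move reveals a belief, drop candidate belief sets that lack it and renormalise. The dictionary-based model representation is an assumption for illustration, not the paper's formalism.

```python
def update_model(model, revealed_belief):
    """Condition an opponent model (belief set -> probability) on a belief
    revealed by the opponent's move, then renormalise."""
    kept = {bs: p for bs, p in model.items() if revealed_belief in bs}
    total = sum(kept.values())
    if total == 0:
        return dict(model)   # observation ruled out every candidate; keep the prior
    return {bs: p / total for bs, p in kept.items()}

model = {frozenset({"a", "b"}): 0.5, frozenset({"a"}): 0.25, frozenset({"c"}): 0.25}
updated = update_model(model, "a")
```

After observing "a", the candidate {c} is eliminated and the remaining mass is rescaled, so the persuader's strategic choices are now guided by a sharper model.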


European Conference on Artificial Intelligence | 2012

Executable logic for dialogical argumentation

Elizabeth Black; Anthony Hunter

Argumentation between agents through dialogue is an important cognitive activity. There have been a number of proposals for formalizing dialogical argumentation. However, each proposal involves a number of quite complex definitions, and there is significant diversity in the way different proposals define similar features. This complexity and diversity has hindered analysis and comparison of the space of proposals. To address this, we present a general approach to defining a wide variety of systems for dialogical argumentation. Our solution is to use an executable logic to specify individual systems for dialogical argumentation. This means we have a common language for specifying a wide range of systems, we can compare systems in terms of a range of standard properties, we can identify interesting classes of system, and we can execute the specification of each system to analyse it empirically.


ArgMAS'10 Proceedings of the 7th International Conference on Argumentation in Multi-Agent Systems | 2010

Agreeing what to do

Elizabeth Black; Katie Atkinson

When deliberating about what to do, an autonomous agent must generate and consider the relative pros and cons of the different options. The situation becomes even more complicated when an agent is involved in a joint deliberation, as each agent will have its own preferred outcome, which may change as new information is received from the other agents involved in the deliberation. We present an argumentation-based dialogue system that allows agents to come to an agreement on how to act in order to achieve a joint goal. The dialogue strategy that we define ensures that any agreement reached is acceptable to each agent, but does not necessarily demand that the agents resolve or share their differing preferences. We give properties of our system and discuss possible extensions.


TAFA'11 Proceedings of the First International Conference on Theory and Applications of Formal Argumentation | 2011

An implemented dialogue system for inquiry and persuasion

Luke Riley; Katie Atkinson; Terry R. Payne; Elizabeth Black

In this paper, we present an implemented system that enables autonomous agents to engage in dialogues that involve inquiries embedded within a process of practical reasoning. The implementation builds upon an existing formal model of value-based argumentation, which has itself been extended to permit a wider range of arguments to be expressed. We present extensions to the formal underlying theory used for the dialogue system, as well as the implementation itself. We demonstrate the use of the system through a particular case study. We discuss a number of interesting issues that have arisen from the implementation and the experimental avenues that this test-bed will enable us to pursue.
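The value-based part can be sketched via the defeat relation of a value-based argumentation framework: an attack succeeds for an audience unless that audience ranks the attacked argument's value strictly above the attacker's. The arguments, values, and audience ordering below are hypothetical.

```python
def defeats(attacks, value_of, prefers):
    """Attacks that succeed for an audience: a -> b is a defeat unless the
    audience strictly prefers b's value to a's value."""
    return [(a, b) for (a, b) in attacks if not prefers(value_of[b], value_of[a])]

attacks = [("A", "B"), ("B", "A")]                       # a mutual attack
value_of = {"A": "life", "B": "property"}
prefers = lambda v1, v2: v1 == "life" and v2 != "life"   # audience ranks life highest

result = defeats(attacks, value_of, prefers)
```

For this audience only A's attack on B survives as a defeat, so different audiences can rationally accept different arguments from the same framework.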


Studies in Health Technology and Informatics | 2004

Modelling clinical goals: a corpus of examples and a tentative ontology.

John Fox; Alyssa Alabassi; Elizabeth Black; Chris Nicholas Hurt; Tony Rose

Knowledge of clinical goals and the means to achieve them are either not represented in most current guideline representation systems or are encoded procedurally (e.g. as clinical algorithms, condition-action rules). There would be a number of major benefits if guideline enactment systems could reason explicitly about clinical objectives (e.g. whether a goal has been successfully achieved or not, whether it is consistent with prevailing conditions, or how the system should adapt to circumstances where a recommended action has failed to achieve the intended result). Our own guideline specification language, PROforma, includes a simple goal construct to address this need, but the interpretation is unsatisfactory in current enactment engines, and goals have yet to be included in the language semantics. This paper discusses some of the challenges involved in developing an explicit, declarative formalism for goals. As part of this, we report on a study we have undertaken which has identified over 200 goals in the routine management of breast cancer, and outline a tentative formal structure for this corpus.

Collaboration


Dive into Elizabeth Black's collaboration.

Top Co-Authors

Michael Luck (University of Liverpool)
Anthony Hunter (University College London)
John Fox (Brigham and Women's Hospital)