Alice Toniolo
University of Aberdeen
Publication
Featured research published by Alice Toniolo.
Information Processing in Sensor Networks | 2015
Robin Wentao Ouyang; Lance M. Kaplan; Paul Martin; Alice Toniolo; Mani B. Srivastava; Timothy J. Norman
Information about quantitative characteristics of local businesses and services, such as the number of people waiting in line at a cafe or the number of available fitness machines in a gym, is important for informed decision making, crowd management and event detection. In this paper, we investigate the potential of leveraging crowds as sensors to report such quantitative characteristics and how to recover the true quantity values from noisy crowdsourced information. Through experiments, we find that crowd sensors have both bias and variance in quantity sensing, and that task difficulty affects sensing accuracy. Based on these findings, we propose an unsupervised probabilistic model to jointly assess task difficulties, the ability of crowd sensors and true quantity values. Our model differs from existing categorical truth finding models as ours is specifically designed to tackle quantitative truth. In addition to devising an efficient model inference algorithm in a batch mode, we also design an even faster online version for handling streaming data. Experimental results in various scenarios demonstrate the effectiveness of our model.
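The joint estimation of source reliability and true quantities can be illustrated with a minimal iterative truth discovery sketch. The precision-weighted update and the variable names below are illustrative assumptions; the paper's actual model additionally estimates per-task difficulty and sensor bias.

```python
import numpy as np

def quantitative_truth_discovery(claims, n_iters=20):
    """Minimal iterative truth discovery for quantity reports.

    claims: array of shape (n_workers, n_tasks) holding each worker's
    reported quantity per task (NaN where a worker made no report).
    Returns estimated true quantities and per-worker reliability weights.
    Illustrative precision-weighted scheme, not the published model.
    """
    truths = np.nanmean(claims, axis=0)        # start from the plain mean
    weights = np.ones(claims.shape[0])         # one reliability weight per worker
    for _ in range(n_iters):
        # Weight each worker by the inverse of their mean squared error ...
        errors = np.nanmean((claims - truths) ** 2, axis=1)
        weights = 1.0 / (errors + 1e-9)
        # ... then re-estimate each truth as the weighted mean of its reports.
        mask = ~np.isnan(claims)
        filled = np.where(mask, claims, 0.0)
        w = weights[:, None] * mask
        truths = (w * filled).sum(axis=0) / w.sum(axis=0)
    return truths, weights

# Example: three workers reporting queue lengths for three venues
reports = np.array([[10, 22, 5], [12, 20, 6], [30, 40, 1]], dtype=float)
estimates, reliabilities = quantitative_truth_discovery(reports)
```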
European Conference on Artificial Intelligence | 2012
Alice Toniolo; Timothy J. Norman; Katia P. Sycara
Collaborative decision making among agents in a team is a complex activity, and tasks to achieve individual objectives may conflict in a team context. A number of argumentation-based models have been proposed to address the problem, the rationale being that the revelation of background information and constraints can aid in the discovery and resolution of conflicts. To date, however, no empirical studies have been conducted to substantiate these claims. In this paper, we discuss a model, grounded on argumentation schemes, that captures potential conflicts due to scheduling and causality constraints, and individual goals and norms. We evaluate this model in complex collaborative planning problems and show that such a model facilitates the sharing of relevant information pertaining to plan, goal and normative conflicts. Further, we show that this focussed information sharing leads to more effective conflict resolution, particularly in the most challenging problems.
IEEE Transactions on Parallel and Distributed Systems | 2016
Robin Wentao Ouyang; Lance M. Kaplan; Alice Toniolo; Mani B. Srivastava; Timothy J. Norman
To enable reliable crowdsourcing applications, it is of great importance to develop algorithms that can automatically discover the truths from possibly noisy and conflicting claims provided by various information sources. In order to handle crowdsourcing applications involving big or streaming data, a desirable truth discovery algorithm should not only be effective, but also be scalable. However, with respect to quantitative crowdsourcing applications such as object counting and percentage annotation, existing truth discovery algorithms are not simultaneously effective and scalable. They either address truth discovery in categorical crowdsourcing or perform batch processing that does not scale. In this paper, we propose new parallel and streaming truth discovery algorithms for quantitative crowdsourcing applications. Through extensive experiments on real-world and synthetic datasets, we demonstrate that 1) both of them are quite effective, 2) the parallel algorithm can efficiently perform truth discovery on large datasets, and 3) the streaming algorithm processes data incrementally, and it can efficiently perform truth discovery both on large datasets and in data streams.
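The streaming setting can be sketched by maintaining running error statistics per source and a weighted running estimate per object, so each claim is processed exactly once. The class below is a simplified assumption for illustration, not the published parallel or streaming algorithm.

```python
from collections import defaultdict

class StreamingQuantTruth:
    """Illustrative incremental truth discovery for quantity claims."""

    def __init__(self):
        self.src_sq_err = defaultdict(float)   # cumulative squared error per source
        self.src_count = defaultdict(int)      # number of claims seen per source
        self.obj_wsum = defaultdict(float)     # sum of weight * value per object
        self.obj_w = defaultdict(float)        # sum of weights per object

    def weight(self, source):
        # Inverse mean squared error, with a prior so new sources start neutral.
        mse = (self.src_sq_err[source] + 1.0) / (self.src_count[source] + 1)
        return 1.0 / mse

    def update(self, source, obj, value):
        # Score the claim against the current estimate, then fold it in (O(1)).
        if self.obj_w[obj] > 0:
            current = self.obj_wsum[obj] / self.obj_w[obj]
            self.src_sq_err[source] += (value - current) ** 2
        self.src_count[source] += 1
        w = self.weight(source)
        self.obj_wsum[obj] += w * value
        self.obj_w[obj] += w
        return self.obj_wsum[obj] / self.obj_w[obj]   # current truth estimate
```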
Computational Models of Argument | 2014
Alice Toniolo; Timothy Dropps; Robin Wentao Ouyang; John A. Allen; Timothy J. Norman; Nir Oren; Mani B. Srivastava; Paul Sullivan
We present the CISpaces framework, a collaborative virtual space that supports intelligence analysts in elaborating information to explain a situation. CISpaces supports the collaborative analysis of conflicting information, exploiting argumentation schemes to structure and share analyses, crowdsourcing to collect information, and provenance to establish the credibility of hypotheses.
Pacific Rim International Conference on Multi-Agents | 2011
Alice Toniolo; Timothy J. Norman; Katia P. Sycara
We address the collaborative planning problem among agents with different objectives and norms. In this context, agreeing on the best course of action to adopt represents a significant challenge. Concurrent actions and causal plan constraints may lead to conflicts of opinion on what to do. Moreover, individual norms can constrain agent behaviour. We propose an argumentation-based model for deliberative dialogues based on argumentation schemes. This model facilitates agreements about joint plans by enriching the quality of the dialogue through the exchange of relevant information about plan commitments and norms.
International Conference on Agreement Technologies | 2013
Federico Cerutti; Alice Toniolo; Nir Oren; Timothy J. Norman
Computational trust mechanisms aim to produce a trust rating from both direct and indirect information about agents' behaviour. Jøsang's Subjective Logic has been widely adopted as the core of such systems via its fusion and discount operators. Recently we proposed an operator for discounting opinions based on geometrical properties, and, continuing this line of investigation, this paper describes a new geometry-based fusion operator. We evaluate this fusion operator together with our geometric discount operator in the context of a trust system, and show that our operators outperform those originally described by Jøsang. A core advantage of our work is that these operators can be used without modifying the remainder of the trust and reputation system.
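For reference, the standard Subjective Logic operators that the geometric variants are compared against can be written compactly. The tuple layout (belief, disbelief, uncertainty, base rate) follows Jøsang's formulation; the function and field names are our own, and the geometric operators proposed in the paper are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective Logic opinion: b + d + u = 1."""
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty
    a: float  # base rate

def cumulative_fuse(x: Opinion, y: Opinion) -> Opinion:
    """Jøsang's cumulative fusion (assumes both opinions share a base rate)."""
    k = x.u + y.u - x.u * y.u
    if k == 0:  # both opinions are dogmatic (u = 0): simply average them
        return Opinion((x.b + y.b) / 2, (x.d + y.d) / 2, 0.0, x.a)
    return Opinion((x.b * y.u + y.b * x.u) / k,
                   (x.d * y.u + y.d * x.u) / k,
                   (x.u * y.u) / k,
                   x.a)

def discount(trust: Opinion, advice: Opinion) -> Opinion:
    """Jøsang's trust discounting: weaken `advice` by the trust in its source."""
    return Opinion(trust.b * advice.b,
                   trust.b * advice.d,
                   trust.d + trust.u + trust.b * advice.u,
                   advice.a)
```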
Argument & Computation | 2016
Douglas Walton; Alice Toniolo; Timothy J. Norman
This research was partially supported by the Social Sciences and Humanities Research Council of Canada Insight Grant 435-2012-0104, and by the award made by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub at the University of Aberdeen (award ref. EP/G066051/1). Further refinements of this work were supported by the SICSA PECE scheme.
Archive | 2018
Gideon Ogunniye; Alice Toniolo; Nir Oren
The conclusions drawn from a dialogue depend both on the content of the arguments and on the level of trust placed in the arguments and the entities advancing them. In this paper, we describe a framework for dialogue where such trust forms the basis for expressing preferences between arguments and, in turn, for computing conclusions of the dialogue. Our framework contains both object-level and meta-level arguments; ASPIC+ is used to represent arguments, while argumentation schemes capture meta-level reasoning about trust and preferences.
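The idea that trust drives preferences, and preferences decide which attacks succeed, can be sketched with a toy defeat rule. The rule and the trust values below are illustrative assumptions; the paper's framework uses ASPIC+ with meta-level argument schemes rather than a fixed numeric ordering.

```python
def defeats(attacker, target, trust):
    """Toy rule: an attack succeeds as a defeat only if the attacker's source
    is not strictly less trusted than the target's source."""
    return trust[attacker] >= trust[target]

# Example: mutual attacks between arguments from sources with different trust
trust = {"A": 0.8, "B": 0.3}
attacks = [("B", "A"), ("A", "B")]
defeat_relation = [(x, y) for (x, y) in attacks if defeats(x, y, trust)]
print(defeat_relation)  # [('A', 'B')]: B's attack on A fails because B is less trusted
```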
4th International Workshop on Theory and Applications of Formal Argumentation (TAFA 2017) | 2017
Alice Toniolo; Timothy J. Norman; Nir Oren
This paper seeks to better understand the links between human reasoning and preferred extensions as found within formal argumentation, especially in the context of uncertainty. The degree of believability of a conclusion may be associated with the number of preferred extensions in which the conclusion is credulously accepted. We are interested in whether people agree with this evaluation. A set of experiments with human participants is presented to investigate the validity of such an association. Our results show that people tend to agree with the outcome of a version of Thimm’s probabilistic semantics in purely qualitative domains as well as in domains in which conclusions express event likelihood. Furthermore, we are able to characterise this behaviour: the heuristics employed by people in understanding preferred extensions are similar to those employed in understanding probabilities.
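The association studied here can be made concrete: for a small abstract argumentation framework, compute the preferred extensions and read the degree of believability of a conclusion as the fraction of those extensions that credulously accept it. The brute-force enumeration below is a minimal sketch of this reading and does not reproduce the exact probabilistic semantics used in the paper.

```python
from itertools import combinations

def preferred_extensions(args, attacks):
    """Brute-force preferred extensions of a small abstract argumentation
    framework (adequate for toy-sized examples)."""
    attacks = set(attacks)

    def conflict_free(s):
        return not any((a, b) in attacks for a in s for b in s)

    def defends(s, a):
        # every attacker of `a` must itself be attacked by some member of `s`
        return all(any((c, b) in attacks for c in s)
                   for (b, t) in attacks if t == a)

    admissible = []
    for r in range(len(args) + 1):
        for combo in combinations(args, r):
            s = set(combo)
            if conflict_free(s) and all(defends(s, a) for a in s):
                admissible.append(s)
    # preferred extensions are the maximal admissible sets
    return [s for s in admissible if not any(s < t for t in admissible)]

def acceptance_degree(conclusion, args, attacks):
    """Fraction of preferred extensions in which `conclusion` is credulously accepted."""
    exts = preferred_extensions(args, attacks)
    return sum(conclusion in e for e in exts) / len(exts)

# Toy framework: a and b attack each other, and both attack c.
args = ["a", "b", "c"]
attacks = [("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")]
print(preferred_extensions(args, attacks))    # [{'a'}, {'b'}]
print(acceptance_degree("a", args, attacks))  # 0.5
```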
4th International Workshop on Theory and Applications of Formal Argumentation (TAFA 2017) | 2017
Gideon Ogunniye; Alice Toniolo; Nir Oren
In human interactions, trust is regularly updated during a discussion. For example, if someone is caught lying, any further utterances they make will be discounted, until trust is regained. This paper seeks to model such behaviour by introducing a dialogue game which operates over several iterations, with trust updates occurring at the end of each iteration. In turn, trust changes are computed based on intuitive properties, captured through three rules. By representing agent knowledge within a preference-based argumentation framework, we demonstrate how trust can change over the course of a dialogue.
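The intuition that a detected lie discounts a speaker's further utterances until trust is regained can be illustrated with a toy per-iteration update. The penalty and recovery parameters are assumptions for illustration only; they are not the three rules defined in the paper, which operate over a preference-based argumentation framework.

```python
def update_trust(trust, lied, penalty=0.5, recovery=0.05):
    """Toy update: a detected lie cuts trust sharply,
    honest iterations restore it slowly."""
    if lied:
        return max(0.0, trust * penalty)
    return min(1.0, trust + recovery)

# Trust trajectory over a dialogue in which the agent lies at iteration 3
trust = 0.8
for i, lied in enumerate([False, False, True, False, False], start=1):
    trust = update_trust(trust, lied)
    print(f"iteration {i}: trust = {trust:.2f}")
```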