Paul Doran
University of Liverpool
Publications
Featured research published by Paul Doran.
conference on information and knowledge management | 2007
Paul Doran; Valentina A. M. Tamma; Luigi Iannone
Problems resulting from the management of shared, distributed knowledge have led to ontologies being employed as a solution, in order to effectively integrate information across applications. This depends on having ways to share and reuse existing ontologies; with the increased availability of ontologies on the web, some of which include thousands of concepts, novel and more efficient methods for reuse are being devised. One possible way to achieve efficient ontology reuse is through the process of ontology module extraction. A novel approach to ontology module extraction is presented that aims to achieve more efficient reuse of very large ontologies; the motivation is drawn from an Ontology Engineering perspective. This paper provides a definition of ontology modules from the reuse perspective and an approach to module extraction based on that definition. An abstract graph model for module extraction has been defined, along with a module extraction algorithm. The novel contribution of this paper is a module extraction algorithm that is independent of the language in which the ontology is expressed. This has been implemented in ModTool, a tool that produces ontology modules via extraction. Experiments were conducted to compare ModTool to other modularisation methods.
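The abstract describes module extraction over an abstract graph model of the ontology; a minimal sketch of that idea is given below. The `OntologyGraph` structure, the `extract_module` name and the simple reachability rule are assumptions for illustration, not ModTool's actual algorithm.

```python
from collections import defaultdict, deque

# Minimal graph view of an ontology: named concepts as nodes,
# directed edges for relations such as subclass-of or domain/range links.
class OntologyGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # concept -> directly related concepts

    def add_edge(self, source, target):
        self.edges[source].add(target)

def extract_module(graph, signature):
    """Collect every concept reachable from the requested signature.

    A deliberately simple traversal: starting from the concepts the reuser
    cares about, follow outgoing edges and keep everything encountered.
    Real extraction algorithms apply more selective inclusion rules.
    """
    module = set(signature)
    frontier = deque(signature)
    while frontier:
        concept = frontier.popleft()
        for neighbour in graph.edges[concept]:
            if neighbour not in module:
                module.add(neighbour)
                frontier.append(neighbour)
    return module

# Usage: extract the part of a toy ontology relevant to "Student".
g = OntologyGraph()
g.add_edge("Student", "Person")        # Student subclass-of Person
g.add_edge("Person", "Agent")          # Person subclass-of Agent
g.add_edge("Course", "Event")          # unrelated branch, left out of the module
print(extract_module(g, {"Student"}))  # {'Student', 'Person', 'Agent'} (order may vary)
```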
international semantic web conference | 2009
Ignazio Palmisano; Valentina A. M. Tamma; Terry R. Payne; Paul Doran
Ontology modularization techniques identify coherent and often reusable regions within an ontology. The ability to identify such modules, thus potentially reducing the size or complexity of an ontology for a given task or set of concepts, is increasingly important in the Semantic Web as domain ontologies increase in size, complexity and expressivity. To date, many techniques have been developed, but evaluation of their results has been sketchy and somewhat ad hoc. Theoretical properties of modularization algorithms have only been studied in a small number of cases. This paper presents an empirical analysis of a number of modularization techniques, and of the modules they identify over a number of diverse ontologies, by utilizing objective, task-oriented measures to evaluate the fitness of the modules for a number of statistical classification problems.
international conference on knowledge capture | 2009
Paul Doran; Valentina A. M. Tamma; Terry R. Payne; Ignazio Palmisano
Ontology modularization has recently received growing interest from the research community, since it supports tasks such as ontology design/reuse and knowledge selection and integration. Most research efforts have concentrated on approaches to extract modules from, or generate partitions of, an ontology. However, these approaches are influenced by different definitions of ontology modularization and thus tend to vary with respect to the concepts and properties in the ontology that should define the module, and with respect to the characteristics that modules should exhibit, which often depend on the task for which the modularization process is performed. This diversity of approaches makes the comparative evaluation of the output of different modularization processes hard to perform. In this paper, we propose an entropy-inspired measure for modularization, Integrated Ontology Entropy, that approximates the information content of modules and hence provides a profile for the module generated. This measure is independent of the modularization technique used, and is calculated as a function of the number of edges connecting the named concepts in the ontology, when a graph representation of the ontology is utilized. In the paper we apply this measure to different modularization techniques and empirically show how the measure captures different characteristics of modules, such as the degree of redundancy and the level of connectedness.
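The abstract defines Integrated Ontology Entropy only informally, as a function of the edges connecting named concepts. The sketch below is one plausible reading of that description: the Shannon entropy of the degree distribution of the module's concept graph. The function name `degree_entropy` and the choice of the degree distribution are assumptions, not the paper's exact formula.

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy of the degree distribution of a concept graph.

    `edges` is an iterable of (concept, concept) pairs between named
    concepts.  Each concept's share of edge endpoints is treated as a
    probability, and the entropy of that distribution is returned.
    """
    degree = Counter()
    for source, target in edges:
        degree[source] += 1
        degree[target] += 1
    total = sum(degree.values())
    if total == 0:
        return 0.0
    return -sum((d / total) * math.log2(d / total) for d in degree.values())

# A 4-concept clique (uniform degrees) scores slightly higher than a
# 4-concept chain, so structurally different modules of the same size
# are told apart.
chain = [("A", "B"), ("B", "C"), ("C", "D")]
clique = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("C", "D")]
print(degree_entropy(chain), degree_entropy(clique))
```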
web intelligence | 2008
Paul Doran; Valentina A. M. Tamma; Ignazio Palmisano; Terry R. Payne; Luigi Iannone
In this paper we propose a reformulation of the entropy metric to evaluate the amount of information carried both by the ontology structure and by the language elements (i.e. the semantics associated with the edges in the ontological graph). To evaluate this approach, the reformulated metric is empirically compared to Calmet and Daemi's original entropy metric, for a variety of different-sized modules. The results suggest not only that entropy can differentiate between structurally different modules of the same size, but that our improved entropy metric provides a finer-grained differentiation than the original metric.
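The reformulation is described only in outline here. As a worked illustration, one can contrast a purely structural entropy over the ontology graph with a variant that weights each edge by its language construct; the weighting function w below is an assumption for illustration, not the paper's exact definition.

```latex
% Structural entropy over the ontology graph G = (V, E):
% each concept's share of edge endpoints is treated as a probability.
H_{\mathrm{struct}}(G) = -\sum_{v \in V} p(v)\,\log_2 p(v),
\qquad p(v) = \frac{\deg(v)}{\sum_{u \in V} \deg(u)}

% Illustrative language-aware variant: each edge e contributes a weight
% w(\lambda(e)) determined by its language construct \lambda(e)
% (e.g. subclass-of versus an arbitrary object property), so that two
% modules with identical structure but different semantics differ in score.
H_{\mathrm{lang}}(G) = -\sum_{v \in V} q(v)\,\log_2 q(v),
\qquad q(v) = \frac{\sum_{e \ni v} w(\lambda(e))}{2 \sum_{e \in E} w(\lambda(e))}
```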
web intelligence | 2008
Ignazio Palmisano; Valentina A. M. Tamma; Luigi Iannone; Terry R. Payne; Paul Doran
Changes in an ontology may have a disruptive impact on any system using it. This impact may depend on structural changes, such as the introduction or removal of concept definitions, or it may be related to a change in the expected performance of reasoning tasks. As the number of systems using ontologies is expected to increase, and given the open nature of the Semantic Web, the introduction of new ontologies and modifications to existing ones are to be expected. Dynamically handling such changes, without requiring human intervention, becomes crucial. This paper presents a framework that isolates groups of related axioms in an OWL ontology, so that a change in one or more axioms can be automatically localised to a part of the ontology.
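The framework is only outlined above. A minimal sketch of the underlying idea, grouping axioms that share signature terms so that a change can be localised to one group, might look like the following; the flattened axiom representation and the union-find grouping are assumptions for illustration, not the paper's actual procedure.

```python
# Group axioms that share signature terms, so that a change to one axiom
# can be localised to its group rather than to the whole ontology.
# Each axiom is represented simply as a frozenset of the entity names
# (its signature) that appear in it.

def group_axioms(axioms):
    """Union-find over axioms: two axioms end up in the same group
    whenever their signatures overlap, directly or transitively."""
    parent = list(range(len(axioms)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # entity name -> index of the first axiom using it
    for i, signature in enumerate(axioms):
        for term in signature:
            if term in seen:
                union(i, seen[term])
            else:
                seen[term] = i

    groups = {}
    for i in range(len(axioms)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Usage: a change to axiom 0 is localised to the first group only.
axioms = [
    frozenset({"Student", "Person"}),   # Student subclass-of Person
    frozenset({"Person", "hasName"}),   # Person is the domain of hasName
    frozenset({"Course", "Event"}),     # Course subclass-of Event
]
print(group_axioms(axioms))             # [[0, 1], [2]]
```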
ArgMAS'09 Proceedings of the 6th international conference on Argumentation in Multi-Agent Systems | 2009
Paul Doran; Valentina A. M. Tamma; Terry R. Payne; Ignazio Palmisano
Efficient agent communication in open and dynamic environments relies on the agents' ability to reach a mutual understanding over message exchanges. Such environments are characterized by the existence of heterogeneous agents that commit to different ontologies, with no prior assumptions regarding the use of shared vocabularies. Various approaches have therefore considered how mutually acceptable mappings may be determined dynamically between agents through negotiation. In particular, this paper focuses on the meaning-based negotiation approach proposed by Laera et al. [1], which makes use of argumentation in order to select a set of mappings that is deemed acceptable by both agents. However, this process can be highly complex, reaching $\Pi_{2}^{(p)}$-completeness. Whilst it is non-trivial to reduce this complexity, we have explored the use of ontology modularization as a means of reducing the space of possible concepts over which the agents have to negotiate. In this paper, we propose an approach that combines modularization with argumentation to generate focused domains of discourse to facilitate communication. We empirically demonstrate that we can not only reduce the number of alignments required to reach consensus by an average of 75%, but also that in 41% of cases we can identify those agents that would not be able to fully satisfy the request, without the need for negotiation.
international conference on knowledge capture | 2007
Mathieu d'Aquin; Paul Doran; Enrico Motta; Valentina A. M. Tamma
international joint conference on artificial intelligence | 2009
Paul Doran; Valentina A. M. Tamma; Terry R. Payne; Ignazio Palmisano
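The ArgMAS'09 abstract above describes pruning the space of candidate correspondences before argumentation by restricting attention to the modules relevant to the request. A minimal sketch, assuming a module has already been extracted for each agent; the `Mapping` tuple and the `restrict_mappings` helper are hypothetical names, not the paper's implementation.

```python
from typing import NamedTuple

class Mapping(NamedTuple):
    source: str       # entity in agent A's ontology
    target: str       # entity in agent B's ontology
    confidence: float

def restrict_mappings(mappings, module_a, module_b):
    """Keep only candidate correspondences whose endpoints both fall inside
    the modules extracted for the current request, so that the subsequent
    argumentation runs over a much smaller set of mappings."""
    return [m for m in mappings
            if m.source in module_a and m.target in module_b]

# Usage: only the first candidate survives, so only it needs arguing over.
candidates = [
    Mapping("Student", "Learner", 0.9),
    Mapping("Staff", "Employee", 0.8),
]
module_a = {"Student", "Person"}   # module extracted for the request in A's ontology
module_b = {"Learner", "Course"}   # module extracted for the request in B's ontology
print(restrict_mappings(candidates, module_a, module_b))
```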
WoMO | 2008
Paul Doran; Ignazio Palmisano; Valentina A. M. Tamma
adaptive agents and multi-agent systems | 2009
Paul Doran; Valentina A. M. Tamma; Ignazio Palmisano; Terry R. Payne