Nigel Shadbolt
University of Oxford
Publications
Featured research published by Nigel Shadbolt.
Autonomous Agents and Multi-Agent Systems | 2006
Trung Dong Huynh; Nicholas R. Jennings; Nigel Shadbolt
Trust and reputation are central to effective interactions in open multi-agent systems (MAS), in which agents owned by a variety of stakeholders continuously enter and leave the system. This openness means existing trust and reputation models cannot readily be used, since their performance suffers when there are various (unforeseen) changes in the environment. To this end, this paper presents FIRE, a trust and reputation model that integrates a number of information sources to produce a comprehensive assessment of an agent's likely performance in open systems. Specifically, FIRE incorporates interaction trust, role-based trust, witness reputation, and certified reputation to provide trust metrics in most circumstances. FIRE is empirically evaluated and is shown to help agents gain better utility (by effectively selecting appropriate interaction partners) than our benchmarks in a variety of agent populations. It is also shown that FIRE is able to respond effectively to changes that occur in an agent's environment.
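To illustrate the idea of integrating several trust sources, here is a minimal sketch of a weighted combination of the four components the abstract names. The component names, weights, and numeric scale are illustrative assumptions, not FIRE's actual formulae.

```python
# Illustrative sketch only: combine whichever trust components are
# available into one weighted score. In open systems some sources
# (e.g. witness reports) may be missing, so we renormalise over the
# components we actually have.

def combined_trust(components, weights):
    """components: dict source -> rating in [-1, 1], or None if unavailable.
    weights: dict source -> relative reliability weight.
    Returns a weighted average over available sources, or None if none."""
    available = {k: v for k, v in components.items() if v is not None}
    if not available:
        return None  # no information: caller must fall back to a default
    total = sum(weights[k] for k in available)
    return sum(weights[k] * available[k] for k in available) / total

score = combined_trust(
    {"interaction": 0.6, "role": 0.2, "witness": None, "certified": 0.8},
    {"interaction": 2.0, "role": 0.5, "witness": 1.0, "certified": 0.5},
)
```

Renormalising over the available sources is one simple way a model can still produce a metric "in most circumstances", even before any direct interaction history exists.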
ACM Transactions on Information Systems | 2004
Stuart E. Middleton; Nigel Shadbolt; David De Roure
We explore a novel ontological approach to user profiling within recommender systems, working on the problem of recommending on-line academic research papers. Our two experimental systems, Quickstep and Foxtrot, create user profiles from unobtrusively monitored behaviour and relevance feedback, representing the profiles in terms of a research paper topic ontology. A novel profile visualization approach is taken to acquire profile feedback. Research papers are classified using ontological classes, and collaborative recommendation algorithms are used to recommend papers seen by similar people on their current topics of interest. Two small-scale experiments, with 24 subjects over 3 months, and a large-scale experiment, with 260 subjects over an academic year, are conducted to evaluate different aspects of our approach. Ontological inference is shown to improve user profiling, external ontological knowledge to successfully bootstrap a recommender system, and profile visualization to improve profiling accuracy. The overall performance of our ontological recommender systems is also presented and favourably compared to other systems in the literature.
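A minimal sketch of the ontological-inference idea: interest observed in a specific topic also accrues, attenuated, to its ancestors in the topic ontology. The taxonomy, weights, and decay factor below are invented for illustration and are not the papers' actual ontology or update rule.

```python
# Hypothetical is-a hierarchy: child topic -> parent topic (None = root).
TOPIC_PARENT = {
    "recommender_systems": "machine_learning",
    "machine_learning": "artificial_intelligence",
    "artificial_intelligence": None,
}

def add_interest(profile, topic, weight, decay=0.5):
    """Record `weight` interest in `topic` and propagate a decayed share
    up the topic hierarchy (a simple form of ontological inference)."""
    while topic is not None:
        profile[topic] = profile.get(topic, 0.0) + weight
        weight *= decay
        topic = TOPIC_PARENT.get(topic)
    return profile

profile = add_interest({}, "recommender_systems", 1.0)
# profile now holds interest in the observed topic and, attenuated,
# in its ancestor topics.
```

Propagating interest upwards is what lets a profile say something useful about broad topics even when the user has only ever browsed papers on narrow subtopics.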
IEEE Intelligent Systems | 2003
Harith Alani; Sanghee Kim; David E. Millard; Mark J. Weal; Wendy Hall; Paul H. Lewis; Nigel Shadbolt
To bring the Semantic Web to life and provide advanced knowledge services, we need efficient ways to access and extract knowledge from Web documents. Although Web page annotations could facilitate such knowledge gathering, annotations are rare and will probably never be rich or detailed enough to cover all the knowledge these documents contain. Manual annotation is impractical and unscalable, and automatic annotation tools remain largely undeveloped. Specialized knowledge services therefore require tools that can search and extract specific knowledge directly from unstructured text on the Web, guided by an ontology that details what type of knowledge to harvest. An ontology uses concepts and relations to classify domain knowledge. Other researchers have used ontologies to support knowledge extraction, but few have explored their full potential in this domain. The paper considers the Artequakt project which links a knowledge extraction tool with an ontology to achieve continuous knowledge support and guide information extraction. The extraction tool searches online documents and extracts knowledge that matches the given classification structure. It provides this knowledge in a machine-readable format that will be automatically maintained in a knowledge base (KB). Knowledge extraction is further enhanced using a lexicon-based term expansion mechanism that provides extended ontology terminology.
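As a toy illustration of ontology-guided extraction, the sketch below attaches one text pattern to one ontology relation and harvests matching facts into triples. The relation name, pattern, and example sentence are invented; Artequakt's actual extraction tool is far more sophisticated.

```python
import re

# Hypothetical mapping from an ontology relation to a surface pattern.
# In an ontology-guided extractor, the ontology dictates *which*
# relations to look for; patterns like this dictate *how* to find them.
RELATION_PATTERNS = {
    "born_in": re.compile(
        r"(?P<person>[A-Z]\w+ [A-Z]\w+) was born in (?P<place>[A-Z]\w+)"
    ),
}

def extract(text):
    """Return (subject, relation, object) triples found in `text`."""
    kb = []
    for relation, pattern in RELATION_PATTERNS.items():
        for m in pattern.finditer(text):
            kb.append((m.group("person"), relation, m.group("place")))
    return kb

facts = extract("Pablo Picasso was born in Malaga in 1881.")
```

The output triples are already in a machine-readable shape, which is what allows a knowledge base to be populated and maintained automatically.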
Artificial Intelligence | 2003
Xudong Luo; Nicholas R. Jennings; Nigel Shadbolt; Ho-fung Leung; Jimmy Ho-Man Lee
This paper develops a fuzzy constraint based model for bilateral multi-issue negotiation in trading environments. In particular, we are concerned with the principled negotiation approach in which agents seek to strike a fair deal for both parties, but which, nevertheless, maximises their own payoff. Thus, there are elements of both competition and cooperation in the negotiation (hence semi-competitive environments). One of the key intuitions of the approach is that there is often more than one option that can satisfy the interests of both parties. So, if the opponent cannot accept an offer then the proponent should endeavour to find an alternative that is equally acceptable to it, but more acceptable to the opponent. That is, the agent should make a trade-off. Only if such a trade-off is not possible should the agent make a concession. Against this background, our model ensures the agents reach a deal that is fair (Pareto-optimal) for both parties if such a solution exists. Moreover, this is achieved by minimising the amount of private information that is revealed. The model uses prioritised fuzzy constraints to represent trade-offs between the different possible values of the negotiation issues and to indicate how concessions should be made when they are necessary. Also by using constraints to express negotiation proposals, the model can cover the negotiation space more efficiently since each exchange covers a region rather than a single point (which is what most existing models deal with). In addition, by incorporating the notion of a reward into our negotiation model, the agents can sometimes reach agreements that would not otherwise be possible.
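The trade-off step described above can be sketched very simply: among alternatives the proponent finds (near-)equally acceptable, propose the one the opponent is estimated to like best; if no such alternative exists, a concession is needed. The satisfaction functions below are hypothetical stand-ins for the paper's prioritised fuzzy constraints.

```python
# Offers are (price, delivery_days) tuples. Each side's acceptability
# is a fuzzy satisfaction degree in [0, 1]. These particular functions
# are invented for illustration.

def own_satisfaction(offer):
    price, delivery_days = offer
    return min(1.0, price / 100.0)  # seller: higher price is better

def estimated_opponent_satisfaction(offer):
    price, delivery_days = offer
    return min(1.0, delivery_days / 10.0)  # buyer: looser delivery is better

def trade_off(current_offer, alternatives, eps=1e-6):
    """Return an alternative equally acceptable to us but better for the
    opponent, or None if only a concession remains."""
    level = own_satisfaction(current_offer)
    candidates = [o for o in alternatives if own_satisfaction(o) >= level - eps]
    better = [o for o in candidates
              if estimated_opponent_satisfaction(o)
              > estimated_opponent_satisfaction(current_offer)]
    return max(better, key=estimated_opponent_satisfaction) if better else None

alternative = trade_off((80, 3), [(80, 7), (60, 9)])
```

Here (80, 7) keeps the proponent's payoff unchanged while being more attractive to the opponent, so it is proposed instead of conceding on price.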
Web Science | 2006
Tim Berners-Lee; Wendy Hall; James A. Hendler; Kieron O'Hara; Nigel Shadbolt; Daniel J. Weitzner
This text sets out a series of approaches to the analysis and synthesis of the World Wide Web, and other web-like information structures. A comprehensive set of research questions is outlined, together with a sub-disciplinary breakdown, emphasising the multi-faceted nature of the Web, and the multi-disciplinary nature of its study and development. These questions and approaches together set out an agenda for Web Science, the science of decentralised information systems. Web Science is required both as a way to understand the Web, and as a way to focus its development on key communicational and representational requirements. The text surveys central engineering issues, such as the development of the Semantic Web, Web services and P2P. Analytic approaches to discover the Web's topology, or its graph-like structures, are examined. Finally, the Web as a technology is essentially socially embedded; therefore various issues and requirements for Web use and governance are also reviewed.
Web Science | 2008
James A. Hendler; Nigel Shadbolt; Wendy Hall; Tim Berners-Lee; Daniel J. Weitzner
The Web must be studied as an entity in its own right to ensure it keeps flourishing and prevent unanticipated social effects.
Human Factors | 1998
Robert R. Hoffman; Beth Crandall; Nigel Shadbolt
The Critical Decision Method (CDM) is an approach to cognitive task analysis. The method involves multiple-pass event retrospection guided by probe questions. The CDM has been used in the elicitation of expert knowledge in diverse domains and for applications including system development and instructional design. The CDM research illustrates the sorts of knowledge representation products that can arise from cognitive task analysis (e.g., Situation Assessment Records, time lines, decision requirements). The research also shows how one can approach methodological issues surrounding cognitive task analysis, including questions about data quality and method reliability, efficiency, and utility. As cognitive task analysis is used more widely in the elicitation, preservation, and dissemination of expert knowledge, as it increasingly underpins the design of complex cognitive systems, and as projects move into more field applications and real-world settings, the issues we discuss become increasingly critical.
international conference on knowledge capture | 2001
Stuart E. Middleton; David De Roure; Nigel Shadbolt
Tools for filtering the World Wide Web exist, but they are hampered by the difficulty of capturing user preferences in such a dynamic environment. We explore the acquisition of user profiles by unobtrusive monitoring of browsing behaviour and application of supervised machine-learning techniques coupled with an ontological representation to extract user preferences. A multi-class approach to paper classification is used, allowing the paper topic taxonomy to be utilised during profile construction. The Quickstep recommender system is presented and two empirical studies evaluate it in a real work setting, measuring the effectiveness of using a hierarchical topic ontology compared with an extendable flat list.
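As an illustration of multi-class paper classification, here is a minimal nearest-prototype classifier over term-frequency vectors. This is a stand-in for the supervised learner Quickstep actually uses; the topics, prototypes, and term vectors are invented.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-topic term prototypes; in a taxonomy-based system
# each label is also a node in the topic hierarchy.
PROTOTYPES = {
    "recommender_systems": Counter({"profile": 3, "recommend": 5, "user": 2}),
    "ontologies": Counter({"ontology": 5, "class": 3, "inference": 2}),
}

def classify(paper_terms: Counter) -> str:
    """Assign the paper to the topic with the most similar prototype."""
    return max(PROTOTYPES, key=lambda t: cosine(paper_terms, PROTOTYPES[t]))

label = classify(Counter({"recommend": 2, "user": 1}))
```

Because each predicted label is a node in the topic taxonomy, the profile builder can exploit the hierarchy (e.g. generalise to parent topics) rather than treating labels as an unstructured flat list.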
Journal of Web Semantics | 2004
Nicholas Gibbins; Stephen Harris; Nigel Shadbolt
The Web Services world consists of loosely-coupled distributed systems which adapt to changes by the use of service descriptions that enable ad-hoc, opportunistic service discovery and reuse. At present, these service descriptions are semantically impoverished, being concerned with describing the functional signature of the services rather than characterising their meaning. In the Semantic Web community, the DAML Services effort attempts to rectify this by providing a more expressive way of describing Web Services using ontologies. However, this approach does not separate the domain-neutral communicative intent of a message (considered in terms of speech acts) from its domain-specific content, unlike similar developments from the multi-agent systems community. We describe our experiences of designing and building an ontologically motivated Web Services system for situational awareness and information triage in a simulated humanitarian aid scenario. In particular, we discuss the merits of using techniques from the multi-agent systems community for separating the intentional force of messages from their content, and the implementation of these techniques within the DAML Services model.
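The separation the abstract argues for can be sketched as a message structure in which the domain-neutral performative (the speech act) is dispatched on independently of the domain-specific content. The field names, performatives, and handler below are illustrative assumptions, not the paper's actual message format.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Message:
    performative: str  # domain-neutral intent: "inform", "request", ...
    sender: str
    receiver: str
    content: Any       # domain-specific payload, interpreted via an ontology
    ontology: str      # names the ontology that gives the content meaning

def handle(msg: Message) -> str:
    """Dispatch on communicative intent alone; interpreting `content`
    is left to ontology-aware code, keeping the two concerns separate."""
    if msg.performative == "request":
        return "queued"
    if msg.performative == "inform":
        return "stored"
    return "ignored"

reply = handle(Message("inform", "sensor1", "triage", {"status": "ok"}, "aid-ontology"))
```

Because the handler never inspects the payload, the same messaging layer can serve any domain whose content is typed by some ontology, which is the modularity benefit the multi-agent systems community's approach provides.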
IEEE Intelligent Systems | 2012
Nigel Shadbolt; Kieron O'Hara; Tim Berners-Lee; Nicholas Gibbins; Hugh Glaser; Wendy Hall; m.c. schraefel
A project to extract value from open government data contributes to the population of the linked data Web with high-value data of good provenance.