Hugh Glaser
University of Southampton
Publication
Featured research published by Hugh Glaser.
IEEE Intelligent Systems | 2012
Nigel Shadbolt; Kieron O'Hara; Tim Berners-Lee; Nicholas Gibbins; Hugh Glaser; Wendy Hall; m.c. schraefel
A project to extract value from open government data contributes to the population of the linked data Web with high-value data of good provenance.
International World Wide Web Conference | 2004
m.c. schraefel; Nigel Shadbolt; Nicholas Gibbins; Stephen Harris; Hugh Glaser
We present a Semantic Web application that we call CS AKTive Space. The application exploits a wide range of semantically heterogeneous and distributed content relating to Computer Science research in the UK. This content is gathered on a continuous basis using a variety of methods including harvesting and scraping, as well as adopting a range of models for content acquisition. The content currently comprises around ten million RDF triples, and we have developed storage, retrieval and maintenance methods to support its management. The content is mediated through an ontology constructed for the application domain and incorporates components from other published ontologies. CS AKTive Space supports the exploration of patterns and implications inherent in the content and exploits a variety of visualisations and multi-dimensional representations. Knowledge services supported in the application include investigating communities of practice: who is working, researching or publishing with whom. This work illustrates a number of substantial challenges for the Semantic Web. These include problems of referential integrity, tractable inference and interaction support. We review our approaches to these issues and discuss relevant related work.
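The community-of-practice service described above answers questions such as "who publishes with whom" over the triple store. A minimal sketch of that kind of query over toy RDF-style triples (the names and the `authorOf` predicate are illustrative, not terms from the actual AKT ontology):

```python
from collections import defaultdict

# Toy triples in (subject, predicate, object) form; "authorOf" is an
# illustrative predicate, not the real AKT ontology term.
triples = [
    ("alice", "authorOf", "paper1"),
    ("bob",   "authorOf", "paper1"),
    ("bob",   "authorOf", "paper2"),
    ("carol", "authorOf", "paper2"),
    ("dave",  "authorOf", "paper3"),
]

def copublishers(triples):
    """Map each author to the set of people they have published with."""
    by_paper = defaultdict(set)
    for s, p, o in triples:
        if p == "authorOf":
            by_paper[o].add(s)
    copub = defaultdict(set)
    for authors in by_paper.values():
        for a in authors:
            copub[a] |= authors - {a}
    return dict(copub)

print(copublishers(triples)["bob"])  # bob co-publishes with alice and carol
```

A production system would run this as a query against the RDF store rather than in memory, but the join structure is the same.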
EDBT/ICDT Workshops | 2010
Gianluca Correndo; Manuel Salvadores; Ian Millard; Hugh Glaser; Nigel Shadbolt
The activity of the Linked Data community has lately produced a surge of structured data published in RDF. The presence on the Web of such a huge information cloud, ranging from academic to geographic to gene-related information, poses a great challenge when it comes to reconciling the heterogeneous schemas adopted by data publishers. For several years, the Semantic Web community has been developing algorithms for aligning data models (ontologies). Nevertheless, exploiting such ontology alignments to achieve data integration is still an under-supported research topic. The semantics of ontology alignments, often defined over a logical framework, implies a reasoning step over huge amounts of data that is often hard to implement and rarely scales to Web dimensions. This paper presents an algorithm for achieving RDF data mediation based on SPARQL query rewriting. The approach is based on encoding rewriting rules for the RDF patterns that constitute part of the structure of a SPARQL query.
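A minimal sketch of the pattern-rewriting idea, assuming a query already parsed into its basic graph pattern: each rule maps a predicate in the source vocabulary to one in the target vocabulary. The vocabulary mappings here are illustrative, not the paper's rules, and real rules can restructure whole patterns rather than single predicates.

```python
# Illustrative rewriting rules: source-vocabulary predicate -> target predicate.
rules = {
    "foaf:name":  "vcard:fn",
    "dc:creator": "foaf:maker",
}

def rewrite_patterns(patterns, rules):
    """Rewrite the triple patterns of a pre-parsed SPARQL query.

    `patterns` is a list of (subject, predicate, object) strings forming
    the query's basic graph pattern; variables start with '?'.
    Unmapped predicates pass through unchanged.
    """
    return [(s, rules.get(p, p), o) for s, p, o in patterns]

query = [("?x", "foaf:name", "?n"), ("?p", "dc:creator", "?x")]
print(rewrite_patterns(query, rules))
# [('?x', 'vcard:fn', '?n'), ('?p', 'foaf:maker', '?x')]
```

The rewritten pattern can then be serialised back to SPARQL and run against the target endpoint, avoiding any reasoning step over the data itself.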
European Semantic Web Conference | 2008
Hugh Glaser; Ian Millard; Afraz Jaffri
RKB Explorer is a Semantic Web application that is able to present unified views of a significant number of heterogeneous data sources. We have developed an underlying information infrastructure which is mediated by ontologies and consists of many independent triple-stores, each publicly available through both SPARQL endpoints and resolvable URIs. To realise this synergy of disparate information sources, we have deployed tools to identify co-referent URIs, and devised an architecture to allow the information to be represented and used. This paper provides a brief overview of the system including the underlying infrastructure, and a number of associated tools for both knowledge acquisition and publishing.
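One standard way to manage co-referent URIs of the kind the tools above identify is to close `owl:sameAs`-style pairs into equivalence bundles, so that any member resolves to a canonical representative. A union-find sketch with toy URIs (the URIs and pairs are illustrative, not RKB Explorer's actual data or algorithm):

```python
# Union-find over sameAs pairs: co-referent URIs end up in one bundle.
parent = {}

def find(u):
    """Return the canonical representative of u's bundle."""
    parent.setdefault(u, u)
    while parent[u] != u:
        parent[u] = parent[parent[u]]   # path halving keeps trees shallow
        u = parent[u]
    return u

def union(u, v):
    parent[find(u)] = find(v)

# Toy sameAs assertions (illustrative URIs).
same_as = [
    ("http://a.org/HughGlaser", "http://b.org/person42"),
    ("http://b.org/person42",   "http://c.org/hg"),
]
for u, v in same_as:
    union(u, v)

print(find("http://a.org/HughGlaser") == find("http://c.org/hg"))  # True
```

Given such bundles, a unified view can pick one URI per bundle and merge the statements made about all of its members.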
Knowledge Acquisition, Modeling and Management | 2002
Harith Alani; Srinandan Dasmahapatra; Nicholas Gibbins; Hugh Glaser; Steve Harris; Yannis Kalfoglou; Kieron O'Hara; Nigel Shadbolt
The diversity and distributed nature of the resources available on the Semantic Web pose significant challenges when these are used to help build an ontology automatically. One persistent and pervasive problem is the resolution or elimination of the coreference that arises when more than one identifier is used to refer to the same resource. Tackling this problem is crucial for the referential integrity, and subsequently the quality of results, of any ontology-based knowledge service. We have built a coreference management service to be used alongside the population and maintenance of an ontology. An ontology-based knowledge service that identifies communities of practice (CoPs) is also used to maintain the heuristics used in the coreference management system. This approach is currently being applied in a large-scale experiment harvesting resources from various UK computer science departments with the aim of building a large, generic, web-accessible ontology.
Software - Practice and Experience | 1994
Pieter H. Hartel; Hugh Glaser; John Wild
A system based on the notion of a flow graph is used to specify formally and to implement a compiler for a lazy functional language. The compiler takes a simple functional language as input and generates C. The generated C program can then be compiled, and loaded with an extensive run‐time system to provide the facility to experiment with different analysis techniques. The compiler provides a single, unified, efficient, formal framework for all the analysis and synthesis phases, including the generation of C. Many of the standard techniques, such as strictness and boxing analyses, have been included.
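The laziness the compiler implements can be illustrated with explicit thunks: a suspended computation is forced at most once, and its result overwrites the suspension (the update step a run-time system performs). A minimal Python sketch of call-by-need, not the paper's flow-graph formalism or its generated C:

```python
class Thunk:
    """A suspended computation, forced at most once (call-by-need)."""
    def __init__(self, fn):
        self.fn = fn
        self.forced = False
        self.value = None

    def force(self):
        if not self.forced:
            self.value = self.fn()   # evaluate the suspension
            self.fn = None           # drop the closure so it can be collected
            self.forced = True
        return self.value

calls = []
t = Thunk(lambda: calls.append("eval") or 42)
print(t.force(), t.force(), len(calls))  # evaluated only once
```

Strictness analysis, mentioned in the abstract, is precisely the analysis that lets a compiler skip building such thunks when an argument is certain to be demanded.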
Information Processing Letters | 1998
Abdellah Salhi; Hugh Glaser; David De Roure
We report on a parallel implementation of a tool for symbolic regression, whose algorithmic mechanism is based on genetic programming and whose communication is handled using MPI. The implementation relies on a random islands model (RIM), which combines the conventional islands model, where migration of individuals between islands occurs periodically, with niching, where no migration takes place. The system was designed so that the algorithm is synergistic with parallel/distributed architectures, making maximum use of processor time and minimum use of network bandwidth without complicating the sequential algorithm significantly. Results on an IBM SP2 are included.
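A toy sketch of the random islands idea, simulated sequentially rather than over MPI, and with simple mutate-and-select hill climbing on a numeric objective standing in for genetic programming. The objective, island count, and migration probability are all illustrative assumptions, not the paper's parameters:

```python
import random

random.seed(0)

def fitness(x):
    # Toy objective: maximise -(x - 3)^2; the real system evolves programs.
    return -(x - 3.0) ** 2

def evolve(pop, steps=20):
    """Mutate-and-select hill climbing standing in for a GP generation."""
    for _ in range(steps):
        cand = [x + random.gauss(0, 0.5) for x in pop]
        pop = [c if fitness(c) > fitness(x) else x for c, x in zip(cand, pop)]
    return pop

# Random islands model: some epochs migrate each island's best individual
# ring-wise; others run fully isolated (niching). The choice is random.
islands = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(4)]
for epoch in range(10):
    islands = [evolve(pop) for pop in islands]
    if random.random() < 0.5:                      # migration epoch
        best = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            pop[0] = best[(i - 1) % len(islands)]  # ring migration

best = max((max(p, key=fitness) for p in islands), key=fitness)
print(round(best, 1))   # converges near the optimum at x = 3
```

In the MPI version each island is a process and a migration epoch is a point-to-point send of the local best, which is what keeps network bandwidth usage low.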
Web Intelligence, Mining and Semantics | 2011
Temitope Omitola; Landong Zuo; Christopher Gutteridge; Ian Millard; Hugh Glaser; Nicholas Gibbins; Nigel Shadbolt
In the open world of the (Semantic) Web, a world where increasingly diverse materials from disparate sources of varying quality are being made available, an automatic mechanism for the provision of provenance information for these sources is needed. This paper describes voidp, a provenance extension for the void vocabulary that allows data publishers to specify the provenance relationships of their data. We enumerate voidp's classes and properties, and describe a use-case scenario. A wider uptake of voidp by dataset publishers will allow data-consuming tools to take advantage of these metadata, providing consumers with the origin, i.e. the provenance, of what is being consumed.
Lecture Notes in Computer Science | 2004
Hugh Glaser; Harith Alani; Les Carr; Sam Chapman; Fabio Ciravegna; Alexiei Dingli; Nicholas Gibbins; Stephen Harris; m.c. schraefel; Nigel Shadbolt
In this paper we reflect on the lessons learned from deploying the award-winning [1] Semantic Web application CS AKTiveSpace. We look at issues in service orientation and modularisation, harvesting, and interaction design for supporting this ten-million-triple application. We consider next steps for the application, based on these lessons, and propose a strategy for expanding and improving the services afforded by the application.
Software - Practice and Experience | 1987
Hugh Glaser; P. Thompson
This paper describes a variation on the reference count method of garbage collection which can be particularly effective for the implementation of modern, truly functional languages. This method gives a high degree of control over the collection process and an increase in efficiency for many systems, particularly those with small, fast memories local to the central processor, such as a stack or an addressable cache.
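The baseline discipline the paper refines can be sketched in a few lines: every cell carries a count of references to it, and dropping the last reference frees the cell and recursively releases its children. This toy Python version models only the naive scheme, not the paper's variation exploiting fast local memory:

```python
class Cell:
    """A heap cell managed by naive reference counting."""
    freed = []   # record of freed cells, for illustration

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.rc = 0
        for c in self.children:
            c.rc += 1            # each child gains a reference from us

    def release(self):
        # Decrement; on reaching zero, free the cell and release children.
        self.rc -= 1
        if self.rc <= 0:
            Cell.freed.append(self.name)
            for c in self.children:
                c.release()

leaf = Cell("leaf")
node = Cell("node", [leaf])
node.rc = 1          # one external reference, e.g. from a stack slot
node.release()       # dropping it frees the whole structure
print(Cell.freed)    # ['node', 'leaf']
```

The appeal of reference counting for functional languages is that cells are reclaimed immediately and incrementally, which is what makes it possible to keep the hot part of the heap in a small fast memory.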