Heather S. Packer
University of Southampton
Publications
Featured research published by Heather S. Packer.
Proceedings of the 1st international workshop on Multimodal crowd sensing | 2012
Heather S. Packer; Sina Samangooei; Jonathon S. Hare; Nicholas Gibbins; Paul H. Lewis
Twitter is a popular tool for publishing potentially interesting information about people's opinions, experiences, and news. Mobile devices allow people to publish tweets during real-time events. It is often difficult to identify the subject of a tweet because Twitter users often write in highly unstructured language with many typographical errors. Structured data related to entities can provide additional context for tweets. We propose an approach that associates tweets with a given event using query expansion and relationships defined on the Semantic Web, thus increasing the recall whilst maintaining or improving the precision of event detection. In this work, we investigate the usage of Twitter in discussing the Rock am Ring music festival. We aim to use prior knowledge of the festival's lineup to associate tweets with the bands playing at the festival. In order to evaluate the effectiveness of our approach, we compare the lifetime of the Twitter buzz surrounding an event to the actual programmed event, using Twitter users as social sensors.
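The query-expansion idea described in this abstract can be illustrated with a minimal sketch. The lineup data, alias sets, and function names below are illustrative assumptions, not the paper's actual implementation; in the paper, aliases would come from Semantic Web relationships rather than a hand-written dictionary.

```python
import re

# Hypothetical lineup: band names mapped to alias sets that might be
# gathered from Semantic Web sources (e.g. alternative labels/redirects).
LINEUP = {
    "Coldplay": {"coldplay"},
    "System of a Down": {"system of a down", "soad"},
}

def expand_query(band, aliases):
    """Build one case-insensitive pattern covering a band name and its aliases."""
    terms = {band.lower()} | aliases
    # Longest terms first, so multi-word names match before short aliases.
    return re.compile("|".join(re.escape(t) for t in sorted(terms, key=len, reverse=True)))

def match_tweets(tweets, lineup):
    """Associate each tweet with every band whose expanded query matches it."""
    patterns = {band: expand_query(band, aliases) for band, aliases in lineup.items()}
    return {
        tweet: [band for band, pat in patterns.items() if pat.search(tweet.lower())]
        for tweet in tweets
    }
```

Expanding queries with aliases is what raises recall here: a tweet mentioning only "soad" still gets associated with the band's scheduled slot.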
human factors in computing systems | 2013
Max Van Kleek; Daniel Alexander Smith; Heather S. Packer; Jim Skinner; Nigel Shadbolt
The information processing capabilities of humans enable them to opportunistically draw and integrate knowledge from nearly any information source. However, the integration of digital, structured data from diverse sources remains difficult, due to problems of heterogeneity that arise when data modelled separately are brought together. In this paper, we present an investigation of the feasibility of extending Personal Information Management (PIM) tools to support lightweight, user-driven mixing of previously un-integrated data, with the objective of allowing users to take advantage of the emerging ecosystems of structured data currently becoming available. In this study, we conducted an exploratory, sequential, mixed-method investigation, starting with two pre-studies of the data integration needs and challenges, respectively, of Web-based data sources. Observations from these pre-studies led to DataPalette, an interface that introduced simple co-reference and group multi-path-selection mechanisms for working with terminologically and structurally heterogeneous data. Our lab study showed that participants readily understood the new interaction mechanisms that were introduced. During subjective-choice tasks, participants using DataPalette made more carefully justified decisions while weighing a greater number of factors, and expended less effort, than with a control set-up.
IEEE Transactions on Software Engineering | 2018
Luc Moreau; Belfrit Victor Batlajery; Trung Dong Huynh; Danius T. Michaelides; Heather S. Packer
prov-template is a declarative approach that enables designers and programmers to design and generate provenance compatible with the PROV standard of the World Wide Web Consortium. Designers specify the topology of the provenance to be generated by composing templates, which are provenance graphs containing variables acting as placeholders for values. Programmers write programs that log values and package them up in sets of bindings, a data structure associating variables and values. An expansion algorithm generates instantiated provenance from templates and sets of bindings in any of the serialisation formats supported by PROV. A quantitative evaluation shows that sets of bindings have a size that is typically 40 percent of that of expanded provenance templates, and that the expansion algorithm is suitably tractable, operating in fractions of milliseconds for the type of templates surveyed in the article. Furthermore, the approach shows four significant software engineering benefits: separation of responsibilities, provenance maintenance, potential runtime checks and static analysis, and provenance consumption. The article gathers quantitative data and qualitative benefit descriptions from four different applications making use of prov-template. The system is implemented and released in the open-source library ProvToolbox for provenance processing.
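The template-plus-bindings idea can be sketched in a few lines. This is a toy simplification, not the ProvToolbox API: the `var:` prefix, the tuple encoding of statements, and the example identifiers are all assumptions made for illustration.

```python
# A template is a list of provenance statements whose terms may be
# var:-prefixed placeholders; a set of bindings maps variable names to values.

def expand(template, bindings):
    """Instantiate a template by replacing each var:name with its bound value."""
    def substitute(term):
        if isinstance(term, str) and term.startswith("var:"):
            return bindings[term[len("var:"):]]
        return term
    return [tuple(substitute(term) for term in statement) for statement in template]

# Hypothetical template: a report generated by an analysis run by an analyst.
template = [
    ("wasGeneratedBy", "var:report", "var:analysis"),
    ("wasAssociatedWith", "var:analysis", "var:analyst"),
]
bindings = {"report": "ex:report-42", "analysis": "ex:run-7", "analyst": "ex:alice"}
```

This separation is what yields the size saving reported above: the program only logs the bindings dictionary, while the fixed graph topology lives once in the template.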
Archive | 2014
Heather S. Packer; Laura Drăgan; Luc Moreau
The reputation of a subject is a measure of a community’s opinion about that subject. A subject’s reputation plays a core role in communities within Collective Adaptive Systems (CAS) and can influence a community’s perception of, and interactions with, the subject. Reputation can also affect computational activities within a system. While reputation is frequently used in CAS, there is a lack of agreed methods for its use, representation, and auditability. The aim of this chapter is to investigate key facets of an auditable reputation service for CAS. We contribute: use cases for reputation and provenance in CAS, categorised into functional, auditable, privacy and security, and administrative; and a RESTful Reputation API, which gives users access to subject feedback, feedback reports, and reputation measures.
international joint conference on artificial intelligence | 2011
Heather S. Packer; Nicholas Gibbins; Nicholas R. Jennings
Ontologies that evolve through use to support new domain tasks can grow extremely large. Moreover, large ontologies require more resources to use and have slower response times than small ones. To help address this problem, we present an on-line semantic forgetting algorithm that removes ontology fragments containing infrequently used or cheap-to-relearn concepts. We situate our algorithm in an extension of the widely used RoboCup Rescue platform, which provides simulated tasks to agents. We show that our agents send fewer messages and complete more tasks, and thus achieve a greater degree of success, than other state-of-the-art approaches.
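The forgetting criterion described here can be sketched as a simple filter. The function name, thresholds, and cost model below are illustrative assumptions, not the paper's actual algorithm, which operates on ontology fragments rather than a flat concept set.

```python
# A minimal sketch of usage-based forgetting: drop concepts that are either
# used too rarely or cheap to relearn, keeping only frequently used,
# expensive-to-relearn concepts in the agent's ontology.

def forget(ontology, usage_counts, relearn_cost, min_uses=2, cheap_cost=1.0):
    """Return the subset of the ontology worth keeping."""
    return {
        concept
        for concept in ontology
        if usage_counts.get(concept, 0) >= min_uses        # frequently used
        and relearn_cost.get(concept, 0.0) > cheap_cost    # costly to relearn
    }
```

The trade-off motivating this filter is the one named in the abstract: a smaller ontology is cheaper to reason over, and anything cheap to relearn can safely be re-acquired on demand.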
human factors in computing systems | 2014
Heather S. Packer; Gustavo Buzogany; Daniel Alexander Smith; Laura Dragan; Max Van Kleek; Nigel Shadbolt
The many and varied personal activity trackers on the market have the potential to provide unprecedented detail and insight into our everyday activities. However, effective use and interpretation of data from them can be challenging due to common issues, including false readings caused by the sensing approaches taken, or missing data arising from a number of different causes. In order to understand user perceptions on this topic, we performed a preliminary survey, which found that users desired the ability to annotate, retroactively repair, and compare their data. Based on insights from this survey, we designed a direct-manipulation interface permitting the consolidated annotation and revision of activity data from multiple devices. A pilot study of this interface found that users readily understood how to use the features offered, and valued the ability to edit, yet preserve the provenance of, their data.
web intelligence | 2010
Heather S. Packer; Nicholas Gibbins; Nicholas R. Jennings
Collaborating agents require either prior agreement on the shared vocabularies that they use for communication, or some means of translating between their private ontologies. Thus, techniques that enable agents to build shared vocabularies allow them to share and learn new concepts, and are therefore beneficial when these concepts are required on multiple occasions. However, if this is not carried out in an effective manner then the performance of an agent may be adversely affected by the time required to infer over large augmented ontologies, causing problems in time-critical scenarios such as search and rescue. In this paper, we present a new technique that enables agents to augment their ontology with carefully selected concepts. We contextualise this generic approach in the domain of RoboCup Rescue. Specifically, we show, through empirical evaluation, that agents using our approach save more civilians, reduce the percentage of the city burnt, and spend the least time accessing their ontology compared with other state-of-the-art benchmark approaches.
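The "carefully selected concepts" idea can be illustrated with a benefit-versus-cost filter. Everything here is an assumption for illustration: the scoring rule, the function name, and the example concepts are not taken from the paper.

```python
# Illustrative sketch: an agent adds a newly encountered shared concept to
# its ontology only when its expected reuse outweighs the extra inference
# cost it adds to every future query.

def select_concepts(candidates, expected_uses, inference_cost):
    """Keep candidate concepts whose expected reuse exceeds their cost."""
    return [
        concept
        for concept in candidates
        if expected_uses.get(concept, 0) > inference_cost.get(concept, float("inf"))
    ]
```

A selective rule like this is what keeps the augmented ontology small enough for time-critical reasoning, which is the concern the abstract raises about search-and-rescue scenarios.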
international provenance and annotation workshop | 2014
Heather S. Packer; Luc Moreau
Disseminating provenance data to users can be challenging because of its technical content, and its potential scale and complexity. Textual narrative and supporting images can be used to improve a user's understanding of provenance data. This early work aims to support the exploration of provenance data by allowing users to query it by a provenance subject: an entity, activity, or agent recorded in it.
acm conference on hypertext | 2012
Heather S. Packer; Ashley Smith; Paul H. Lewis
Automatically captured life logging data can be extremely information-rich, large, and varied. Extracting a narrative from this data can be difficult because not all of it is conducive to producing interesting narratives. Life logging data can be enriched by linking to the Semantic Web, and narratives can be enriched with data extracted from Semantic Web knowledge stores. In this paper, we present MemoryBook, a web interface that automatically generates narratives from life logging data in RDF form and from Semantic Web knowledge stores, and highlights maps and images associated with events in the narrative.
international semantic web conference | 2010
Heather S. Packer; Nicholas Gibbins; Nicholas R. Jennings
Ontologies underpin the semantic web; they define the concepts and relationships contained in a data source. An increasing number of ontologies are available on-line, but an ontology that combines information from many different sources can grow extremely large. As an ontology grows larger, more resources are required to use it, and its response time becomes slower. Thus, we present and evaluate an on-line approach that forgets fragments of an OWL ontology that are infrequently or no longer used, or are cheap to relearn in terms of time and resources. In order to evaluate our approach, we situate it in a controlled simulation environment, RoboCup OWLRescue, an extension of the widely used RoboCup Rescue platform that enables agents to build ontologies automatically based on the tasks they are required to perform. We benchmark our approach against other comparable techniques and show that agents using our approach spend less time forgetting concepts from their ontology, allowing them to spend more time deliberating over their actions and achieve a higher average score in the simulation environment.