Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Suzanne M. Embury is active.

Publication


Featured research published by Suzanne M. Embury.


International Conference on Software Maintenance | 2004

Program slicing in the presence of database state

David Willmor; Suzanne M. Embury; Jianhua Shao

Program slicing has long been recognised as a valuable technique for supporting the software maintenance process. However, many programs operate over some kind of external state, as well as the internal program state. Arguably, the most significant form of external state is that used to store data associated with the application, for example, in a database management system. We propose an approach to supporting slicing over both program and database state, which requires the introduction of two new forms of data dependency into the standard program dependency graph. Our method expands the usefulness of program slicing techniques to the considerable number of database application programs that are being maintained within industry and science today.
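As a rough illustration of the idea (the class and edge names below are our own, not the paper's dependency definitions), a backward slice over a program dependence graph extended with database-induced edges can be computed by ordinary reverse reachability:

# Sketch: backward slicing over a program dependence graph (PDG) extended with
# database-induced dependencies. All names are illustrative; the paper's actual
# dependency definitions are richer than this.
from collections import defaultdict

class PDG:
    def __init__(self):
        self.deps = defaultdict(set)          # statement -> statements it depends on

    def add_data_dep(self, user, definer):    # ordinary def-use dependency
        self.deps[user].add(definer)

    def add_db_dep(self, reader, writer):     # reader SELECTs a table that writer modified
        self.deps[reader].add(writer)

    def backward_slice(self, criterion):
        """All statements the slicing criterion (a statement id) may depend on."""
        seen, stack = set(), [criterion]
        while stack:
            s = stack.pop()
            for d in self.deps[s]:
                if d not in seen:
                    seen.add(d)
                    stack.append(d)
        return seen

pdg = PDG()
pdg.add_data_dep("s3: print(total)", "s2: total = row.amount")
pdg.add_data_dep("s2: total = row.amount", "s1: row = SELECT amount FROM orders")
pdg.add_db_dep("s1: row = SELECT amount FROM orders", "s0: INSERT INTO orders ...")
print(pdg.backward_slice("s3: print(total)"))   # the slice now reaches the INSERT via the database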


Information Systems | 2013

Incrementally improving dataspaces based on user feedback

Khalid Belhajjame; Norman W. Paton; Suzanne M. Embury; Alvaro A. A. Fernandes; Cornelia Hedeler

One aspect of the vision of dataspaces has been articulated as providing various benefits of classical data integration with reduced up-front costs. In this paper, we present techniques that aim to support schema mapping specification through interaction with end users in a pay-as-you-go fashion. In particular, we show how schema mappings, which are obtained automatically using existing matching and mapping generation techniques, can be annotated with metrics estimating their fitness to user requirements using feedback on query results obtained from end users. Using the annotations computed on the basis of user feedback, and given user requirements in terms of precision and recall, we present a method for selecting the set of mappings that produce results meeting the stated requirements. In doing so, we cast mapping selection as an optimization problem. Feedback may reveal that the quality of schema mappings is poor. We show how mapping annotations can be used to support the derivation of better quality mappings from existing mappings through refinement. An evolutionary algorithm is used to efficiently and effectively explore the large space of mappings that can be obtained through refinement. User feedback can also be used to annotate the results of the queries that the user poses against an integration schema. We show how estimates for precision and recall can be computed for such queries. We also investigate the problem of propagating feedback about the results of (integration) queries down to the mappings used to populate the base relations in the integration schema.
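A minimal sketch, assuming a simple feedback model in which users mark returned tuples as expected or unexpected, of how precision and recall annotations can be derived and then used to filter mappings against stated requirements (the paper's selection step is formulated as an optimisation problem rather than this simple thresholding):

# Sketch: annotating schema mappings with precision/recall estimates from user
# feedback on their results, then keeping only mappings that meet user-stated
# thresholds. Feedback model and selection rule are simplifying assumptions.

def annotate(mapping_results, feedback):
    """feedback maps a result tuple to 'expected' or 'unexpected'; tuples marked
    expected but absent from the results count as false negatives."""
    tp = sum(1 for t in mapping_results if feedback.get(t) == "expected")
    fp = sum(1 for t in mapping_results if feedback.get(t) == "unexpected")
    fn = sum(1 for t, label in feedback.items()
             if label == "expected" and t not in mapping_results)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def select(mappings, feedback, min_precision, min_recall):
    chosen = []
    for name, results in mappings.items():
        p, r = annotate(results, feedback)
        if p >= min_precision and r >= min_recall:
            chosen.append(name)
    return chosen

mappings = {"m1": {("Ann", "UK"), ("Bob", "US")}, "m2": {("Ann", "UK"), ("Eve", "??")}}
feedback = {("Ann", "UK"): "expected", ("Bob", "US"): "expected", ("Eve", "??"): "unexpected"}
print(select(mappings, feedback, min_precision=0.9, min_recall=0.5))   # ['m1']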


International Workshop on Principles of Software Evolution | 2003

Business rule evolution and measures of business rule evolution

Liwen Lin; Suzanne M. Embury; Brian Warboys

There is an urgent industrial need to apply changes to business rules (BRs) in software systems quickly, reliably and economically. Unfortunately, evolving BRs in most existing software systems is both time-consuming and error-prone. In order to manage, control and improve BR evolution, it is necessary that the software evolution community comes to an understanding of the ways in which BRs are implemented and how BR evolution can be facilitated or hampered by the design of software systems. We suggest that new software metrics are needed to allow us to measure the characteristics of BR evolution and to help us explore possible improvements in a systematic way. A suitable set of BR-related metrics would help us to discover the root causes of the difficulties inherent in BR evolution, evaluate the success of proposed approaches to BR evolution, and improve the BR evolution process as a whole.
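Purely as an illustration of the kind of metric the paper argues for (this particular measure is our own assumption, not one proposed by the authors), a "rule scatter" count of how many modules implement fragments of each business rule gives one signal of how costly a rule will be to evolve:

# Illustrative only: a simple "rule scatter" metric counting how many source
# modules implement fragments of each business rule.
from collections import defaultdict

def rule_scatter(fragment_locations):
    """fragment_locations: list of (rule_id, module) pairs, one per implemented fragment."""
    modules = defaultdict(set)
    for rule, module in fragment_locations:
        modules[rule].add(module)
    return {rule: len(mods) for rule, mods in modules.items()}

locations = [("min_order_value", "checkout.py"), ("min_order_value", "invoice.sql"),
             ("min_order_value", "OrderForm.js"), ("vat_rate", "invoice.sql")]
print(rule_scatter(locations))   # a rule scattered over many modules is costlier to evolve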


EDBT/ICDT Workshops | 2012

Diagnosing faults in embedded queries in database applications

Muhammad Akhter Javid; Suzanne M. Embury

Diagnosing faults in embedded queries in database applications is a daunting process. When test cases fail, the traditional way of diagnosing faults is to follow possible execution paths, either mentally or step-by-step in a debugger, to locate the problematic area. The diagnosis problem becomes even harder when the application contains an embedded language with quite different semantics and properties. Our focus is on a specific problem: diagnosing test case failures in database applications that are caused by embedded queries which are syntactically correct but semantically incorrect (i.e., they produce incomplete or incorrect results). Much research literature is available on database applications and databases, but the diagnosis problem for embedded queries that cause failure of test cases has not been tackled. We perform an experiment to see how far existing techniques could be useful in proposing a new technique for this problem. We identify the additional components that need to be developed to take us to a full solution and describe our tentative conclusions so far.
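For context, spectrum-based fault localisation is one family of existing diagnosis techniques; the sketch below (our own illustration, not the paper's method) ranks embedded queries by a Tarantula-style suspiciousness score computed from which queries each passing or failing test executes:

# Sketch: ranking embedded queries by suspiciousness using a Tarantula-style
# spectrum formula over test outcomes. The specific scoring is an assumption used
# purely to show the kind of information such techniques work from.

def suspiciousness(coverage, outcomes):
    """coverage: test -> set of query ids executed; outcomes: test -> 'pass'/'fail'."""
    total_fail = sum(1 for o in outcomes.values() if o == "fail") or 1
    total_pass = sum(1 for o in outcomes.values() if o == "pass") or 1
    queries = set().union(*coverage.values())
    scores = {}
    for q in queries:
        fail = sum(1 for t, qs in coverage.items() if q in qs and outcomes[t] == "fail")
        passed = sum(1 for t, qs in coverage.items() if q in qs and outcomes[t] == "pass")
        scores[q] = (fail / total_fail) / ((fail / total_fail) + (passed / total_pass) or 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

coverage = {"t1": {"q_select_orders", "q_join_customers"},
            "t2": {"q_select_orders"}, "t3": {"q_join_customers"}}
outcomes = {"t1": "fail", "t2": "pass", "t3": "fail"}
print(suspiciousness(coverage, outcomes))   # q_join_customers ranks as most suspicious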


IEEE Transactions on Services Computing | 2014

Verification of Semantic Web Service Annotations Using Ontology-Based Partitioning

Khalid Belhajjame; Suzanne M. Embury; Norman W. Paton

Semantic annotation of web services has been proposed as a solution to the problem of discovering services to fit a particular need and reusing them appropriately. While there exist tools that assist human users in the annotation task, e.g., Radiant and Meteor-S, no semantic annotation proposal considers the problem of verifying the accuracy of the resulting annotations. Early evidence from workflow compatibility checking suggests that the proportion of annotations that contain some form of inaccuracy is high, and yet no tools exist to help annotators to test the results of their work systematically before they are deployed for public use. In this paper, we adapt techniques from conventional software testing to the verification of semantic annotations for web service input and output parameters. We present an algorithm for the testing process and discuss ways in which manual effort from the annotator during testing can be reduced. We also present two adequacy criteria for specifying test cases used as input for the testing process. These criteria are based on structural coverage of the domain ontology used for annotation. The results of an evaluation exercise, based on a collection of annotations for bioinformatics web services, show that defects can be successfully detected by the technique.
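A minimal sketch of an ontology-class coverage check in the spirit of the adequacy criteria described above; the toy ontology fragment, the annotated parameter concept and the test values are assumptions introduced for illustration:

# Sketch: a class-coverage adequacy check over a toy ontology fragment. Test values
# are labelled with the ontology class they belong to, and coverage is the fraction
# of classes under the annotated parameter concept exercised by at least one value.

ontology = {                      # concept -> direct subclasses (illustrative fragment)
    "Sequence": ["DNASequence", "RNASequence", "ProteinSequence"],
}

def class_coverage(annotated_concept, test_values):
    """test_values: list of (value, ontology_class) pairs supplied for the parameter."""
    target = set(ontology.get(annotated_concept, []))
    covered = {cls for _, cls in test_values if cls in target}
    missing = target - covered
    return (len(covered) / len(target) if target else 1.0), missing

cov, missing = class_coverage("Sequence",
                              [("ATGGC", "DNASequence"), ("AUGGC", "RNASequence")])
print(cov, missing)   # ~0.67 coverage; ProteinSequence values still needed for adequacy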


Advanced Query Processing, Volume 1: Issues and Trends (Springer, pp. 305-341) | 2013

A Functional Model for Dataspace Management Systems

Cornelia Hedeler; Alvaro A. A. Fernandes; Khalid Belhajjame; Lu Mao; Chenjuan Guo; Norman W. Paton; Suzanne M. Embury

Dataspace management systems (DSMSs) hold the promise of pay-as-you-go data integration. We describe a comprehensive model of DSMS functionality using an algebraic style. We begin by characterizing a dataspace life cycle and highlighting opportunities for both automation and user-driven improvement techniques. Building on the observation that many of the techniques developed in model management are of use in data integration contexts as well, we briefly introduce the model management area and explain how previous work on both data integration and model management needs extending if the full dataspace life cycle is to be supported. We show that many model management operators already enable important functionalities (e.g., the merging of schemas, the composition of mappings, etc.) and formulate these capabilities in an algebraic structure, thereby giving rise to the notion of the core functionality of a DSMS as a many-sorted algebra. Given this view, we show how core tasks in the dataspace life cycle can be enacted by means of algebraic programs. An extended case study illustrates how such algebraic programs capture a challenging, practical scenario.
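A toy sketch of what an "algebraic program" over model-management-style operators might look like; the operator names, signatures and the trivial name matcher are illustrative and much simpler than the many-sorted algebra defined in the chapter:

# Sketch: a dataspace initialisation task written as a small program over
# model-management-style operators (match, merge, compose). Illustrative only.

def match(schema_a, schema_b):
    """Return correspondences between attributes with identical names (toy matcher)."""
    return [(a, b) for a in schema_a for b in schema_b if a.split(".")[1] == b.split(".")[1]]

def merge(schema_a, schema_b, correspondences):
    """Union of attributes, collapsing matched pairs onto the first schema's name."""
    matched_b = {b for _, b in correspondences}
    return schema_a + [b for b in schema_b if b not in matched_b]

def compose(m1, m2):
    """Compose attribute-level mappings given as dicts (source -> target)."""
    return {src: m2[tgt] for src, tgt in m1.items() if tgt in m2}

s1 = ["lab.gene", "lab.organism"]
s2 = ["repo.gene", "repo.sequence"]
corrs = match(s1, s2)
print(merge(s1, s2, corrs))            # an integration schema drafted automatically
print(compose({"lab.gene": "repo.gene"}, {"repo.gene": "archive.gene"}))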


Conference on Current Trends in Theory and Practice of Informatics | 2016

Pay-as-you-go Data Integration: Experiences and Recurring Themes

Norman W. Paton; Khalid Belhajjame; Suzanne M. Embury; Alvaro A. A. Fernandes; Ruhaila Maskat

Data integration typically seeks to provide the illusion that data from multiple distributed sources comes from a single, well managed source. Providing this illusion in practice tends to involve the design of a global schema that captures the users' data requirements, followed by the manual, tool-supported construction of mappings between the sources and the global schema. This overall approach can provide high quality integrations but at high cost, and tends to be unsuitable for areas with large numbers of rapidly changing sources, where users may be willing to cope with a less than perfect integration. Pay-as-you-go data integration has been proposed to overcome the need for costly manual data integration. It tends to involve two steps: initialisation, the automatic creation of mappings (generally of poor quality) between sources; and improvement, the obtaining of feedback on some aspect of the integration and the application of this feedback to revise the integration. There has been considerable research in this area over a ten-year period. This paper reviews some experiences with pay-as-you-go data integration, providing a framework that can be used to compare or develop pay-as-you-go data integration techniques.
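A schematic sketch of the initialise/improve loop described above, with both steps stubbed out; the mapping generation and feedback handling below are placeholders, not any specific technique from the paper:

# Sketch: the two-step shape of pay-as-you-go integration as a loop over feedback.

def initialise(sources, global_schema):
    """Automatically propose candidate mappings (stub: one trivial mapping per source)."""
    return {src: {"target": global_schema, "score": 0.5} for src in sources}

def improve(mappings, feedback):
    """Apply feedback (mapping name -> 'good'/'bad') and drop low-scoring mappings."""
    for name, verdict in feedback.items():
        if name in mappings:
            mappings[name]["score"] += 0.25 if verdict == "good" else -0.25
    return {n: m for n, m in mappings.items() if m["score"] > 0.25}

mappings = initialise(["crm.csv", "legacy.db", "partner_api"], "customer(id, name, country)")
for round_of_feedback in [{"legacy.db": "bad"}, {"crm.csv": "good"}]:
    mappings = improve(mappings, round_of_feedback)
print(mappings)   # surviving mappings and their updated scores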


Asian Conference on Intelligent Information and Database Systems | 2013

Measuring data completeness for microbial genomics database

Nurul A. Emran; Suzanne M. Embury; Paolo Missier; Mohd Noor Mat Isa; Azah Kamilah Muda

Poor quality data, such as data with missing values (or records), causes negative consequences in many application domains. An important aspect of data quality is completeness. One problem in data completeness is that of missing individuals in data sets. Within a data set, the individuals are the real-world entities whose information is recorded. So far, however, there has been little discussion in completeness studies about how missing individuals are assessed. In this paper, we propose the notion of population-based completeness (PBC), which deals with the missing-individuals problem, with the aim of investigating what is required to measure PBC and what is needed to support PBC measurements in practice. The paper explores the need for PBC in microbial genomics, using real sample data sets retrieved from a microbial database called the Comprehensive Microbial Resources (CMR).
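A minimal sketch of a population-based completeness measure as the fraction of a reference population's individuals that appear in the data set under assessment; the reference population, the matching rule, and the values below are illustrative, and the paper's exact definition may differ:

# Sketch: population-based completeness (PBC) as the fraction of individuals in a
# reference population that are present in the data set being assessed.

def pbc(dataset_individuals, reference_population):
    dataset_individuals = set(dataset_individuals)
    reference_population = set(reference_population)
    present = dataset_individuals & reference_population
    return len(present) / len(reference_population) if reference_population else 1.0

reference = {"E. coli K-12", "B. subtilis 168", "M. tuberculosis H37Rv", "H. pylori 26695"}
dataset = {"E. coli K-12", "B. subtilis 168"}
print(pbc(dataset, reference))   # 0.5: half the population's individuals are represented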


Search Computing | 2010

Chapter 7: Dataspaces

Cornelia Hedeler; Khalid Belhajjame; Norman W. Paton; Alessandro Campi; Alvaro A. A. Fernandes; Suzanne M. Embury

The vision of dataspaces is to provide many of the benefits of classical data integration, but with reduced up-front costs, combined with opportunities for incremental refinement, enabling a "pay as you go" approach. As such, dataspaces join a long stream of research activities that aim to build tools that simplify integrated access to distributed data. To address dataspace challenges, many different techniques may need to be considered: data integration from multiple sources, machine learning approaches to resolving schema heterogeneity, integration of structured and unstructured data, management of uncertainty, and query processing and optimization. Results that seek to realize the different visions exhibit considerable variety in their contexts, priorities and techniques. This chapter presents a classification of the key concepts in the area, encouraging the use of consistent terminology and enabling a systematic comparison of proposals. It also seeks to identify common and complementary ideas in the dataspace and search computing literatures, thereby identifying opportunities for both areas and open issues for further research.


Data and Knowledge Engineering | 2004

Algorithms for analysing related constraint business rules

Gaihua Fu; Jianhua Shao; Suzanne M. Embury; W. Alex Gray

Constraints represent a class of business rules that describe the conditions under which an organisation operates. It is common for organisations to implement a large number of constraints in their supporting information systems. To remain competitive in today's ever-changing business environment, organisations increasingly recognise the need to be able to evolve the implemented constraints in a timely and correct manner. While many techniques have been proposed to assist constraint specification and enforcement in information systems, little has been done so far to help constraint evolution. In this paper, we introduce a form of constraint analysis that is particularly geared towards constraint evolution. More specifically, we propose several algorithms for determining which constraints collectively restrict a specified set of business objects, and we study their performance. Since the constraints contained in an information system are typically large in number and tend to be fragmented during implementation, this type of analysis is desirable and valuable in the process of their evolution.
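A naive sketch of the analysis task described above: collecting the constraints that directly or transitively restrict a given set of business objects via a fixpoint over the objects each constraint mentions. The paper proposes and evaluates more efficient algorithms than this; the constraint names and objects below are illustrative.

# Sketch: find constraints that collectively restrict a seed set of business objects.

def related_constraints(constraints, seed_objects):
    """constraints: dict name -> set of business objects the constraint mentions."""
    objects, selected = set(seed_objects), set()
    changed = True
    while changed:
        changed = False
        for name, mentioned in constraints.items():
            if name not in selected and mentioned & objects:
                selected.add(name)
                objects |= mentioned     # objects reached via this constraint may pull in more
                changed = True
    return selected

constraints = {
    "c1": {"Order", "Customer"},
    "c2": {"Customer", "CreditLimit"},
    "c3": {"Supplier", "Contract"},
}
print(related_constraints(constraints, {"Order"}))   # {'c1', 'c2'}; c3 is unrelated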

Collaboration


Dive into Suzanne M. Embury's collaborations.

Top Co-Authors

Nurul A. Emran

Universiti Teknikal Malaysia Melaka


Nikolaos Konstantinou

National Technical University of Athens


Andy Brass

University of Manchester
