Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jeremy S. Bradbury is active.

Publication


Featured research published by Jeremy S. Bradbury.


ACM SIGSOFT Workshop on Self-Managed Systems | 2004

A survey of self-management in dynamic software architecture specifications

Jeremy S. Bradbury; James R. Cordy; Juergen Dingel; Michel Wermelinger

As dynamic software architecture use becomes more widespread, a variety of formal specification languages have been developed to gain a better understanding of the foundations of this type of software evolutionary change. In this paper we survey 14 formal specification approaches based on graphs, process algebras, logic, and other formalisms. Our survey will evaluate the ability of each approach to specify self-managing systems as well as the ability to address issues regarding expressiveness and scalability. Based on the results of our survey we will provide recommendations on future directions for improving the specification of dynamic software architectures, specifically self-managed architectures.


Human Factors in Computing Systems | 2003

Hands on cooking: towards an attentive kitchen

Jeremy S. Bradbury; Jeffrey S. Shell; Craig B. Knowles

To make human computer interaction more transparent, different modes of communication need to be explored. We present eyeCOOK, a multimodal attentive cookbook to help a non-expert computer user cook a meal. The user communicates using eye-gaze and speech commands, and eyeCOOK responds visually and/or verbally, promoting communication through natural human input channels without physically encumbering the user. Our goal is to improve productivity and user satisfaction without creating additional requirements for user attention. We describe how the user interacts with the eyeCOOK prototype and the role of this system in an Attentive Kitchen.
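
As a rough illustration of the multimodal interaction described above, the sketch below combines an eye-gaze target with a spoken command; the class and method names are hypothetical stand-ins, not the actual eyeCOOK implementation.

    // Hypothetical sketch: interpreting a speech command relative to the current
    // eye-gaze target, in the spirit of eyeCOOK (all names are illustrative only).
    public class MultimodalDispatcher {
        // Most recent on-screen element the user is looking at (e.g., "recipe-step-3").
        private volatile String gazeTarget = "none";

        public void onGazeChanged(String target) {
            gazeTarget = target;
        }

        // A recognized speech command is resolved against the gaze target, so the user
        // can say "read that" instead of naming the item explicitly.
        public String onSpeechCommand(String command) {
            switch (command) {
                case "next step":  return "Advancing past " + gazeTarget;
                case "read that":  return "Reading aloud: " + gazeTarget;
                case "show timer": return "Displaying timer near " + gazeTarget;
                default:           return "Unrecognized command: " + command;
            }
        }

        public static void main(String[] args) {
            MultimodalDispatcher d = new MultimodalDispatcher();
            d.onGazeChanged("recipe-step-3");
            System.out.println(d.onSpeechCommand("read that"));
        }
    }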


Foundations of Software Engineering | 2003

Evaluating and improving the automatic analysis of implicit invocation systems

Jeremy S. Bradbury; Juergen Dingel

Model checking and other finite-state analysis techniques have been very successful when used with hardware systems and less successful with software systems. It is especially difficult to analyze software systems developed with the implicit invocation architectural style because the loose coupling of their components increases the size of the finite state model. In this paper we provide insight into the larger problem of how to make model checking a better analysis and verification tool for software systems. Specifically, we will extend an existing approach to model checking implicit invocation to allow for the modeling of larger and more realistic systems. Our focus will be on improving the representation of events, event delivery policies and event-method bindings. We also evaluate our technique on two non-trivial examples. In one of our examples, we will show how with iterative analysis a system parameter can be chosen to meet the appropriate system requirements.
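
For readers unfamiliar with the implicit-invocation style analyzed above, the sketch below shows event announcement and event-method binding in plain Java. The EventBus and its synchronous, registration-order delivery policy are a generic illustration of the style, not the model used in the paper.

    // Illustration of implicit invocation: components announce events, and methods
    // bound to those events are invoked implicitly by the bus. The delivery policy
    // shown here (synchronous, in registration order) is only one possible choice.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    public class EventBus {
        private final Map<String, List<Consumer<String>>> bindings = new HashMap<>();

        // Event-method binding: associate an event name with a handler method.
        public void bind(String event, Consumer<String> handler) {
            bindings.computeIfAbsent(event, e -> new ArrayList<>()).add(handler);
        }

        // Announce an event; all bound handlers run without the announcer naming them.
        public void announce(String event, String payload) {
            for (Consumer<String> handler : bindings.getOrDefault(event, List.of())) {
                handler.accept(payload);
            }
        }

        public static void main(String[] args) {
            EventBus bus = new EventBus();
            bus.bind("fileSaved", path -> System.out.println("Indexer sees: " + path));
            bus.bind("fileSaved", path -> System.out.println("Backup sees: " + path));
            bus.announce("fileSaved", "/tmp/report.txt");  // both handlers are invoked
        }
    }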


Testing: Academic and Industrial Conference Practice and Research Techniques - MUTATION (TAICPART-MUTATION 2007) | 2007

Comparative Assessment of Testing and Model Checking Using Program Mutation

Jeremy S. Bradbury; James R. Cordy; Juergen Dingel

Developing correct concurrent code is more difficult than developing correct sequential code. This difficulty is due in part to the many different, possibly unexpected, executions of the program, and leads to the need for special quality assurance techniques for concurrent programs such as randomized testing and state space exploration. In this paper an approach is used that assesses testing and formal analysis tools using metrics to measure the effectiveness and efficiency of each technique at finding concurrency bugs. Using program mutation, the assessment method creates a range of faulty versions of a program and then evaluates the ability of various testing and formal analysis tools to detect these faults. The approach is implemented and automated in an experimental mutation analysis framework (ExMAn) which allows results to be more easily reproducible. To demonstrate the approach, we present the results of a comparison of testing using the IBM tool ConTest and model checking using the NASA tool Java PathFinder (JPF).
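
To make the idea of a seeded concurrency fault concrete, the sketch below shows an original class and, in comments, a mutant with synchronization removed. This is a generic example of a concurrency mutation, not one of the paper's actual mutants.

    // Illustrative concurrency mutant: removing synchronization introduces a
    // lost-update race that a testing tool (e.g., ConTest) or a model checker
    // (e.g., Java PathFinder) may or may not detect.
    public class Counter {
        private int value = 0;

        // Original: increments are atomic with respect to each other.
        public synchronized void increment() {
            value = value + 1;
        }

        // Mutant version (synchronized keyword removed):
        // public void increment() {
        //     value = value + 1;   // read-modify-write race under concurrent calls
        // }

        public synchronized int get() {
            return value;
        }
    }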


International Symposium on Software Reliability Engineering | 2006

ExMAn: A Generic and Customizable Framework for Experimental Mutation Analysis

Jeremy S. Bradbury; James R. Cordy; Juergen Dingel

Current mutation analysis tools are primarily used to compare different test suites and are tied to a particular programming language. In this paper we present the ExMAn experimental mutation analysis framework. ExMAn is automated, general, and flexible, and allows for the comparison of different quality assurance techniques such as testing, model checking, and static analysis. The goal of ExMAn is to allow for automatic mutation analysis that can be reproduced by other researchers. After describing ExMAn, we present a scenario of using ExMAn to compare testing with static analysis of temporal logic properties. We also discuss both the benefits and the current limitations of using our framework.
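
The comparison loop that such a framework automates can be sketched roughly as follows; the Mutant and QualityTool interfaces and their methods are invented stand-ins for illustration, not ExMAn's actual API.

    // Hypothetical sketch of a mutation-analysis comparison loop in the spirit of
    // ExMAn; all types and methods here are illustrative stand-ins.
    import java.util.List;

    interface Mutant {
        String id();                  // identifier of the generated faulty version
    }

    interface QualityTool {           // e.g., a test suite, model checker, or static analyzer
        String name();
        boolean detects(Mutant m);    // true if the tool reports a fault in this mutant
    }

    public class MutationComparison {
        // Detection score: fraction of mutants a technique identifies as faulty.
        public static double score(QualityTool tool, List<Mutant> mutants) {
            long detected = mutants.stream().filter(tool::detects).count();
            return mutants.isEmpty() ? 0.0 : (double) detected / mutants.size();
        }

        public static void compare(List<QualityTool> tools, List<Mutant> mutants) {
            for (QualityTool tool : tools) {
                System.out.printf("%s detected %.0f%% of mutants%n",
                        tool.name(), 100 * score(tool, mutants));
            }
        }
    }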


Workshop on Program Analysis for Software Tools and Engineering | 2005

An empirical framework for comparing effectiveness of testing and property-based formal analysis

Jeremy S. Bradbury; James R. Cordy; Juergen Dingel

Today, many formal analysis tools are not only used to provide certainty but are also used to debug software systems - a role that has traditionally been reserved for testing tools. We are interested in exploring the complementary relationship, as well as the tradeoffs, between testing and formal analysis with respect to debugging and, more specifically, bug detection. In this paper we present an approach to the assessment of testing and formal analysis tools using metrics to measure the quantity and efficiency of each technique at finding bugs. We also present an assessment framework that has been constructed to allow for symmetrical comparison and evaluation of tests versus properties. We are currently beginning to conduct experiments, and this paper presents a discussion of possible outcomes of our proposed empirical study.
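
The metrics referred to above can be read in a simple way, sketched below; these formulas are one plausible reading, not necessarily the exact definitions used in the paper.

    // Generic effectiveness/efficiency metrics for a bug-detection technique
    // (a plausible reading of the paper's metrics, not its exact definitions).
    public class DetectionMetrics {
        // Effectiveness: fraction of known (seeded) bugs the technique finds.
        public static double effectiveness(int bugsFound, int totalBugs) {
            return totalBugs == 0 ? 0.0 : (double) bugsFound / totalBugs;
        }

        // Efficiency: bugs found per unit of analysis time (e.g., per minute).
        public static double efficiency(int bugsFound, double minutesSpent) {
            return minutesSpent <= 0 ? 0.0 : bugsFound / minutesSpent;
        }
    }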


Source Code Analysis and Manipulation | 2010

How Good is Static Analysis at Finding Concurrency Bugs?

Devin Kester; Martin Mwebesa; Jeremy S. Bradbury

Detecting bugs in concurrent software is challenging due to the many different thread interleavings. Dynamic analysis and testing solutions to bug detection are often costly as they need to provide coverage of the interleaving space in addition to traditional black box or white box coverage. An alternative to dynamic analysis detection of concurrency bugs is the use of static analysis. This paper examines the use of three static analysis tools (FindBugs, JLint, and Chord) in order to assess each tool's ability to find concurrency bugs and to identify the percentage of spurious results produced. The empirical data presented is based on an experiment involving 12 concurrent Java programs.
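
A classic example of the kind of concurrency bug at issue is a check-then-act sequence on a shared collection, sketched below. It is a generic example, not one of the 12 study programs, and whether a particular static analyzer reports it (and how many of its reports are spurious) is exactly the kind of question the study measures.

    // Generic check-then-act atomicity violation: the containsKey/put pair is not
    // atomic, so two threads can both pass the check and overwrite each other's entry.
    import java.util.HashMap;
    import java.util.Map;

    public class Registry {
        private final Map<String, Integer> ids = new HashMap<>();

        public void register(String name, int id) {
            if (!ids.containsKey(name)) {   // check
                ids.put(name, id);          // act: another thread may have put() in between
            }
        }
    }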


Source Code Analysis and Manipulation | 2005

Implementation and verification of implicit-invocation systems using source transformation

Hongyu Zhang; Jeremy S. Bradbury; James R. Cordy; Juergen Dingel

In this paper we present a source transformation-based framework to support uniform testing and model checking of implicit-invocation software systems. The framework includes a new domain-specific programming language, the Implicit-Invocation Language (IIL), explicitly designed for directly expressing implicit-invocation software systems, and a set of formal rule-based source transformation tools that allow automatic generation of both executable and formal verification artifacts. We provide details of these transformation tools, evaluate the framework in practice, and discuss the benefits of formal automatic transformation in this context. Our approach is designed not only to advance the state-of-the-art in validating implicit-invocation systems, but also to further explore the use of automated source transformation as a uniform vehicle to assist in the implementation, validation and verification of programming languages and software systems in general.


Proceedings of the First International Workshop on Realizing AI Synergies in Software Engineering | 2012

Predicting mutation score using source code and test suite metrics

Kevin Jalbert; Jeremy S. Bradbury

Mutation testing has traditionally been used to evaluate the effectiveness of test suites and provide confidence in the testing process. Mutation testing involves the creation of many versions of a program, each with a single syntactic fault. A test suite is evaluated against these program versions (mutants) in order to determine the percentage of mutants a test suite is able to identify (mutation score). A major drawback of mutation testing is that even a small program may yield thousands of mutants, which can make the process cost prohibitive. To improve the performance and reduce the cost of mutation testing, we propose a machine learning approach to predict mutation score based on a combination of source code and test suite metrics.
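
For reference, the mutation score itself is a simple ratio, and the proposed prediction replaces expensive mutant executions with a model over cheap metrics. The sketch below shows the score computation and a toy linear predictor; the feature names and coefficients are made up purely for illustration, and the paper uses a learned model rather than these numbers.

    // Mutation score: fraction of mutants a test suite kills. The prediction idea is
    // to estimate that score from source-code and test-suite metrics instead of
    // executing every mutant. Coefficients below are illustrative only.
    public class MutationScore {
        public static double score(int mutantsKilled, int totalMutants) {
            return totalMutants == 0 ? 0.0 : (double) mutantsKilled / totalMutants;
        }

        // Toy linear predictor over a few hypothetical metrics.
        public static double predictScore(double stmtCoverage, double branchCoverage,
                                          double assertsPerMethod) {
            double estimate = 0.10 + 0.45 * stmtCoverage
                                   + 0.30 * branchCoverage
                                   + 0.05 * assertsPerMethod;
            return Math.max(0.0, Math.min(1.0, estimate)); // clamp to [0, 1]
        }
    }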


International Conference on Software Maintenance | 2010

Using clone detection to identify bugs in concurrent software

Kevin Jalbert; Jeremy S. Bradbury

In this paper we propose an active testing approach that uses clone detection and rule evaluation as the foundation for detecting bug patterns in concurrent software. If we can identify a bug pattern as being present then we can localize our testing effort to the exploration of interleavings relevant to the potential bug. Furthermore, if the potential bug is indeed a real bug, then targeting specific thread interleavings instead of examining all possible executions can increase the probability of the bug being detected sooner.
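
One well-known pattern of the kind such rule evaluation can target is double-checked locking on a non-volatile field, sketched below. It is a textbook example rather than necessarily one of the patterns used in the paper.

    // Textbook concurrency bug pattern: double-checked locking on a non-volatile field.
    // A clone/rule detector that recognizes code resembling this shape can direct
    // testing effort toward interleavings where a partially constructed object escapes.
    public class Config {
        private static Config instance;   // BUG: must be volatile for this idiom to be safe

        public static Config getInstance() {
            if (instance == null) {                 // first (unsynchronized) check
                synchronized (Config.class) {
                    if (instance == null) {         // second check under the lock
                        instance = new Config();    // may be observed partially constructed
                    }
                }
            }
            return instance;
        }
    }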

Collaboration


Dive into Jeremy S. Bradbury's collaborations.

Top Co-Authors

Kevin Jalbert (University of Ontario Institute of Technology)
Michael A. Miljanovic (University of Ontario Institute of Technology)
David Kelk (University of Ontario Institute of Technology)
Christopher Collins (University of Ontario Institute of Technology)
Devin Kester (University of Ontario Institute of Technology)
Gowritharan Maheswara (University of Ontario Institute of Technology)