Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Matthew Patrick is active.

Publication


Featured research published by Matthew Patrick.


international conference on software testing verification and validation | 2012

MESSI: Mutant Evaluation by Static Semantic Interpretation

Matthew Patrick; Manuel Oriol; John A. Clark

Mutation testing is effective at measuring the adequacy of a test suite, but it can be computationally expensive to apply all the test cases to each mutant. Previous research has investigated the effect of reducing the number of mutants by selecting certain operators, sampling mutants at random, or combining them to form new higher-order mutants. In this paper, we propose a new approach to the mutant reduction problem using static analysis. Symbolic representations are generated for the output along the paths through each mutant, and these are compared with the original program. By calculating the range of their output expressions, it is possible to determine the effect of each mutation on the program output. Mutants with little effect on the output are harder to kill. We confirm this using random testing and an established test suite. Competent programmers are likely to make only small mistakes in their code. We argue therefore that test suites should be evaluated against those mutants that are harder to kill without being equivalent to the original program.
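
As a rough sketch of the range-comparison idea, the snippet below scores a mutant by how much its output interval diverges from the original's. The intervals and example values are hypothetical stand-ins for the ranges MESSI derives statically from symbolic path expressions; this is an illustration of the quantity being compared, not the paper's implementation.

```python
# Sketch of range-based mutant impact (illustrative intervals only; MESSI
# derives output ranges statically from symbolic path expressions).

def interval_divergence(original, mutant):
    """Fraction of the combined output range NOT shared by both intervals.
    A small value means the mutant rarely changes the output: hard to kill."""
    (o_lo, o_hi), (m_lo, m_hi) = original, mutant
    overlap = max(0.0, min(o_hi, m_hi) - max(o_lo, m_lo))
    span = max(o_hi, m_hi) - min(o_lo, m_lo)
    return 1.0 - overlap / span if span > 0 else 0.0

# Hypothetical ranges along one path, for x in [0, 10]:
# original computes x*2 -> [0, 20]; mutant computes x*2 + 1 -> [1, 21].
print(interval_divergence((0.0, 20.0), (1.0, 21.0)))   # ~0.10: hard to kill
print(interval_divergence((0.0, 20.0), (0.0, 400.0)))  # 0.95: easy to kill
```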


international conference on software testing verification and validation workshops | 2013

Using Mutation Analysis to Evolve Subdomains for Random Testing

Matthew Patrick; Robert Alexander; Manuel Oriol; John A. Clark

Random testing is inexpensive, but it can also be inefficient. We apply mutation analysis to evolve efficient subdomains for the input parameters of eight benchmark programs that are frequently used in testing research. The evolved subdomains can be used for program analysis and regression testing. Test suites generated from the optimised subdomains outperform those generated from random subdomains with 10, 100 and 1000 test cases for uniform, Gaussian and exponential sampling. Our subdomains kill a large proportion of mutants for most of the programs we tested with just 10 test cases.
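
The sketch below shows, under assumed bounds and parameter choices, how test inputs might be drawn from a single evolved subdomain using the three sampling distributions mentioned above. The subdomain bounds and distribution parameters are illustrative assumptions, not values from the paper.

```python
import random

def sample_subdomain(lo, hi, n, dist="uniform"):
    """Draw n test inputs from the subdomain [lo, hi] (illustrative only)."""
    inputs = []
    for _ in range(n):
        if dist == "uniform":
            x = random.uniform(lo, hi)
        elif dist == "gaussian":
            # Centre the bell curve on the subdomain and clamp to its bounds.
            x = min(hi, max(lo, random.gauss((lo + hi) / 2, (hi - lo) / 6)))
        elif dist == "exponential":
            # Bias samples towards the lower bound of the subdomain.
            x = min(hi, lo + random.expovariate(4.0 / (hi - lo)))
        else:
            raise ValueError(dist)
        inputs.append(x)
    return inputs

# e.g. a 10-case suite for a parameter whose evolved subdomain is [-3.5, 7.2]
print(sample_subdomain(-3.5, 7.2, 10, dist="gaussian"))
```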


Journal of Systems and Software | 2015

Subdomain-based test data generation

Matthew Patrick; Robert Alexander; Manuel Oriol; John A. Clark

Highlights: We optimise subdomains for input regions that are more likely to reveal faults. This reduces the number of test cases required to achieve a high mutation score. Subdomains also reveal information about the behaviour of the program under test. Information provided by subdomains helps to reduce the effort needed to create oracles.

Considerable effort is required to test software thoroughly. Even with automated test data generation tools, it is still necessary to evaluate the output of each test case and identify unexpected results. Manual effort can be reduced by restricting the range of inputs testers need to consider to regions that are more likely to reveal faults, thus reducing the number of test cases overall, and therefore the effort needed to create oracles. This article describes and evaluates search-based techniques, using evolution strategies and subset selection, for identifying regions of the input domain (known as subdomains) such that test cases sampled at random from within these regions can be used efficiently to find faults. The fault-finding capability of each subdomain is evaluated using mutation analysis, a technique based on the faults programmers are likely to make. The resulting subdomains kill more mutants than random testing (up to six times as many in one case) with the same number or fewer test cases. Optimised subdomains can be used as a starting point for program analysis and regression testing. They can easily be comprehended by a human test engineer, so they may be used to provide information about the software under test and to design further highly efficient test suites.
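
A minimal (1+1) evolution strategy over one parameter's subdomain bounds gives a feel for the search. The `mutation_score` argument is a hypothetical stand-in: in the paper the fitness comes from running mutation analysis on test cases sampled from the candidate subdomain, whereas here the caller supplies any scoring function.

```python
import random

def evolve_subdomain(mutation_score, lo=-100.0, hi=100.0, generations=200):
    """(1+1)-ES over one parameter's subdomain bounds (illustrative sketch).

    mutation_score(lo, hi) must return the proportion of mutants killed by
    test cases sampled from [lo, hi]; in the paper this comes from running
    mutation analysis, here it is whatever toy function the caller supplies.
    """
    best, best_fit = (lo, hi), mutation_score(lo, hi)
    step = (hi - lo) / 10.0
    for _ in range(generations):
        a = best[0] + random.gauss(0, step)
        b = best[1] + random.gauss(0, step)
        cand = (min(a, b), max(a, b))
        fit = mutation_score(*cand)
        if fit >= best_fit:       # accept an equal or better offspring
            best, best_fit = cand, fit
        else:
            step *= 0.95          # shrink the step as the search converges
    return best, best_fit

# Toy fitness: pretend mutants only die for inputs inside [5, 15].
toy = lambda lo, hi: max(0.0, min(hi, 15.0) - max(lo, 5.0)) / max(hi - lo, 1e-9)
print(evolve_subdomain(toy))      # bounds drift towards [5, 15]
```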


computational intelligence and games | 2010

Online evolution in Unreal Tournament 2004

Matthew Patrick

Traditional approaches to game AI often feature behaviour that is scripted and predictable. Previous attempts at adaptive AI have struggled to get agents to learn quickly enough. This paper aims to show that adaptation using online evolution is feasible and that it can be incorporated with minimal change to the existing AI. A new approach is presented for evolving game agents online using an Evolution Strategy.
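
A toy (1+1)-ES loop of the kind described, with hypothetical parameter names (`aggression`, `dodge_rate`) and a stand-in fitness function in place of real in-game feedback from Unreal Tournament 2004:

```python
import random

def evaluate(p):
    # Stand-in for observing the agent in-game with parameters p
    # (e.g. frags minus deaths over the last evaluation window).
    return -((p["aggression"] - 0.8) ** 2 + (p["dodge_rate"] - 0.3) ** 2)

params = {"aggression": 0.5, "dodge_rate": 0.5}
fitness = evaluate(params)

for episode in range(100):
    # Perturb the current behaviour slightly, keeping parameters in [0, 1].
    child = {k: min(1.0, max(0.0, v + random.gauss(0, 0.05)))
             for k, v in params.items()}
    child_fitness = evaluate(child)
    if child_fitness >= fitness:  # keep whichever behaviour performs better
        params, fitness = child, child_fitness

print(params)  # drifts towards the toy optimum while the game keeps running
```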


international symposium on software testing and analysis | 2016

Testing stochastic software using pseudo-oracles

Matthew Patrick; Andrew Peter Craig; Nicholas James Cunniffe; Matthew Parry; Christopher A. Gilligan

Stochastic models can be difficult to test due to their complexity and randomness, yet their predictions are often used to make important decisions, so they need to be correct. We introduce a new search-based technique for testing implementations of stochastic models by maximising the differences between the implementation and a pseudo-oracle. Our technique reduces testing effort and enables discrepancies to be found that might otherwise be overlooked. We show the technique can identify differences challenging for humans to observe, and use it to help a new user understand implementation differences in a real model of a citrus disease (Huanglongbing) used to inform policy and research.
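
The sketch below captures the core idea under simplifying assumptions: run both the implementation and the pseudo-oracle repeatedly (since single stochastic runs differ by chance) and search for the input where their mean outputs disagree most. The model functions and the simple hill climb are illustrative stand-ins for the paper's search.

```python
import random
from statistics import mean

def discrepancy(impl, oracle, x, runs=30):
    """Mean absolute difference between two stochastic models at input x.
    Each model is run repeatedly because single runs differ by chance."""
    return abs(mean(impl(x) for _ in range(runs)) -
               mean(oracle(x) for _ in range(runs)))

def max_discrepancy(impl, oracle, lo, hi, iters=200):
    """Hill climb for the input where the two models disagree most."""
    best_x = random.uniform(lo, hi)
    best_d = discrepancy(impl, oracle, best_x)
    for _ in range(iters):
        x = min(hi, max(lo, best_x + random.gauss(0, (hi - lo) / 20)))
        d = discrepancy(impl, oracle, x)
        if d > best_d:
            best_x, best_d = x, d
    return best_x, best_d

# Hypothetical models: the implementation drifts from the oracle for x > 8.
oracle = lambda x: random.gauss(0.5 * x, 1.0)
impl = lambda x: random.gauss(0.5 * x + (0.2 * x if x > 8 else 0.0), 1.0)
print(max_discrepancy(impl, oracle, 0.0, 10.0))  # expect x near 10
```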


Information & Software Technology | 2017

KD-ART

Matthew Patrick; Yue Jia

Context: Adaptive Random Testing (ART) spreads test cases evenly over the input domain. Yet once a fault is found, decisions must be made whether to diversify or intensify subsequent inputs. Diversification employs a wide range of tests to increase the chances of finding new faults. Intensification selects test inputs similar to those previously shown to be successful.

Objective: Explore the trade-off between diversification and intensification to kill mutants.

Method: We augment Adaptive Random Testing (ART) to estimate the Kernel Density (KD-ART) of input values found to kill mutants. KD-ART was first proposed at the 10th International Workshop on Mutation Analysis. We now extend this work to handle real-world non-numeric applications. Specifically, we incorporate a technique to support programs with input parameters that have composite data types (such as arrays and structs).

Results: Intensification is the most effective strategy for the numerical programs (it achieves an 8.5% higher mutation score than ART). By contrast, diversification seems more effective for programs with composite inputs. KD-ART kills mutants 15.4 times faster than ART.

Conclusion: Intensify tests for numerical types, but diversify them for composite types.
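
A one-dimensional sketch of the density-guided choice: estimate a Gaussian kernel density from the inputs that have killed mutants so far, then pick the next input from the densest region (intensify) or the sparsest (diversify). The bandwidth, candidate count, and kill list below are illustrative assumptions, not the paper's settings.

```python
import math
import random

def kde(x, kills, bandwidth=1.0):
    """Gaussian kernel density at x, estimated from past killing inputs."""
    if not kills:
        return 0.0
    norm = len(kills) * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - k) / bandwidth) ** 2) for k in kills) / norm

def next_input(kills, lo, hi, intensify=True, candidates=10):
    """Pick the random candidate with the highest (intensify) or lowest
    (diversify) estimated density of inputs that previously killed mutants."""
    cs = [random.uniform(lo, hi) for _ in range(candidates)]
    pick = max if intensify else min
    return pick(cs, key=lambda c: kde(c, kills))

kills = [2.1, 2.4, 7.8]                 # inputs that have killed mutants so far
print(next_input(kills, 0.0, 10.0))                   # intensify: near a peak
print(next_input(kills, 0.0, 10.0, intensify=False))  # diversify: avoids peaks
```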


Archive | 2016

Metaheuristic Optimisation and Mutation-Driven Test Data Generation

Matthew Patrick

Metaheuristic optimisation techniques can be used in combination with mutation analysis to generate test data that is effective at finding faults and reduces the human effort involved in software testing. This chapter describes and evaluates various metaheuristic techniques and considers their underlying properties in relation to test data generation. This represents the first attempt to bring together, compare and review ideas and research related to mutation analysis and metaheuristic optimisation. The intention is that by considering these application areas together, we can appreciate and understand important aspects of their strengths and weaknesses. This will allow us to make suggestions with regard to the ways in which they may be used together for maximum effectiveness and efficiency.


international conference on software testing verification and validation workshops | 2014

Probability-Based Semantic Interpretation of Mutants

Matthew Patrick; Robert Alexander; Manuel Oriol; John A. Clark

Mutation analysis is a stringent and powerful technique for evaluating the ability of a test suite to find faults. It generates a large number of mutants and applies the test suite to them one at a time. As mutation analysis is computationally expensive, it is usually performed on a subset of mutants. The competent programmer hypothesis suggests that experienced software developers are more likely to make small mistakes. It is therefore prudent to focus on semantically small mutants that represent mistakes developers are likely to make. We previously introduced a technique to assess mutant semantics using static analysis by comparing the numerical range of their symbolic output expressions. This paper extends our previous work by considering the probability that the output of a mutant is the same as that of the original program. We show how probability-based semantic interpretation can be used to select mutants that are semantically more similar than those selected by our previous technique. In addition to numerical outputs, it also provides support for modelling the semantics of Boolean variables, strings and composite objects.
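
The quantity at the heart of the technique can be illustrated by Monte Carlo sampling, even though the paper computes it statically from symbolic output expressions: estimate the probability that a mutant's output matches the original's on a random input. The example programs and input domain below are hypothetical.

```python
import random

def equality_probability(original, mutant, domain, samples=10_000):
    """Monte Carlo estimate of the probability that the mutant's output
    matches the original's on a random input from `domain`. (The paper
    derives this probability statically, not by sampling.)"""
    lo, hi = domain
    same = sum(original(x) == mutant(x)
               for x in (random.uniform(lo, hi) for _ in range(samples)))
    return same / samples

original = lambda x: abs(x)
mutant = lambda x: x   # `abs` dropped: output differs only when x < 0
# High probability -> semantically small mutant -> a good one to keep.
print(equality_probability(original, mutant, (-1.0, 9.0)))  # approx. 0.9
```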


international conference on software testing verification and validation workshops | 2015

Kernel Density Adaptive Random Testing

Matthew Patrick; Yue Jia

Mutation analysis is used to assess the effectiveness of a test data generation technique at finding faults. Once a mutant is killed, decisions must be made whether to diversify or intensify the subsequent test inputs. Diversification employs a wide range of test inputs with the aim of increasing the chances of killing new mutants. By contrast, intensification selects test inputs which are similar to those previously shown to be successful, taking advantage of overlaps in the conditions under which mutants can be killed. This paper explores the trade-off between diversification and intensification by augmenting Adaptive Random Testing (ART) to estimate the Kernel Density (KD-ART) of input values which are found to kill mutants. The results suggest that intensification is typically more effective at finding faults than diversification: KD-ART (intensify) achieves a 7.24% higher mutation score on average than KD-ART (diversify). Moreover, KD-ART is computationally less expensive than ART, requiring on average only 5.98% of the time taken previously.
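
For context, one common formulation of the baseline that KD-ART augments is fixed-size-candidate-set ART, which spreads tests by keeping the candidate farthest from everything already executed. A one-dimensional sketch, with illustrative bounds and candidate count:

```python
import random

def art_next(executed, lo, hi, candidates=10):
    """Fixed-size-candidate-set ART: generate random candidates and keep the
    one farthest from every previously executed test, so inputs spread out."""
    cs = [random.uniform(lo, hi) for _ in range(candidates)]
    if not executed:
        return cs[0]
    return max(cs, key=lambda c: min(abs(c - e) for e in executed))

tests = []
for _ in range(5):
    tests.append(art_next(tests, 0.0, 10.0))
print(tests)  # roughly evenly spread over [0, 10]
```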


symposium on search based software engineering | 2013

Efficient Subdomains for Random Testing

Matthew Patrick; Robert Alexander; Manuel Oriol; John A. Clark

Opinion is divided over the effectiveness of random testing. It produces test cases cheaply, but struggles with boundary conditions and is labour-intensive without an automated oracle. We have created a search-based testing technique that evolves multiple sets of efficient subdomains, from which small but effective test suites can be randomly sampled. The new technique handles boundary conditions by targeting different mutants with each set of subdomains. It achieves an average 230% improvement in mutation score over conventional random testing.
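
One way to picture "targeting different mutants with each set of subdomains" is greedy subset selection over mutant kill sets, keeping each set of subdomains only if it covers mutants the others miss. The kill sets below are hypothetical; this is a sketch of the selection idea, not the paper's algorithm.

```python
def select_subdomain_sets(kill_sets, budget=3):
    """Greedily keep the set of subdomains that kills the most mutants not
    already covered, so each retained set targets different mutants."""
    chosen, covered = [], set()
    pool = dict(kill_sets)
    while pool and len(chosen) < budget:
        name, kills = max(pool.items(), key=lambda kv: len(kv[1] - covered))
        if not kills - covered:
            break                 # nothing new left to cover
        chosen.append(name)
        covered |= kills
        del pool[name]
    return chosen, covered

# Hypothetical mutant kill sets for three evolved sets of subdomains.
kill_sets = {"S1": {1, 2, 3, 4}, "S2": {3, 4, 5}, "S3": {6, 7}}
print(select_subdomain_sets(kill_sets))  # (['S1', 'S3', 'S2'], {1, ..., 7})
```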

Collaboration


Dive into Matthew Patrick's collaboration.

Top Co-Authors

Yue Jia

University College London
