Leonardo Bottaci
University of Hull
Publications
Featured research published by Leonardo Bottaci.
The Lancet | 1997
Leonardo Bottaci; Philip J. Drew; John E. Hartley; Matthew B Hadfield; R. Farouk; P. W. R. Lee; Iain Mc Macintyre; G. S. Duthie; John R. T. Monson
BACKGROUND Artificial neural networks are computer programs that can be used to discover complex relations within data sets. They permit the recognition of patterns in complex biological data sets that cannot be detected with conventional linear statistical analysis. One such complex problem is the prediction of outcome for individual patients treated for colorectal cancer. Predictions of outcome in such patients have traditionally been based on population statistics. However, these predictions have little meaning for the individual patient. We report the training of neural networks to predict outcome for individual patients from one institution and their predictive performance on data from a different institution in another region. METHODS 5-year follow-up data from 334 patients treated for colorectal cancer were used to train and validate six neural networks designed for the prediction of death within 9, 12, 15, 18, 21, and 24 months. The previously trained 12-month neural network was then applied to 2-year follow-up data from patients from a second institution; outcome was concealed. No further training of the neural network was undertaken. The networks' predictions were compared with those of two consultant colorectal surgeons supplied with the same data. FINDINGS All six neural networks were able to achieve overall accuracy greater than 80% for the prediction of death for individual patients at institution 1 within 9, 12, 15, 18, 21, and 24 months. The mean sensitivity and specificity were 60% and 88%. When the neural network trained to predict death within 12 months was applied to data from the second institution, overall accuracy of 90% (95% CI 84-96) was achieved, compared with the overall accuracy of the colorectal surgeons of 79% (71-87) and 75% (66-84). INTERPRETATION The neural networks were able to predict outcome for individual patients with colorectal cancer much more accurately than the currently available clinicopathological methods.
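The accuracy, sensitivity, and specificity figures reported above are standard confusion-matrix metrics; a minimal sketch of how they are computed (the counts below are illustrative only, not the study's data):

```python
def classification_metrics(tp, fn, tn, fp):
    """Compute overall accuracy, sensitivity, and specificity
    from binary confusion-matrix counts."""
    total = tp + fn + tn + fp
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts only -- not taken from the study.
acc, sens, spec = classification_metrics(tp=60, fn=40, tn=88, fp=12)
```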
Once trained on data from one institution, the neural networks were able to predict outcome for patients from an unrelated institution.
Software Testing, Verification & Reliability | 1999
Elfurjani Sassi Mresa; Leonardo Bottaci
This paper investigates the mutation scores achieved by individual operators of the Mothra mutation system and their associated costs in order to determine the most efficient operators. The cost of mutation analysis includes both test set generation and equivalent mutant detection. The score and cost information is then used as a heuristic for choosing a subset of the operators for use in efficient selective mutation testing. Experiments were performed using a sample of 11 programs and a number of test sets for each program. The results show that the use of efficient operators can provide significant efficiency gains for selective mutation if the acceptable mutation score is not very close to one. When mutation scores very close to one are required, a randomly selected proportion of the mutants provides a more efficient strategy than a subset of efficient operators.
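The mutation score used in such experiments is the proportion of non-equivalent mutants killed by a test set, and operator efficiency weighs that score against cost; a minimal sketch (the function names are illustrative, not Mothra's):

```python
def mutation_score(killed, total, equivalent):
    """Mutation score = killed mutants / non-equivalent mutants.
    A score of 1.0 means the test set kills every killable mutant."""
    return killed / (total - equivalent)

def operator_efficiency(score, cost):
    """A simple score-per-unit-cost ratio; 'cost' might combine
    test-generation effort and equivalent-mutant detection effort."""
    return score / cost
```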
Journal of Intelligent Manufacturing | 1997
G. G. Rogers; Leonardo Bottaci
It is well recognized that manufacturers of consumer goods throughout the world are facing major new demands, including shorter product life-cycles and increasing competition. In response, companies are restructuring and moving away from traditional process-centred work practices in favour of concurrent engineering methods. In particular, design for manufacture has gained widespread recognition as a means of reducing production costs and lead times. However, optimal design for manufacture is difficult to achieve using current-day work organization and business structures. An underlying problem is the lack of a scientific framework for production. To address this need, this paper proposes a radical and far-reaching new manufacturing paradigm based upon building production systems from standardized modular machines. The manufacturing concept, termed modular production systems (MPS), is aimed specifically at 'hard' low- to medium-technology consumer products, as typified by goods such as children's toys and kitchen appliances. The rationale for MPS as a means of enabling concurrent product and production system design is put forward, and the long-term implications and work required to establish the concept are discussed.
Software Testing, Verification & Reliability | 2006
Mohammad Alshraideh; Leonardo Bottaci
This paper presents a novel approach to automatic software test data generation, where the test data is intended to cover program branches which depend on string predicates such as string equality, string ordering and regular expression matching. A search-based approach is assumed and some potential search operators and corresponding evaluation functions are assembled. Their performance is assessed empirically by using them to generate test data for a number of test programs. A novel approach of using search operators based on programming language string operators and parameterized by string literals from the program under test is introduced. These operators are also assessed empirically in generating test data for the test programs and are shown to provide a significant increase in performance.
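Search-based generation for a branch guarded by string equality needs an evaluation function that measures how far a candidate string is from the target; edit distance is one common choice, sketched below (an assumption for illustration, the paper's actual operators and evaluation functions may differ):

```python
def levenshtein(a, b):
    """Edit distance between strings a and b: a common cost
    function for covering a branch guarded by s == target.
    A distance of 0 means the branch condition is satisfied."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

A search would favour candidates with smaller distance, converging on the string literal that covers the branch.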
Genetic and Evolutionary Computation Conference | 2003
Leonardo Bottaci
Several researchers are using evolutionary search methods to search for test data with which to test a program. The fitness or cost function depends on the test goal, but almost invariably an important component of the cost function is an estimate of the cost of satisfying a predicate expression, as might occur in branches, exception conditions, etc. This paper reviews the commonly used cost functions and points out some deficiencies. Alternative cost functions are proposed to overcome these deficiencies. Experimental evidence indicates that they are more reliable.
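The cost functions in question are typically "branch distances" over relational predicates: zero when the predicate holds, and growing with how far the operands are from satisfying it. A minimal sketch of the conventional forms (these are the standard textbook definitions, not necessarily the paper's improved versions; K is a small positive constant):

```python
K = 1  # small positive constant added when the predicate is false

def branch_distance(op, a, b):
    """Conventional cost of making a relational predicate true.
    Zero means the predicate already holds; larger values mean the
    candidate input is 'further' from satisfying it."""
    if op == "==":
        return 0 if a == b else abs(a - b) + K
    if op == "!=":
        return 0 if a != b else K
    if op == "<":
        return 0 if a < b else (a - b) + K
    if op == "<=":
        return 0 if a <= b else (a - b) + K
    raise ValueError(f"unsupported operator: {op}")
```

A deficiency of such functions, of the kind the paper examines, is that they can mislead the search when a predicate combines sub-expressions in ways the simple distance does not reflect.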
International Conference on Computer Safety, Reliability and Security | 2007
Martin Walker; Leonardo Bottaci; Yiannis Papadopoulos
HiP-HOPS (Hierarchically-Performed Hazard Origin and Propagation Studies) is a recent technique that partly automates Fault Tree Analysis (FTA) by constructing fault trees from system topologies annotated with component-level failure specifications. HiP-HOPS has hitherto created only classical combinatorial fault trees that fail to capture the often significant temporal ordering of failure events. In this paper, we propose temporal extensions to the fault tree notation that can elevate HiP-HOPS, and potentially other FTA techniques, above the classical combinatorial model of FTA. We develop the formal foundations of a new logic to represent event sequences in fault trees using Priority-AND, Simultaneous-AND, and Priority-OR gates, and present a set of temporal laws to identify logical contradictions and remove redundancies in temporal fault trees. By qualitatively analysing these temporal trees to obtain ordered minimal cut-sets, we show how these extensions to FTA can enhance the safety of dynamic systems.
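A Priority-AND gate holds only when all of its input events occur and do so in the specified order; a minimal sketch over event occurrence times (this representation is an assumption for illustration, the paper's temporal logic is richer):

```python
def pand(times):
    """Priority-AND over event occurrence times (None = event did
    not occur). True iff every input event occurred and each one
    occurred strictly before the next."""
    if any(t is None for t in times):
        return False
    return all(t1 < t2 for t1, t2 in zip(times, times[1:]))
```

Unlike a combinatorial AND, the gate distinguishes the order of failures: the same set of events in a different order does not trigger it.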
Software Quality Journal | 2010
Mohammad Alshraideh; Leonardo Bottaci; Basel A. Mahafzah
Finding test data to cover structural test coverage criteria such as branch coverage is largely a manual and hence expensive activity. A potential low cost alternative is to generate the required test data automatically. Search-based test data generation is one approach that has attracted recent interest. This approach is based on the definition of an evaluation or cost function that is able to discriminate between candidate test cases with respect to achieving a given test goal. The cost function is implemented by appropriate instrumentation of the program under test. The candidate test is then executed on the instrumented program. This provides an evaluation of the candidate test in terms of the “distance” between the computation achieved by the candidate test and the computation required to achieve the test goal. Providing the cost function is able to discriminate reliably between candidate tests that are close or far from covering the test goal and the goal is feasible, a search process is able to converge to a solution, i.e., a test case that satisfies the coverage goal. For some programs, however, an informative cost function is difficult to define. The operations performed by these programs are such that the cost function returns a constant value for a very wide range of inputs. A typical example of this problem arises in the instrumentation of branch predicates that depend on the value of a Boolean-valued (flag) variable although the problem is not limited to programs that contain flag variables. Although methods are known for overcoming the problems of flag variables in particular cases, the more general problem of a near constant cost function has not been tackled. This paper presents a new heuristic for directing the search when the cost function at a test goal is not able to differentiate between candidate test inputs. The heuristic directs the search toward test cases that produce rare or scarce data states. 
Scarce inputs for the cost function are more likely to produce new cost values. The proposed method is evaluated empirically for a number of example programs for which existing methods are inadequate.
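The heuristic can be pictured as scoring candidate tests by how rarely their internal data state has been seen, so that when the cost function is flat the search is still pushed toward unexplored states; a minimal sketch (the state representation is hypothetical):

```python
from collections import Counter

class ScarcityScorer:
    """Reward candidate tests whose observed data state is rare:
    rarer states get higher scores, directing the search toward
    unexplored regions when the cost function is near-constant."""
    def __init__(self):
        self.seen = Counter()

    def score(self, state):
        """state must be hashable, e.g. a tuple of variable values."""
        self.seen[state] += 1
        return 1.0 / self.seen[state]
```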
Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2000
Marcos A. Rodrigues; Yonghuai Liu; Leonardo Bottaci; Dimitrios I. Rigas
In this paper we present a novel approach to modelling a manufacturing process that allows one to learn about causal mechanisms of manufacturing defects through a Process Modelling and Executable Bayesian Network (PMEBN). The method combines probabilistic reasoning with time dependent parameters which are of crucial interest to quality control in manufacturing environments. We demonstrate the concept through a case study of a caravan manufacturing line using inspection data.
International Conference on Software Testing, Verification and Validation Workshops | 2010
Leonardo Bottaci
It is commonly accepted that strong typing is useful for revealing programmer errors, and so the use of dynamically typed languages increases the importance of software testing. Mutation analysis is a demanding software testing criterion. Although mutation analysis has been applied to procedural and object-oriented languages, little work has been done on the mutation analysis of programs written in dynamically typed languages. Mutation analysis depends on the substitution and modification of program elements. In a strongly typed language, the declared type of the mutated element, a variable or operator, can be used to avoid generating type-incorrect substitutions or modifications. In a dynamically typed language, this type information is not available, and so a much greater range of mutations is potentially applicable, but many of the resulting mutants are likely to be incompetent (too easily killed). This paper describes a mutation analysis method in which the definition of mutants is performed at run-time when type information is available. The type information can be used to avoid generating incompetent mutants.
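The idea of deferring mutant definition to run-time can be sketched in Python itself, a dynamically typed language: the mutation site inspects its operand types before choosing a mutant (illustrative only, not the paper's implementation):

```python
def mutating_add(a, b, mutate=False):
    """Stand-in for an instrumented '+' site. At run time the operand
    types are known, so a type-appropriate mutant can be chosen:
    numeric '+' becomes '-', while string concatenation is left
    unmutated, avoiding an incompetent (trivially killed) mutant."""
    if mutate and isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return a - b        # competent arithmetic mutant
    return a + b            # original behaviour, incl. string concat
```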
Conference on Advanced Information Systems Engineering | 1996
Nikolay Mehandjiev; Leonardo Bottaci
Organisations that adapt rapidly require flexible software systems. Conventional system development methods are too slow for these organisations. One way to alleviate these problems is to empower members of the organisation, domain experts, to directly control and modify such systems (user enhanceability). This paper considers the applicability of visual languages as an enabling tool for user enhanceability. Previous systems in this area have succeeded only for narrow application domains and have failed to scale up. This paper highlights some major problems in such an endeavour, presents a generic architecture that addresses these problems and discusses a user enhanceable system for workflow applications that was implemented using this architecture.