Rob Pooley
University of Edinburgh
Publications
Featured research published by Rob Pooley.
Foundations of Software Engineering | 1998
Perdita Stevens; Rob Pooley
The reengineering of legacy systems (by which we mean those that have value and yet significantly resist modification and evolution to meet new and constantly changing business requirements) is widely recognised as one of the most significant challenges facing software engineers. The problem is widespread, affecting all kinds of organisations; serious, as failure to reengineer can hamper an organisation's attempts to remain competitive; and persistent, as there seems no reason to be confident that today's new systems are not also tomorrow's legacy systems. This paper argues:
1. that the main problem is not that the necessary expertise does not exist, but rather that it is hard for software engineers to become expert;
2. that the diversity of the problem domain poses problems for conventional methodological approaches;
3. that an approach via systems reengineering patterns can help.
We support our contention by means of some candidate patterns drawn from our own experience and published work on reengineering. We discuss the scope of the approach, how work in this area can proceed, and in particular how patterns may be identified and confirmed.
MMB '95 Proceedings of the 8th International Conference on Modelling Techniques and Tools for Computer Performance Evaluation: Quantitative Evaluation of Computing and Communication Systems | 1995
Rob Pooley
Discrete event simulation has grown up as a practical technique for estimating the quantitative behaviour of systems where measurement is impractical. It is also used to understand the functional behaviour of such systems. This paper presents an approach to understanding the correctness of the behaviour of discrete event simulation models, using a technique from concurrency, Milner's Calculus of Communicating Systems (CCS), and deriving their behavioural properties without resorting to simulation. It is shown that a common framework based on the process view of models can be constructed, using a hierarchical graphical modelling language (Extended Activity Diagrams). This language maps onto the major constructs of the DEMOS discrete event simulation language and their equivalent CCS models. A graphically driven tool is presented which generates models to use simulation to answer performance questions (what is the throughput under a certain load?) and functional techniques to answer behavioural questions (will the system behave as expected under certain assumptions?), with an example of its application to a typical model.
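As a rough illustration of the process view of models the paper builds on, the sketch below shows a minimal process-based simulation in Python, using the third-party simpy library as a stand-in for DEMOS-style constructs; the model, parameter values and names are illustrative assumptions, not taken from the paper.

```python
# A minimal process-based discrete event simulation in the spirit of
# DEMOS-style process models. Requires the third-party `simpy` package.
import random

import simpy

SERVICE_MEAN = 1.0    # illustrative parameters, not from the paper
ARRIVAL_MEAN = 1.25

def customer(env, server, waits):
    """One process: arrive, seize the server, hold for service, release."""
    arrived = env.now
    with server.request() as req:
        yield req                        # queue until the server is free
        waits.append(env.now - arrived)  # record the queueing delay
        yield env.timeout(random.expovariate(1.0 / SERVICE_MEAN))

def source(env, server, waits):
    """Generate a stream of customer processes."""
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(customer(env, server, waits))

random.seed(1)
env = simpy.Environment()
server = simpy.Resource(env, capacity=1)
waits = []
env.process(source(env, server, waits))
env.run(until=10_000)
print(f"mean wait over {len(waits)} customers: {sum(waits) / len(waits):.3f}")
```

Each process reads as a single sequential behaviour (arrive, seize, hold, release), which is exactly the kind of description that lends itself to a mapping onto process terms such as those of CCS.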
Archive | 1997
Alyson Wood; Rob Pooley; Lyn C. Thomas
A flow-line is a production line arranged as a series of stations in tandem. The throughput of the line is governed by the speed of the machines, the amount of work performed by each machine, and the buffer space between the machines. The bowl phenomenon occurs if the throughput of the flow-line can be increased by allocating resources such as processing capacity or buffer space unequally, with more resources in the centre of the line and less towards the two ends. The problem principally addressed in this paper is that of designing the best flow-line by allocating resources to the stations so as to minimise the average time in the system per customer (sojourn time) and therefore maximise the throughput of the system. The effects of varying buffer allocations are investigated by building a discrete event simulation model of the system, the primary aim being to provide implementable and practical rules for flow-line design. The results show that the choice of buffer allocations is dependent to some extent on the arrival process. These results are then formalised as rules which serve as practical guidelines in flow-line design. The work is related to further simulations of the effects of processing rates per station, which showed that these had a dominant effect and allowed combined rules to be established.
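A minimal sketch of the kind of experiment described, assuming a four-station tandem line with exponential processing times and comparing a balanced buffer allocation against a bowl-shaped one; the simpy library, rates, capacities and run length are all illustrative choices, not the authors' model.

```python
# Tandem flow-line with finite inter-station buffers (third-party simpy).
# Compares a balanced buffer allocation against a bowl-shaped one.
import random

import simpy

def station(env, inbuf, outbuf, rate):
    """Take an item, process it, then push it downstream (blocks if full)."""
    while True:
        item = yield inbuf.get()
        yield env.timeout(random.expovariate(rate))
        yield outbuf.put(item)   # blocked after service when outbuf is full

def run_line(buffer_caps, horizon=50_000, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    # Unbounded entry store, finite inter-station buffers, unbounded exit store.
    buffers = ([simpy.Store(env)]
               + [simpy.Store(env, capacity=c) for c in buffer_caps]
               + [simpy.Store(env)])

    def arrivals():
        i = 0
        while True:
            yield env.timeout(random.expovariate(0.9))  # illustrative load
            i += 1
            yield buffers[0].put(i)

    env.process(arrivals())
    for k in range(len(buffer_caps) + 1):               # one station per link
        env.process(station(env, buffers[k], buffers[k + 1], rate=1.0))
    env.run(until=horizon)
    return len(buffers[-1].items) / horizon             # completions per unit time

print("balanced (2,2,2):", run_line([2, 2, 2]))
print("bowl     (1,4,1):", run_line([1, 4, 1]))
```

With the same total buffer space, runs of this kind can be repeated across arrival processes and allocations to derive the sort of practical design rules the paper reports.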
Archive | 1996
Rob Pooley
This paper examines how to make use of object-oriented design and programming approaches in discrete event simulation. It reviews the concepts of classes and objects in general terms. From these it outlines the structure of an object-oriented simulation package and its construction in C++. This is compared with SIMULA's original implementation of such facilities. Using the C++ package, performance models are constructed along process-based, object-oriented lines, and the ideas of component-based, hierarchical modelling are examined.
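To make the package structure concrete, here is a skeletal process-based simulation core along the object-oriented lines the paper describes, written in Python rather than C++ or SIMULA for brevity; the class names and design details are assumptions for illustration, not the paper's package.

```python
# Skeletal object-oriented simulation core: a Scheduler owning the event
# list and clock, plus a Process base class that model components subclass.
import heapq
from abc import ABC, abstractmethod

class Scheduler:
    """Owns the event list and the simulation clock."""
    def __init__(self):
        self.now = 0.0
        self._events = []   # heap of (time, seq, process)
        self._seq = 0       # tie-breaker so heap entries always compare

    def schedule(self, delay, process):
        heapq.heappush(self._events, (self.now + delay, self._seq, process))
        self._seq += 1

    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.now, _, proc = heapq.heappop(self._events)
            proc.resume()

class Process(ABC):
    """Base class: subclasses define behaviour as a generator of hold times."""
    def __init__(self, sched):
        self.sched = sched
        self._body = self.behaviour()
        sched.schedule(0.0, self)

    @abstractmethod
    def behaviour(self):
        """Yield hold durations; return to terminate."""

    def resume(self):
        try:
            self.sched.schedule(next(self._body), self)
        except StopIteration:
            pass   # process has finished its life cycle

class Clock(Process):
    """Example component: prints a tick once per time unit."""
    def behaviour(self):
        for tick in range(3):
            print(f"t={self.sched.now:.1f} tick {tick}")
            yield 1.0

sched = Scheduler()
Clock(sched)
sched.run(until=10.0)
```

New model components are then written as further Process subclasses, which is what supports the component-based, hierarchical style of modelling the paper discusses.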
Conference on Advanced Information Systems Engineering | 1991
Jane Hillston; Andreas L. Opdahl; Rob Pooley
This paper presents a tool for creating and running experiments within a performance modelling environment, and a practical case study through which its key features are illustrated. The case study is concerned with optimising the load arrangement of a hospital information system. The paper describes the experiments created to support the case study in a textual and a graphical format.
7th UK Computer and Telecommunications Performance Engineering Workshop | 1991
Rob Pooley; Jane Hillston
A perspective is offered for viewing the history, and projecting the future, of tools and environments for performance analysis. This is derived by attempting to capture performance analysis as a process and then considering how this process may be supported efficiently from the user point of view. It is argued that such a conceptual modelling approach will enable developers to minimise the load on users in making use of the wide and growing range of tools and techniques on offer.
Software Engineering Journal | 1989
Rob Pooley
Performance modelling and estimation of computer systems is a vital facility for any software engineer. Thus any computer-aided software engineering environment should contain tools to support this. The need to provide tools to support systematic experimentation with performance models is examined and a solution is discussed, building on the idea of an experimental frame, originally introduced by Zeigler in Ref. 1. The inter-working of these tools and the support tools in a proposed integrated modelling support environment is considered. The tools discussed are an experimental plan editor, an experiment executor and an animator. This report is based on a deliverable of the Alvey project SE/059 ‘Improved methods for performance modelling (SIMMER)’, but has been edited to be more self-contained.
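To indicate what systematic experimentation might look like, here is a small sketch of an experimental frame in Zeigler's sense: the experiment (factors, levels, replications, run length) is declared separately from the model and executed by a generic plan runner. All names and the stand-in model are hypothetical, not taken from the SIMMER tools.

```python
# A toy "experimental frame": the experiment is declared separately from
# the model, and an executor walks the full factorial plan.
import itertools
import random
import statistics
from dataclasses import dataclass

@dataclass
class ExperimentalFrame:
    factors: dict       # factor name -> list of levels to explore
    replications: int   # independent runs per design point
    horizon: float      # simulated length of each run

def mm1_mean_wait(arrival_rate, service_rate, horizon, rng):
    """Stand-in model: M/M/1 waiting times via the Lindley recursion."""
    t = wait = prev_service = 0.0
    waits = []
    while t < horizon:
        inter = rng.expovariate(arrival_rate)
        t += inter
        wait = max(0.0, wait + prev_service - inter)
        waits.append(wait)
        prev_service = rng.expovariate(service_rate)
    return statistics.mean(waits) if waits else 0.0

def execute(frame, model):
    """Run every design point in the frame and summarise the replications."""
    names = list(frame.factors)
    for levels in itertools.product(*frame.factors.values()):
        point = dict(zip(names, levels))
        # Seeding by replication index gives common random numbers across points.
        results = [model(horizon=frame.horizon, rng=random.Random(rep), **point)
                   for rep in range(frame.replications)]
        print(point, f"mean={statistics.mean(results):.3f}",
              f"sd={statistics.stdev(results):.3f}")

frame = ExperimentalFrame(
    factors={"arrival_rate": [0.5, 0.8], "service_rate": [1.0]},
    replications=5,
    horizon=5_000.0,
)
execute(frame, mm1_mean_wait)
```

Separating the frame from the model in this way is what lets a plan editor, an experiment executor and an animator be built as distinct, cooperating tools.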
Parallel Computing | 1998
Gordon J. Brebner; Rob Pooley
ECOLE is a project concerned with the implementation of highly efficient parallel computation on a network of generic workstations connected by a very high speed optical LAN. It is founded on research conducted at BT Laboratories, which has resulted in SynchroLan, a multi-gigabit LAN billed as the fastest LAN in the world. A holistic approach to system design is taken, to identify all possible hardware and software improvements that might allow a workstation to harness fully the raw capacity of a multi-gigabit network. Two important features of the work are the inclusion of configurable hardware and the use of active protocols; both are intended to introduce overall flexibility without compromising efficiency by more than a small amount. The work benefits from the use of advanced techniques for modelling and simulation of experimental architectures. This paper explains how the ECOLE project acts as a very practical focus for several existing lines of advanced systems research being undertaken at the University of Edinburgh.
Archive | 1996
Stephen Cusack; Rob Pooley
This paper outlines ongoing work in the performance evaluation of ATM networks through simulation. The use of analytical models to predict the performance of ATM networks is difficult due to the characteristics of the expected data traffic. Simulation is also problematic as the expected cell loss probability and call blocking rates in ATM networks are very low. Simulating at the individual cell level will require long and computationally intensive simulation runs to give statistically valid results. This work aims to investigate the feasibility of creating suitably efficient simulation tools which can be coupled with appropriate workload models to provide a greater insight into the performance of ATM networks.
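To make the efficiency problem concrete, the sketch below is a naive cell-level simulation of a finite multiplexer buffer; the traffic model, buffer size and run length are illustrative assumptions only.

```python
# Naive cell-level simulation of a finite multiplexer buffer fed by
# Bernoulli sources, with one cell served per time slot.
import random

def cell_loss(n_sources=16, p_cell=0.06, capacity=30,
              slots=1_000_000, seed=1):
    rng = random.Random(seed)
    queue = offered = lost = 0
    for _ in range(slots):
        # Each source independently offers a cell this slot.
        arrivals = sum(rng.random() < p_cell for _ in range(n_sources))
        offered += arrivals
        lost += max(0, queue + arrivals - capacity)  # cells finding no space
        queue = min(capacity, queue + arrivals)
        if queue:
            queue -= 1                               # serve one cell per slot
    return lost / offered if offered else 0.0

print(f"estimated cell loss ratio: {cell_loss():.2e}")
```

Even at this artificially heavy load only a handful of losses may be observed per run; at the loss probabilities ATM planners actually target (often 1e-9 or below), a direct run of this kind would typically see no losses at all, which is precisely why more efficient simulation tools are needed.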
Measurement and Modeling of Computer Systems | 1992
Rob Pooley
To those working in the field of performance, Connie Smith should need no introduction. She is the author of many papers which have sought to make the techniques of performance analysis and prediction accessible to practising software designers. She is probably the first to have used the term performance engineering to describe the application of such techniques to software systems. The publication of a book which encapsulates her ideas is therefore of considerable interest.