Derek R. Wilson
University of Westminster
Publications
Featured research published by Derek R. Wilson.
IEEE Transactions on Software Engineering | 1989
Jai Prakash Gupta; Stephen Winter; Derek R. Wilson
The authors describe CTDNet, a data-driven reduction machine for the concurrent execution of applicative functional programs in the form of lambda calculus expressions. Such programs are stored as binary-tree-structured process graphs in which all processes maintain pointers to their immediate neighbors (i.e. ancestor and two children). Processes are of two basic types: master processes, which represent the original process graph, and slave processes, which carry out the actual executional work and are dynamically created and destroyed. CTDNet uses a distributed eager evaluation scheme with a modification to evaluate conditional expressions lazily, together with a form of distributed string reduction with some graphlike modifications.
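A minimal sketch of the evaluation policy the abstract describes, assuming a toy tuple encoding of expression trees; the operator names and encoding are illustrative, not CTDNet's. Operands are reduced eagerly, but a conditional commits to a single branch before reducing it.

```python
# Hypothetical sketch (not CTDNet itself): ordinary operators reduce both
# children eagerly, while a conditional reduces only its test before
# committing to a single branch.

def evaluate(expr):
    """Reduce a binary-tree expression to a value."""
    if not isinstance(expr, tuple):               # leaf: a literal value
        return expr
    op = expr[0]
    if op == "if":                                # lazy conditional
        _, test, then_br, else_br = expr
        branch = then_br if evaluate(test) else else_br
        return evaluate(branch)                   # only one branch is reduced
    left = evaluate(expr[1])                      # eager: in a distributed
    right = evaluate(expr[2])                     # machine, each child could be
    if op == "+":                                 # farmed out to a slave process
        return left + right
    if op == "*":
        return left * right
    raise ValueError(f"unknown operator: {op}")

# The else-branch is reduced; the then-branch never is.
print(evaluate(("if", ("+", 1, -1), ("*", 2, 3), ("+", 10, ("*", 2, 2)))))  # 14
```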
Archive | 1996
Paul Rodgers; Alistair Patterson; Derek R. Wilson
The paper describes a formal methodology for defining and assessing product performance and its implementation in a prototype computer system. The methodology is based on high-level abstract descriptions of the operations conducted within the design process. It is consequently extremely generic and succeeds in formally bridging the gap between physical product performance and actual end-user requirements. The methodology is based on defining product attributes as observable behaviour of the product in use. Defining an attribute in this way inherently reflects its required interaction with the end-user and consequently can truly be said to be in “end-user terms”. A product will have a range of attributes, and a performance indicator is found by combining them in a way that reflects their relative importance to the end-user. At the conceptual stage of the design process, however, the actual product does not exist, only some representation of it. To assess a product at this stage requires a model or simulation of its attributes. This methodology has been implemented in a prototype Computer Aided Design Evaluation Tool (CADET) and tested with an existing product range, an example of which is presented within the paper.
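A minimal sketch of the attribute-combination step, assuming a normalised weighted sum as the way relative importance to the end-user is reflected; the attribute names, scores, and weights are invented for illustration, and CADET's actual combination rule may differ.

```python
# Hypothetical sketch of combining attribute scores into a single
# performance indicator, assuming a normalised weighted sum; all names
# and numbers here are illustrative, not taken from CADET.

def performance_indicator(scores, weights):
    """Combine per-attribute scores (0..1) using end-user importance weights."""
    total = sum(weights.values())
    return sum(scores[a] * w for a, w in weights.items()) / total

# Observable, "end-user terms" attributes of a product in use.
scores  = {"ease_of_use": 0.8, "durability": 0.6, "cleaning_reach": 0.9}
weights = {"ease_of_use": 3.0, "durability": 1.0, "cleaning_reach": 2.0}
print(f"{performance_indicator(scores, weights):.2f}")   # 0.80
```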
Archive | 1995
Paul Rodgers; Alistair Patterson; Derek R. Wilson
This work is concerned with using product performance assessment at the conceptual stage of the design process, both to assess the potential performance of a product proposal and to rationalise and make apparent the predominantly intuitive decisions taken by the designer at this stage. Potentially, design draws on the entire body of human knowledge and experience plus the personal intuition and experience of the designer. Consequently, the terms used to describe this process must be extremely abstract, as they reflect common activities based in widely disparate sources of knowledge. It is anticipated that, by adopting such an approach, a generic structuring of knowledge can be achieved in a way that will be of genuine use to the designer.
Euromicro Workshop on Parallel and Distributed Processing | 1994
Nasser Kalantery; Stephen Winter; Derek R. Wilson
Parallel discrete event simulation (PDES) research has led to the development of distributed order-preserving protocols. Generalization of these protocols offers the prospect of parallel and yet deterministic execution of sequential programs. In this paper we introduce a simple but general method of extracting a logical time coordinate system from sequential programs and discuss some of the basic issues in the application of PDES paradigms to general-purpose parallel computing.
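A minimal sketch of the underlying idea, assuming logical timestamps derived from an (iteration, statement) position in the sequential program; the paper's exact coordinate construction is not reproduced here. Events may arrive in any physical order, but committing them in timestamp order reproduces the sequential result.

```python
# Hypothetical sketch: each loop-body update is stamped with a logical
# time coordinate encoding its sequential position, so the effects can be
# committed deterministically however the events actually arrive.

import heapq, random

def logical_events(n):
    """Stamp each update of a toy loop with an (iteration, statement) time."""
    return [((i, 0), ("update", i)) for i in range(n)]

events = logical_events(5)
random.shuffle(events)              # physical arrival order is nondeterministic

heapq.heapify(events)               # ...but logical time restores the order
total = 0
while events:
    (_, (_op, arg)) = heapq.heappop(events)
    total = total * 2 + arg         # order-sensitive update
print(total)                        # always the sequential result: 26
```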
Euromicro Workshop on Parallel and Distributed Processing | 1993
Nasser Kalantery; Stephen Winter; Derek R. Wilson; A. P. Redfern
This paper describes techniques used to optimise the performance of parallel simulation of SS7 telecommunication networks. A basic parallel simulation model of an SS7 network and the conservative implementation of the model are discussed, and experimental results on the performance of the simulator are examined. A technique for achieving further optimisation is proposed.
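A minimal sketch of the conservative rule such a simulator relies on, assuming Chandy-Misra-style channel clocks and null messages; the class and event names are illustrative, not taken from the SS7 simulator.

```python
# Hypothetical sketch of conservative synchronisation: a logical process
# may only consume an event whose timestamp does not exceed the minimum
# clock across all of its input channels; null messages advance a channel
# clock without carrying an event.

import heapq

class LogicalProcess:
    def __init__(self, name, inputs):
        self.name = name
        self.pending = []                        # (timestamp, event) heap
        self.channel_clock = {ch: 0.0 for ch in inputs}

    def receive(self, channel, timestamp, event=None):
        self.channel_clock[channel] = timestamp  # null messages also land here
        if event is not None:
            heapq.heappush(self.pending, (timestamp, event))

    def safe_events(self):
        """Pop every event proven to be in timestamp order."""
        horizon = min(self.channel_clock.values())
        while self.pending and self.pending[0][0] <= horizon:
            yield heapq.heappop(self.pending)

# An SS7-like signalling point with two incoming links.
stp = LogicalProcess("STP-1", ["link-a", "link-b"])
stp.receive("link-a", 5.0, "ISUP IAM")
stp.receive("link-b", 2.0)                       # null message: lookahead only
print(list(stp.safe_events()))                   # [] : link-b might still send t<5
stp.receive("link-b", 6.0)                       # nothing earlier than t=6 now
print(list(stp.safe_events()))                   # [(5.0, 'ISUP IAM')] is safe
```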
Microprocessing and Microprogramming | 1990
Stephen Winter; Derek R. Wilson; D.F. Neale
Real-time processing systems are typically characterised by high processing rates, high reliability, and high input/output rates. Functional programming, derived from the lambda calculus, is a formal basis for computation which facilitates the design of well-structured, highly reliable programs, and also enables a rigorous implementation on parallel hardware to provide high processing rates and I/O bandwidth. The principles of CTDNet2, a new reduction mechanism for real-time applications based on graph reduction and a combination of eager and lazy evaluation, are presented. CTDNet2 has been designed with real-time processing in mind and is intended for highly parallel multiprocessor implementation. It is particularly suitable for transputer networks.
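A minimal sketch of the graph-reduction half of the design, assuming shared nodes are overwritten in place so each shared subexpression is reduced exactly once; this is the usual contrast with string reduction, which copies and re-reduces. Node layout and operators are illustrative, not CTDNet2's.

```python
# Hypothetical sketch of graph reduction with sharing: a shared subgraph
# is reduced once and its node is overwritten with the result, so every
# other reference sees the value for free.

class Node:
    def __init__(self, op, left=None, right=None, value=None):
        self.op, self.left, self.right, self.value = op, left, right, value

def reduce_graph(node):
    if node.op == "lit":
        return node.value
    if node.value is None:                        # not yet reduced
        l, r = reduce_graph(node.left), reduce_graph(node.right)
        node.value = l + r if node.op == "+" else l * r
        node.op, node.left, node.right = "lit", None, None  # overwrite in place
    return node.value

shared = Node("+", Node("lit", value=2), Node("lit", value=3))  # reduced once
root = Node("*", shared, shared)                  # both children point at it
print(reduce_graph(root))                         # (2+3) * (2+3) = 25
```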
Microprocessing and Microprogramming | 1989
Stephen J. Flavell; Stephen Winter; Derek R. Wilson; P. Fernin
The demands made on vision systems by real-time and satellite applications have outstripped traditional computing environments. In response, research has focused on attempting to apply knowledge-based techniques and exploit distributed computing architectures. This paper examines knowledge-based frameworks for image understanding systems exploiting distributed computing environments. An evaluation of these frameworks is based on the implementation and modelling of an image registration subsystem.
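A minimal sketch of the kind of kernel an image registration subsystem distributes, assuming a translation-only search scored by sum of squared differences; each candidate offset is scored independently, which is what lets the search parallelise naturally. The data and search range are toy values, not the paper's.

```python
# Hypothetical sketch: register a target image against a reference by
# searching non-negative translations for the lowest sum of squared
# differences. Each (dx, dy) score is independent, so the search is
# embarrassingly parallel across workers.

def ssd(reference, target, dx, dy):
    """Sum of squared differences with target shifted by (dx, dy), dx, dy >= 0."""
    h, w = len(reference), len(reference[0])
    return sum(
        (reference[y][x] - target[y + dy][x + dx]) ** 2
        for y in range(h - dy) for x in range(w - dx)
    )

reference = [[0, 0, 0, 0],
             [0, 9, 9, 0],
             [0, 9, 9, 0],
             [0, 0, 0, 0]]
# target is the reference shifted down-right by one pixel
target = [[0, 0, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 9, 9],
          [0, 0, 9, 9]]
best = min((ssd(reference, target, dx, dy), (dx, dy))
           for dx in range(3) for dy in range(3))
print(best[1])                                    # (1, 1)
```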
Parallel Computing | 1997
F. Paganelli; Stephen Winter; Derek R. Wilson
This paper describes an architectural framework for virtually transparent monitoring of massively-parallel computers, which combines the principle of permanent probe monitoring with generic architectural models of the monitor and the target parallel system. A virtually transparent monitor is one in which probe effects, namely those effects which cause a monitored program to behave differently from the same but unmonitored one, are effectively masked at the programming level. Permanent probe monitoring is a technique for realising virtual transparency by allowing the software monitoring probes to remain permanently active within the target parallel system. The generic monitoring architecture introduced in the paper encompasses the description of a wide range of systems, ranging from simple centralised monitors to highly distributed ones. The framework has been validated and evaluated through the experimental realisation of a message communication monitor (Monitorix) in which the target system is a token-ring message router (Routix) for a transputer-based multiprocessor. Experimental results have shown the system to be reasonably efficient.
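A minimal sketch of the permanent-probe idea, assuming a bounded trace buffer that the probe writes on every message whether or not a monitor is attached; Monitorix and Routix internals are not shown, and all names here are illustrative.

```python
# Hypothetical sketch: the probe runs on every send regardless of whether
# a monitor is attached, so attaching one does not change the programme's
# timing behaviour at the programming level.

from collections import deque
import time

TRACE = deque(maxlen=1024)        # bounded buffer the probe always writes to
monitor_attached = False          # flipping this does not alter the probe's cost

def probed_send(dest, payload):
    TRACE.append((time.monotonic(), dest, len(payload)))  # permanent probe
    # ... actual message transmission to `dest` would happen here ...

def drain_trace():
    """A monitor, if attached, consumes the buffer out-of-band."""
    while TRACE:
        yield TRACE.popleft()

probed_send("node-3", b"hello")
probed_send("node-7", b"routing table update")
for record in drain_trace():
    print(record)
```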
Integrated Manufacturing Systems | 1995
Paul Rodgers; Alastair C. Patterson; Derek R. Wilson
The actual success or failure of a product is measurable partially in terms of the commercial success of the organization producing it. Addresses how to estimate that success at the concept stage of the design process, prior to detailed design, when there is not yet a physical artefact, and no definite knowledge of how the market will respond to it, but simply some representation of it, for example, design drawings and 3‐D models. Describes a method for approaching this problem by establishing attributes (in “user terms”) which a product must have to enable it to achieve success. Presents an example of a toothbrush, determines the measurable attributes required from this product and describes methods for their evaluation.
International Conference on Parallel Architectures and Languages Europe | 1994
Nasser Kalantery; Stephen Winter; Derek R. Wilson
Bulk synchronous parallel architecture incorporates a scalable and transparent communication model. The task-level synchronization mechanism of the machine, however, is not transparent to the user and can be inefficient when applied to the coordination of irregular parallelism. This paper presents a brief introduction to an alternative memory-level scheme which offers the prospect of achieving both efficient and transparent synchronization. This scheme, based on a discrete event simulation paradigm, supports a sequential style of programming and, coupled with the BSP communication model, leads to the emergence of a virtual von Neumann parallel computer.
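A minimal sketch of memory-level synchronisation in the discrete event simulation style, assuming each access to a shared cell carries a logical timestamp taken from sequential program order and the cell commits accesses in timestamp order; this is illustrative, not the authors' machine design.

```python
# Hypothetical sketch: a shared memory cell queues timestamped accesses
# and commits them in logical-time order, so out-of-order arrival still
# yields the sequential result.

import heapq

class TimestampedCell:
    def __init__(self, value=0):
        self.value = value
        self.queue = []                      # (logical_time, op, arg)

    def post(self, t, op, arg=None):
        heapq.heappush(self.queue, (t, op, arg))

    def commit_all(self):
        """Apply queued accesses in logical-time order; return read results."""
        reads = []
        while self.queue:
            t, op, arg = heapq.heappop(self.queue)
            if op == "write":
                self.value = arg
            else:
                reads.append((t, self.value))
        return reads

cell = TimestampedCell()
cell.post(3, "read")          # arrives first, logically later
cell.post(1, "write", 42)     # arrives later, logically earlier
print(cell.commit_all())      # [(3, 42)] : sequential semantics preserved
```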