Derrick Morris
University of Manchester
Publication
Featured research published by Derrick Morris.
Archive | 1979
Derrick Morris; Roland N. Ibbett
The MU5 Computer System.
Archive | 1996
Derrick Morris; Colin J. Theaker; Peter Green; Gareth Evans
Contents:
1 Introduction to Computer Systems: 1.1 Structure of the Book; 1.2 Definition of Computer Systems; 1.3 Computer Systems Technology; 1.4 Introduction to Computer Systems Engineering; 1.5 The Characteristics of Computer Systems; 1.6 Recorded Experiences with Computer Systems.
2 Engineering Computer Systems: 2.1 Terminology of the Development Process; 2.2 Software Engineering Paradigms; 2.3 Approaches to Computer System Development; 2.4 System Development Tools; 2.5 Model-based Object Oriented Systems Engineering (MOOSE).
3 Methods of Analysis and Design: 3.1 Structured Methods; 3.2 Object Oriented Software Development; 3.3 Concluding Remarks.
4 An Object Oriented Notation for Computer System Models: 4.1 Features of the Notation; 4.2 Extending the Mechanisms of Object Orientation; 4.3 Definition of the MOOSE Notation; 4.4 Summary.
5 Developing a Computer System Architecture: 5.1 The MOOSE Architectural Models; 5.2 Analysing and Classifying Requirements; 5.3 Creating a MOOSE Behavioural Model; 5.4 Constructing the Domain Model; 5.5 Summary.
6 Creating an Executable Model of a Computer System: 6.1 Creating an Executable Model; 6.2 Creating Class Definitions for Primitive Objects; 6.3 Comparing an Executable Model to an Implementation; 6.4 The Dynamics of an Executable Model; 6.5 Simulating the Execution of a MOOSE Model; 6.6 Using an Executable Model; 6.7 Summary.
7 Designing to Meet Constraints: 7.1 Constraints on the Design Process; 7.2 Evaluating Non-functional Requirements; 7.3 Frameworks for Evaluating Non-functional Requirements; 7.4 Non-functional Requirements and the MOOSE Paradigm.
8 Partitioning and Detailing a Computer System Design: 8.1 The Method of Transformational Codesign; 8.2 Transformation of the Executable Model; 8.3 The Platform Model; 8.4 Transforming the Platform Model; 8.5 Synthesising an Implementation.
9 Pragmatics of Using MOOSE: 9.1 The Use of Standard System Software; 9.2 The Physical Construction and Packaging of Hardware; 9.3 Implementation of Hardware; 9.4 Evaluating Performance by Simulation.
10 Concluding Remarks.
Appendix 1 MOOSE Workbench User Guide: A1.1 Installation and Operation; A1.2 Operations for Manipulating Projects; A1.3 Entering the Capture Facilities; A1.4 The MOOSE Diagram Editor; A1.5 Textual Specifications in an Executable Model.
Appendix 2 Ward-Mellor Model of the Mine Pump Control System: A2.1 The Transformation Schema; A2.2 Data Dictionary; A2.3 PSPECs.
Appendix 3 MOOSE Models for the Mine Pump Control System: A3.1 The Behavioural Model; A3.2 Extensions to Make the Model Executable; A3.3 The Committed Model.
Appendix 4 VCR Control System: A4.1 The Behavioural Model; A4.2 Extensions to Make the Model Executable; A4.3 The Committed Model.
Appendix 5 Dynamic Object Creation: A5.1 The Behavioural Model; A5.2 Extensions to Make the Model Executable.
References.
Annual Review of Automatic Programming | 1963
R. A. Brooker; I.R. MacCallum; Derrick Morris; J. S. Rohl
This chapter presents a detailed specification of a system for describing the form and meaning of the statements in a phrase structure language, for example a scientific autocode. Given such a description, the compiler compiler will generate a compiler for the language, that is, a program that can read and translate another program written in that language. The system may be considered to operate in two phases. In the primary phase it accepts and records the definition of a phrase structure language, and in the secondary phase it translates a source program written in that language. The two phases are not completely separate, and further definitions can be given in the middle of a source program; their influence extends only forward, not back to material already processed. The primary material consists mainly of format definitions and phrase definitions, which describe the form of statements and their constituent expressions, and format routines, which describe their meaning. The meaning of a new format is defined in terms of existing formats, which may be either built-in or previously defined. Five kinds of statement make up the basic primary language, recognised by the following headings or master phrases: PHRASE, FORMAT CLASS, FORMAT, IGNORE, and ROUTINE.
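As a rough illustration of the two-phase idea (not the Compiler Compiler's own notation), the sketch below records format definitions together with the routines that give them meaning, then translates statements by matching them against the recorded formats. The '<NAME>' pattern syntax, the class name, and the toy target code are all assumptions for illustration only.

```python
import re

class CompilerCompiler:
    """Primary phase: record format definitions and their defining routines.
    Secondary phase: translate statements by matching recorded formats."""

    def __init__(self):
        self.formats = []  # list of (compiled_pattern, routine) pairs

    def define_format(self, pattern, routine):
        # Primary phase: '<NAME>' marks a phrase position; everything else
        # must appear literally.  Each phrase becomes a named regex group.
        regex = re.sub(r'<(\w+)>', r'(?P<\1>.+?)', re.escape(pattern))
        self.formats.append((re.compile(regex), routine))

    def translate(self, statement):
        # Secondary phase: find a recorded format that matches and hand the
        # recognised phrases to its routine, which produces the translation.
        for regex, routine in self.formats:
            m = regex.fullmatch(statement.strip())
            if m:
                return routine(**m.groupdict())
        raise SyntaxError(f"no format matches {statement!r}")

cc = CompilerCompiler()
cc.define_format('<VAR> = <EXPR>', lambda VAR, EXPR: f'STORE ({EXPR}) -> {VAR}')
cc.define_format('PRINT <EXPR>',   lambda EXPR: f'OUTPUT ({EXPR})')
print(cc.translate('x = a + b'))   # STORE (a + b) -> x
print(cc.translate('PRINT x'))     # OUTPUT (x)
```

Because definitions are just entries in a recorded table, further formats could be added part-way through translation, mirroring the forward-only effect of mid-program definitions described above.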
Proceedings of the 1967 22nd National Conference | 1967
Derrick Morris; Frank H. Sumner; Michael T. Wyld
This report presents the performance of the Supervisor System used on the Atlas Computer at Manchester University, and describes some of the changes made as a result of our experience with the system. Although the machine is used jointly by I.C.T. Computing Service Division and the University Computing Service (U.C.S.), the figures presented are derived mainly from the U.C.S. use of the machine. We begin with an outline of the system and then describe its main sections in some detail. The ideas have been previously presented 1,2,3,4 but the system which has evolved is not described elsewhere. Logically the system is made up of several distinct parts which communicate through small well defined interfaces (the actual implementation is somewhat more complicated). It can be seen from Figure 1 that the path for a normal job is through the input supervisor and into the input well (which is at present magnetic tape). When the complete job has been input the input supervisor makes an entry in the job list, containing the information which the schedler uses to decide when to start the job. When the scheduler selects a job its job entry is passed on to the job assembler which organizes the loading of any magnetic tapes required by the job, and transfers the relevant compiler and input files into the main store.
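A toy Python sketch of that job path follows: the input supervisor spools a complete job into the input well and records an entry in the job list, the scheduler later picks an entry, and the job assembler gathers the compiler and input files needed to run it. All names, data fields, and the shortest-job-first rule are illustrative assumptions, not the Atlas Supervisor's actual behaviour.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class JobEntry:
    name: str
    compiler: str
    est_minutes: int              # information the scheduler uses to choose a job
    input_files: List[str] = field(default_factory=list)

input_well: dict = {}             # spooled job text, keyed by job name
job_list: List[JobEntry] = []     # entries awaiting scheduling

def input_supervisor(name, text, compiler, est_minutes, files):
    """Spool a complete job into the input well and add an entry to the job list."""
    input_well[name] = text
    job_list.append(JobEntry(name, compiler, est_minutes, files))

def scheduler():
    """Select the next job to start (here: shortest estimated run first)."""
    if not job_list:
        return None
    entry = min(job_list, key=lambda e: e.est_minutes)
    job_list.remove(entry)
    return entry

def job_assembler(entry):
    """Bring the chosen job's source, compiler and input files into the main store."""
    return {
        "job": entry.name,
        "source": input_well.pop(entry.name),
        "compiler": entry.compiler,
        "files": entry.input_files,
    }

input_supervisor("job1", "BEGIN ... END", "Atlas Autocode", 3, ["data.tape"])
print(job_assembler(scheduler()))
```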
The Computer Journal | 1972
P. C. Capon; Derrick Morris; Jeffrey S. Rohl; I. R. Wilson
At an early stage in the design of the MU5 software it was decided to introduce a compiler target language (CTL) into which the high level languages would be translated. For each high level language a translator would be provided to convert from the language to CTL while a single compiler converts from CTL to machine code. The objective was to simplify individual translators by forcing the CTL to as high a level as possible. For example, the CTL contains declarations with the characteristics of those found in high level languages so that name and property list management problems are passed to the CTL compiler. This scheme enables the mode of compilation, for example output in semi-compiled form or loading for immediate execution, to be determined within the CTL rather than within each translator. Subsequently, a further role for the CTL emerged. The MU5 translators could be used on a range of machines provided a CTL compiler could be written for each machine. This machine independence could extend over machines with significant structural differences provided the data and address formats were compatible. This idea is summarised in Fig. 1. It is similar to the UNCOL (Strong, Wegstein, Tritter, Olsztyn, Mock, and Steel, 1958) idea except that, whereas UNCOL attempted to span the significant differences between existing machines, the CTL has been designed to suit machines originating from MU5. There is, however, a more significant difference: the communication between the translators and the CTL compiler is two-way. Some of the CTL procedures return information to the translators. For example there is a procedure for interrogating property lists. It is this which allows the whole property and name list organisation to be contained within CTL. The CTL does not have to be encoded in character form by the translators then decoded by the CTL compiler. Instead there is a CTL procedure corresponding to each type of statement, so that the CTL is a body of procedures rather than a written language. The main input parameter of each procedure is a vector whose elements define the nature of the statement. In the case of an arithmetic assignment these elements comprise a sequence of operator operand pairs. Only a small increase in compile time results from using the CTL procedures to generate code, because they form part of a natural progression from source to object code.
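The "body of procedures" interface lends itself to a small sketch: the translator calls one CTL procedure per statement type, passing a vector of operator/operand pairs, while the CTL side owns the name and property lists and can be interrogated in return. The class, method names, and toy code format below are assumptions for illustration, not the MU5 CTL itself.

```python
class CTL:
    def __init__(self):
        self.properties = {}      # name/property lists kept inside the CTL
        self.code = []            # generated target "code"

    def declare(self, name, props):
        # Declarations carry high-level characteristics, so name and
        # property list management stays on this side of the interface.
        self.properties[name] = props

    def interrogate(self, name):
        # Two-way communication: the translator can query the property list.
        return self.properties.get(name)

    def assign(self, dest, pairs):
        # pairs is a vector of (operator, operand) tuples; for "x = a + b"
        # a translator would pass [('', 'a'), ('+', 'b')].
        for op, operand in pairs:
            self.code.append(f"{'LOAD' if not op else 'OP ' + op} {operand}")
        self.code.append(f"STORE {dest}")

ctl = CTL()
ctl.declare("x", {"type": "real"})
ctl.assign("x", [("", "a"), ("+", "b")])
print(ctl.interrogate("x"), ctl.code)
```

Swapping in a different back end would mean reimplementing the procedures for another machine while leaving the translators untouched, which is the machine-independence argument made above.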
Annual Review of Automatic Programming | 1961
R. A. Brooker; Derrick Morris
This chapter describes the Mercury Autocode language, which is largely a phrase structure language, and a translation program for Mercury Autocode tapes on ATLAS. To understand the program it is necessary to know something about Mercury Autocode (in essence the source language), something about the target language (the order structure of ATLAS), and finally the meta-syntactical language of the assembly program. The instructions of ATLAS fall into two classes: A-code instructions, which relate to the floating-point accumulator, and B-code instructions, which are concerned with setting and adjusting the contents of the 128 B-registers. The Ba and Bm parts of an instruction refer to the 128 B-registers, which are separate from the main store. In A-code instructions the contents of Ba and Bm are first added to the presumptive address to give the modified address. In B-code instructions Ba is used as a second operand, and the address is modified by Bm only. Except for the leading binary digit, extracode instructions have the same appearance and properties as basic instructions, and comprise both A-codes and B-codes.
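A minimal sketch of those address-modification rules: for A-code instructions the contents of B-registers Ba and Bm are both added to the presumptive address, while for B-code instructions Ba supplies a second operand and only Bm modifies the address. The register values and function shapes are illustrative assumptions.

```python
B = [0] * 128                 # the 128 B-registers, held separately from main store
B[1], B[2] = 10, 200          # example contents

def a_code_address(presumptive, ba, bm):
    """A-code: both Ba and Bm contents are added to the presumptive address."""
    return presumptive + B[ba] + B[bm]

def b_code_operands(presumptive, ba, bm):
    """B-code: Ba gives the second operand; Bm alone modifies the address."""
    return B[ba], presumptive + B[bm]

print(a_code_address(1000, 1, 2))   # 1000 + 10 + 200 = 1210
print(b_code_operands(1000, 1, 2))  # (10, 1200)
```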
Parallel Computing | 1992
Derrick Morris; D. G. Evans
The research reported in this paper is concerned with techniques and tools for modelling complex computer systems, which in general involve concurrency and utilise parallelism. First the motivation and objectives are presented. These have led to the establishment of a method, supported by tools, which places modelling in a central and dominant role in the product lifecycle. The modelling technique and the language on which it is based are discussed. This language combines graphical and textual notation. An example is given which models the critical parts of an actual parallel processing system.
Software Engineering Journal | 1995
Derrick Morris; Peter Green; Richard Barker
The paper describes a method and notation for designing the software in embedded and other reactive systems. The design method is described in the context of a structured life-cycle, which recognises both functional and non-functional requirements, and it is illustrated by application to a substantial example. Mainly for reasons of reuse and maintenance, an object-oriented solution is an implementation goal. The method focuses on producing software fit for its intended purpose in terms of user functionality, while being concerned with other aspects of product quality. It also seeks to make efficient use of the varied skills and experience in a project team, and to assist the team in distributing and meeting responsibilities. Commercially available CASE tools are adapted to support the method.
Parallel Computing | 1990
Derrick Morris; Colin J. Theaker; R. Phillips; D. G. Evans
This paper reports on the initial stages of a research project involving the development of an experimental parallel computing system. The system is intended to support research in a variety of areas, including computer architectures, task models, language implementations and interconnection techniques. A pragmatic approach is being taken to evaluating techniques in these areas by applying the system to actual applications. The system is based on a hierarchical structure of processors, using shared memory as the communication medium. This paper identifies the main features of the hardware being used, and presents an outline of the initial software for task creation and management.
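As a loose sketch of that structure (not the project's actual software), the code below arranges processing nodes in a hierarchy and uses a shared-memory area, modelled here as a dict of queues, as the only communication medium for task creation and management. The tree shape, the least-loaded placement rule, and all names are assumptions for illustration.

```python
from queue import Queue

shared_memory = {}            # one work queue per node, visible to every node

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        shared_memory[name] = Queue()
        if parent:
            parent.children.append(self)

    def create_task(self, task):
        # Task management: place work in the least-loaded child's queue,
        # or keep it locally if this node is a leaf processor.
        if self.children:
            target = min(self.children,
                         key=lambda c: shared_memory[c.name].qsize())
            shared_memory[target.name].put(task)
        else:
            shared_memory[self.name].put(task)

root = Node("root")
leaves = [Node(f"p{i}", parent=root) for i in range(4)]
for t in range(8):
    root.create_task(f"task-{t}")
print({name: q.qsize() for name, q in shared_memory.items()})
```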
European Design Automation Conference | 1996
David J. Evans; Peter Green; Derrick Morris
The paper describes MOOSE, a full-lifecycle, model-based approach to the engineering of computer systems. It describes how early lifecycle models that represent the logical behaviour and architecture of a system can be transformed into representations from which implementation source for both hardware and software can be synthesised.