Publication


Featured research published by Michael J. B. Duff.


Pattern Recognition | 1973

A cellular logic array for image processing

Michael J. B. Duff; D. M. Watson; Terry J. Fountain; G. K. Shaw

A cellular logic image processor employing 192 cells in a 16 by 12 hexagonal array is described. The processor has been constructed and its performance assessed. The various classes of functions which can be implemented in the cellular array are discussed and sample programs explained in detail.
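The processor described above applies, in every cell simultaneously, a Boolean function of the cell's own state and its six hexagonal neighbours. The following sketch shows the general shape of such a cellular logic step in software; the odd-row offset addressing and the isolated-point-removal rule are illustrative assumptions, not details taken from the paper.

```python
# A minimal software sketch (not the CLIP hardware) of one cellular logic
# step on a 16 x 12 hexagonal array stored as a rectangular NumPy grid with
# odd-row offset coordinates.  Both the neighbour offsets and the example
# rule are illustrative assumptions.
import numpy as np

ROWS, COLS = 12, 16   # 192 cells, matching the array size in the abstract

def hex_neighbours(r, c):
    """Six hexagonal neighbours of cell (r, c) under odd-row offset coords."""
    if r % 2 == 0:
        offsets = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
    else:
        offsets = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]
    for dr, dc in offsets:
        rr, cc = r + dr, c + dc
        if 0 <= rr < ROWS and 0 <= cc < COLS:
            yield rr, cc

def cellular_step(image, rule):
    """Apply a Boolean point/neighbourhood rule to every cell in parallel."""
    out = np.zeros_like(image)
    for r in range(ROWS):
        for c in range(COLS):
            nbrs = [image[rr, cc] for rr, cc in hex_neighbours(r, c)]
            out[r, c] = rule(image[r, c], nbrs)
    return out

# Example rule: a set cell survives only if at least one neighbour is set,
# i.e. isolated "1" cells are removed (a classic cellular-logic cleaning step).
remove_isolated = lambda centre, nbrs: centre and any(nbrs)

binary_image = (np.random.rand(ROWS, COLS) > 0.7).astype(np.uint8)
cleaned = cellular_step(binary_image, remove_isolated)
```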


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1988

The CLIP7A image processor

Terry J. Fountain; K. N. Matthews; Michael J. B. Duff

A description is given of the CLIP7 image-processing chip. The device is implemented as a custom-designed integrated circuit and contains a single processing element for use in arrays of processors. The chip uses 16-bit internal and 8-bit external data buses and divides crudely into two major sections: data processing and data input/output. The first structure to be assembled using these processors is a 256-element linear array, each element incorporating two of the CLIP7 processors. This system, known as CLIP7A, is used both to study the application of partial local autonomy techniques to image processing and as a fast and convenient system for the emulation of other architectures. CLIP7A software and hardware are also described.
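"Partial local autonomy" here means that every processing element receives the same broadcast instruction but can modify its response using locally held state. The sketch below illustrates that idea only in outline; the 256-element linear organisation follows the abstract, while the instruction form, data widths, and the per-element condition flag are assumptions for illustration, not the CLIP7A design.

```python
# A minimal sketch of partial local autonomy in a linear processor array:
# a globally broadcast operation that each processing element (PE) may
# locally accept or veto.  All specifics other than the 256-element linear
# arrangement are illustrative assumptions.
import numpy as np

N_PE = 256                                                    # one PE per image column
columns = np.random.randint(0, 256, size=(N_PE, 256))         # each PE holds one column
local_flag = np.random.rand(N_PE) > 0.5                       # per-PE condition flag

def broadcast(op, data, flags):
    """All PEs receive the same instruction; each applies it only if its flag is set."""
    out = data.copy()
    for pe in range(N_PE):
        if flags[pe]:                                         # PE-local decision
            out[pe] = op(data[pe])
    return out

shifted = broadcast(lambda col: np.roll(col, 1), columns, local_flag)
```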


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 1998

The use of nanoelectronic devices in highly parallel computing systems

Terry J. Fountain; Michael J. B. Duff; David G. Crawley; Christopher Tomlinson; Colin D. Moffat

The continuing development of smaller electronic devices into the nanoelectronic regime offers great possibilities for the construction of highly parallel computers. This paper describes work designed to discover the best ways to take advantage of this opportunity. Simulated results are presented which indicate that improvements in clock rates of two orders of magnitude, and in packing density of three orders of magnitude, over the best current systems, should be attainable. These results apply to the class of data-parallel computers, and their attainment demands modifications to the design which are also described. Evaluation of the requirements of alternative classes of parallel architecture is currently under way, together with a study of the vitally important area of fault-tolerance.


Pyramidal Systems for Computer Vision | 1986

Pyramids—expected performance

Michael J. B. Duff

When any new computer architecture is proposed, questions are inevitably asked as to how systems based on the architecture can be expected to perform. It is not an unreasonable assumption that each proposal will represent an attempt to produce an improved optimisation against one or more criteria; ‘performance’, at best a vague, imprecisely defined term, is a measure of how successful that optimisation has been. In this paper, various classes of design criteria will be discussed, particularly in relation to pyramids applied to the processing of image data.


High Performance Computing for Computational Science (Vector and Parallel Processing) | 2000

Thirty Years of Parallel Image Processing

Michael J. B. Duff

The history of the development of parallel computation methodology is closely linked with the development of techniques for the computer processing of images. In the early 60s, research in high energy particle physics began to generate extremely large numbers of particle track photographs to be analysed and attempts were made to devise automatic or semiautomatic systems to carry out the analysis. This stimulated the search for ways to build computers of increasingly higher performance since the size of the image data sets exceeded any which had previously been processed. At the same time, interest was growing in exploring the structure of the human visual system and it was felt intuitively that image processing computation should bear at least some resemblance to its human analogue. This review paper traces the simultaneous progress in these two related lines of research and discusses how their interaction influenced the design of many parallel processing computers and their associated algorithms.


Archive | 1984

Two-Dimensional Logical Transforms

Kendall Preston; Michael J. B. Duff

Cellular array transforms were first employed many years ago to perform “noise cleaning” operations on the input images of character recognition machines (Chapter 1). These transforms operated upon the bilevel or “binary” input generated by the character image digitizer and were designed to remove “salt and pepper” noise in the image of the character being read by the machine. By so doing the transforms utilized were in actuality executing two-dimensional, low-pass spatial filtering. Only in recent years, however, have researchers, such as Nakagawa and Rosenfeld (1978), Goetcherian (1980), and Preston (1982), performed the analysis required to fully explain the characteristics of such filters. It is the purpose of this chapter to both summarize and expand the analysis with emphasis on the effects of the array tessellation, the neighborhood configuration, and other important computational parameters.
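The noise-cleaning transforms discussed above operate on binary character images and behave as two-dimensional low-pass spatial filters. The sketch below shows one common realisation of such a transform using a 3x3 opening followed by a closing; the square neighbourhood and the synthetic test image are illustrative choices, since the chapter itself compares different tessellations and neighbourhood configurations.

```python
# A minimal sketch of a salt-and-pepper "noise cleaning" logical transform
# on a bilevel image: binary opening removes isolated "1" noise, binary
# closing fills isolated "0" holes.  Neighbourhood and test image are
# illustrative assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
clean = np.zeros((64, 64), dtype=bool)
clean[16:48, 28:36] = True                       # a simple bar-shaped "character"
noise = rng.random(clean.shape)
noisy = np.where(noise < 0.05, ~clean, clean)    # flip ~5% of pixels (salt and pepper)

neighbourhood = np.ones((3, 3), dtype=bool)
opened = ndimage.binary_opening(noisy, structure=neighbourhood)    # removes salt
cleaned = ndimage.binary_closing(opened, structure=neighbourhood)  # fills pepper
```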


Image and Vision Computing | 1994

Algorithm design for image processing in the context of cellular logic

Michael J. B. Duff; Terry J. Fountain

Two novel image processing algorithms, developed within the referential framework of cellular logic neighbourhood operations, are presented. The first utilizes grey-level morphological operations to provide varying degrees of image sharpening and blurring. This is achieved by mixing the original image, the expanded image and the shrunk image under the control of a single parameter which alters the proportions of original and modified images which are mixed. The second algorithm permits optimal segmentation of an image containing more than two distinct populations of pixels, by manipulation of the grey-level histogram of the image as an image in its own right. This is implemented by morphological smoothing of the histogram image, followed by detection of minima which are uniquely defined in terms of three neighbourhood operators. Results are presented which illustrate how a result visually close to the original can be retrieved from a deliberately blurred image using the first algorithm, whilst a series of results obtained using the second algorithm show that it works well for images in which the pixel populations are well-separated, but poorly if the populations are too small to survive the histogram smoothing step.
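The sketch below follows the structure of the two algorithms as described in the abstract, but the details are assumptions: the abstract does not give the exact mixing proportions, and the minima detection here is a simple local-minimum test rather than the paper's three-neighbourhood-operator definition.

```python
# A minimal sketch of the two algorithms' ideas, with the mixing formula and
# the minima test as illustrative assumptions rather than the paper's exact
# definitions.
import numpy as np
from scipy import ndimage

def sharpen_blur(image, k, size=3):
    """Mix the original with its morphological expand/shrink under one parameter k."""
    img = image.astype(np.float64)
    expanded = ndimage.grey_dilation(img, size=(size, size))   # "expanded" image
    shrunk = ndimage.grey_erosion(img, size=(size, size))      # "shrunk" image
    local_mean = 0.5 * (expanded + shrunk)
    # k > 0 pushes the image away from its local mean (sharpen);
    # k < 0 pulls it towards the local mean (blur).
    return np.clip(img + k * (img - local_mean), 0, 255)

def histogram_thresholds(image, smooth_size=5):
    """Treat the grey-level histogram as an image: smooth it morphologically,
    then take surviving minima as segmentation thresholds."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    smoothed = ndimage.grey_closing(
        ndimage.grey_opening(hist, size=smooth_size), size=smooth_size)
    minima = [g for g in range(1, 255)
              if smoothed[g] < smoothed[g - 1] and smoothed[g] <= smoothed[g + 1]]
    return minima   # grey levels at which to segment
```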


Archive | 1984

Cellular Logic Operations in N-Space

Kendall Preston; Michael J. B. Duff

The first studies in cellular logic in spaces having a dimensionality greater than two were undertaken by Ulam (1962) at the Los Alamos Scientific Laboratory in the early 1960’s. Much of this work was inspired by von Neumann (1951) who was studying cellular automata for the purpose of determining how computers could be made to reproduce themselves. Ulam and later Schrandt and Ulam (1960) concentrated on developing certain recursive relationships which, when operating upon a starting pattern or residue, would produce interesting patterns of growth (Chapter 12). As stated by Ulam (1962), “The objects found in this way seem to be, so to say, intermediate in complexity between inorganic patterns like those of crystals and the more varied intricacies of organic molecules and structures. In fact one of the aims of the present note is to show, by admittedly somewhat artificial examples, an enormous variety of objects which may be obtained by means of rather simple inductive definitions and to throw a sidelight on the question of how much ‘information’ is necessary to describe the seemingly enormously elaborate structures of living objects.”


Archive | 1984

Patterns of Growth

Kendall Preston; Michael J. B. Duff

No book on cellular automata would be complete without a chapter on patterns of growth, especially the most popular generator of these patterns, namely, John Horton Conway’s cellular automata game “Life” (see Gardner, 1971). Long before the invention of Conway’s Life, Moore (1968) at the United States Bureau of Standards and Ulam (1962) at the Los Alamos Scientific Laboratory were analyzing growth patterns using digital computers. Moore and Ulam used the digital computer to simulate the action of a cellular automaton consisting of an array of processing elements far simpler than the 29-state processing elements of von Neumann (1951). They wrote computer programs to simulate an array of two-state processing elements exhibiting either d1-connectedness (Ulam) or d2-connectedness (Moore). (See equations 6.1 and 6.2.)
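A two-state growth simulation of the kind described here can be written in a few lines; the sketch below grows a single seed cell under the 4-connected (von Neumann, d1) and 8-connected (Moore, d2) neighbourhoods. The specific growth rule, "a cell becomes 1 if any neighbour is 1", is an illustrative choice and not equations 6.1 and 6.2 from the book.

```python
# A minimal sketch of two-state growth under d1 (von Neumann) and d2 (Moore)
# connectedness; the growth rule itself is an illustrative assumption.
import numpy as np
from scipy import ndimage

D1 = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=bool)      # 4-connected (von Neumann) neighbourhood
D2 = np.ones((3, 3), dtype=bool)            # 8-connected (Moore) neighbourhood

def grow(seed, neighbourhood, steps):
    """Repeatedly set every cell that has a set neighbour."""
    state = seed.copy()
    for _ in range(steps):
        state = ndimage.binary_dilation(state, structure=neighbourhood)
    return state

seed = np.zeros((33, 33), dtype=bool)
seed[16, 16] = True                          # single-cell starting residue
diamond = grow(seed, D1, 10)                 # d1 growth yields a diamond
square = grow(seed, D2, 10)                  # d2 growth yields a square
```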


Archive | 1984

Cellular Logic Machines

Kendall Preston; Michael J. B. Duff

Chapter 1 describes the early work of von Neumann (1951) on cellular automata. This work was not accompanied by reductions to practice in hardware as it was impractical at that time to build machines having the millions of devices required. Only with the development of high-density integrated circuitry in the 1970s has this feat now been accomplished (Chapter 11). Therefore, during the 1950s cellular automata were emulated using the general-purpose computers which were available at that time. The work of Moore (1966) and Kirsch (1957) at the National Bureau of Standards, of Ulam (1962) at the Atomic Energy Commission, and Unger (1959) at Bell Telephone Laboratories are outstanding examples of this work. All of these workers simplified von Neumann’s 29-state processing element and concentrated their efforts on studying arrays of 2-state (binary) processing elements. In the 1960s a new trend began with the construction of the first cellular logic machines by one of the authors (Preston, 1961). These and other special-purpose machines emulated the cellular automaton by using a single high-speed processing element to operate sequentially on an array of binary data. With the introduction of the diff3 in the 1970s (Graham and Norgren, 1980) cellular logic machines were manufactured having several processing elements. Then Sternberg (1981) introduced a pipelined cellular logic machine, called the Cytocomputer, having approximately 100 processing elements. The Cytocomputer was also the first cellular logic machine to include numerical (multi-state) processing elements in addition to binary processing elements, thus making the transition from high-speed special-purpose machines limited to bilevel data to a system which could manipulate multi-state data. This chapter concentrates on the evolution and architecture of the cellular logic machines which have been built over the past two decades, all of which handle bilevel data arrays. Machines of this kind are currently in wide use both for commercial and research purposes in image processing. Despite their limitation to bilevel data, they are also useful in graylevel image processing due to their ability to convert graylevel images to binary images by multiple thresholding and, after performing logical operations at these thresholds, to convert results to graylevel output by arithmetic summation (Chapter 2).
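The multiple-thresholding technique mentioned at the end of the paragraph (slice the grey-level image into binary images, operate logically on each slice, then sum the slices back into a grey-level result) can be sketched as below; the particular binary operation applied per slice is an illustrative assumption.

```python
# A minimal sketch of threshold decomposition: binary slices at every grey
# level, a logical operation per slice, arithmetic summation back to grey
# levels.  The per-slice operation (binary opening) is an illustrative choice.
import numpy as np
from scipy import ndimage

def threshold_decompose_filter(image, binary_op, levels=256):
    image = image.astype(np.int64)
    out = np.zeros(image.shape, dtype=np.int64)
    for t in range(1, levels):
        slice_t = image >= t              # binary image at threshold t
        out += binary_op(slice_t)         # logical operation on the slice
    return out                            # summation restores a grey-level result

grey = np.random.randint(0, 256, size=(64, 64))
smoothed = threshold_decompose_filter(
    grey, lambda b: ndimage.binary_opening(b, structure=np.ones((3, 3), dtype=bool)))
```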

Collaboration


Dive into Michael J. B. Duff's collaboration.

Top Co-Authors

Kendall Preston (Carnegie Mellon University)

D. M. Watson (University College London)

Colin D. Moffat (University College London)

G. K. Shaw (University College London)

K. N. Matthews (University College London)