B.R. Wilkins
University of Southampton
Publications
Featured research published by B.R. Wilkins.
Analog Integrated Circuits and Signal Processing | 1998
A.J. Perkins; Mark Zwolinski; C.D. Chalk; B.R. Wilkins
Fault simulation is an accepted part of the test generation procedure for digital circuits. With complex analog and mixed-signal integrated circuits, such techniques must now be extended. Analog simulation is slow, and fault simulation can be prohibitively expensive because of the large number of potential faults. We describe how the number of faults to be simulated in an analog circuit can be reduced by fault collapsing, and how the simulation time can be reduced by behavioral modeling of fault-free and faulty circuit blocks. These behavioral models can be implemented in SPICE or in VHDL-AMS, and we discuss the merits of each approach. VHDL-AMS potentially offers advantages in tackling this problem, but a number of computational difficulties remain to be overcome.
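As an informal illustration of the fault-collapsing idea described above, the Python sketch below groups candidate faults of a toy behavioural amplifier model into equivalence classes by comparing their observable responses to a simple stimulus. The model, fault list and tolerance are invented placeholders and are not taken from the paper.

```python
# Illustrative sketch only: toy fault collapsing by equivalent observable behaviour.
# The behavioural model, fault list and tolerance are invented for illustration.

import numpy as np

def amp_response(vin, gain=10.0, offset=0.0, vsat=5.0):
    """Simple behavioural model of an amplifier stage (fault-free defaults)."""
    return np.clip(gain * vin + offset, -vsat, vsat)

# Candidate faults expressed as parameter deviations of the behavioural model.
faults = {
    "gain_low":   dict(gain=1.0),
    "gain_zero":  dict(gain=0.0),
    "stuck_high": dict(gain=0.0, offset=5.0),
    "offset_big": dict(gain=0.0, offset=5.0),   # behaves identically to stuck_high
    "sat_low":    dict(vsat=1.0),
}

vin = np.linspace(-1.0, 1.0, 21)          # a small DC sweep as the test stimulus
signatures = {name: amp_response(vin, **p) for name, p in faults.items()}

# Collapse faults whose observable responses agree within a tolerance:
# keep one representative per equivalence class.
collapsed = []
for name, sig in signatures.items():
    if not any(np.allclose(sig, signatures[rep], atol=1e-3) for rep in collapsed):
        collapsed.append(name)

print("faults to simulate after collapsing:", collapsed)
```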
Pattern Recognition | 1972
David W. Thomas; B.R. Wilkins
The problem of classifying vehicles on the basis of the acoustic waveform obtained from them has been approached by calculating various central moments of the short term power spectrum of a sample of the signal. It has been found that classification can be performed using two moment measurements, giving good results with vehicles operating under steady running conditions. With a burst of acceleration included in the sample, however, discrimination becomes much more difficult. In the moment space under consideration, it became evident that the movement of sample points with changes of engine speed was itself characteristic of the vehicle class, and this consideration amongst others suggested that the engine speed (or, in practice, the firing rate of the engine) was an important parameter that needs to be determined automatically. The first attempt at finding the fundamental frequency of the waveform was based on autocorrelation, but this gave very unsatisfactory results. The technique of “cepstrum” analysis, however, is shown to give a reliable indication of the firing rate even when the engine sound is deeply embedded in noise. This is in contrast to results obtained by some earlier workers using this analysis in speech studies.
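The cepstrum-based estimate of the firing rate can be illustrated with a short sketch: the real cepstrum of a noisy, harmonic-rich synthetic signal is searched for its dominant quefrency peak. The sample rate, fundamental frequency and search range below are assumed values chosen only for the example.

```python
# Illustrative sketch only: estimating a periodic "firing rate" with the real cepstrum.
# The synthetic signal, sample rate and search range are invented for illustration.

import numpy as np

fs = 8000                       # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
f0 = 30.0                       # true firing rate (Hz), assumed

# Harmonic-rich "engine" signal buried in noise.
signal = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 20))
signal += 2.0 * np.random.randn(len(t))

# Real cepstrum: inverse FFT of the log magnitude spectrum.
spectrum = np.fft.rfft(signal * np.hamming(len(signal)))
cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))

# Look for the dominant peak in a plausible quefrency range (10-100 Hz here).
qmin, qmax = int(fs / 100), int(fs / 10)
peak = qmin + np.argmax(cepstrum[qmin:qmax])
print(f"estimated firing rate: {fs / peak:.1f} Hz")
```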
European Design and Test Conference | 1996
Mark Zwolinski; C.D. Chalk; B.R. Wilkins
Fault simulation of analogue circuits is a very CPU-intensive task. This paper describes a technique to increase the speed of fault simulation. The effects of bridging faults within operational amplifiers have been classified according to the externally observable behaviour, reducing the number of fault simulations by two-thirds. Parameterisable macromodels have been written in which both fault-free specifications and faulty effects can be modelled. The supply current is also modelled. These macromodels have been verified by embedding them within a larger circuit, and have been shown to model fault-free and faulty behaviour accurately and to propagate faulty effects correctly. The macromodels simulate about 7.5 times faster than the full transistor model.
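A possible shape for such a parameterisable macromodel is sketched below: a behavioural op-amp whose output and quiescent supply current depend on a selected behaviour class. The parameter values and fault classes are invented for illustration and do not reproduce the paper's macromodels.

```python
# Illustrative sketch only: a parameterisable behavioural macromodel switchable
# between the fault-free specification and a few externally observable fault
# classes, with the quiescent supply current also modelled. All values invented.

import numpy as np

FAULT_CLASSES = {
    "fault_free":  dict(gain=1e5, offset=0.0, idd=1.5e-3),
    "low_gain":    dict(gain=1e2, offset=0.0, idd=1.5e-3),
    "output_high": dict(gain=0.0, offset=4.5, idd=8.0e-3),   # raised supply current
    "output_low":  dict(gain=0.0, offset=-4.5, idd=8.0e-3),
}

def opamp_macromodel(v_plus, v_minus, vdd=5.0, fault="fault_free"):
    """Return (output voltage, supply current) for the selected behaviour class."""
    p = FAULT_CLASSES[fault]
    vout = np.clip(p["gain"] * (v_plus - v_minus) + p["offset"], -vdd, vdd)
    return vout, p["idd"]

# Evaluate each behaviour class at a small differential input: the fault-free
# model saturates, while each fault class shows a distinct output voltage
# and/or excess supply current that a tester could observe.
for fault in FAULT_CLASSES:
    vout, idd = opamp_macromodel(v_plus=1.001, v_minus=1.0, fault=fault)
    print(f"{fault:12s}  vout={vout:+.2f} V  idd={idd * 1e3:.1f} mA")
```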
Information Sciences | 1970
Norman Leslie Ford; B. G. Batchelor; B.R. Wilkins
The Nearest Neighbour Classifier accepts patterns in the form of points in a descriptor space and classifies them by reference to a number of stored labelled points called locates. Training procedures for such classifiers have in the past consisted of a set of rules for selecting various members of the training set for use as locates. This paper describes an alternative approach in which the locates do not necessarily represent real patterns, and the training procedure consists of adjusting the positions of the locates so as to optimise the decision surface.
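To illustrate the idea of locates that are adjusted rather than selected, the sketch below uses a standard LVQ1-style update as a stand-in training rule; the paper's own optimisation procedure may differ, and the toy data set is invented.

```python
# Illustrative sketch only: adjusting "locates" (stored labelled points) so that a
# nearest-neighbour rule improves. The LVQ1-style update is a standard stand-in;
# the paper's own optimisation procedure may differ.

import numpy as np

rng = np.random.default_rng(0)

# Two-class toy training set in a 2-D descriptor space.
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)), rng.normal([2, 2], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Start with a few locates per class; they need not be real training patterns.
locates = np.vstack([rng.normal([0, 0], 1.0, (3, 2)), rng.normal([2, 2], 1.0, (3, 2))])
labels = np.array([0] * 3 + [1] * 3)

lr = 0.05
for epoch in range(20):
    for xi, yi in zip(X, y):
        j = np.argmin(np.linalg.norm(locates - xi, axis=1))  # nearest locate
        step = lr * (xi - locates[j])
        locates[j] += step if labels[j] == yi else -step      # attract / repel

# Classify by the label of the nearest locate.
pred = labels[np.argmin(np.linalg.norm(locates[None] - X[:, None], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```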
Journal of Electronic Testing | 1993
B.R. Wilkins; B. S. Suparjo
The problems of testing mixed-signal circuits are becoming increasingly intractable as circuit density and functionality increase. The most urgent of these problems is seen as the ability to perform interconnect testing without the need for physical probing. A structured approach, in which test circuitry is incorporated into the chip in order to provide access for board test, was proposed in an earlier paper; an improved version of this structure, which is compatible with the established digital boundary-scan standard IEEE 1149.1, is described and evaluated using SPICE simulation.
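The flavour of structured interconnect testing can be shown with a purely digital, 1149.1-style sketch in which driver cells apply patterns to board nets and receiver cells capture the results; the mixed-signal access structure proposed in the paper is not modelled here, and the net list and faults are invented.

```python
# Illustrative sketch only: boundary-scan style interconnect testing in the spirit
# of IEEE 1149.1 EXTEST. Driver cells apply patterns to board-level nets, receiver
# cells capture them, and opens/shorts are inferred without physical probing.
# The net list and fault models are invented for illustration.

def apply_interconnect_test(nets, patterns, short_pairs=(), open_nets=()):
    """Return the values captured at the receivers for each applied pattern."""
    captured = []
    for pattern in patterns:                       # pattern: value driven on each net
        values = dict(zip(nets, pattern))
        for net in open_nets:                      # an open is modelled here as stuck-0
            values[net] = 0
        for a, b in short_pairs:                   # a short is modelled as a wired-AND
            values[a] = values[b] = values[a] & values[b]
        captured.append(tuple(values[n] for n in nets))
    return captured

nets = ["n0", "n1", "n2", "n3"]
# "Counting sequence" patterns: each net receives a unique column of bits.
patterns = [(0, 0, 1, 1), (0, 1, 0, 1)]

good = apply_interconnect_test(nets, patterns)
faulty = apply_interconnect_test(nets, patterns, short_pairs=[("n1", "n2")])

for net, g, f in zip(nets, zip(*good), zip(*faulty)):
    status = "ok" if g == f else "FAIL"
    print(f"{net}: expected {g}, captured {f}  -> {status}")
```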
Embedded and Real-Time Computing Systems and Applications | 1997
T. M. Chen; B.R. Wilkins
A set of new, efficient and compact formulae for buffer-size analysis of real-time systems using the M/G/1 queueing model has been developed. For a single-server system with Poisson arrivals and a general service-time distribution (an M/G/1 system), and for a given probability of overflow taken as the confidence level, the required buffer size can be estimated. Two subsets of M/G/1 systems, namely the M/D/1 and M/E_k/1 systems, are investigated in detail to illustrate the practicality of this approach. The formulae are derived analytically and are validated using term-by-term evaluation. The buffer sizes needed for M/D/1 and M/E_k/1 systems are tabulated for design and validation purposes. The newly derived formulae are more efficient and compact than currently known computation methods.
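The quantity being estimated can be illustrated without the paper's closed-form formulae: the sketch below simulates an M/D/1 queue and reads off the smallest buffer size whose empirical overflow probability meets a target. The arrival rate, service time and target value are assumed examples, and simulation is used only as a stand-in for the analytical results.

```python
# Illustrative sketch only: estimating the buffer size needed for a target overflow
# probability in an M/D/1 queue by simulation. This is not the paper's closed-form
# approach; the arrival rate, service time and target below are invented examples.

import numpy as np

rng = np.random.default_rng(1)

lam, service = 0.7, 1.0          # arrival rate and deterministic service time (rho = 0.7)
n, target = 200_000, 1e-3        # number of simulated arrivals, overflow probability

arrivals = np.cumsum(rng.exponential(1 / lam, n))

# FIFO single-server recursion: each service starts when both the customer has
# arrived and the server has finished with the previous customer.
departures = np.empty(n)
free_at = 0.0
for i, a in enumerate(arrivals):
    start = max(a, free_at)
    free_at = departures[i] = start + service

# Number of customers already in the system seen by each arrival
# (departures are ordered in a FIFO single-server queue, so searchsorted works).
in_system = np.arange(n) - np.searchsorted(departures, arrivals, side="right")

# Smallest buffer B such that, empirically, an arrival finds more than B customers
# already present with probability at most `target` (via the empirical quantile).
buffer_size = int(np.ceil(np.quantile(in_system, 1 - target)))
print("estimated buffer size:", buffer_size)
```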
Archive | 1999
B.R. Wilkins
Figure 1.1 represents an electrical circuit constructed as a printed circuit assembly (PCA) consisting of a substrate carrying a pattern of conductors (the interconnect) on which separately manufactured components are mounted so that the component pins make electrical contact with the interconnect. In normal functional operation, the PCA connects to other parts of the system by way of a set of contacts such as the edge-connector shown in Figure 1.1.
International Conference on Computer Design | 1993
B.R. Wilkins; C. Shi
A major problem in the manufacture of electronic systems lies in the conflict between the designer's demand for optimum performance and the production engineer's requirement for adequate testability. Design-for-testability guidelines have been well established for many years, but are still being urged today on an apparently reluctant industry. The purpose of the guidelines is to ensure that the manufactured product is economically and efficiently testable; the main management tool for this purpose is the design and test review process, but in practice this does not seem to have been successful in preventing designs with poor testability from reaching production. The paper considers possible reasons for nonconformance with testability guidelines, and suggests changes in procedure that will allow management to exert more effective control over the process.
International Conference on Computer Design | 1988
B.R. Wilkins
A novel equipment practice, known as hierarchical interconnection technology (HIT), is being presented to British and international standards bodies. Its aim is to alleviate some of the problems of surface mounting technology by partitioning printed circuit boards into subunits. The diagnostic testing requirement is thus simplified, since a fault need only be diagnosed to the replaceable subunit (typically one of eight mounted on a double Eurocard). A functional test can deliver this degree of fault localization with only modest demands on design for testability. Because of solderless assembly, defective subunits are readily replaced without the need for skilled labor or specialist rework facilities; and because each subunit has relatively low value, it can be discarded without further diagnosis. The author describes the HIT system and its background, and how it can affect circuit design, manufacture, and field servicing.
Microprocessors and Microsystems | 1988
B.R. Wilkins
The complexity achievable within a custom chip or on a PCB loaded with standard combinational or sequential elements, even without the use of VLSI components such as microprocessors, requires the use of automatic methods for the generation of test patterns if the task is to be completed within an acceptable time and at an acceptable cost. This paper reviews the current status of some aspects of the test process as applied to such circuits, and of the principles of structured design methodologies intended to reduce the difficulties of test pattern generation (TPG). The paper starts by reviewing the fault models on which most automatic TPG (ATPG) methods are based, and goes on to discuss some of the available ATPG methods themselves. The problems involved in TPG for sequential circuits are briefly discussed to show the motivation behind structured design for testability using the scan-in scan-out (SISO) principle. The main implications of SISO are described, as are some of the applications of these principles to the construction of testable PCBs.
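As a small illustration of the stuck-at fault model that underlies most ATPG methods, the sketch below exhaustively checks which input patterns detect each single stuck-at fault in a toy combinational circuit; practical ATPG uses path-sensitisation algorithms such as the D-algorithm or PODEM, and the circuit here is invented for the example.

```python
# Illustrative sketch only: exhaustive test-pattern generation for single stuck-at
# faults on a toy combinational circuit. Real ATPG uses path-sensitisation
# algorithms (D-algorithm, PODEM, ...); this circuit is invented for illustration.

from itertools import product

def circuit(a, b, c, fault=None):
    """y = (a AND b) OR (NOT c), with an optional stuck-at fault on one net."""
    nets = {"a": a, "b": b, "c": c}
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]                    # input net stuck at 0 or 1
    n1 = nets["a"] & nets["b"]
    n2 = 1 - nets["c"]
    if fault and fault[0] == "n1": n1 = fault[1]     # internal net stuck at 0 or 1
    if fault and fault[0] == "n2": n2 = fault[1]
    y = n1 | n2
    if fault and fault[0] == "y": y = fault[1]       # output net stuck at 0 or 1
    return y

net_names = ["a", "b", "c", "n1", "n2", "y"]
fault_list = [(n, v) for n in net_names for v in (0, 1)]   # single stuck-at faults

# A pattern detects a fault if the faulty output differs from the fault-free output.
tests = {}
for f in fault_list:
    tests[f] = [p for p in product((0, 1), repeat=3)
                if circuit(*p) != circuit(*p, fault=f)]

for f, pats in tests.items():
    print(f"stuck-at fault {f}: detected by {len(pats)} of 8 patterns")
```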