Publication


Featured research published by D.L. Bisset.


Pattern Recognition Letters | 1995

Investigating feedforward neural networks with respect to the rejection of spurious patterns

G.C. Vasconcelos; Michael C. Fairhurst; D.L. Bisset

The reliability of feedforward neural networks with respect to the rejection of patterns not belonging to the defined training classes is investigated. It is shown how networks with different activation functions and propagation rules construct the decision regions in the pattern space and, therefore, how these choices affect the network's performance in dealing with spurious information. A modification to the standard MLP structure is described to enhance its reliability in this respect.
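The abstract's point can be illustrated with a minimal sketch (not the paper's own experiments; the weights, centre, and width below are invented for illustration): a sigmoid unit's decision region is an unbounded half-space, so a pattern far from all training classes can still be accepted with high confidence, whereas a bounded (Gaussian) activation localises the accept region and rejects it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights for a single sigmoid unit: its decision region is
# a half-space, so points arbitrarily far from the training data can
# still be accepted with high confidence.
w, b = np.array([1.0, 1.0]), -1.0

def sigmoid_unit(x):
    return sigmoid(x @ w + b)

# A Gaussian (radial) activation instead bounds the accept region around
# a centre, so distant spurious patterns are scored near zero.
centre, width = np.array([1.0, 1.0]), 1.0

def gaussian_unit(x):
    return np.exp(-np.sum((x - centre) ** 2) / (2 * width ** 2))

spurious = np.array([50.0, 50.0])   # far from any training class
print(sigmoid_unit(spurious))       # ~1.0: confidently (wrongly) accepted
print(gaussian_unit(spurious))      # ~0.0: rejected
```

This is the intuition behind the unreliability of the standard MLP discussed here and in the 1995 Neural Computing and Applications paper below; the specific structural modification the authors propose is described in the paper itself.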


Pattern Recognition Letters | 1991

Adaptive pattern recognition using goal seeking neurons

Edson Costa de Barros Carvalho Filho; Michael C. Fairhurst; D.L. Bisset

This paper defines a novel Boolean neuron model which is particularly suited to practical pattern recognition tasks. It achieves its learning with a single pass of the training data and pattern classification capability emerges from the use of local low level goals.


International Symposium on Neural Networks | 1994

Neural networks in the Clifford domain

J.K. Pearson; D.L. Bisset

Georgiou and Koutsougeras (1992) and Gordon et al. (1990) extended the traditional multi-layer perceptron to allow activation, threshold and weight values to take on complex values instead of real values. Although at first sight this might seem biologically unmotivated, if phase as well as frequency information in synaptic pulse trains plays a part in processing in the brain, then complex-valued networks could be used to model phase information, in the same way as complex numbers are used to model phase in electrical engineering. Clifford algebras offer a higher-dimensional generalization of the complex numbers. The present authors have shown that it is possible to derive a back error propagation algorithm for networks with Clifford-valued weight and activation values. This work ceases to be biologically motivated, but it is hoped that, by bringing together multidimensional signals into single elements of a Clifford algebra, more compact representations of the pattern space will be obtained.
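The analogy with phase in electrical engineering can be made concrete in the simplest case, the complex numbers themselves (which form a two-dimensional Clifford algebra). In the sketch below (the particular input and weight values are invented for illustration, not taken from the paper), a complex weight acts on both the magnitude and the phase of its input, which is why complex-valued units can carry phase information:

```python
import numpy as np

# A complex weight w = r * exp(i*theta) scales its input's magnitude by r
# and rotates its phase by theta, so a single multiply processes both
# amplitude and phase information.
x = np.exp(1j * np.pi / 4)          # unit-magnitude input with 45-degree phase
w = 0.5 * np.exp(1j * np.pi / 2)    # weight: halve magnitude, rotate by 90 degrees

z = w * x
print(abs(z))                 # 0.5
print(np.angle(z, deg=True))  # ~135.0
```

Higher-dimensional Clifford algebras generalise this: a single algebra element can bundle several signal components, which is the compactness the authors hope for.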


Neural Networks | 1990

A Goal Seeking Neuron for Boolean Neural Networks

E. Filho; D.L. Bisset; Michael C. Fairhurst

This paper proposes a novel Boolean neural model known as the Goal Seeking Neuron (GSN). The operation of the neuron and its associated learning algorithm are discussed in detail. The GSN model has been generated in response to a number of observed weaknesses in the Probabilistic Logic Node (PLN) proposed by Kan [1]. The paper identifies these problems and shows how the goal-seeking nature of the GSN overcomes them. The GSN is designed to make efficient use of its memory space by compacting its internal representation, and allowing new patterns to be learned without corrupting existing memories. This is achieved without losing the potential for direct hardware implementation, or its local processing characteristics. By a simple modification to the learning mechanism, it is also possible to achieve acceptable performance with only a single pass of the training data.


Neural Computing and Applications | 1995

Efficient detection of spurious inputs for improving the robustness of MLP networks in practical applications

G.C. Vasconcelos; Michael C. Fairhurst; D.L. Bisset

The problem of the rejection of patterns not belonging to identified training classes is investigated with respect to Multilayer Perceptron Networks (MLP). The reason for the inherent unreliability of the standard MLP in this respect is explained, and some mechanisms for the enhancement of its rejection performance are considered. Two network configurations are presented as candidates for a more reliable structure, and are compared to the so-called ‘negative training’ approach. The first configuration is an MLP which uses a Gaussian as its activation function, and the second is an MLP with direct connections from the input to the output layer of the network. The networks are examined and evaluated both through the technique of network inversion, and through practical experiments in a pattern classification application. Finally, the model of Radial Basis Function (RBF) networks is also considered in this respect, and its performance is compared to that obtained with the other networks described.


Pattern Recognition Letters | 1994

An integrated Boolean neural network for pattern classification

A. de Carvalho; Michael C. Fairhurst; D.L. Bisset

This paper describes an integrated approach to pattern classification where a self-organising Boolean neural network architecture is used as a front-end processor to a feedforward neural architecture based on goal-seeking principles (the GSN architecture). The performance of the integrated architecture is illustrated by considering its application to a character recognition problem.


International Symposium on Neural Networks | 1995

PCN: the probabilistic convergent network

Gareth Howells; Michael C. Fairhurst; D.L. Bisset

A new architecture for networks constructed from RAM-based neurons is presented which, whilst retaining learning and generalisation properties possessed by existing RAM-based network architectures, allows for a regular treatment of specialisation and generalisation with the additional property of providing information regarding the relative probability of a given sample pattern being a member of each possible pattern class. The network architecture provides the basis for the development of a pattern recognition system capable of application in a practical environment.
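The abstract's key property, relative class-membership probabilities from a RAM-based neuron, can be sketched generically (this `RAMNeuron` class is an invented illustration of the general RAM-neuron idea, not the PCN architecture itself): a RAM neuron is a lookup table addressed by a tuple of binary inputs, training increments a per-class count at the addressed cell, and recall normalises those counts into relative probabilities.

```python
from collections import defaultdict

# Generic sketch of a RAM-based neuron: memory is a lookup table keyed by
# the binary input address; training writes per-class counts; recall
# normalises the counts at the addressed cell into relative class scores.
class RAMNeuron:
    def __init__(self):
        self.memory = defaultdict(lambda: defaultdict(int))

    def train(self, bits, label):
        self.memory[tuple(bits)][label] += 1

    def scores(self, bits):
        counts = self.memory[tuple(bits)]
        total = sum(counts.values())
        if total == 0:
            return {}                      # address never seen in training
        return {c: n / total for c, n in counts.items()}

neuron = RAMNeuron()
neuron.train((1, 0, 1), "A")
neuron.train((1, 0, 1), "A")
neuron.train((1, 0, 1), "B")
print(neuron.scores((1, 0, 1)))  # class A roughly twice as likely as B
```

How PCN arranges such neurons into layers, and how it balances specialisation against generalisation, is specified in the paper.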


Archive | 1991

An Analogue Neuron Suitable for a Data Frame Architecture

W.A.J. Waller; D.L. Bisset; P.M. Daniell

This paper describes the VLSI realisation of a novel neural network implementation architecture which is geared to the processing of frame-based data. The chief advantage of this architecture is its elimination of the need to implement total connectivity between neural units as hard-wired connections. This is achieved without sacrificing performance or functionality. A detailed description of the implementation of this architecture in 2 μm CMOS, using mixed analogue and digital building blocks, is given together with details of the system-level design.


Connection Science | 1997

Combining Boolean neural architectures for image recognition

A. De Carvalho; Michael C. Fairhurst; D.L. Bisset

This paper presents a completely integrated Boolean neural architecture, where a self-organizing Boolean neural network (SOFT) is used as a front-end processor to a feedforward Boolean network based on goal-seeking principles (GSNf). The paper evaluates the advantages of the integrated SOFT-GSNf over GSNf by showing its increased effectiveness in an optical character recognition task.


Pattern Recognition Letters | 1995

BCN: a novel network architecture for RAM-based neurons

Gareth Howells; Michael C. Fairhurst; D.L. Bisset

A new architecture for networks of RAM-based Boolean neurons is presented which, whilst retaining learning and generalisation properties possessed by existing network architectures, allows for arbitrary target patterns for pattern classes with strong convergence properties. The network architecture provides the basis for a pattern recognition system capable of application in a practical environment.

Collaboration


Dive into D.L. Bisset's collaborations.

Top Co-Authors

A. de Carvalho

University of São Paulo
