Publication


Featured research published by Colin Fyfe.


International Journal of Neural Systems | 2000

KERNEL AND NONLINEAR CANONICAL CORRELATION ANALYSIS

Pei Ling Lai; Colin Fyfe

We review a neural implementation of the statistical technique of Canonical Correlation Analysis (CCA) and extend it to nonlinear CCA. We then derive the method of kernel-based CCA and compare these two methods on real and artificial data sets before using both on the Blind Separation of Sources.
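For orientation, plain linear CCA can be computed in closed form by whitening each view and taking an SVD of the cross-covariance. The sketch below is this standard batch method, not the paper's neural or kernel formulation; the regularisation term and toy data are illustrative assumptions.

```python
import numpy as np

def linear_cca(X, Y, reg=1e-6):
    """Closed-form linear CCA via whitening + SVD (batch, non-neural)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten each view, then SVD the whitened cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    # Columns of the returned matrices are canonical directions;
    # s holds the canonical correlations in decreasing order
    return Wx.T @ U, Wy.T @ Vt.T, s

# Two views sharing a single latent signal z
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
A, B, corrs = linear_cca(X, Y)
```

The leading canonical correlation comes out close to 1 because both views carry the shared latent z; kernel CCA replaces the sample covariances with centred Gram matrices to capture nonlinear correlations.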


Archive | 2006

Intelligent Data Engineering and Automated Learning – IDEAL 2006

Emilio Corchado; Hujun Yin; Vicente J. Botti; Colin Fyfe

Learning and Information Processing -- Data Mining, Retrieval and Management -- Bioinformatics and Bio-inspired Models -- Agents and Hybrid Systems -- Financial Engineering -- Special Session on Nature-Inspired Data Technologies.


Data Mining and Knowledge Discovery | 2004

Maximum and Minimum Likelihood Hebbian Learning for Exploratory Projection Pursuit

Emilio Corchado; Donald MacDonald; Colin Fyfe

In this paper, we review an extension of the learning rules in a Principal Component Analysis network which has been derived to be optimal for a specific probability density function. We note that this probability density function is one of a family of pdfs and investigate the learning rules formed in order to be optimal for several members of this family. We show that, whereas we have previously (Lai et al., 2000; Fyfe and MacDonald, 2002) viewed the single member of the family as an extension of PCA, it is more appropriate to view the whole family of learning rules as methods of performing Exploratory Projection Pursuit. We illustrate this on both artificial and real data sets.
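A minimal sketch of the residual-driven Hebbian family discussed here, assuming the commonly quoted update Delta W = lr * y * sign(e)|e|^(p-1) with residual e = x - W^T y; p = 2 recovers the familiar PCA-subspace rule, and other values of p correspond to other members of the pdf family. The learning rate, toy data, and iteration count are illustrative, not the paper's.

```python
import numpy as np

def ml_hebbian(X, n_out, p=2.0, lr=1e-3, epochs=30, seed=0):
    # Residual-driven Hebbian rule; p = 2 gives the PCA-subspace case.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_out, X.shape[1]))
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            y = W @ x                       # feedforward activations
            e = x - W.T @ y                 # residual after feedback
            W += lr * np.outer(y, np.sign(e) * np.abs(e) ** (p - 1))
    return W

# Toy data whose variance is concentrated in the first two coordinates
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4)) * np.array([3.0, 2.0, 0.3, 0.3])
W = ml_hebbian(X, n_out=2)
```

With p = 2 the rows of W converge toward an orthonormal basis of the dominant two-dimensional subspace; varying p changes which residual distributions the rule is optimal for.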


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2000

The kernel self-organising map

Donald MacDonald; Colin Fyfe

We review a recently developed method of performing k-means clustering in a high-dimensional feature space and extend it to give the resultant mapping topology-preserving properties. We show the results of the new algorithm on the standard data set, on random numbers drawn uniformly from [0,1)^2 and on the Olivetti database of faces. The new algorithm converges extremely quickly.
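The topology-preserving extension is specific to the paper, but its kernel k-means starting point can be sketched directly from a precomputed Gram matrix. The RBF kernel, its bandwidth, and the toy blobs below are illustrative assumptions.

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """K-means in feature space using only the Gram matrix K (n x n)."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            m = mask.sum()
            if m == 0:
                continue
            # ||phi(x) - mu_c||^2 up to the constant K(x, x) term
            dist[:, c] = (K[np.ix_(mask, mask)].sum() / m**2
                          - 2.0 * K[:, mask].sum(axis=1) / m)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Two Gaussian blobs, separated cleanly in feature space by an RBF kernel
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / 2.0)
labels = kernel_kmeans(K, k=2)
```

The paper's extension additionally gives the resulting mapping topology-preserving properties in the manner of a self-organising map.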


Neural Networks | 1999

A neural implementation of canonical correlation analysis

Pei Ling Lai; Colin Fyfe

We derive a new method of performing Canonical Correlation Analysis with Artificial Neural Networks. We demonstrate the network's capabilities on artificial data and then compare its effectiveness with that of a standard statistical method on real data. We demonstrate the capabilities of the network in two situations where standard statistical techniques are not effective: where we have correlations stretching over three data sets and where the maximum nonlinear correlation is greater than any linear correlation. The network is also applied to Becker's (Network: Computation in Neural Systems, 1996, 7:7-31) random dot stereogram data and shown to be extremely effective at detecting shift information.


International Journal of Neural Systems | 2008

ONLINE CLUSTERING ALGORITHMS

Wesam Barbakh; Colin Fyfe

We introduce a set of clustering algorithms whose performance function is such that the algorithms overcome one of the weaknesses of K-means, its sensitivity to initial conditions which leads it to converge to a local optimum rather than the global optimum. We derive online learning algorithms and illustrate their convergence to optimal solutions which K-means fails to find. We then extend the algorithm by underpinning it with a latent space which enables a topology preserving mapping to be found. We show visualisation results on some standard data sets.
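For contrast with the paper's performance-function algorithms, here is plain sequential (online) k-means, which exhibits exactly the initialisation sensitivity the abstract describes; the sketch sidesteps it by seeding one centre in each region, an illustrative choice rather than a remedy.

```python
import numpy as np

def online_kmeans(X, centres, lr=0.05, epochs=20, seed=0):
    # Sequential k-means: nudge the winning centre toward each sample.
    centres = centres.copy()
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            w = np.argmin(((centres - x) ** 2).sum(axis=1))  # winning centre
            centres[w] += lr * (x - centres[w])              # online update
    return centres

# Two well-separated blobs; one centre seeded per region (a kind
# initialisation -- naive k-means is sensitive to this choice)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])
centres = online_kmeans(X, centres=X[[0, -1]])
```

Seeding both centres inside one blob would leave the other blob poorly covered, which is the local-optimum failure the paper's performance functions are designed to avoid.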


Journal of Experimental and Theoretical Artificial Intelligence | 2003

Structuring global responses of local filters using lateral connections

Emilio Corchado; Ying Han; Colin Fyfe

This paper reviews an unsupervised artificial neural network that has been shown to perform principal component analysis and a constrained version of the same network that has been shown to perform factor analysis. It is shown that this network, when trained on real video data, finds filters that are both local in time and local in space. It is further shown that the type of movement and environment in these video sequences determines the shape of the filters found. It is then shown that lateral connections derived from finding the mode of a probability density function can be used to form a global ordering of the output responses of the network and that different parameter regimes can be used to differentiate between competing, mutually interfering classes of factors. Indeed, some parameter values can be used to entirely suppress particular classes of factors. The net effect on video data is that specific types of movement can be identified by examining the network's output responses.


Network: Computation In Neural Systems | 1998

Modelling multiple-cause structure using rectification constraints.

Darryl Charles; Colin Fyfe

We present an artificial neural network which self-organizes in an unsupervised manner to form a sparse distributed representation of the underlying causes in data sets. This coding is achieved by introducing several rectification constraints to a PCA network, based on our prior beliefs about the data. Through experimentation we investigate the relative performance of these rectifications on the weights and/or outputs of the network. We find that use of an exponential function on the output to the network is most reliable in discovering all the causes in data sets even when the input data are strongly corrupted by random noise. Preprocessing our inputs to achieve unit variance on each is very effective in helping us to discover all underlying causes when the power on each cause is variable. Our resulting network methodologies are straightforward yet extremely robust over many trials.


Knowledge Based Systems | 2012

Class imbalance methods for translation initiation site recognition in DNA sequences

Nicolás García-Pedrajas; Javier Pérez-Rodríguez; María D. García-Pedrajas; Domingo Ortiz-Boyer; Colin Fyfe

Translation initiation site (TIS) recognition is one of the first steps in gene structure prediction, and one of the common components in any gene recognition system. Many methods have been described in the literature to identify TIS in transcribed sequences such as mRNA, EST and cDNA sequences. However, the recognition of TIS in DNA sequences is a far more challenging task, and the methods described so far for transcripts achieve poor results in DNA sequences. Most methods approach this problem taking into account its biological characteristics. In this work we take a different view, considering this classification problem from a purely machine learning perspective. From the point of view of machine learning, TIS recognition is a class imbalance problem. Thus, in this paper we approach TIS recognition from this angle, and apply the different methods that have been developed to deal with imbalanced datasets. The proposed approach has two advantages. Firstly, it improves the results obtained using standard classification methods. Secondly, it broadens the set of classification algorithms that can be used, as some of the class-imbalance methods, such as undersampling, are also useful for scaling up data mining algorithms because they reduce the size of the dataset. In this way, classifiers that cannot be applied to the whole dataset, due to long training time or large memory requirements, can be used when an undersampling method is applied. Results show an advantage of class imbalance methods with respect to the same methods applied without considering the class imbalance nature of the problem. The applied methods are also able to improve the results obtained with the best method in the literature, which is based on looking for the next in-frame stop codon from the putative TIS that must be predicted.
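The undersampling idea mentioned above can be illustrated generically (a sketch of random undersampling, not the paper's experimental pipeline):

```python
import numpy as np

def random_undersample(X, y, seed=0):
    # Balance a dataset by randomly dropping majority-class samples
    # until every class has as many samples as the rarest one.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    keep.sort()
    return X[keep], y[keep]

# 90 negatives vs 10 positives -> balanced 10 vs 10 after undersampling
X = np.arange(100).reshape(100, 1)
y = np.array([0] * 90 + [1] * 10)
Xb, yb = random_undersample(X, y)
```

The reduced dataset is what makes otherwise impractical classifiers usable, at the cost of discarding majority-class information.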


Neural Processing Letters | 1997

A Neural Network for PCA and Beyond

Colin Fyfe

Principal Component Analysis (PCA) has been implemented by several neural methods. We discuss a Network which has previously been shown to find the Principal Component subspace though not the actual Principal Components themselves. By introducing a constraint to the learning rule (we do not allow the weights to become negative) we cause the same network to find the actual Principal Components. We then use the network to identify individual independent sources when the signals from such sources are ORed together.
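A single-output sketch in the spirit of this network, using Oja's rule; the optional non-negativity clip is an illustrative stand-in for the paper's constraint of disallowing negative weights, not the authors' exact formulation.

```python
import numpy as np

def oja_unit(X, lr=0.01, epochs=40, seed=0, nonneg=False):
    # Oja's rule: w converges to the leading principal direction of X.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            y = w @ x
            w += lr * y * (x - y * w)       # Hebbian term with weight decay
            if nonneg:
                w = np.maximum(w, 0.0)      # illustrative rectification
    return w

# Zero-mean data whose variance is dominated by the first coordinate
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 3)) * np.array([3.0, 1.0, 0.5])
w = oja_unit(X)
```

The unconstrained rule recovers the principal direction only up to sign and, in the multi-unit case, only up to rotation within the principal subspace; a rectification constraint of this kind is what pins down the actual components.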

Collaboration


Dive into Colin Fyfe's collaborations.

Top Co-Authors

Jinsong Leng, University of South Australia
Hujun Yin, University of Manchester
Pei Ling Lai, University of the West of Scotland
Steve Thatcher, University of South Australia